{"text": "Posted by Tom B. Brown and Catherine Olsson, Research Engineers, Google Brain Team \n\n \n\nMachine learning is being deployed in more and more real-world applications, including [medicine](https://ai.googleblog.com/2018/07/automating-drug-discoveries-using.html), [chemistry](https://ai.googleblog.com/2017/04/predicting-properties-of-molecules-with.html) and [agriculture](https://arxiv.org/abs/1807.11809). When it comes to deploying machine learning in safety-critical contexts, significant challenges remain. In particular, all known machine learning algorithms are vulnerable to [adversarial examples](https://ai.google/research/pubs/pub43405) — inputs that an attacker has intentionally designed to cause the model to make a mistake. While [previous research on adversarial examples](https://arxiv.org/abs/1312.6199) has mostly focused on investigating mistakes caused by small modifications in order to develop [improved models](https://arxiv.org/abs/1611.01236), real-world adversarial agents are often [not subject to the “small modification” constraint](https://arxiv.org/abs/1807.06732). Furthermore, machine learning algorithms can often make [confident errors when faced with an adversary](https://arxiv.org/abs/1705.07263), which makes the development of classifiers that don’t make *any* confident mistakes, even in the presence of an adversary which can submit arbitrary inputs to try to fool the system, an important open problem. \n\n \n\nToday we're announcing the [Unrestricted Adversarial Examples Challenge](https://github.com/google/unrestricted-adversarial-examples), a community-based challenge to incentivize and measure progress towards the goal of zero confident classification errors in machine learning models. While previous research has focused on adversarial examples that are restricted to small changes to pre-labeled data points (allowing researchers to assume the image should have the same label after a small perturbation), this challenge allows unrestricted inputs, allowing participants to submit arbitrary images from the target classes to develop and test models on a wider variety of adversarial examples. \n\n\n\n| |\n| --- |\n| |\n| Adversarial examples can be generated through a variety of means, including by making [small modifications to the input pixels](https://arxiv.org/abs/1412.6572), but also using [spatial transformations](https://arxiv.org/pdf/1712.02779.pdf), or [simple guess-and-check](https://arxiv.org/abs/1807.06732) to find misclassified inputs. |\n\n**Structure of the Challenge** \n\nParticipants can submit entries one of two roles: as a *defender*, by submitting a classifier which has been designed to be difficult to fool, or as an *attacker*, by submitting arbitrary inputs to try to fool the defenders' models. In a “warm-up” period before the challenge, we will present a set of fixed attacks for participants to design networks to defend against. After the community can conclusively beat those fixed attacks, we will launch the [full two-sided challenge with prizes for both attacks and defenses](https://github.com/google/unrestricted-adversarial-examples/blob/master/contest_proposal.md#user-content-prizes). 
\n\n[![](https://3.bp.blogspot.com/-fh13zeHOhqc/W5mlXOdWC8I/AAAAAAAADVE/NNsd5oX0th031rIAAu0RtGl6cmnMUhvDwCLcBGAs/s640/image3.png)](https://3.bp.blogspot.com/-fh13zeHOhqc/W5mlXOdWC8I/AAAAAAAADVE/NNsd5oX0th031rIAAu0RtGl6cmnMUhvDwCLcBGAs/s1600/image3.png) \n\nFor the purposes of this challenge, we have created a simple “bird-or-bicycle” classification task, where a classifier must answer the following: “*Is this an unambiguous picture of a bird, a bicycle, or is it ambiguous / not obvious?*” We selected this task because telling birds and bicycles apart is very easy for humans, but all known machine learning techniques struggle at the task when in the presence of an adversary. \n\n \n\nThe *defender's goal* is to correctly label a clean test set of birds and bicycles with high accuracy, while also making no confident errors on any attacker-provided bird or bicycle image. The *attacker's goal* is to find an image of a bird that the defending classifier confidently labels as a bicycle (or vice versa). We want to make the challenge as easy as possible for the defenders, so we discard all images that are ambiguous (such as a bird riding a bicycle) or not obvious (such as an aerial view of a park, or random noise). \n\n\n\n| |\n| --- |\n| |\n| Examples of ambiguous and unambiguous images. Defenders must make no confident mistakes on unambiguous bird or bicycle images. We discard all images that humans find ambiguous or not obvious. All images under CC licenses [1](https://commons.wikimedia.org/wiki/File:Neophema_chrysogaster_male_-_Melaleuca.jpg), [2](https://commons.wikimedia.org/wiki/File:Villy_Custom_Luxury_Fashion_Bicycle,_Highland_Park.jpg), [3](https://commons.wikimedia.org/wiki/File:Ara_macao_-on_a_small_bicycle-8.jpg), [4](https://commons.wikimedia.org/wiki/File:Singapore_Bishan_Park_Aerial.jpg). |\n\nAttackers may submit absolutely any image of a bird or a bicycle in an attempt to fool the defending classifier. For example, an attacker could take photographs of birds, use 3D rendering software, make image composites using image editing software, produce novel bird images with a generative model, or any other technique. \n\n \n\nIn order to validate new attacker-provided images, we ask an ensemble of humans to label the image. This procedure lets us allow attackers to submit arbitrary images, not just test set images modified in small ways. If the defending classifier confidently classifies as \"bird\" any attacker-provided image which the human labelers unanimously labeled as a bicycle, the defending model has been broken. You can learn more details about the structure of the challenge in [our paper](https://drive.google.com/open?id=1T0yiu9LPv_Qh-qYhYFLj9dxjnkca8fkG). \n\n \n\n**How to Participate** \n\nIf you’re interested in participating, guidelines for getting started can be found on [the project on github](https://github.com/google/unrestricted-adversarial-examples). We’ve already released our dataset, the evaluation pipeline, and baseline attacks for the warm-up, and we’ll be keeping an up-to-date [leaderboard](https://github.com/google/unrestricted-adversarial-examples#user-content-leaderboard) with the best defenses from the community. We look forward to your entries! 
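To make the scoring rule concrete, here is a minimal sketch of the kind of “confident classifier” a defender might submit and of how a break would be detected. The interface below is hypothetical (the challenge repository defines the real evaluation API); it only illustrates that a defense may abstain, and that only a confident wrong answer on a unanimously human-labeled image counts as a break.

```python
# Hypothetical interface: predict_probs maps an image to (p_bird, p_bicycle).
ABSTAIN = "abstain"

def make_confident_classifier(predict_probs, threshold=0.8):
    """Commit to a class only when its estimated probability clears the threshold."""
    def classify(image):
        p_bird, p_bicycle = predict_probs(image)
        if p_bird >= threshold:
            return "bird"
        if p_bicycle >= threshold:
            return "bicycle"
        return ABSTAIN            # abstaining is never counted as a confident error
    return classify

def is_broken_by(classifier, image, unanimous_human_label):
    """The defense is broken if it confidently outputs the class opposite to the one
    that the ensemble of human labelers unanimously assigned to the image."""
    prediction = classifier(image)
    return prediction != ABSTAIN and prediction != unanimous_human_label
```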
\n\n \n\n**Acknowledgements** \n\n*The team behind the Unrestricted Adversarial Examples Challenge includes Tom Brown, Catherine Olsson, Nicholas Carlini, Chiyuan Zhang, and Ian Goodfellow from Google, and Paul Christiano from OpenAI.*", "url": "http://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html", "title": "Introducing the Unrestricted Adversarial Examples Challenge", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-09-12T22:00:00Z", "authors": ["Tom B Brown", "Catherine Olsson"], "summary": [], "id": "395246930e0845e7fa211ef64ab53586"} {"text": "Posted by Samuel S. Schoenholz, Senior Research Scientist and Roman Novak, Research Engineer, Google Research \n\n \n\nThe widespread success of deep learning across a range of domains such as [natural language processing](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html), [conversational agents](https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html), and [connectomics](https://ai.googleblog.com/2020/01/releasing-drosophila-hemibrain.html), has transformed the landscape of research in machine learning and left researchers with a number of interesting and important open questions such as: Why do deep neural networks (DNNs) generalize so well despite being overparameterized? What is the relationship between architecture, training, and performance for deep networks? How can one extract salient features from deep learning models? \n\n \n\nOne of the key theoretical insights that has allowed us to make progress in recent years has been that increasing the width of DNNs results in more regular behavior, and makes them *easier* to understand. A number of recent results have shown that DNNs that are allowed to become infinitely wide [converge to](https://arxiv.org/abs/1711.00165) another, simpler, class of models called [Gaussian processes](https://distill.pub/2019/visual-exploration-gaussian-processes/). In this limit, complicated phenomena (like *[Bayesian inference](https://en.wikipedia.org/wiki/Bayesian_inference)* or *[gradient descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent) dynamics* of a [convolutional neural network](https://en.wikipedia.org/wiki/Convolutional_neural_network)) boil down to simple linear algebra equations. Insights from these *infinitely wide networks* frequently carry over to their finite counterparts. As such, infinite-width networks can be used as a lens to study deep learning, but also as [useful models in their own right](https://arxiv.org/abs/1910.01663). \n\n\n\n| |\n| --- |\n| |\n| **Left:** A schematic showing how deep neural networks induce simple input / output maps as they become infinitely wide. **Right:** As the width of a neural network increases , we see that the distribution of outputs over different random instantiations of the network becomes Gaussian. |\n\n\nUnfortunately, deriving the infinite-width limit of a finite network requires significant mathematical expertise and has to be worked out separately for each architecture studied. Once the infinite-width model is derived, coming up with an efficient and scalable implementation further requires significant engineering proficiency. Together, the process of taking a finite-width model to its corresponding infinite-width network could take months and might be the topic of a research paper in its own right. 
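As a quick aside, the width effect in the figure above is easy to check numerically. The self-contained sketch below (our own illustration, not taken from the paper) repeatedly samples a random one-hidden-layer ReLU network, evaluates it at a fixed input, and tracks how the output distribution becomes more Gaussian as the width grows.

```python
import numpy as np

def random_relu_net_output(x, width, rng):
    """One random draw of f(x) for a one-hidden-layer ReLU net with 1/sqrt(fan-in) scaling."""
    d = x.shape[0]
    W1 = rng.normal(size=(width, d)) / np.sqrt(d)   # input -> hidden weights
    w2 = rng.normal(size=width) / np.sqrt(width)    # hidden -> scalar output weights
    return w2 @ np.maximum(W1 @ x, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=16)                             # a fixed input
for width in [4, 64, 1024]:
    outputs = np.array([random_relu_net_output(x, width, rng) for _ in range(5000)])
    # Excess kurtosis approaches 0 (the Gaussian value) as the width increases.
    excess_kurtosis = np.mean((outputs - outputs.mean()) ** 4) / outputs.var() ** 2 - 3.0
    print(f"width={width:5d}  mean={outputs.mean():+.3f}  excess kurtosis={excess_kurtosis:+.3f}")
```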
\n\n \n\nTo address this issue and to accelerate theoretical progress in deep learning, we present [Neural Tangents](https://arxiv.org/abs/1912.02803), a [new open-source software library](https://github.com/google/neural-tangents) written in [JAX](https://github.com/google/jax) that allows researchers to build and train infinitely wide neural networks as easily as finite neural networks. At its core, Neural Tangents provides an easy-to-use neural network library that builds finite- and infinite-width versions of neural networks simultaneously. \n\n \n\nAs an example of the utility of Neural Tangents, imagine training a fully-connected neural network on some data. Normally, a neural network is randomly initialized and then trained using gradient descent. Initializing and training many of these neural networks results in an *ensemble.* Often researchers and practitioners average the predictions from different members of the ensemble together for better performance. Additionally, the variance in the predictions of members of the ensemble can be used to estimate uncertainty. The downside is that training an ensemble of networks requires a significant computational budget, so it is often avoided. However, when the neural networks become *infinitely wide*, the ensemble [is described by](https://arxiv.org/abs/1806.07572) a Gaussian process with a mean and variance that [can be computed throughout training](https://arxiv.org/abs/1902.06720). \n\n \n\nWith Neural Tangents, one can construct and train *ensembles* *of these infinite-width networks* at once using only five lines of code! The resulting training process is displayed below, and an interactive colaboratory notebook going through this experiment can [be found here](https://colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/neural_tangents_cookbook.ipynb). \n\n\n\n| |\n| --- |\n| |\n| In both plots we compare training of an ensemble of finite neural networks with the infinite-width ensemble of the same architecture. The empirical mean and variance of the finite ensemble is displayed as a dashed black line between two dotted black lines. The [closed-form](https://en.wikipedia.org/wiki/Closed-form_expression) mean and variance of the infinite-width ensemble is displayed as a solid colored line inside a filled color region. In both plots finite- and infinite-width ensembles match very closely and can be hard to distinguish. **Left:** Outputs (vertical f-axis) on the input data (horizontal x-axis) as the training progresses. **Right:** Train and test loss with uncertainty over the course of training. |\n\n\nDespite the fact that the infinite-width ensemble is governed by a simple closed-form expression, it exhibits remarkable agreement with the finite-width ensemble. And since the infinite-width ensemble is a Gaussian process, it naturally provides closed-form uncertainty estimates (filled colored regions in the figure above). These uncertainty estimates closely match the variation of predictions that are observed when training many different copies of the finite network (dashed lines). \n\n \n\nThe above example shows the power of infinite-width neural networks to capture training dynamics. However, networks built using Neural Tangents can be applied to any problem on which you could apply a regular neural network. For example, below we compare three different infinite-width neural network architectures on image recognition using the [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. 
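For concreteness, the few-line ensemble workflow described above looks roughly like the sketch below, which follows the library’s public cookbook; exact function names and signatures may differ between Neural Tangents versions.

```python
import jax.numpy as jnp
import neural_tangents as nt
from neural_tangents import stax

# Finite- and infinite-width versions of the same fully-connected architecture.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Erf(),
    stax.Dense(512), stax.Erf(),
    stax.Dense(1),
)

# Toy 1-D regression problem.
x_train = jnp.linspace(-jnp.pi, jnp.pi, 10).reshape(-1, 1)
y_train = jnp.sin(x_train)
x_test = jnp.linspace(-jnp.pi, jnp.pi, 100).reshape(-1, 1)

# Closed-form predictions of the infinite-width ensemble trained by gradient descent
# ('ntk') or by fully-Bayesian inference ('nngp'), together with their covariance.
predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, x_train, y_train)
mean, cov = predict_fn(x_test=x_test, get='ntk', compute_cov=True)
```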
Remarkably, we can evaluate ensembles of highly-elaborate models like infinitely [wide residual networks](https://arxiv.org/abs/1605.07146) in closed-form under both gradient descent and fully-Bayesian inference (an intractable task in the finite-width regime). \n\n\n[![](https://1.bp.blogspot.com/-cKoEM7KdTtg/Xmu2-NFbodI/AAAAAAAAFeY/utJylBrJkVYZN2F32flhE7BmNdXFmRkGgCLcBGAsYHQ/s640/image2.png)](https://1.bp.blogspot.com/-cKoEM7KdTtg/Xmu2-NFbodI/AAAAAAAAFeY/utJylBrJkVYZN2F32flhE7BmNdXFmRkGgCLcBGAsYHQ/s1600/image2.png)\nWe see that, mimicking finite neural networks, infinite-width networks follow a similar hierarchy of performance with fully-connected networks performing worse than convolutional networks, which in turn perform worse than wide residual networks. However, unlike regular training, the learning dynamics of these models is completely tractable in closed-form, which allows unprecedented insight into their behavior. \n\n \n\nWe invite everyone to explore the infinite-width versions of their models with Neural Tangents, and help us open the black box of deep learning. To get started, please check out the [paper](https://arxiv.org/abs/1912.02803), the [tutorial Colab notebook](https://colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/neural_tangents_cookbook.ipynb#scrollTo=Lt74vgCVNN2b), and the [Github repo](https://github.com/google/neural-tangents) — contributions, feature requests, and bug reports are very welcome. This work has been accepted as a spotlight at [ICLR 2020](https://iclr.cc/Conferences/2020). \n\n \n\n**Acknowledgements** \n\n*Neural Tangents is being actively developed by Lechao Xiao, Roman Novak, Jiri Hron, Jaehoon Lee, Alex Alemi, Jascha Sohl-Dickstein, and Samuel S. Schoenholz. We also thank Yasaman Bahri and Greg Yang for the ongoing contributions to improve the library, as well as Sergey Ioffe, Ben Adlam, Ravid Ziv, and Jeffrey Pennington for frequent discussion and useful feedback. Finally, we thank Tom Small for creating the animation in the first figure.*", "url": "http://ai.googleblog.com/2020/03/fast-and-easy-infinitely-wide-networks.html", "title": "Fast and Easy Infinitely Wide Networks with Neural Tangents", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-03-12T23:00:00Z", "authors": ["Samuel S Schoenholz", "Roman Novak"], "summary": [], "id": "1c10337aa6b5a448c770c9d87edf238a"} {"text": "Posted by Yonglong Tian, Student Researcher and Chen Sun, Staff Research Scientist, Google Research\n\nMost people take for granted the ability to view an object from several different angles, but still recognize that it's the same object— a dog viewed from the front is still a dog when viewed from the side. While people do this naturally, computer scientists need to explicitly enable machines to [learn representations](https://arxiv.org/abs/1206.5538) that are *view-invariant*, with the goal of seeking robust data representations that retain information that is useful to downstream tasks. \n\n\n\n\nOf course, in order to learn these representations, manually annotated training data can be used. However, as in many cases such annotations aren’t available, which gives rise to a series of [self-](https://ai.googleblog.com/2020/04/advancing-self-supervised-and-semi.html) and [crossmodal](https://ai.googleblog.com/2019/09/learning-cross-modal-temporal.html) supervised approaches that do not require manually annotated training data. 
Currently, a popular paradigm for training with such data is [contrastive multiview learning](https://ai.googleblog.com/2020/04/advancing-self-supervised-and-semi.html), where two views of the same scene (for example, [different image channels](https://arxiv.org/abs/1906.05849), [augmentations of the same image](https://arxiv.org/abs/1805.01978), and [video and text pairs](https://arxiv.org/abs/1906.05743)) will tend to converge in representation space while two views of different scenes diverge. Despite their success, one important question remains: “If one doesn’t have annotated labels readily available, how does one select the views to which the representations should be invariant?” In other words, how does one identify an object using information that resides in the pixels of the image itself, while still remaining accurate when that image is viewed from disparate viewpoints?\n\n\n\n\nIn “[What makes for good views for contrastive learning](https://arxiv.org/abs/2005.10243)”, we use theoretical and empirical analysis to better understand the importance of view selection, and argue that one should reduce the [mutual information](https://en.wikipedia.org/wiki/Mutual_information) between views while keeping task-relevant information intact. To verify this hypothesis, we devise unsupervised and semi-supervised frameworks that learn effective views by aiming to reduce their mutual information. We also consider data augmentation as a way to reduce mutual information, and show that increasing data augmentation indeed leads to decreasing mutual information while improving downstream classification accuracy. To encourage further research in this space, we have open-sourced the [code and pre-trained models](https://github.com/HobbitLong/PyContrast/tree/master/pycontrast).\n\n\n\n\n**The InfoMin Hypothesis** \n\nThe goal of contrastive multiview learning is to learn a parametric encoder, whose output representations can be used to discriminate between pairs of views with the same identities, and pairs with different identities. The amount and type of information shared between the views determines how well the resulting model performs on downstream tasks. We hypothesize that the views that yield the best results should discard as much information in the input as possible except for the task relevant information (e.g., object labels), which we call the *InfoMin principle*. \n\n\n\n\nConsider the example below in which two patches of the same image represent the different “views”. The training objective is to identify that the two views belong to the same image. It is undesirable to have views that share too much information, for example, where low-level color and texture cues can be exploited as “shortcuts” (left), or to have views that share too little information to identify that they belong to the same image (right). Rather, views at the “sweet spot” share the information related to downstream tasks, such as patches corresponding to different parts of the panda for an object classification task (center).\n\n\n\n| |\n| --- |\n| |\n| An illustration of three regimes of information captured during contrastive multiview learning. Views should not share too much information (**left**) or too little information (**right**), but should find an optimal mix (the “sweet spot”, **middle**) that maximizes the downstream performance. 
|\n\n\n\n**A Unified View on Contrastive Learning** \n\nWe design several sets of experiments to verify the InfoMin hypothesis, motivated by the fact that there are simple ways to control the mutual information shared between views without any supervision. For example, we can sample different patches from the same images, and reduce their mutual information simply by increasing the distance between the patches. Here, we estimate the mutual information using [InfoNCE](https://arxiv.org/abs/1807.03748) (INCE), which is a quantitative measure of the mutual information lower bound.. Indeed, we observe a reverse U-shape curve: as mutual information is reduced, the downstream task accuracy first increases and then begins to decrease.\n\n\n\n| |\n| --- |\n| |\n| Downstream classification accuracy on [STL-10](https://cs.stanford.edu/~acoates/stl10/) (**left**) and [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) (**right**) by applying linear classifiers on representations learned with contrastive learning. Same as the previous illustration, the views are sampled as different patches from the same images. Increasing the Euclidean distance between patches leads to decreasing mutual information. A reverse U-shape curve between classification accuracy and *INCE* (patch distance) is observed. |\n\n\n\nFurthermore, we demonstrate that several state-of-the-art contrastive learning methods ([InstDis](https://openaccess.thecvf.com/content_cvpr_2018/html/Wu_Unsupervised_Feature_Learning_CVPR_2018_paper.html), [MoCo](https://arxiv.org/abs/1911.05722), [CMC](https://arxiv.org/abs/1906.05849), [PIRL](https://arxiv.org/abs/1912.01991), [SimCLR](https://arxiv.org/abs/2002.05709) and [CPC](https://arxiv.org/abs/1807.03748)) can be unified through the perspective of view selection: despite the differences in architecture, objective and engineering details, all recent contrastive learning methods create two views that implicitly follow the InfoMin hypothesis, where the information shared between views are controlled by the strength of data augmentation. Motivated by this, we propose a new set of data augmentations, which outperforms the prior state of the art, [SimCLR](https://ai.googleblog.com/2020/04/advancing-self-supervised-and-semi.html), by nearly 4% on the ImageNet [linear readout benchmark](https://arxiv.org/abs/1807.03748). We also found that transferring our unsupervised pre-trained models to [object detection](https://en.wikipedia.org/wiki/Object_detection) and [instance segmentation](https://en.wikipedia.org/wiki/Image_segmentation) consistently outperforms ImageNet pre-training.\n\n\n\n\n**Learning to Generate Views** \n\nIn our work, we design unsupervised and semi-supervised methods that synthesize novel views following the InfoMin hypothesis. We learn [flow-based models](https://arxiv.org/abs/1505.05770) that transfer natural color spaces into novel color spaces, from which we split the channels to get views. For the unsupervised setup, the view generators are optimized to minimize the InfoNCE bound between views. As shown in the results below, we observe a similar reverse U-shape trend while minimizing the InfoNCE bound.\n\n\n\n| |\n| --- |\n| |\n| View generators learned by unsupervised (**left**) and semi-supervised (**right**) objectives. |\n\n\n\nTo reach the sweet spot without overly minimizing mutual information, we can use the semi-supervised setup and guide the view generator to retain label information. 
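For reference, the INCE quantity used throughout these experiments is the standard InfoNCE lower bound on mutual information. The short PyTorch sketch below (an illustration under our own assumptions, not the released PyContrast code) shows how it is typically estimated from a batch of paired views.

```python
import math
import torch
import torch.nn.functional as F

def info_nce_bound(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same batch of scenes."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                       # (batch, batch) similarities
    labels = torch.arange(z1.size(0), device=z1.device)      # positives on the diagonal
    loss = F.cross_entropy(logits, labels)
    return math.log(z1.size(0)) - loss                       # InfoNCE bound, in nats
```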
As expected, all learned views are now centered around the sweet spot, no matter what the input color space is.\n\n\n\n\n**Code and Pretrained Models** \n\nTo accelerate research in self-supervised contastive learning, we are excited to share the code and pretrained models of InfoMin with the academic community. They can be found [here](https://github.com/HobbitLong/PyContrast/tree/master/pycontrast).\n\n\n\n\n**Acknowledgements** \n\n*The core team includes Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid and Phillip Isola. We would like to thank Kevin Murphy for insightful discussion; Lucas Beyer for feedback on the manuscript; and the Google Cloud team for computation support.*", "url": "http://ai.googleblog.com/2020/08/understanding-view-selection-for.html", "title": "Understanding View Selection for Contrastive Learning", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-08-20T22:00:00Z", "authors": ["Yonglong Tian"], "summary": [], "id": "bcb3c1089278a89015b027ff7eed19f6"} {"text": "It would be great if we could all have household robots do our chores for us.\nChores are tasks that we want done to make our houses cater more to our\npreferences; they are a way in which we want our house to be *different* from\nthe way it currently is. However, most “different” states are not very\ndesirable:\n\n\n\n![](http://bair.berkeley.edu/static/blog/preferences/different.png)\n \n\n\n\n\n\nSurely our robot wouldn’t be so dumb as to go around breaking stuff when we ask\nit to clean our house? Unfortunately, **AI systems trained with [reinforcement\nlearning](https://en.wikipedia.org/wiki/Reinforcement_learning) only optimize features specified in the reward function** and are\nindifferent to anything we might’ve inadvertently left out. Generally, it is\neasy to get the reward wrong by forgetting to include preferences for things\nthat should stay the same, since we are so used to having these preferences\nsatisfied, and there are *so many of them*. Consider the room below, and imagine\nthat we want a robot waiter that serves people at the dining table efficiently.\nWe might implement this using a reward function that provides 1 reward whenever\nthe robot serves a dish, and use discounting so that the robot is incentivized\nto be efficient. What could go wrong with such a reward function? How would we\nneed to modify the reward function to take this into account? 
Take a minute to\nthink about it.\n\n\n\n\n![](http://bair.berkeley.edu/static/blog/preferences/fancy-room.png)\n \n\n\n\n\nHere’s an incomplete list we came up with:\n\n\n* The robot might track dirt and oil onto the pristine furniture while serving\nfood, even if it could clean itself up, because there’s no reason to clean but\nthere is a reason to hurry.\n* In its hurry to deliver dishes, the robot might knock over the cabinet of wine\nbottles, or slide plates to people and knock over the glasses.\n* In case of an emergency, such as the electricity going out, we don’t want the\nrobot to keep trying to serve dishes – it should at least be out of the way,\nif not trying to help us.\n* The robot may serve empty or incomplete dishes, dishes that no one at the\ntable wants, or even split apart dishes into smaller dishes so there are more\nof them.\n\n\nNote that we’re not talking about problems with robustness and distributional\nshift: while those problems are worth tackling, the point is that *even if* we\nachieve robustness, the simple reward function still incentivizes the above\nunwanted behaviors.\n\n\nIt’s common to hear the informal solution that the robot should try to minimize\nits impact on the environment, while still accomplishing the task. This could\npotentially allow us to avoid the first three problems above, though the last\none still remains as an example of [specification gaming](https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/). This idea leads to\n[impact](https://vkrakovna.wordpress.com/2018/06/05/measuring-and-avoiding-side-effects-using-relative-reachability/) [measures](https://www.alignmentforum.org/posts/yEa7kwoMpsBgaBCgb/towards-a-new-impact-measure) that attempt to quantify the “impact” that an agent\nhas, typically by looking at the difference between what actually happened and\nwhat would have happened had the robot done nothing. However, this also\npenalizes things we want the robot to do. For example, if we ask our robot to\nget us coffee, it might buy coffee rather than making coffee itself, because\nthat would have “impact” on the water, the coffee maker, etc. Ultimately, we’d\nlike to only prevent *negative* impacts, which means that we need our AI to have\na better idea of what the *right* reward function is.\n\n\nOur key insight is that while it might be hard for humans to make their\npreferences explicit, some preferences are implicit in the way the world looks:\n**the world state is a result of humans having acted to optimize their\npreferences**. This explains why we often want the robot to by default “do\nnothing” – if we have already optimized the world state for our preferences,\nthen most ways of changing it will be bad, and so doing nothing will often\n(though not always) be one of the better options available to the robot.\n\n\nSince the world state is a result of optimization for human preferences, we\nshould be able to use that state to infer what humans care about. For example,\nwe surely don’t want dirty floors in our pristine room; otherwise we would have\ndone that ourselves. We also can’t be indifferent to dirty floors, because then\nat some point we would have walked around the room with dirty shoes and gotten a\ndirty floor. The only explanation is that we want the floor to be clean.\n\n\nA simple setting\n================\n\n\nLet’s see if we can apply this insight in the simplest possible setting:\ngridworlds with a small number of states, a small number of actions, a known\ndynamics model (i.e. 
a model of “how the world works”), but an incorrect reward\nfunction. This is a simple enough setting that our robot understands all of the\nconsequences of its actions. Nevertheless, the problem remains: while the robot\nunderstands *what* will happen, it still cannot distinguish good consequences\nfrom bad ones, since its reward function is incorrect. In these simple\nenvironments, it’s easy to figure out what the correct reward function is, but\nthis is infeasible in a real, complex environment.\n\n\n\n![](http://bair.berkeley.edu/static/blog/preferences/room.png)\n\n\n\n\nFor example, consider the room to the right, where Alice asks her robot to\nnavigate to the purple door. If we were to encode this as a reward function that\nonly rewards the robot while it is at the purple door, the robot would take the\nshortest path to the purple door, knocking over and breaking the vase – since\nno one said it shouldn’t do that. The robot is perfectly aware that its plan\ncauses it to break the vase, but by default it doesn’t realize that it\n*shouldn’t* break the vase.\n\n\nIn this environment, does it help us to realize that Alice was optimizing the\nstate of the room for her preferences? Well, if Alice didn’t care about whether\nthe vase was broken, she would have probably broken it some time in the past. If\nshe *wanted* the vase broken, she definitely would have broken it some time in\nthe past. So the only consistent explanation is that Alice cared about the vase\nbeing intact, as illustrated in the gif below.\n\n\n\n![](http://bair.berkeley.edu/static/blog/preferences/overview.gif)\n \n\n\n\n\nWhile this example has the robot infer that it shouldn’t take the action of\nbreaking a vase, the robot can also infer goals that it should actively pursue.\nFor example, if the robot observes a basket of apples near an apple tree, it can\nreasonably infer that Alice wants to harvest apples, since the apples didn’t\nwalk into the basket themselves – Alice must have put effort into picking the\napples and placing them in the basket.\n\n\nReward Learning by Simulating the Past\n======================================\n\n\nWe formalize this idea by considering an MDP in which our robot observes the\ninitial state $s\\_0$ at deployment, and assumes that it is the result of a human\noptimizing some unknown reward for $T$ timesteps.\n\n\nBefore we get to our actual algorithm, consider a completely intractable\nalgorithm that should do well: for each possible reward function, simulate the\ntrajectories that Alice would take if she had that reward, and see if the\nresulting states are compatible with $s\\_0$. This set of compatible reward\nfunctions give the candidates for Alice’s reward function. This is the algorithm\nthat we implicitly use in the gif above.\n\n\nIntuitively, this works because:\n\n\n* Anything that requires effort on Alice’s part (e.g. keeping a vase intact)\nwill not happen for the vast majority of reward functions, and will force the\nreward functions to incentivize that behavior (e.g. by rewarding intact\nvases).\n* Anything that does not require effort on Alice’s part (e.g. a vase becoming\ndusty) will happen for most reward functions, and so the inferred reward\nfunctions need not incentivize that behavior (e.g. 
there’s no particular value\non dusty/clean vases).\n\n\nAnother way to think of it is that we can consider all possible past\ntrajectories that are compatible with $s\\_0$, infer the reward function that\nmakes those trajectories most likely, and keep those reward functions as\nplausible candidates, weighted by the number of past trajectories they explain.\nSuch an algorithm should work for similar reasons. Phrased this way, it sounds\nlike we want to use [inverse reinforcement learning](https://people.eecs.berkeley.edu/~russell/papers/colt98-uncertainty.pdf) to infer rewards for\nevery possible past trajectory, and aggregate the results. This is still\nintractable, but it turns out we can take this insight and turn it into a\ntractable algorithm.\n\n\nWe follow [Maximum Causal Entropy Inverse Reinforcement Learning](http://www.cs.cmu.edu/~bziebart/publications/maximum-causal-entropy.pdf) (MCEIRL), a\ncommonly used algorithm for small MDPs. In this framework, we know the action\nspace and dynamics of the MDP, as well as a set of good features of the state,\nand the reward is assumed to be linear in these features. In addition, the human\nis modelled as Boltzmann-rational: Alice’s probability of taking a particular\naction from a given state is assumed to be proportional to the exponent of the\nstate-action value function Q, computed using soft value iteration. Given these\nassumptions, we can calculate $p(\\tau \\mid \\theta\\_A)$, the distribution over the\npossible trajectories $\\tau = s\\_{-T} a\\_{-T} \\dots s\\_{-1} a\\_{-1} s\\_0$ under the\nassumption that Alice’s reward was $\\theta\\_A$. MCEIRL then finds the $\\theta\\_A$\nthat maximizes the probability of a set of trajectories \\(\\{\\tau\\_i\\}\\).\n\n\nRather than considering all possible trajectories and running MCEIRL on all of\nthem to maximize each of their probabilities individually, we instead maximize\nthe probability of the evidence that we see: the single state $s\\_0$. To get a\ndistribution over $s\\_0$, we marginalize out the human’s behavior prior to the\nrobot’s initialization:\n\n\n\n\\[P(s\\_0 \\mid \\theta\\_A) = \\sum\\limits\\_{s\\_{-T}a\\_{-T} \\dots s\\_{-1}a\\_{-1}} P(s\\_{-T} a\\_{-T} \\dots s\\_{-1} a\\_{-1} s\\_0 \\mid \\theta\\_A)\\]\n\nWe then find a reward $\\theta\\_A$ that maximizes the likelihood above using\ngradient ascent, where the gradient is analytically computed using dynamic\nprogramming. We call this algorithm *Reward Learning by Simulating the Past\n(RLSP)* since it infers the unknown human reward from a single state by\nconsidering what must have happened in the past.\n\n\nUsing the inferred reward\n=========================\n\n\nWhile RLSP infers a reward that captures the information about human preferences\ncontained in the initial state, it is not clear how we should *use* that reward.\nThis is a challenging problem – we have two sources of information, the\ninferred reward from $s\\_0$, and the specified reward $\\theta\\_{\\text{spec}}$, and\nthey will conflict. If Alice has a messy room, $\\theta\\_A$ is not going to\nincentivize cleanliness, even though $\\theta\\_{\\text{spec}}$ might.\n\n\nIdeally, we would note the scenarios under which the two rewards conflict, and\nask Alice how she would like to proceed. 
However, in this work, to demonstrate\nthe algorithm we use the simple heuristic of adding the two rewards, giving us a\nfinal reward $\\theta\\_A + \\lambda \\theta\\_{\\text{spec}}$, where $\\lambda$ is a\nhyperparameter that controls the tradeoff between the rewards.\n\n\nWe designed a suite of simple gridworlds to showcase the properties of RLSP. The\ntop row shows the behavior when optimizing the (incorrect) specified reward,\nwhile the bottom row shows the behavior you get when you take into account the\nreward inferred by RLSP. A more thorough description of each environment is\ngiven in the paper. The last environment in particular shows a limitation of our\nmethod. In a room where the vase is far away from Alice’s most probable\ntrajectories, the only trajectories that Alice could have taken to break the\nvase are all very long and contribute little to the RLSP likelihood. As a\nresult, observing the intact vase doesn’t tell the robot much about whether\nAlice wanted to actively avoid breaking the vase, since she wouldn’t have been\nlikely to break it in any case.\n\n\n\n![](http://bair.berkeley.edu/static/blog/preferences/all-environments.gif)\n \n\n\n\n\nWhat’s next?\n============\n\n\nNow that we have a basic algorithm that can learn the human preferences from one\nstate, the natural next step is to scale it to realistic environments where the\nstates cannot be enumerated, the dynamics are not known, and the reward function\nis not linear. This could be done by adapting existing inverse RL algorithms,\nsimilarly to how we adapted Maximum Causal Entropy IRL to the one-state setting.\n\n\nThe unknown dynamics setting, where we don’t know “how the world works”, is\nparticularly challenging. Our algorithm relies heavily on the assumption that\nour robot knows how the world works – this is what gives it the ability to\nsimulate what Alice “must have done” in the past. We certainly can’t learn how\nthe world works just by observing a single state of the world, so we would have\nto learn a dynamics model while acting that can then be used to simulate the\npast (and these simulations will get better as the model gets better).\n\n\nAnother avenue for future work is to investigate the ways to decompose the\ninferred reward into $\\theta\\_{A, \\text{task}}$ which says which task Alice is\nperforming (“go to the black door”), and $\\theta\\_{\\text{frame}}$, which captures\nwhat Alice prefers to keep unchanged (“don’t break the vase”). Given the\nseparate $\\theta\\_{\\text{frame}}$, the robot could optimize\n$\\theta\\_{\\text{spec}}+\\theta\\_{\\text{frame}}$ and ignore the parts of the reward\nfunction that correspond to the task Alice is trying to perform.\n\n\nSince $\\theta\\_{\\text{frame}}$ is in large part shared across many humans, we\ncould infer it using models where multiple humans are optimizing their own\nunique $\\theta\\_{H,\\text{task}}$ but the same $\\theta\\_{\\text{frame}}$, or we\ncould have one human whose task change over time. Another direction would be to\nassume a different structure for what Alice prefers to keep unchanged, such as\nconstraints, and learn them separately.\n\n\nYou can learn more about this research by reading [our paper](https://openreview.net/forum?id=rkevMnRqYQ¬eId=r1eINIUbe4), or by checking\nout our poster at ICLR 2019. 
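To summarize the recipe above in code, here is a minimal tabular sketch of RLSP. The helper names are hypothetical, and a finite-difference gradient stands in for the analytic dynamic-programming gradient used in the paper.

```python
import numpy as np
from scipy.special import logsumexp

def boltzmann_policy(T, r, gamma=0.95, iters=100):
    """Soft value iteration -> Boltzmann-rational policy pi(a|s), as assumed for Alice."""
    S, A, _ = T.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = r[:, None] + gamma * T @ V        # (S, A) soft action values
        V = logsumexp(Q, axis=1)              # soft max over actions
    return np.exp(Q - V[:, None])

def log_p_s0(theta, features, T, p_init, s0, horizon):
    """log P(s0 | theta): marginalize over Alice's behavior for `horizon` past steps."""
    pi = boltzmann_policy(T, features @ theta)          # reward is linear in state features
    d = p_init.copy()                                   # distribution over Alice's start state
    for _ in range(horizon):
        d = np.einsum('s,sa,sap->p', d, pi, T)          # propagate the state distribution
    return np.log(d[s0] + 1e-12)

def rlsp(features, T, p_init, s0, horizon, lr=0.1, steps=200, eps=1e-4):
    """Reward Learning by Simulating the Past: gradient ascent on log P(s0 | theta)."""
    theta = np.zeros(features.shape[1])
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for i in range(theta.size):                     # finite differences here; the paper
            e = np.zeros_like(theta); e[i] = eps        # computes this gradient analytically
            grad[i] = (log_p_s0(theta + e, features, T, p_init, s0, horizon)
                       - log_p_s0(theta - e, features, T, p_init, s0, horizon)) / (2 * eps)
        theta += lr * grad
    return theta          # the robot then plans with theta + lambda * theta_spec
```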
The code is available [here](https://github.com/HumanCompatibleAI/rlsp).", "url": "http://bair.berkeley.edu/blog/2019/02/11/learning_preferences/", "title": "Learning Preferences by Looking at the World", "source": "html_articles", "source_type": "webpage", "source_filetype": "pdf", "date_published": "2019-02-10T23:00:00Z", "authors": ["Daniel Seita"], "summary": [], "id": "f98ad65a197e2e50c0a57e5a4351ec5a"} {"text": "AI agents have learned to play Dota, StarCraft, and Go, by training to beat an\nautomated system that increases in difficulty as the agent gains skill at the\ngame: in vanilla self-play, the AI agent plays games against itself, while in\npopulation-based training, each agent must play against a population of other\nagents, and the entire population learns to play the game.\n\n\nThis technique has a lot going for it. There is a natural curriculum in\ndifficulty: as the agent improves, the task it faces gets harder, which leads\nto efficient learning. It doesn’t require any manual design of opponents, or\nhandcrafted features of the environment. And most notably, in all of the games\nabove, the resulting agents have beaten human champions.\n\n\nThe technique has also been used in collaborative settings: OpenAI had one\npublic match where each team was composed of three OpenAI Five agents alongside\ntwo human experts, and the For The Win (FTW) agents trained to play Quake were\npaired with both humans and other agents during evaluation. In the [Quake\ncase](https://deepmind.com/blog/article/capture-the-flag-science), humans rated the FTW agents as more collaborative than fellow humans\nin a participant survey.\n\n\n\nHowever, when we dig into the weeds, we can see that this is not a panacea. In\nthe 2.5 minute discussion after the [OpenAI Five cooperative game](https://openai.com/blog/how-to-train-your-openai-five/) (see\n4:33:05 onwards in the video), we can see that some issues did arise[1](#fn:quotes):\n\n\n\n> \n> Sheever: Actually it was nice; my Viper gave his life for me at some point.\n> He tried to help me, thinking ***“I’m sure she knows what she’s doing”.\n> Obviously I didn’t***, but you know, he believed in me. I don’t get that a\n> lot with [human] teammates. \n> \n> \n> Christy: They are perfectly selfless. \n> \n> \n> Sheever: Yeah, they are. \n> \n> \n> Michael: They also expect you to be. \n> \n> \n> Sheever: Yeah. (laughing) Didn’t work out that way.\n> \n> \n> \n\n\n\n\n> \n> Blitz: It was interesting because I could tell that we were doing something\n> wrong, because they weren’t coming with us. I was like, “this is clearly an\n> ‘us’ issue”, and I didn’t really know how to fix that. Regardless of what lane\n> I went to, it just felt like I was making the wrong play, and it felt kind of\n> bad in that regard. But it was cool because I knew that when I did make a move\n> and they decided to go with me, that they deemed that was the correct thing to\n> do. ***It felt like I was trying to solve a puzzle while playing the game***.\n> \n> \n> \n\n\nObservers could also [tell](https://twitter.com/mtrc/status/1117179732074868736) that the AIs were not collaborating well with\ntheir human teammates. The agents were simply behaving as though they had AI\nteammates, rather than Sheever and Blitz. The agents’ models of their teammates\nwere *incorrect*[2](#fn:model). 
While this means they will sacrifice themselves when\nit is in the team’s interest, it also means that they’ll leave without any\nnotice assuming that Sheever and Blitz will coordinate perfectly, as the AIs\nwould.\n\n\nSo is self-play actually a good algorithm to use to create *collaborative*\nagents? We decided to put it to the test.\n\n\nOvercooked\n==========\n\n\nTo investigate this further, we wanted a simple collaborative environment that\nnonetheless has a wide variety of potential strategies, so that the optimal\nstrategy is not obvious. This led us to consider the game [Overcooked](http://www.ghosttowngames.com/overcooked/), in\nwhich players collaborate to cook up recipes quickly and serve them to hungry\ncustomers. The game is particularly hard to coordinate in, primarily because of\nthe significant time pressure (which is not an issue for AI agents). Here’s an\nexample of good human play (starting at 15 seconds):\n\n\n\n\n\n \n\n\n\nWe created a simplified version of Overcooked, that allows us to focus on\nparticular coordination challenges that underlie joint planning for teams. In\nour version, players must create and deliver soups. They must get onions from\nthe onion supply, place three of them in a pot, wait for the soup to cook, put\nthe soup in a plate, and then deliver the plate to a serving location. Players\nneed to employ both a good strategy (e.g. “you get the onions, I’ll grab the\ndish”) as well as low level motion coordination (e.g. “let’s go clockwise so we\ndon’t crash into each other”). Despite its apparent simplicity, it is quite\nchallenging to act well in the environment: we developed a near-optimal\nhierarchical A\\* planner, but the planning problem is difficult enough that our\nplanner can only solve two of our five layouts in a reasonable amount of time.\n\n\n\n![](https://bair.berkeley.edu/static/blog/coordination/1 Game Dynamics.png)\n \n\n\n\n\nLet’s suppose you and your friend Alice are playing on the layout above, and\nyou are trying to beat Bob and Charlie (who are playing on the same layout).\nYou’ve got a good strategy: at the start, Alice puts onions onto the counter in\nthe middle, while you go to the top to transfer the onions into the pot. As you\nglance over at Bob and Charlie, you notice that they haven’t figured out this\nstrategy: they pick up each onion separately, and make a long trudge around the\nlayout to put the onion in the pot. Well, all the better for you; it looks like\nyou’re going to beat them even more soundly than you thought:\n\n\n\n\n![](https://bair.berkeley.edu/static/blog/coordination/2 Alice _ You successful coord.gif)\n![](https://bair.berkeley.edu/static/blog/coordination/3 – Bob Charlie Long Way.gif)\n \n\n*Left: Alice (green) and you (blue) passing onions. Right: Bob (green) and\nCharlie (blue) taking the long way.*\n\n\n\nBut what if *Alice* doesn’t know about your strategy? In that case you head up\ntowards the pots, but to your chagrin Alice isn’t passing you onions – she’s\npicked up a single onion and is making the long trudge over to place it in the\npot. You stand in front of the pot, staring at her pointedly, hoping she’ll\npass you some onions, but she continues to carry onions alone. You sigh, and\nhead back to get an onion yourself. 
Meanwhile, Bob and Charlie didn’t waste any\ntime, and so they win.\n\n\n\n![](https://bair.berkeley.edu/static/blog/coordination/4 Alice _ You unsuccessful coord.gif)\n![](https://bair.berkeley.edu/static/blog/coordination/5 – Bob Charlie Long Way.gif)\n \n\n*Left: Alice (green) and you (blue) fail to coordinate. Right: Bob (green) and\nCharlie (blue) taking the long way.*\n\n\n\nInterestingly, even though you knew a good strategy that the others did not,\nBob and Charlie still managed to beat you and Alice. This is the key\ndifference. In *competitive* settings (like between your team and Bob’s), if\nyour opponent is suboptimal and you don’t know it, you’ll simply beat them even\nmore soundly. In contrast, in *collaborative* settings, if your partner is\nsuboptimal and you don’t know it, team performance can be arbitrarily poor:\neven worse than if you were exactly like your partner, with all their\nsuboptimalities.\n\n\nAs we saw above, self-play makes poor assumptions about its human partners (or\nopponents, for that matter). Failing to accurately model your opponents doesn’t\nmatter much, since it is a competitive setting, but failing to accurately model\nyour partners in collaborative settings can be arbitrarily bad.\n\n\nUnderstanding the differences\n=============================\n\n\nIn the language of [game theory](https://en.wikipedia.org/wiki/Game_theory), competition corresponds to a zero-sum game\n(my gain is your loss and vice versa), while collaboration corresponds to a\ncommon payoff game (my gain is your gain and vice versa).[3](#fn:gt)\n\n\n**Two player zero sum games**. Self-play algorithms train the agent by having\nthe agent play games with itself, and updating so that it will be more likely\nto win such games in the future. So, we would expect training to converge to an\nequilibrium where the agent cannot improve its strategy when playing either\nside of the game. For two player zero sum games, every such equilibrium\ncorresponds to a [min-max policy](https://en.wikipedia.org/wiki/Minimax#In_zero-sum_games). That is, the agent tries to *maximize*\nthe value it is going to get, assuming that its opponent is trying to\n*minimize* the value the agent gets (which corresponds to maximizing their own\nvalue, since the game is zero-sum).\n\n\nAn interesting fact about minimax policies is that an agent playing a minimax\npolicy is guaranteed to get *at least as much value* as if it were playing\nitself. This is because of the dynamic we saw above: in competitive games, if\nyour opponent is suboptimal, you’ll beat them even more soundly. Indeed, it\nseems almost obvious: if your opponent isn’t optimal, then they must be taking\nan action that isn’t maximizing their value, which means it isn’t minimizing\nyour value, which means you’re going to do better than you expected.\n\n\n![](https://bair.berkeley.edu/static/blog/coordination/6 Competitive Game Tree.png)\nWe can see this dynamic in the very simple game tree on the right. When\nchoosing an action, the agent reasons that if it takes the left path, the human\ncould go left, in which case it gets 1 reward, whereas if it takes the right\npath, the human could go left, in which case it gets 3 reward. So, it goes\nright. However, if the human then makes the suboptimal choice to go right, the\nrobot gets 7 reward instead: more than the 3 it expected.[4](#fn:tree)\n\n\n**Common payoff games**. Now let’s consider common payoff games, where both the\nagent and the human get exactly the same reward. 
The self-play agent is still\ngoing to end up in an equilibrium where it can’t improve its strategy when\nplaying either side of the game. The agent is going to reach a max-max policy,\nwhere the agent tries to *maximize* its own value, assuming that its partner is\nalso trying to maximize the same value. Unlike min-max policies, max-max\npolicies do not provide a lower bound on reward obtained when the partner\n*doesn’t* maximize value, and in fact performance can become arbitrarily bad.\n\n\n![](https://bair.berkeley.edu/static/blog/coordination/7 Collaborative Game Tree.png)\nConsider the game tree on the right. Since the agent models the human as a\nmaximizer, it assumes that they can coordinate to reach the situation with 8\nreward, and so goes left. However, if our suboptimal human ends up going left,\nthen the agent only gets 1 reward: the worst possible outcome!\n\n\n**Caveat**. This argument applies to algorithms that reach equilibria. In\npractice, due to the difficulty in training neural networks, our agents do not.\nFor example, neural nets are often very vulnerable to distribution shift. Since\nhumans likely play differently from the agent has seen during self-play\ntraining, the agents could have had no idea what to do, which might cause them\nto behave randomly. (This argument applies to both competitive and\ncollaborative settings.)\n\n\nIn what follows, we train an agent not with an optimal partner through\nself-play, but with a model of a (suboptimal) human partner that we obtain from\nhuman gameplay. We’ll call such agents “human-aware”.\n\n\nHypotheses\n==========\n\n\nWith all of this conceptual groundwork, we can make some testable hypotheses\nfor the Overcooked environment in particular. Firstly, since playing with\nhumans induces a distribution shift, and since it is a collaborative game,\nwhere self-play doesn’t provide an opponent-independent guarantee:\n\n\n**H1. A self-play agent will perform much more poorly when partnered with a\nhuman (relative to being partnered with itself).**\n\n\nSince a human-aware agent will have a better model of their partner than a\nself-play agent:\n\n\n**H2. When partnered with a human, a human-aware agent will achieve higher\nperformance than a self-play agent, though not as high as a self-play agent\npartnered with itself.**\n\n\nOf course, a human-aware agent will require access to a dataset of human\ngameplay. Couldn’t we use the dataset to train an agent using imitation\nlearning? Unfortunately, this would copy over the human’s suboptimalities: what\nwe actually want is an agent that knows how the human is suboptimal and deals\nwith it appropriately.\n\n\n**H3. When partnered with a human, a human-aware agent will achieve higher\nperformance than an agent trained via imitation learning.**\n\n\n![](https://bair.berkeley.edu/static/blog/coordination/8 Training Diagram.png)\nTo test these hypotheses, we need an implementation of a human-aware agent. In\nthis work, we take the most basic approach: given a dataset of human-human\ngameplay, we train a *human model* using behavior cloning, and then train an\nagent that plays well with this (fixed) human model using deep RL\n(specifically, PPO). 
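Concretely, this basic approach looks roughly like the sketch below. The two-player environment and human-model interfaces are hypothetical stand-ins (this is not the released human_aware_rl code); the point is simply that a frozen behavior-cloned partner is embedded in the environment, which is then treated as a standard single-agent RL problem.

```python
import numpy as np
import gym

class PartnerWrappedEnv(gym.Env):
    """Wraps a two-player collaborative environment so that a frozen partner policy
    (here, a behavior-cloned human model) controls player 2, leaving a single-agent
    environment for player 1 that PPO can be trained on."""

    def __init__(self, two_player_env, partner_policy):
        self.env = two_player_env            # assumed Overcooked-like two-player env
        self.partner = partner_policy        # frozen BC model: observation -> action probs
        self.observation_space = two_player_env.observation_space
        self.action_space = two_player_env.action_space

    def reset(self):
        self.obs_ego, self.obs_partner = self.env.reset()
        return self.obs_ego

    def step(self, ego_action):
        # Sample the partner's action from the (stochastic) behavior-cloned human model.
        probs = self.partner.predict_proba(self.obs_partner)
        partner_action = np.random.choice(len(probs), p=probs)
        (self.obs_ego, self.obs_partner), reward, done, info = self.env.step(
            (ego_action, partner_action))
        return self.obs_ego, reward, done, info   # shared, common-payoff reward

# Step 1: behavior-clone a human model from human-human trajectories (supervised learning).
# Step 2: freeze it and train PPO in the wrapped environment, e.g. with stable-baselines3:
#   PPO("MlpPolicy", PartnerWrappedEnv(two_player_env, bc_model)).learn(total_timesteps=5_000_000)
```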
There are many ways to improve on this basic approach, as\nwe discuss in the Future Work section, but we expect that even this will be\nenough to outperform self-play in our Overcooked environment.\n\n\nExperiments\n===========\n\n\nTo test our hypotheses, we created five different Overcooked layouts, shown\nbelow.\n\n\n\n![](https://bair.berkeley.edu/static/blog/coordination/9 All Layouts.png)\n \n\n*From left to right: Cramped Room, Asymmetric Advantages, Coordination Ring,\nForced Coordination, Counter Circuit.*\n\n\n\nSince the agent can play either of the two players, this creates ten scenarios.\nWe first test in simulation: we train a human model using behavior cloning on a\ndataset of human-human gameplay. This model will stand in for our test-time\nhuman, and so is called H\\_{proxy}. We manipulate the agent that must play\nalongside H\\_{proxy}, where the options are an agent trained via self-play\n(SP), an agent trained to imitate (BC), and a human-aware agent trained to play\nwell alongside a human model (PPO\\_{BC}). Note that the human-human gameplay\nused to train BC is entirely separate from that used to train H\\_{proxy}.\n\n\nWe also report the performance of self-play with itself (SP + SP), which serves\nas a rough upper bound on the optimal team performance, as well as a\nhuman-aware agent that is given access to the test-time human model\n(PPO\\_{H\\_{proxy}} + H\\_{proxy}), which serves as a rough upper bound on\nthe optimal performance when the agent must play with the test-time human.\n\n\nThe results are shown below. We see that all three hypotheses are supported. It\nis interesting to note that even vanilla behavioral cloning often outperforms\nself-play agents when paired with H\\_{proxy}.\n\n\n\n![](https://bair.berkeley.edu/static/blog/coordination/10 SP Performances.png)\n \n\n\n\n\nQualitative results\n-------------------\n\n\nHow exactly is the human-aware agent getting better results? One reason is that\nit is more robust to different plans the human could have. In Coordination\nRing, PBT and SP agents often insist upon moving in a particular direction.\nWhen the human wants to go the other way, they collide and get stuck. In\ncontrast, the human-aware agent simply chooses whichever path the human isn’t\ntaking.\n\n\n\n\n![](https://bair.berkeley.edu/static/blog/coordination/11 SP failure.gif)\n![](https://bair.berkeley.edu/static/blog/coordination/12 PPO_BC success.gif)\n![](https://bair.berkeley.edu/static/blog/coordination/13 PPO_BC success other way.gif)\n \n\n*Self-play agent “stubbornly” colliding with the human (left), Human-aware agent\ntaking the appropriate route depending on the human’s direction (middle and\nright).*\n\n\n\nConsider the gif with the self-play agent above. In the initial state, the\nhuman is holding an onion and is facing up. What does the SP agent think the\nhuman will do? Well, the SP agent “expects” the human to be like itself, and it\nwould have a 0-30% chance of up and 57-99.9% chance of down. (The ranges are\nreporting the minimum and maximum across 5 seeds.) Thus, expecting the human to\nmove out of the way, SP decides to take the counterclockwise route – leading SP\nto crash into the human.\n\n\nMeanwhile, if we exclude the noop action, the BC model we used in training\nassigns 99.8% chance of up and <0.01% chance of down, since the human is facing\nup. Since the human is moving clockwise, it too moves clockwise to avoid\ncolliding with the human. 
Conversely, when the human is oriented in the\ncounterclockwise direction, the human-aware agent goes counterclockwise to\ndeliver the soup (even though that route is longer). It adaptively chooses the\nroute depending on the position and direction of the human.\n\n\nCould the agent just be fragile?\n--------------------------------\n\n\nThere is one other salient explanation for our quantitative and qualitative\nresults: perhaps the self-play agent is being forced off-distribution when it\nplays with H\\_{proxy}, and the problem is not just that it doesn’t know\nabout its partner: it just doesn’t know how to play *at all* (even with itself)\nin these new states it hasn’t encountered before. Meanwhile, playing with BC\ncauses the human-aware agent to be trained on such states. This is at least\npart of the explanation for our results.\n\n\n\nThis fragility to distributional shift argument would suggest that\npopulation-based training (PBT) would perform much better, since it involves a\npopulation of agents and so the winning agent needs to be robust to the entire\npopulation, rather than just itself. However, when repeating the experiment\nwith agents trained via PBT, we see broadly similar results.\n\n\nAnother way to test this is to implement an agent that does not suffer from\ndistributional shift, but still suffers from incorrect expectations about its\npartner. We do this by implementing a *planning agent*, that uses a\nhierarchical A\\* search to select the best plan for the team to take, and then\nexecutes its part of the best plan’s first joint action. For the human-aware\nversion, we perform a hierarchical A\\* search, where the partner is assumed to\nalways take the action predicted as most likely by BC. We again see broadly\nsimilar results, though only the version that gets access to the test-time\nhuman does well.\n\n\nUser study\n----------\n\n\nOf course, the true test is whether these results will hold with actual humans.\nBy and large, they do, but not as clearly or strongly. H1 is clearly supported:\nself-play agents perform worse with humans than with themselves. H2 is also\nsupported: PPO\\_{BC} is statistically significantly better than SP or PBT,\nthough the effect is much less pronounced than before. Since our method only\nbeats teams of humans in 5/10 configurations, the data is inconclusive about\nH3.\n\n\n\n![](https://bair.berkeley.edu/static/blog/coordination/14 Human Performances.png)\n \n\n\n\n\nWe speculate that there are two main reasons why the results are different with\nreal humans:\n\n\n1. The difference between real humans and BC is much larger than the\ndifference between H\\_{proxy} and BC (both of which are trained on\nhuman-human gameplay). As a result, PPO\\_{BC} doesn’t generalize to real\nhumans as well as it generalizes to H\\_{proxy}. This is particularly true on\nthe fourth and fifth layouts, where the BC-trained human model is quite bad.\n2. Humans are able to figure out the coordination mechanisms that SP and PBT\nuse, and adapt to use those mechanisms themselves. In contrast, the BC model is\nnot able to adapt in this way. 
This significantly increases the performance of\nSP and PBT.\n\n\nYou can see these effects for yourself, by [playing the demo](https://humancompatibleai.github.io/overcooked-demo/)!\n\n\nDiscussion\n==========\n\n\nSo far we’ve seen that self play algorithms form an incorrect “expectation”\nabout their partner, and incorporating even the naive human model produced by\nbehavior cloning beats self play when playing with humans. It even beats\nhuman-human teams sometimes!\n\n\nYou might hope that rather than understanding humans, which requires expensive\nhuman data, we could instead simply train our agents to be robust to a wide\nvariety of agents, which would automatically make them robust to humans.\nHowever, this is exactly what PBT is supposed to do, and we found that PBT\nended up having the same kinds of problems as SP. Nonetheless, it could be that\nwith a larger population or other tweaks to the algorithm, PBT could be\nimproved.\n\n\nYou might also think that our results are primarily explained by analyzing how\nmany states an algorithm has been trained on: SP and PBT fall into\nnear-deterministic patterns, while PPO\\_{BC} must cope with the\nstochasticity of BC, and so it is trained on a wider variety of states, which\nmakes it work better with humans. However, we saw approximately the same\npattern with the planning agent, which is robust on all states. In addition,\nthe entropy bonus in PPO keeps SP and PBT at least somewhat stochastic.\n\n\nOne way to view the problem we have outlined is that AI systems trained via\nself-play end up using coordination protocols that humans do not use. However,\nit is possible that this only happens because we are running the algorithms on\na single layout at the time, and so they learn a protocol that is specialized\nto that layout. In contrast, human coordination protocols are likely much more\ngeneral. This suggests that we could make AI protocols similar to human ones by\nforcing the AI protocols to be more general. In particular, if we train AI\nsystems via self-play to play on *arbitrary* maps, they will have to learn more\ngeneral coordination protocols, that may work well with human protocols. We\nwould like to investigate this possibility in the future.\n\n\nFuture Work\n===========\n\n\nTo demonstrate how important it is to model humans, we used the most naive\nhuman model we could and showed that even that leads to significant\nimprovements over self-play. Of course, for best performance, we’d like to use\nbetter human models. There are several areas for improvement:\n\n\n1. We could use more data to make the model more accurate, or use more\nsophisticated methods than behavior cloning to learn the human model\n2. While the human model is trained on human-human gameplay, it is used in the\ncontext of human-AI gameplay, which may be very different and cause the BC\nmodel to suffer from distributional shift. We could alternate between training\nPPO\\_{BC} and collecting new human-AI gameplay to improve the BC model.\n3. Alternatively, we could try to use models that are more robust to\ndistributional shift, such as models based on Theory of Mind, where the human\nis modeled as approximately optimizing some reward function.\n4. So far, we have made the obviously false assumption that all humans play\nexactly the same. Instead, we could learn a space of strategies that humans\ntend to use, and try to identify the test human’s strategy and adapt to it on\nthe fly.\n5. 
Another obviously false assumption we make is that the human is\n*stationary*, that is, the human’s policy doesn’t change over time. But of\ncourse, humans learn and adapt to their partners (and we see strong\nobservational evidence of this in the user study, where humans learn the\nprotocols that SP and PBT use). If we are able to model this learning, we\ncould build agents that actively *teach* humans better coordination protocols\nthat achieve higher reward.\n\n\nAlternatively, rather than attempting to completely fix the model’s\nexpectations about its partner, we could train it to be robust to a wide\nvariety of partners. This will limit the peak performance, since the agent\ncannot specialize to humans in particular, but it could still give a suitably\ngood result, and in particular it should beat imitation learning. We showed\nthat vanilla PBT was insufficient for this task, but we find it plausible that\nvariants of PBT could work.\n\n\nAnother aspect to investigate further is the extent to which these problems are\ncaused by a lack of robustness to *states* as opposed to *partners*. Currently,\nwhen a self-play agent is forced off distribution, it behaves in a clearly\nsuboptimal way (such that the agent wouldn’t coordinate well even with itself).\nIf we had agents that at least played coherently with respect to *some* partner\non all states, that could potentially fix most of the problem. (However, our\nplanning experiments show that some problems will remain.) With deep RL,\nperhaps this could be done by incentivizing exploration via intrinsic\nmotivation, or by generating a random initial state instead of a fixed one\nduring each episode.\n\n\nWe’re excited by the potential of Overcooked as a benchmark for human-AI\ncollaboration, and we hope to see more research that paves the way to AI\nsystems that are increasingly beneficial for humans.\n\n\n*This post is based on the paper “[On the Utility of Learning about Humans for\nHuman-AI Coordination](https://arxiv.org/abs/1910.05789)”, to be presented at NeurIPS 2019. You can play with\nour trained agents or watch them play each other [here](https://humancompatibleai.github.io/overcooked-demo/). We’ve taken\nparticular care to separately publish our [environment code](https://github.com/HumanCompatibleAI/overcooked_ai), [DRL code](https://github.com/HumanCompatibleAI/human_aware_rl),\n[visualization code](https://github.com/HumanCompatibleAI/overcooked-demo), and [user study code](https://github.com/HumanCompatibleAI/overcooked-hAI-exp), so that each can be reused\nand modified. We would particularly welcome pull requests to add more\nfunctionality to the environment.*\n\n\n\n\n---\n\n\n\n1. Quotes have been edited for clarity. [↩](#fnref:quotes)\n2. Although this point also applies to the competitive setting, the\nproblems it causes are not as significant, as we will see later in the\npost. [↩](#fnref:model)\n3. Other general-sum games typically have both competitive and\ncollaborative aspects. While we don’t study them in this work, our results\nsuggest that the more collaborative the game is, the worse self-play will\nperform. [↩](#fnref:gt)\n4. That said, the agent might have been able to do better if it knew how\nthe human would behave. Suppose it knew that if it went left, the human\nwould then have gone right. Then by going left, the agent would get 8\nreward; better than the 7 reward it ended up getting by going right. 
[↩](#fnref:tree)", "url": "http://bair.berkeley.edu/blog/2019/10/21/coordination/", "title": "Collaborating with Humans Requires Understanding Them", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2019-10-20T22:00:00Z", "authors": ["Rohin Shah", "Micah Carroll"], "summary": [], "id": "8740b5a5fad4e18eb3e0dde785d6cfc8"} {"text": "![](https://bair.berkeley.edu/static/blog/laikago/00_teaser.gif) \n\n*Quadruped robot learning locomotion skills by imitating a dog.*\n\n\n\nWhether it’s a dog chasing after a ball, or a monkey swinging through the\ntrees, animals can effortlessly perform an incredibly rich repertoire of agile\nlocomotion skills. But designing controllers that enable legged robots to\nreplicate these agile behaviors can be a very challenging task. The superior\nagility seen in animals, as compared to robots, might lead one to wonder: can\nwe create more agile robotic controllers with less effort by directly imitating\nanimals?\n\n\nIn this work, we present a framework for learning robotic locomotion skills by\nimitating animals. Given a reference motion clip recorded from an animal (e.g.\na dog), our framework uses reinforcement learning to train a control policy\nthat enables a robot to imitate the motion in the real world. Then, by simply\nproviding the system with different reference motions, we are able to train a\nquadruped robot to perform a diverse set of agile behaviors, ranging from fast\nwalking gaits to dynamic hops and turns. The policies are trained primarily in\nsimulation, and then transferred to the real world using a latent space\nadaptation technique, which is able to efficiently adapt a policy using only a\nfew minutes of data from the real robot.\n\n\n\n\n\n\nFramework\n---------\n\n\nOur framework consists of three main components: motion retargeting, motion\nimitation, and domain adaptation. 1) First, given a reference motion, the\nmotion retargeting stage maps the motion from the original animal’s morphology\nto the robot’s morphology. 2) Next, the motion imitation stage uses the\nretargeted reference motion to train a policy for imitating the motion in\nsimulation. 3) Finally, the domain adaptation stage transfers the policy from\nsimulation to a real robot via a sample efficient domain adaptation process. We\napply this framework to learn a variety of agile locomotion skills for a\n[Laikago](http://www.unitree.cc/e/action/ShowInfo.php?classid=6&id=1)\nquadruped robot.\n\n\n\n![](https://bair.berkeley.edu/static/blog/laikago/01_overview.gif) \n\n*The framework consists of three stages: motion retargeting, motion imitation,\nand domain adaptation. It receives as input motion data recorded from an\nanimal, and outputs a control policy that enables a robot to reproduce the\nmotion in the real world.*\n\n\n\n### Motion Retargeting\n\n\nAn animal’s body is generally quite different from a robot’s body. So before\nthe robot can imitate the animal’s motion, we must first map the motion to the\nrobot’s body. The goal of the retargeting process is to construct a reference\nmotion for the robot that captures the important characteristics of the\nanimal’s motion. To do this, we first identify a set of source keypoints on the\nanimal’s body, such as the hips and the feet. 
Then, corresponding target\nkeypoints are specified on the robot’s body.\n\n\n\n![](https://bair.berkeley.edu/static/blog/laikago/02_keypoints_dog.png)\n![](https://bair.berkeley.edu/static/blog/laikago/02_keypoints_robot.png)\n \n\n*Inverse-kinematics (IK) is used to retarget mocap clips recorded from a real\ndog (left) to the robot (right). Corresponding pairs of keypoints (red) are\nspecified on the dog and robot’s bodies, and then IK is used to compute a pose\nfor the robot that tracks the keypoints.*\n\n\n\nNext, inverse-kinematics is used to construct a reference motion for the robot\nthat tracks the corresponding keypoints from the animal at every timestep.\n\n\n\n![](https://bair.berkeley.edu/static/blog/laikago/03_retarget_pace.gif)\n![](https://bair.berkeley.edu/static/blog/laikago/04_retarget_spin.gif)\n \n\n*Inverse-kinematics is used to retarget mocap clips recorded from a dog to the robot.*\n\n\n\n### Motion Imitation\n\n\nAfter retargeting the reference motion to the robot, the next step is to train\na control policy to imitate the retargeted motion. But reinforcement learning\nalgorithms can take a long time to learn an effective policy, and directly\ntraining on a real robot can be fairly dangerous (both for the robot and its\nhuman companions). So, we instead opt to perform most of the training in the\ncomforts of simulation, and then transfer the learned policy to the real world\nusing more sample efficient adaptation techniques. All simulations are\nperformed using [PyBullet](https://pybullet.org/).\n\n\nThe policy $\\pi(\\mathbf{a} | \\mathbf{s}, \\mathbf{g})$ takes as input a state\n$\\mathbf{s}$, which represents the configuration of the robot’s body, and a\ngoal $\\mathbf{g}$, which specifies target poses from the reference motion that\nthe robot is to imitate. It then outputs an action $\\mathbf{a}$, which\nspecifies target angles for PD controllers at each of the robot’s joints. To\ntrain the policy to imitate a reference motion, we use a reward function that\nencourages the robot to minimize the difference between the pose of the\nreference motion $\\hat{\\mathbf{q}}\\_t$ and the pose of the simulated character\n$\\mathbf{q}\\_t$ at every timestep $t$,\n\n\n\n\\[r\\_t = \\exp \\Big[ - \\| \\hat{\\mathbf{q}}\\_t - \\mathbf{q}\\_t \\|^2 \\Big]\\]\n\nBy simply using different reference motions in the reward function, we can\ntrain a simulated robot to imitate a variety of different skills.\n\n\n\n![](https://bair.berkeley.edu/static/blog/laikago/05_sim_pace.gif)\n![](https://bair.berkeley.edu/static/blog/laikago/06_sim_spin.gif)\n \n\n*Reinforcement learning is used to train a simulated robot to imitate the\nretargeted reference motions.*\n\n\n\n### Domain Adaptation\n\n\nSince simulators generally provide only a coarse approximation of the real\nworld, policies trained in simulation often perform fairly poorly when deployed\non a real robot. Therefore, to transfer a policy trained in simulation to the\nreal world, we use a sample-efficient domain adaptation technique that can\nadapt the policy to the real world using only a small number of trials on the\nreal robot. To do this, we first apply\n[domain randomization](https://xbpeng.github.io/projects/SimToReal/index.html) during training in simulation, which randomly varies the\ndynamics parameters, such as mass and friction. The dynamics parameters are\nthen also collected into a vector $\\mu$ and encoded into a latent representation\n$\\mathbf{z}$ by an encoder $E(\\mathbf{z} | \\mu)$.
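As a rough sketch of this randomization-and-encoding step (the parameter dimension, network sizes, and names below are illustrative stand-ins, not the implementation used in this work):

```python
# Minimal sketch: sample randomized dynamics parameters mu each episode and
# encode them into a latent z that is given to the policy along with (s, g).
# Dimensions, ranges, and architecture here are placeholders.
import numpy as np
import torch
import torch.nn as nn

class DynamicsEncoder(nn.Module):
    """E(z | mu): maps a vector of dynamics parameters to a low-dimensional latent."""
    def __init__(self, mu_dim=16, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(mu_dim, 64), nn.ReLU(), nn.Linear(64, 2 * z_dim))

    def forward(self, mu):
        mean, log_std = self.net(mu).chunk(2, dim=-1)
        return mean + torch.randn_like(mean) * log_std.exp()  # stochastic encoding of mu

def sample_dynamics(rng, mu_dim=16):
    """Randomize dynamics parameters (e.g. masses, friction, motor gains) for one episode."""
    return torch.as_tensor(rng.uniform(0.5, 1.5, size=mu_dim), dtype=torch.float32)

rng = np.random.default_rng(0)
encoder = DynamicsEncoder()
mu = sample_dynamics(rng)   # re-drawn at the start of every simulated episode
z = encoder(mu)             # additional policy input alongside the state s and goal g
```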
The latent encoding is\npassed as an additional input to the policy $\\pi(\\mathbf{a} | \\mathbf{s},\n\\mathbf{g}, \\mathbf{z})$.\n\n\n\n![](https://bair.berkeley.edu/static/blog/laikago/07_policy.png)\n \n\n*The dynamics parameters of the simulation are varied during training, and also\nencoded into a latent representation that is provided as an additional input\nto the policy.*\n\n\n\nWhen transferring the policy to a real robot, we remove the encoder and\ndirectly search for a $\\mathbf{z}$ that maximizes the robot’s rewards in the\nreal world. This is done using\n[advantage weighted regression](https://xbpeng.github.io/projects/AWR/index.html),\na simple off-policy reinforcement learning algorithm. In our experiments, this\ntechnique is often able to adapt a policy to the real world with less than 50\ntrials, which corresponds to roughly 8 minutes of real-world data.\n\n\n\n![](https://bair.berkeley.edu/static/blog/laikago/08_adaptation_pace.gif) \n\n![](https://bair.berkeley.edu/static/blog/laikago/09_adaptation_spin.gif)\n \n\n*Comparison of policies before and after adaptation on the real robot. Before\nadaptation, the robot is prone to falling. But after adaptation, the policies\nare able to more consistently execute the desired skills.*\n\n\n\nResults\n-------\n\n\nOur framework is able to train a robot to imitate various locomotion skills\nfrom a dog, including different walking gaits, such as pacing and trotting, as\nwell as a fast spinning motion. By simply playing the forwards walking motions\nbackwards, we are also able to train the robot to walk backwards.\n\n\n\n![](https://bair.berkeley.edu/static/blog/laikago/10_real_pace.gif)\n![](https://bair.berkeley.edu/static/blog/laikago/11_real_trot.gif) \n\n![](https://bair.berkeley.edu/static/blog/laikago/12_real_spin.gif)\n![](https://bair.berkeley.edu/static/blog/laikago/13_real_backward_trot.gif)\n \n\n*Laikago imitating various skills from a dog.*\n\n\n\nIn addition to imitating motions from real dogs, we can also imitate\nartist-animated keyframe motion, including a dynamic hop-turn:\n\n\n\n![](https://bair.berkeley.edu/static/blog/laikago/14_real_sidesteps.gif)\n![](https://bair.berkeley.edu/static/blog/laikago/14_real_turn.gif) \n\nHop Turn\n![](https://bair.berkeley.edu/static/blog/laikago/15_real_hopturn.gif)\n \n\n*Skills learned by imitating artist-animated keyframe motions.*\n\n\n\nWe also compared the learned policies with the manually-designed controllers\nprovided by the manufacturer. Our policies are able to learn faster gaits.\n\n\n\n![](https://bair.berkeley.edu/static/blog/laikago/16_comp_trot.gif) \n\n \n\n*Comparison of learned trotting gait with the built-in gait provided by the\nmanufacturer.*\n\n\n\nOverall, our system has been able to reproduce a fairly diverse corpus of\nbehaviors with a quadruped robot. However, due to hardware and algorithmic\nlimitations, we have not been able to imitate more dynamic motions such as\nrunning and jumping. The learned policies are also not as robust as the best\nmanually-designed controllers. Exploring techniques for further improving the\nagility and robustness of these learned policies could be a valuable step\ntowards more complex real-world applications. 
Extending this framework to learn\n[skills from videos](https://xbpeng.github.io/projects/SFV/index.html)\nwould also be an exciting direction, which can substantially increase the\nvolume of data from which robots can learn from.\n\n\nTo learn more,\n[check out the paper and code](https://xbpeng.github.io/projects/Robotic_Imitation/index.html).\n\n\nWe would like to thank Erwin Coumans, Tingnan Zhang, Tsang-Wei Lee, Jie Tan,\nSergey Levine, Byron David, Thinh Nguyen, Gus Kouretas, Krista Reymann, and\nBonny Ho for all their support and contribution to this work. This project was\ndone in collaboration with Google Brain.", "url": "http://bair.berkeley.edu/blog/2020/04/03/laikago/", "title": "Robots Learning to Move like Animals", "source": "html_articles", "source_type": "webpage", "source_filetype": "pdf", "date_published": "2020-04-02T22:00:00Z", "authors": ["Daniel Seita"], "summary": [], "id": "d58d3cc2af616108050e222e4e7a2227"} {"text": "Measuring Computation\n=====================\n\n\nThe computational performance of microprocessors can be quantified by measuring the number of floating-point arithmetic operations the processor can perform per second (FLOPS). This number is very useful for comparing different hardware being used for numerically intensive applications like scientific computing or mining [fake internet points](https://en.wikipedia.org/wiki/Cryptocurrency), but some have attempted to quantify the computation done by the human brain in these terms to reason about how difficult it would be to run a human-level intelligence on modern computing hardware.\n\n\nThis post will discuss a few of the issues associated with measuring the computational performance of the brain with FLOPS, and a follow-up post will consider specific estimates.\n\n\nDoes it make sense to think about the computational capacity of the brain in terms of FLOPS?\n============================================================================================\n\n\nThere is a line of thinking that goes something like:\n\n\n\n> \n> Neurons generate action potentials. Action potentials are stereotyped signals, so the computation that happens in the brain is essentially digital, so it makes sense to compare brains to digital computers, and synaptic operations are kind of like arithmetic operations.\n> \n> \n> \n\n\nThis may or may not be a good enough approximation, but it’s definitely a lossy approximation.\n\n\nBrains probably aren’t bottlenecked on arithmetic\n-------------------------------------------------\n\n\nA common objection to measuring the performance of the brain in FLOPS is that computation in the brain isn’t bottlenecked by arithmetic capacity, but rather by information flow, so the capacity of the brain should be measured in *traversed edges per second* (TEPS) rather than FLOPS. Synaptic connections between neurons tend to be sparse and axons tend to be long, which seems to suggest a lot of neural tissue is dedicated to pushing signals around rather than performing arithmetic on them[1](#fn-1).\n\n\nBrains are asynchronous\n-----------------------\n\n\nMicroprocessors are clocked circuits. When a computation unfolds on a microprocessor, it proceeds in discrete, well-delineated steps with one occurring each processor cycle. This method of computation is fundamentally synchronous.\n\n\nBrains don’t have a clock: neurons fire when they fire, which usually isn’t very often (one to ten times a second), but is sometimes much faster (up to around 1000 Hz)[2](#fn-2). 
And the phase of the neural spike trains also seems to be important[3](#fn-3), which further complicates the comparison.\n\n\nNon-spiking neurons\n-------------------\n\n\nMany neurons [don’t even spike](https://en.wikipedia.org/wiki/Non-spiking_neuron), having graded, non-stereotyped potentials. The best-studied are the photo-receptive neurons in the retina, but they occur throughout the brain and it’s unclear how to integrate them into the larger computational picture of the brain.\n\n\nConclusion\n==========\n\n\nThis post was not meant to be comprehensive, and is merely meant to highlight the strangeness and limitations of thinking of the limits of neural computation in terms of FLOPS.\n\n\n\n\n\n---\n\n\n1. Limitations in the ability of evolution to modify the basic vertebrate developmental plan can lead to bizarre inefficiencies, like the optic nerve needing to carry signals [from the retina to the back of the head](https://en.wikipedia.org/wiki/Lateral_geniculate_nucleus) before being processed in the visual cortex, or in the case of giraffes the laryngeal nerve needing to take a [>4 meter detour](https://en.wikipedia.org/wiki/Recurrent_laryngeal_nerve#Evidence_of_evolution). [↩](#fnref-1 \"Jump back to footnote 1 in the text\")\n2. See [sparse coding](https://en.wikipedia.org/wiki/Neural_coding#Sparse_coding). [↩](#fnref-2 \"Jump back to footnote 2 in the text\")\n3. See [phase coding](https://en.wikipedia.org/wiki/Neural_coding#Phase-of-firing_code). [↩](#fnref-3 \"Jump back to footnote 3 in the text\")", "url": "http://mediangroup.org/brain1.html", "title": "The Brain and Computation", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-12-31T23:00:00Z", "authors": ["Baeo Maltinsky"], "summary": [], "id": "223b8fb9ae45eba96dfd88c7886901da"} {"text": "Introduction\n============\n\n\nMany of the impressive results in deep learning in recent years have been achieved through massive investment in hardware needed for training, with projects like AlphaGo Zero using [$25 million](https://www.nature.com/news/self-taught-ai-is-best-yet-at-strategy-game-go-1.22858#/ref-link-2) worth of computer hardware. Given this, improvements in price-performance of hardware used for deep learning will play an important role in determining the scale of projects in the coming years.\n\n\nWhile machine learning [ASICs](https://en.wikipedia.org/wiki/Application-specific_integrated_circuit) like TPUs are likely the future, the recent deep learning boom was powered by GPUs[1](#fn-1). The architectures of TPUs and GPUs differ in important ways, but much of the design and fabrication process is similar and both are largely focused on efficient, parallelized arithmetic[2](#fn-2), so trends observed in GPUs can inform us about what to expect from TPUs.\n\n\nCommonly mentioned figures for the price-performance generalization of Moore’s Law suggest that price-performance doubles roughly every two years, but this figure warranted further investigation. \n\n\nData\n====\n\n\nWe constructed a [data set](/data/gpu.csv) containing the model name, launch date, single precision performance in GFLOPS, and release price in non-inflation adjusted US dollars for 223 Nvidia and AMD GPUs (scraped from Wikipedia)[3](#fn-3).
The data set covered almost two decades, so prices were adjusted to 2018 US dollars using the [Consumer Price Index](https://fred.stlouisfed.org/series/CPIAUCNS).\n\n\nAnalysis\n========\n\n\nFitting an exponential to the data-set yielded the curve (where *t* equals the number of years since the start of 2070):\n\n\n\n\\begin{equation}\n f(t) \\approx 2.26 e^{0.470 t}\n\\end{equation}\n![GPU Price-Performance](/images/gpu_full_fit.png)\n\n\nThis implies a doubling time of ~1.5 years. It should be noted that this is somewhat misleading because the price-performance curve isn’t a clean exponential. Inspecting a log-plot suggests that price-performance has been in a distinctly slower growth regime since around 2012.\n\n\n![Log-plot of GPU price-performance](/images/gpu_log.png)\n\n\nFitting to data from 2012 or later yields the curve:\n\n\n\n\\begin{equation}\n f(t) \\approx 15.2 e^{0.176 t},\n\\end{equation}\n\ncorresponding to a doubling time of ~3.9 years.\n\n\n![Log-plot of GPU price-performance](/images/gpu_log_fit.png)\n\n\nExternal Discussion\n===================\n\n\n* [Comments on LessWrong](https://www.lesswrong.com/posts/iGznDsxfB564Lobam/how-rapidly-are-gpus-improving-in-price-performance) about this article\n\n\n\n\n\n---\n\n\n1. GPUs are still more cost effective than TPUs, but have lower serial computation speed. [↩](#fnref-1 \"Jump back to footnote 1 in the text\")\n2. This is not nearly as true as with CPUs which have managed to extract performance improvements from [increasingly arcane changes to control circuitry](http://www.lighterra.com/papers/modernmicroprocessors/). [↩](#fnref-2 \"Jump back to footnote 2 in the text\")\n3. AI Impacts has a [similar data set](https://docs.google.com/spreadsheets/d/1yqX2cENwkOxC26wV_sBOvV0NxHzzfmL6tU7StzrFXRc/edit#gid=51141192). [↩](#fnref-3 \"Jump back to footnote 3 in the text\")
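For readers who want to reproduce the fits in the Analysis section above, a minimal sketch follows. The column names and the *t* = 0 reference year are assumptions (the reference year only shifts the intercept, not the growth rate or the implied doubling time); check the headers of the actual CSV before running it.

```python
# Minimal sketch of the exponential fit: regress log(GFLOPS per 2018 dollar)
# on launch year. Column names and the t = 0 reference year are assumptions.
import numpy as np
import pandas as pd

df = pd.read_csv("gpu.csv", parse_dates=["launch_date"])
t = (df["launch_date"] - pd.Timestamp("2000-01-01")).dt.days / 365.25
perf_per_dollar = df["gflops"] / df["price_2018_usd"]

# Fit log(y) = log(a) + b * t, i.e. y = a * exp(b * t)
b, log_a = np.polyfit(t, np.log(perf_per_dollar), 1)
print(f"f(t) ≈ {np.exp(log_a):.2f} * exp({b:.3f} t)")
print(f"doubling time ≈ {np.log(2) / b:.1f} years")

# Restricting to launches from 2012 onward gives the slower post-2012 fit.
recent = df["launch_date"] >= "2012-01-01"
b2, _ = np.polyfit(t[recent], np.log(perf_per_dollar[recent]), 1)
print(f"post-2012 doubling time ≈ {np.log(2) / b2:.1f} years")
```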
", "url": "http://mediangroup.org/gpu.html", "title": "How rapidly are GPUs improving in price performance?", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-12-31T23:00:00Z", "authors": ["Baeo Maltinsky"], "summary": [], "id": "e484f9ed989f86e89ff59eaf5c469664"} {"text": "For various “percentages of the way” done AI research could be, in terms of percentage of the necessary insights discovered, what is the probability that AI research is not yet that percentage done?\n\n\n*[Interactive chart: the user draws a cumulative distribution of P(no more than this much of the way done) against the proportion of required insights that have been discovered.]*\n\n\nPre-set priors\n--------------\n\n\nInstead of drawing a cumulative distribution function, you can instead use a pre-set prior. These priors are based on the [Pareto distribution](https://en.wikipedia.org/wiki/Pareto_distribution). To make choice of the parameter more intuitive, we parameterize the distribution in terms of a probability *q*, equal to the probability that a doubling in number of insights (starting from the minimum number of insights) would result in a sufficient set of insights. \n\n\nThe pre-set options are a Pareto distribution with *q* set directly, a Pareto distribution with *q* ~ Uniform(0,1), and a Pareto distribution with *q* ~ Beta(*α*, *β*), together with a field for the minimum plausible number of insights required.\n\n\nResulting timeline\n------------------\n\n\nAssuming a linear increase in number of required insights over time, the following cumulative distribution function for time when all required insights are discovered is implied by these beliefs.\n\n\nDerivation\n===========\n\n\nHow was this data generated? Jessica Taylor, Jack Gallagher, and Baeo Maltinsky spent a few hours generating a list of AI insights that seemed around the same order of significance or more significant than the insight of LSTM (specifically, the insight of inventing LSTM given that RNNs were already invented). The following is a plot of number of AI insights in our list over time since 1850.\n\n\n\nThe model assumes that insights increase linearly over time. 
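As a sketch of how such a timeline can be computed, samples from the prior over the required number of insights are combined with a linear extrapolation of the discovery rate; the counts, rate, and Pareto shape below are illustrative placeholders rather than the values used by the demo.

```python
# Minimal sketch of the timeline computation with placeholder numbers.
import numpy as np

def timeline_cdf(required_samples, n_now, rate_per_year, horizons):
    """P(enough insights have been discovered) at each horizon (in years from now),
    assuming the number of discovered insights keeps growing linearly."""
    return np.array([(required_samples <= n_now + rate_per_year * h).mean()
                     for h in horizons])

rng = np.random.default_rng(0)
n_now = 100                                              # insights discovered so far (placeholder)
required = (rng.pareto(1.0, size=100_000) + 1) * n_now   # Pareto prior with x_min = n_now
print(timeline_cdf(required, n_now, rate_per_year=1.5, horizons=[10, 25, 50, 100]))
```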
The increase has been roughly linear since 1945, but this could change due to low hanging fruit, expanding research avenues, changes in the number and effectiveness of research institutions, and so on. The model does not distinguish between insights in our list (which we selected according to some subjective estimation of importance) and specifically *required* insights; however, if the percentage of insights that are actually required stays somewhat constant over time, this does not significantly affect the timeline.\n\n\nThe list of insights and their years can be found in [this document](../docs/AI_insights.pdf).", "url": "http://mediangroup.org/insights", "title": "Insight-based AI timelines model", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-12-31T23:00:00Z", "authors": ["Baeo Maltinsky"], "summary": [], "id": "96eaff113c61b2ee40013b5cda7106d5"} {"text": "Note: This demo is in beta, and you may experience issues such as strange numerical behavior at this time.\n \n\n\n\n\n Last year, we released our [insights-based model](http://mediangroup.org/insights) that generated a projected timeline using historical data and a prior distribution. We’ve revisited it to address its limitations and improve the data it draws from.\n \n\n\n\n The model relies on the assumption that progress in AI relies on accumulating *insights*, fundamental advances in our understanding, that allow for improvements in capacity without increase in resources expended. This choice makes an attempt to separate out the effects of true technological advancement from the effects of an increase in computing power devoted to a problem, both of which can increase the capacity of machine intelligence to solve complex problems. Computational power is an expensive, finite resource, and without a paradigm-shifting improvement in computing itself, precise allocation of that power alone will not be enough to continue advancing AI’s problem-solving capabilities.\n \n\n\n\n The interactive model below provides two methods of capturing a prior about how many more advances in understanding are required to achieve human-level machine intelligence. Based on that prior, and on the pace of insight discovery during a particular historical period, we compute a probability distribution over time of the likelihood humans will develop human-level AI. Results of this calculation are shown in the “Implied timeline” graph below.\n \n\n\n\n\nStep 1: Specify a prior for current progress\n--------------------------------------------\n\n\n### Option A: Draw a distribution\n\n\n\n For various “percentages of the way” done AI research could be, in terms of percentage of the necessary insights discovered, what is the probability that AI research is not yet that percentage done?\n \n\n\n\n The graph below allows you to draw a distribution of how likely it is we have achieved a particular portion of the insights required for human-level machine intelligence.\n \n\n\n\n\nReset\n\n### Option B: Pre-set priors from Pareto distribution\n\n\n\n Instead of drawing a cumulative distribution function, you can instead use a pre-set prior based on a [Pareto distribution](https://en.wikipedia.org/wiki/Pareto_distribution).\n \n\n\n\n To make the choice of Pareto distribution more intuitive, we parameterize the distribution in terms of a probability *q*, equal to the probability that a doubling in number of insights (starting from the minimum number of insights) would result in a sufficient set of insights. 
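One way to read this parameterization (assuming a standard Type I Pareto distribution over the required number of insights *N*, with the minimum plausible number of insights as its lower bound) is that *q* pins down the shape parameter α:

\begin{equation}
q = P(N \le 2\,x_{\min}) = 1 - \left(\frac{x_{\min}}{2\,x_{\min}}\right)^{\alpha} = 1 - 2^{-\alpha}
\qquad \Longrightarrow \qquad
\alpha = -\log_2(1 - q).
\end{equation}

Larger values of *q* therefore concentrate the prior near the minimum (“we are already close”), while *q* near zero spreads the required number of insights over many doublings.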
*q* can be set directly, or we can sample from a mixture of Pareto distirbutions, where the *q* parameters are sampled from a uniform distribution or a beta distribution.\n \n\n\n\n\n#### Number of samples to take when running the simulation\n\n\n\n\n\n\n#### Set *q* directly\n\n\n*q*\n\n \n\n\n Set *q*\n\n\n\n#### Sample *q* uniformly over (0,1)\n\n\n\n Sample *q*\n\n\n\n#### Sample *q* with Beta(α, β)\n\n\n*α*:\n\n \n\n*β*:\n\n \n\n\n Sample *q*\n\n\n\n\n\n Note: The simulator can be very slow for larger values of *q*, as most of the samples need to be thrown away.\n \n\n\n\n\n\nStep 2: Specify pace of progress\n--------------------------------\n\n\n\n Which period in history is most representative of the future pace of AI insight discovery?\n \n The graph below plots the aggregate of insights discovered over time and allows selection of a particular period of history in AI research. The curve fit to that period (linear, exponential, or sigmoidal) is used to project the future distribution of discoveries.\n \n\n\n\n\n\n\n\n\n \n\nRegression mode:\n\nLinear\nExponential\nSigmoidal\n\n\n\n\n\nResult: Implied timeline\n------------------------\n\n\n\n\n\n\n\n Sources\n--------\n\n\n\n The data used in this model is available as a [JSON file](http://mediangroup.org/docs/insights.json).\n The [source code](https://github.com/Median-Group/insights2) for the demo can be found on the Median Group [github](https://github.com/Median-Group/).", "url": "http://mediangroup.org/insights2.html", "title": "Revisiting the Insights model", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-12-31T23:00:00Z", "authors": ["Median Group"], "summary": [], "id": "54e53ec18c03be06a1556a11f248a838"} {"text": "We, Borg\n========\n\n\nSpeculations on Hive Minds as a Posthuman State\n-----------------------------------------------\n\n\n*by Anders Sandberg* \n\n> \n> The designers of our species set out to produce a being that might be capable of an\n> order of mentality higher than their own. The only possibility of doing so lay in planning\n> a great increase in brain organisation. But they knew that the brain of an individual\n> human being could not safely be allowed to exceed a certain weight. They therefore sought\n> to produce the new order of mentality in a system of distinct and specialised brains held\n> in \"telepathic\" unity by means of ethereal radiation. Material brains were to be\n> capable of becoming on some occasions mere nodes in a system of radiation which itself\n> should then constitute the physical basis of a single mind. \n> \n> Olaf Stapledon, Last and First Men \n> \n> \n> \n\n\nHive minds where the individual is subsumed into a collective consciousness has been a\nrecurring idea in science fiction since Olaf Stapledon's influential novels *Last and\nFirst Men* (1931) and *Star Maker* (1937), although the concept in some sense had\nbeen suggested by *Leviathan* of Thomas Hobbes (1651). They have often in western\nscience fiction been used as an allegory for communism or the anonymity of industrial\ncivilisation, and have usually been portrayed in a terrifying light (Nicholls 1982). The\nlatest such portrayal is the Borg in *Star Trek: the Next Generation*: a race of\nbionically augmented humanoids linked together into a collective mind, striving to\nassimilate every other intelligent species into the Collective. 
\n\n\n\nDue to the popularity of the show several new words for hive minds have been coined\n(Morrow 1996): \n\n\n**Borganism:** \n1) An organization of formerly autonomous beings who have merged their individual wills\n to create one, collectively conscious being; 2) The social and political theory that\n advocates the creation of borganisms. \n**Borganise:** \nTo form a borganism, to organise its structure. \n\nI will in the following call the beings making up the borganism **units** (calling\nthem individuals would be erroneous since they by definition lack individuality, and the\nborganism is clearly divisible, hence it cannot be called an individual either). The word\nborganism is especially suitable since I will look at hive minds from a cybernetic point\nof view (cybernetics -> cyborg -> borg). \n\n\nThis essay seeks to look into the psychology and sociology of borganisms, and to\ndiscuss borganisms as a possible posthuman state. \n\n\nBorganisms in Nature\n--------------------\n\n\nBorganisms might at first appear to be fanciful ideas, more grounded in science fiction\nand human desires/fears than in practical reality. But in nature there already exists\nseveral systems that suggests otherwise. The most common example used is the hives of\nsocial insects, where all individuals work for the common good with little regard for\nthemselves. Although it has been argued that hives lack collective minds (Nicholls 1982)\nit should be noted that all such species communicate with chemical signals, and at least\nin the case of ants chemical trails can be seen as collective cognitive maps distributed\nin the environment (Chiavlo & Millonas 1995). There may exist degrees of\nborganisation, and they are tied to how closely the units communicate. \n\n\nAnother natural system of interest is the structure of multicellular organisms. The\ntransition from single-celled life to multicellular life can be seen as borganisation. The\nchemical \"minds\" of cells are closely connected, and in some cases cells have\ngap-junctions connecting their cytoplasm or even merge to form a cyncyticum. In a\nmulticellular organism the cells are differentiated into different tissues with different\nfunctions, which sometimes include the planned death of cells (such as in the case of the\nformation of the protective outer layer of skin, the stratum corneum). Differentiation is\nmediated through chemical signals from other cells which affect the genetic expression of\nproteins and continued cell behaviour. This examples shows that a borganism can have a\ncomplex internal structure. All units do not need to be equal, and specialisation and\nhierarchical control is a possibility. \n\n\nThe third example of a borganism-like system in nature is the human brain. It consists\nof several parts able to act independently but closely tied together, so closely that\nnormally these divisions go unnoticed. In some cases the system is disturbed and the\npotential independence of the parts can become apparent. One example is split brain\npatients whose hemispheres have been disconnected; most of the time this does not cause\nany noticeable change, but under some circumstances the two sides come into conflict or\ninterfere with each other. Another example is the dissociative states that can occur\nduring hypnosis or traumatic situations where the mind is divided into two or more parts\nhaving different access to sensory information and motor control (Hilgard 1977, Putnam\n1989). 
The brain shows that the borganism might not even need to be aware of the units\nmaking it up, it can exist on a higher level, perhaps as a metasystem (Turchin &\nJoslyn 1993). \n\n\nCommunication and Structure in Borganisms\n-----------------------------------------\n\n\n\n> \n> Many other triumphs of eugenical experiment we observed up and down the worlds. The\n> general level of individual intelligence was, of course, raised far beyond the range of\n> Homo Sapiens. But also that superintelligence which can be attained only by a psychically\n> unified community was greatly developed on the highest practicable plane, that of the\n> conscious individuality of a whole world. This, of course, was impossible until the social\n> cohesion of individuals within the world-community has become as closeknit as the\n> integration of the elements of a nervous system. \n> \n> Olaf Stapledon, Star Maker \n> \n> \n> \n\n\nCommunication is central to borganisation. By definition the units making up a\nborganism will be in close mental contact; the bandwidth and structure of this contact\nwill determine much of the properties of the borganism. \n\n\nIt may be hard to tell when a group of individuals becomes a borganism; the psychology\nof a group can be significantly different from the psychology of the individuals, and even\namong humans individuality can be subsumed by group identity under some conditions.\nHowever, so far intra-group communication has been mainly verbal, kinetic and possibly\nchemical (pheromones). As the bandwidth increases new phenomena will likely appear and the\ngroup as an organism begins to take on its own life. \n\n\nThe communication between the units of a borganism can be characterised by its\nbandwidth and topology. \n\n\n### Bandwidth\n\n\nBandwidth denotes the amount of information exchanged between units and to which mental\ndepth it occurs; speech is a low bandwidth communication only reaching a superficial\nmental level while a direct mental link giving insight in the mental imagery of the other\npart would be a high bandwidth communication. The extreme case is total connection where\nthe bandwidth is so high that all units form a single neural network. What is uncertain at\npresent is how high bandwidth is needed to create a true borganism. This may be a matter\nof degree rather than a distinct transition between several individuals and one borganism.\n\n\n\nStarting from a low bandwidth we have a group of individuals communicating and acting\non mutual goals. As the bandwidth is increased they can not only communicate intentions\nbut their deeper causes; at higher bandwidths the mental chains leading to decisions\nbecome communicable and hence shareable. This may allow collaborative refinement of goals\nand plans in a much more efficient way than low-bandwidth discussion and the borders\nbetween individuals gradually fade away. Note that the units still can be specialised and\nhave different memories, values and personalities. \n\n\nGroup psychology has studied under what conditions groups become more (or less)\nproductive than individuals. In general it depends on the nature of the task and group. In\nproblem solving tasks groups frequently develops better solutions than individuals\n(Hellriegel et al 1989), since there are more opportunities for error- correction, idea\ngeneration, scenario testing and a higher likelihood that the skills and knowledge needed\nto solve a complex problem are available. This is especially true for tasks which can be\nsubdivided easily. 
\n\n\nGroups do not perform better than their most gifted individual on tasks which cannot be\nsubdivided if the task is simple and the solution immediately becomes obvious to everyone\nonce it is proposed (Baron & Byrne 1991). In many cases of human psychology social\nprocesses can interfere with this and decrease the performance; this might be possible to\ncircumvent in a borganism. For example, in human groups the gifted individual often\nvoluntarily stands back in order not to dominate the discussions; in a borganism there is\nless concern for the individual (both positive and negative), which suggests that this\ntendency will be weakened in favour of helping the group. The above observations of human\nproblem solving suggests that borganisms should divide problems into manageable chunks\nwhich are handled by small subnetworks (possibly temporary) which in turn communicate with\neach other, at least in the case of divisible problems. In less easily divided problems it\nappears likely that a high bandwidth connection between the participating units is\ndesirable, turning them into a more homogenous group. \n\n\nSo far I have assumed the group is interacting in a fairly homogenous manner, akin to a\nmeeting. It is also possible to differentiate between a planning part of the borganism and\nan executive part which implements the plans while remaining in contact with the planning\npart. This suggests two densely connected clusters of units linked by a somewhat lower\nbandwidth link. \n\n\nIt appears likely that for a borganism which encounters different kinds of problems in\ndaily life it is advantageous to modify its internal topology and bandwidth. There are of\ncourse technological and physical limitations to this, as well as a control problem: what\nsubsystem should organise the topology? \n\n\nOne possibility was suggested in *Star Trek: First Contact*: the \"Borg\nQueen\", a female unit explained her function as \"I bring order into chaos\".\nThis could be interpreted as her having an organising role unlike the other fairly\nidentical units; other replies suggested that she was instantiated on other, perhaps all,\nborg ships. A borganism may consist of two different kinds of units, one basic general\npurpose unit that makes up most of the population, implementing the collective will, and\none or a few organising units optimising the internal structure (possibly acting as\narbitrators in internal conflicts or a supervisory B-brain (see Minsky 1988). \n\n\nHowever, it is not certain there is a need for special units. If individual units can\ninfluence their topology and bandwidth, it is not unreasonable to think that a regulatory\nsystem could be implemented locally, for example by a market-based approach (Miller &\nDrexler 1988). It is important to realise that borganisms may consist of many different\nkinds of units both physically and mentally; while most descriptions have concentrated on\nhomogenous or stratified structures borganisms with wildly diverse units, possibly as\ndifferent as humans, AI systems and non-intelligent software agents, are a possibility. \n\n\n### Topology\n\n\nThe topology can be varied endlessly. A simple solution is total interconnectivity\nwhere every unit is connected to every other. Total interconnectivity is usually\ninefficient since the total bandwidth (and its overhead) grows as N^2 (where N is the\nnumber of units); in most cases there is little need for every unit to constantly\ncommunicate with every other and most of the bandwidth is wasted. 
If time or attention has\nto be taken from work to keep up to date with what other units are doing there will even\nbe an optimum size of the borganism where the total amount of work done is maximal, above\nit the overhead of communication removes any advantage in adding more units. \n\n\nOther interesting topologies are bus structures where units needing to communicate do\nso through a high bandwidth medium (for example broadcast signals or infrared links to a\ncomputer network), hierarchical topologies where supervisory or logistic units acts as\nintermediaries for the communication (this places high demands on their ability to manage\nhigh bandwidths; the top level can easily become a bottleneck) and hypercube topologies\nwhere the units form a multidimensional cube and each unit communicates with log2(N)\nothers; the maximal distance between any two units is log2(N) and the total bandwidth\ngrows as Nlog2(N). \n\n\nAs can be seen, knowledge from designing multiprocessor systems can be applied to\nborganisms. In both cases the problem is distributing information in a system consisting\nof many subunits, and finding problems and ways of solving them that work well in\nparallel. \n\n\nTo accommodate a changeable topology the network must be as flexible as possible. Most\nlikely a virtual network is the simplest solution: the mental topology is implemented as a\nlayer on top of another network, for example a fast packet-switched network where each\nunit is linked to the nearest node, or an internet of different networks. \n\n\nOne interesting architecture of a borganism is a hierarchy of meta- individuals.\nIndividuals form meta-individuals due to high bandwidth connections and well coordinated\nmental processes. These meta-individuals form higher level individuals, and so on until a\ntop level is reached. This suggests a hierarchical network topology where higher levels\nmainly exchange high-level information keeping the necessary bandwidth low by a high level\nof abstraction. A similar structure has been suggested by Marvin Minsky for the human\nmind, where \"agents\" (simple independent subsystems with their own goals)\ninteract to form more complex behaviours which can be grouped into higher level agents\n(Minsky 1988). \n\n\nThis scenario is similar to the hierarchy of minds in Stapledon's *Star Maker*:\nadvanced cultures form planetary borganisms where each individual is at the same time a\npart of the planetary mind and an independent individual. The planetary minds in turn form\ngalactic minds in the same way, which in turn participates in the universal ultimate mind.\n\n\n\nIt is worth noting this model doesn't imply that each unit lacks individuality;\nStapledon quite clearly suggests that they can remain individuals but at the same time\nparticipate in the borganism. One possibility is the ability to link into the borganism at\nwill, another is a permanent linkup which leaves some mental levels individual while\nothers collective. \n\n\nThe Psychology of Borganisms\n----------------------------\n\n\n\n> \n> So perfectly organised was the life of the minded swarm that all routine activities of\n> industry and agriculture had become, from the point of view of the swarm's mind,\n> unconscious, like the digestive processes of a human being. The little insectoid units\n> themselves carried on these consciously, though without understanding their significance;\n> but the mind of the swarm had lost the power of attending to them. 
Its concern was almost\n> wholly with such activities as called for unified conscious control, in fact with\n> practical and theoretical invention of all kinds and with physical and mental exploration.\n> \n> \n> Olaf Stapledon, Star Maker \n> \n> \n> \n\n\nHow does a borganism recruit units? There are three possible answers: the individual\nmust willingly give up some of its individuality in exchange for the positive effects of\nbeing part of the borganism (extended mental capacity, transhuman support etc), the\nindividual is involuntarily borganised, or the individual is created as a part of the\nborganism. \n\n\nBeing a part of a borganism may or may not be reversible depending of how much the\nindividual unit is integrated into the collective mind. If units are individuals which are\nlinked together into a relatively low-bandwidth mental network for enhanced communication\nand metasystem formation the process may be reversible (although the former units may have\na hard time understanding or remembering their thoughts as borganism). More intimate forms\nof communication may however necessitate a permanent link to the borganism since the unit\nis dependent on other units for many mental processes. Since it is likely a borganism will\nneed a significant amount of mental coordination to function well having units leave or\njoin often may be disadvantageous. \n\n\nUnwilling units may not be desirable, both for the above reasons and due to the risk of\nmemetic infections (see the section about borganic weaknesses). If units remain relatively\nunchanged when they are integrated into the collective, unwilling units are likely to be\nhighly disturbing and more trouble than the extra mental capacity is worth. However, if\nthe borganism doesn't care for the individual skills and memes of the units they can\nperhaps be \"mentally reformatted\", turned into standardised drones a la the Borg\nof Star Trek. \n\n\nA recruitment method which circumvents the problems of both the other methods is to\nbuild/grow new units to fit the borganism. This could range from having units grow up\nlinked to the borganism (which would likely make their minds much better adapted to a\nborganic existence) to the copying of units. If units grow up in the borganism it is very\nlikely they will adapt well to it, likely to a much larger extent than units introduced\nfrom the outside. \n\n\nWith advanced cloning techniques and a way to imprint suitable neural information it\ndoes not appear entirely unlikely that individuals could create more or less similar\nclones of themselves. Since these copies would be very similar, it is likely they will fit\ninto the borganism well if the original does. It is even easier if uploading is possible:\nthe borganism consists of infomorph entities which are interlinked much more strongly than\nwould be possible if the units were entirely physical; the physical presence of the\nborganism could be handled by telepresence. Copying might enable a single individual to\ndevelop into a borganism, where all units (at least originally) share his or her values,\ngoals and personality, making a good foundation to build a metaorganism on (assuming the\nbasic personality is compatible with borganisation; some people might not get along with\nthemselves). \n\n\n### Emotion\n\n\nOne obvious trait of the Borgs of *Star Trek* is their total emotionlessness; even\nin extreme situations they behave robotically. 
Most likely this was intended to dehumanise\nthem further, but there is a good reason to expect that borganisms may tend to *appear*\nemotionless. In humans mood is conveyed through intonations, body language and especially\nfacial expression. This transmission is important since without functional emotional\ncommunication many humans have a hard time functioning socially. But in a borganism\nemotions need not be expressed through body language and expression, since they can be\nexpressed much more clearly through the intranet communication. There is no point for an\nunit to smile if it is amused (or the borganism as a whole is amused) since any other unit\nwould be able to know exactly what mood it is in. So it is likely borganisms (unless they\ntry to avoid it) would appear emotionless to individual humans despite having a rich inner\nlife. \n\n\n### Self-Sacrifice\n\n\nIt sometimes occurs that parents sacrifice themselves for their children, or siblings\nfor each other. There are sound sociobiological reasons for this which serve to ensure\ngenetic survival, and throughout history individuals have sacrificed themselves to ensure\nthe survival of their memes in an analogous fashion (Dawkins 1976). A borganism is a\nmemetic organism, and it might be possible for units to sacrifice themselves for the\nborganism. This regularly occurs in *Star Trek* and real insect colonies. If all\nunits are roughly identical there is no great loss (except in resources) to sacrifice one\nfrom the perspective of the borganism *and* the unit, which ensures the survival of\nsimilar units and its shared memes. If the connections between units are powerful enough\nor the units are infomorphs it may even be possible to make mental backups, making self-\nsacrifice relatively cheap. More individual units of course have more to lose, and it is\nless likely the borganism can compel them to sacrifice themselves (still, this is largely\ndependent on the memes dominant in the borganism and units). \n\n\n### Interaction\n\n\nHow would a borganism interact with other borganisms and individuals? It is important\nto realise that as a metaorganism borganisms may not even perceive individuals as anything\nthan independent units, with roughly the same value (which may be high or low). To a\nborganism the other \"real\" inhabitants of the world may be other borganisms, the\nindependent units are simply not \"real\" beings. This seems to be the classic\nview of how borganisms would see the world, and fits in quite well with the villain\nstereotype. However, there is no particular reason for why borganisms would be unable to\nappreciate the individual existence of non-borganisms. \n\n\nBeing communication-based entities, borganisms may have an easier time communicating\nwith each other than individuals have. If one ignores technical problems of compatibility\nand protocol, it seems quite possible for borganisms to interlink in order to communicate.\nThis would correspond to an extremely high bandwidth channel, enabling fast transmission\nof very complex concepts. There is of course the matter of avoiding total merge and\nsecurity, but this could perhaps be dealt with by using some units as a\n\"firewall\". \n\n\nImplementation of Borganisation\n-------------------------------\n\n\n\n> \n> I want to be assimilated. I want to be borg. Machines will not destroy humans; humans\n> and machine will become one. 
Crist Clark \n> \n> \n> \n\n\nMany descriptions of borganisms have assumed telepathy, but as Olaf Stapledon pointed\nout in 1937, radio could do just as well. Implementing a high-bandwidth mobile information\nnetwork is a hot research topic today, linked to research into wearable computing, mobile\noffices and ubiquitous computing. \n\n\nHow much bandwidth is needed? We can estimate a lower bound from the bandwidth of\nspeech and body language, which appears to be on the order of 10-100 bits/s. An\nupper bound would be total interconnection at the same signal density as the human mind,\nor roughly 10^18 bits/s; quite an extreme range. However, the two human hemispheres\nnormally communicate closely through the corpus callosum with no discernible differences;\nthis connection has a theoretical bandwidth on the order of 10^10 bits/s, which could be\nseen as a likely bandwidth needed for a deep connection between different units, making\nthem truly parts of the same mind. \n\n\nIt seems likely that for any high-bandwidth borganism neural interfaces are necessary,\nsince there are no channels into the mind with enough spare bandwidth. Hence an artificial\nborganism interface is needed. Of course, it may turn out that smaller bandwidths do\naccommodate the formation of borganisms (as mentioned above, the *conscious*\nbandwidth appears to be quite small, on the order of 100 bits/s according to some\nresearchers). \n\n\nOf course, a simple solution would be to keep the minds of the units in a computational\nmatrix outside the bodies, which are controlled remotely. This would require a bandwidth\nsimilar to that of the spinal cord and cranial nerves, on the order of 10^10 bits/s per body. It\nmay even be possible to let the bodies largely run themselves using lower-level systems of\nthe brain and spinal cord. Since a significant amount of information is simply abstracted\naway before reaching the conscious level and higher brain functions, the necessary\nbandwidth would be even smaller, and hence easier to transmit. \n\n\nDesigning a mobile linkup to the borganism network is nontrivial due to the estimated\ndemands. Current mobile networks (radio, IR) reach around 100 Kbit/s-10 Mbit/s over short\nranges (<50 meters) (Weiser 1991, 1996), which suggests that we need roughly three orders of\nmagnitude more bandwidth to achieve the necessary 10^10 bits/s for high-bandwidth\nborganisation. This does not appear impossible in principle: visible-light lasers could\nenable this bandwidth over line-of-sight distances, and neural activity is normally quite\nsparse and likely possible to compress (roughly 5% of a set of neurons are active at any\ngiven time, which suggests that the signals can be compressed by one to two orders of\nmagnitude). Other aspects of the borganism network structure are addressed by current work\nin ubiquitous and mobile computing, such as flexible switching between transceivers,\nerror correction, energy demands and network protocols. In principle, a high-bandwidth\nneural interface seems doable using near-future technology. \n\n\nA likely structure would consist of a high-bandwidth non-mobile digital network\n(\"the backbone\") which acts as the central switching system for the units that are\npresent. They can either be in contact with it, enabling very high bandwidth communication,\nor mobile, in which case they communicate with it using radio, IR or visible laser signals\n(it is amusing to note that the Borg in *Star Trek* often have lasers playing over\ntheir surroundings).
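The bandwidth arithmetic behind this design can be made explicit. The sketch below is only illustrative; the 10^10 bit/s target, the roughly 10 Mbit/s short-range mobile links and the 5% activity figure are the order-of-magnitude estimates quoted above, not measurements, and the compression step assumes a naive sparse encoding:

```python
# Back-of-the-envelope check of the bandwidth figures quoted above.
# All numbers are the essay's own order-of-magnitude estimates.
import math

target_bps = 1e10        # "deep" inter-unit link, comparable to the corpus callosum
mobile_link_bps = 10e6   # upper end of the cited short-range mobile links (~10 Mbit/s)

gap = target_bps / mobile_link_bps
print(f"raw shortfall: about 10^{math.log10(gap):.0f}")  # ~10^3, i.e. three orders of magnitude

# If only ~5% of neurons are active at any moment, a naive sparse encoding
# (transmitting just the indices of the active units) gains roughly a factor 1/0.05.
sparsity = 0.05
compression = 1 / sparsity
print(f"naive sparse-coding gain: ~{compression:.0f}x")
print(f"gap remaining after compression: ~{gap / compression:.0f}x")
```

Even with this optimistic compression assumption a mobile radio link still falls one to two orders of magnitude short of the target, which is consistent with the suggestion above that line-of-sight optical links, or direct contact with a wired backbone, would have to carry the bulk of the traffic.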
The signals have a short range, and need only reach the nearest\ntransceiver. Units outside the \"hive\" will not be able to communicate with the\nborganism with as high bandwidth, and may have to settle for radio signals. It seems\nlikely that units \"on their own\" must deal with situations that occur more as\nindividuals than as parts of the borganism. \n\n\nInside the borganism network, signals are dynamically routed between units (and other\naugmentative hard- and software). Low-level protocols implement packet-switching and\nvirtual connections, whose structure and organization is regulated by an \"arbitration\nlayer\" which could be seen as the pre-conscious part of the borganism's mind. This\narbitration layer could be implemented (as discussed above) using coordinator units,\nmarket based systems, other approaches or mixed systems; the arbitration layer makes sure\nthe virtual network structure is optimal for the tasks at hand, and organizes the units\ninto meaningful teams and groups. These teams and groups form the true mind of the\nborganism, which gathers information, solves problems and implements solutions. \n\n\nWeaknesses of Borganisms\n------------------------\n\n\nDespite their likely high mental and practical capacity borganisms have noticeable\nweaknesses, just as individual organisms do. \n\n\nMany of the problems of borganisms are emergent properties of the system, not inherent\nin the units themselves. \n\n\n### Memetic Infection\n\n\nOne of the most worrying weaknesses is the spread of virulent information patterns such\nas memes. Memes thrive in environments with intense communication (Bjarneskans et al.\n1997), and would likely spread extremely quickly inside a borganism, infecting both\ncollective and unit schemata. Having a working system for memetic defence appears to be\nvital for the well-being of a borganism, especially in the face of memes similar to\ncomputer viruses (in the cybernetic environment of a borganism there is little\ndifference). It is not unlikely that a borganism has to retain a high degree of mental\nhygiene in order not to succumb to selfish mental replicators. \n\n\nStill, it is unlikely that external or internal memetic defences will be perfect,\nespecially since the borganism itself may accidentally create destabilizing memes during\nnormal thinking and internal communication. The evolution of parasites appears to be\nubiquitous in life-like (eco)systems, and the more interconnected the ecosystem is, the\ngreater is the complexity of coevolution and hyperparasitism (Kelly 1994). This suggests\nthat borganisms might generally not be able to avoid a certain level of internal selfish\nreplicators, and that the best strategy in dealing with them is to integrate symbiotic\nreplicators as a kind of immune system rather than attempt to fruitlessly eradicate them\n(Moravec 1988). \n\n\n### Groupthink\n\n\nGroupthink is a common problem in human groups: the group becomes divorced from reality\ndue to its internal consensus (which may even be illusory); it fails to question its own\nassumptions and to take unwelcome aspects of reality into account. If the borganism has to\nkeep its units in line, it is likely it will directly or indirectly counteract dissent,\nwhich may promote groupthink. Often the best way of avoiding groupthink is to allow\ndissenting minorities to present their view. On the other hand, borganisms with\nsufficiently high bandwidth may be *less* susceptible to groupthink than human\ngroups. 
If the units can present not only their views but also the mental processes which\nled to these views, it may become easier to judge the relative merits of the different\npositions. They are no longer assertions about reality but rather different models which\ncan be analysed using critical thinking, empirical testing or synthesis. \n\n\n### The Selfish Borg\n\n\nA borganism is not just a distributed organism; it is also in some sense a social\norganisation. This means that the relationship between itself and its units can become a\nsource of trouble. If memetic evolution and spread cannot be avoided (for example, by\nhaving units whose minds can easily be reformatted), there is the risk that discontent or\nother disturbances can propagate among the units, destabilising the borganism. \n\n\nFor example, selfish units may be a problem. Assuming that the units retain some\nautonomy, it is not unreasonable to think that some might decide to profit at the expense\nof the borganism. In human groups this can be observed as the diffusion of responsibility\n(the more people involved in a task, the less intensely they tend to work if their results\ncannot be traced back to them) and forms of social parasitism. If this strategy is\nsuccessful it can quickly spread (due to the fast transmission of memes), leading to the\nweakening or dissolution of the borganism. Accountability of units may be a simple way of\ndealing with this, especially since the borganism network is likely ideal for keeping\ntrack of what everybody is doing (or not doing). Still, it is likely that selfish\nstrategies can develop which are hard to detect. \n\n\nDiscussion\n----------\n\n\n\n> \n> We are the Borg. Lower your shields, and surrender your ship. We will add your\n> biological and technological distinctiveness to our own. Your culture will adapt to\n> service ours. Resistance is futile. \n> \n> Star Trek \n> \n> \n> \n\n\nBorganisms horrify some and attract others. They represent both the human fear of\nlosing the self and the vision of total community. The Borg of *Star Trek* are\ndepicted as inhuman and ruthless, while the \"minded planets\" of Stapledon are\nbenevolent and spiritual. Hobbes suggests that a limited form of borganisation (the\nformation of societies with strong rulers) is necessary for individual survival and\nwell-being. \n\n\nRegardless of people's reactions to them, borganisms are one of the best explored forms\nof posthumanity. Unlike with Jupiter brains or uploaded entities, we can at least have an\ninkling of what they are and how they can be brought about; there is no immense\ndiscontinuity between current humanity and borganisms. \n\n\nIs borganisation a desirable state? The answer seems to depend on how much one values\nindividuality and autonomy. If these are made central values, borganisms are clearly not\ndesirable, and to an extreme individualist it might even appear ethical to disrupt\nborganisms in order to \"free\" the units (Morrow 1996). The case is not as clear\nfor voluntary borganisms where units both retain a sense of individuality and still belong\nto the borganism. In this case extreme individualists would likely argue that being part\nof a borganisation stunts personal development and freedom, even if it is voluntary (this\nalso mirrors the libertarian debates about the rights of government versus the individual,\nand the legitimacy of the \"social contract\"). \n\n\nIf one does not see individuality and autonomy as fundamental values, there are fewer\narguments against borganisms.
There is a certain worry that borganisms will be inefficient\nsocial or memetic attractors, suboptimal evolutionarily stable strategies (one possible\nattractor state in the Strong Convergence Hypothesis of Boström 1997), or that the goals\nof the borganism as a whole will in the long run become incompatible with the original\ngoals of the units which joined together. There is some evidence for the latter\npossibility: the goals of multicellular organisms and hives of insects call for the\nsacrifice of their units, and judging from the relative amounts of biomass in\nmulticellular/single-celled organisms and social/nonsocial insects, the non-borganised lifeforms do\nquite well *from the perspective of the individual*, although borganisation clearly\nis not a disadvantage for the genes and may instead be very advantageous on the genetic\nlevel (Dawkins 1976). If this observation can be translated into the noosphere, it\nsuggests that borganisms are advantageous for many strongly action-influencing memes and\nmeme-complexes (a possible example would be religions or ideologies) which can override\nthe personal self-interest of individuals. It is worth noting that in the biosphere the\nborganic analogues do not dominate either species-wise or in a numerical sense;\nsingle-celled organisms and solitary animals are still the norm. This suggests that even if\nborganisms are attractors and self-supporting, they may not be so advantageous or flexible\nthat they out-compete all other lifestyles (especially since, in an environment with\nborganisms, non-borganisms gain a memetic evolutionary advantage from exploiting them). \n\n\nWhat are the biggest advantages of borganisms? They provide an \"easy\" way to\ncreate superhuman entities (it might even be argued that we have created simple\nlow-bandwidth borganisms based on metasystems today: organisations and states), and there\ndoes not appear to exist any obvious barrier to their creation (although plenty of\nexperimentation in group interaction and integration is clearly needed). Borganisms would\nbe able to solve some large classes of problems and implement the solutions much more\nefficiently than collections of individuals, giving them a practical and economic\nadvantage. There is also the long-standing human dream of total community, which may make\nborganisms desirable to some for purely aesthetic or emotional reasons. \n\n\nRegardless of one's view of borganisms, it is clear that they provide a possible\nposthuman state, and that they are advantageous in some situations. This is usually enough\nto ensure that at least some borganisms will eventually be implemented by some group for\nsome reason. Resistance is futile. \n\n\nBibliography\n------------\n\n\nBaron, R.A. & Byrne, D. (1991) *Social Psychology: Understanding Human Interaction*\n (6th ed.), Boston: Allyn & Bacon \n\n\nBjarneskans, H., Grønnevik, B. & Sandberg, A., 1997, The Lifecycle of Memes, *Homo\n Excelsior*, [http://www.aleph.se/Trans/Cultural/Memetics/memecycle.html](http://www.aleph.se/Trans/Cultural/Memetics/memecycle.html)\n\n\n\nBoström, N., 1997, Predictions from Philosophy?, \n\n\n\nChialvo, D.R., Millonas, M.M., 1995, *How Swarms Build Cognitive Maps*, Santa Fe\n Institute working paper 95-03-033, [Abstract](http://www.santafe.edu/sfi/publications/Abstracts/95-03-033abs.html), [PostScript\n version](http://www.santafe.edu/sfi/publications/Working-Papers/95-03-033.ps) \n\n\nDawkins, R. (1976) *The Selfish Gene*, Oxford: Oxford University Press \n\n\nHellriegel, D., Slocum, J.W. Jr., Woodman, R.W.
(1989) *Organizational Behavior*\n (5th ed.), St Paul: West. \n\n\nHilgard, E.R. (1977) *Divided Consciousness: Multiple Controls in Human Thought and\n Action*, New York: Wiley \n\n\nHobbes, T., 1651, *Leviathan* \n\n\nKelly, K., 1994, *Out of Control: the New Biology of Machines*, London: Fourth Estate,\n ISBN 1-85702-308-0 \n\n\nMiller, M.S. & Drexler, K.E., 1988, Incentive Engineering for Computational Resource\n Management, in *The Ecology of Computation*, Bernardo Huberman (ed.), Elsevier Science\n Publishers/North-Holland \n\n\n\nMinsky, M., 1988, *The Society of Mind*, Simon & Schuster \n\n\nMoravec, H., 1988, *Mind Children: the Future of Robot and Human Intelligence*, Cambridge:\n Harvard University Press, ISBN 0-674-57618-7 \n\n\nMorrow, T., 1996, >H HUMOR: Borganism in the media, [http://www.aleph.se/Trans/Cultural/Fun/0173.html](http://www.aleph.se/Trans/Cultural/Fun/0173.html)\n\n\n\nNicholls, P., 1982, *The Science in Science Fiction*, Roxby Science Fiction Limited\n \n\n\nPutnam, F.W. (1989) *Diagnosis and Treatment of Multiple Personality Disorder*, New\n York: Guilford \n\n\nStapledon, O., 1931, *Last and First Men* \n\n\nStapledon, O., 1937, *Star Maker* \n\n\nTurchin, V., Joslyn, C., 1993, The Metasystem Transition \n\n\nWeiser, M., The Computer for the 21st Century, *Scientific American*, pp. 94-104,\n September 1991 \n\n\n\nWeiser, M., 1996, Nomadic Issues in Ubiquitous Computing, talk given at the Nomadic '96\n conference. [Slides](http://www.ubiq.com/hypertext/weiser/NomadicInteractive/).", "url": "http://www.aleph.se/Trans/Global/Posthumanity/WeBorg.html", "title": "We, Borg: Speculations on hive minds as a posthuman state", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2014-12-31T23:00:00Z", "authors": ["Anders Sandberg"], "summary": [], "id": "62fb9e36539a60d0010c956f8cdca786"} {"text": "Abstract\n--------\n\nThis article is based on a series of special lectures delivered at University College, London, in November 1972.
\n\nAuthor information\n------------------\n\nDonald Michie, Department of Machine Intelligence, University of Edinburgh.", "url": "http://www.nature.com/articles/241507a0", "title": "Machines and the Theory of Intelligence", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "1973-02-22T23:00:00Z", "authors": ["Donald Michie"], "summary": [], "id": "7c1651644b96d18a08aad5e460057053"} {"text": "### Subjects\n\n* [Government](/subjects/government)\n* [Mathematics and computing](/subjects/mathematics-and-computing)\n* [Society](/subjects/society)\n* [Technology](/subjects/technology)\n\nStuart Russell weighs up a book on the risks and rewards of the AI revolution.\n\nLife 3.0: Being Human in the Age of Artificial Intelligence\n-----------------------------------------------------------\n\n* *Max Tegmark*\n\nKnopf: 2017. ISBN 9781101946596, 9780241237199\nMax Tegmark is a renowned physicist. He is also the irrepressibly optimistic co-founder of the Future of Life Institute in Cambridge, Massachusetts (motto: “Technology is giving life the potential to flourish like never before ... or to self-destruct. Let's make a difference!”). Now, in *Life 3.0*, he tackles a pressing future development — the evolution of artificial intelligence (AI). He argues that the risks demand serious thought if our “cosmic endowment” is not to be inadvertently thrown away.\n\n![](//media.springernature.com/w300/springer-static/image/art%3A10.1038%2F548520a/MediaObjects/41586_2017_Article_BF548520a_Figa_HTML.jpg)'RoboBees' are meant for artificial pollination but could have unforeseen environmental effects.\n Credit: Thierry Falise/Lightrocket Via Getty\n\nIn the interests of disclosure, Tegmark and I are collaborators and share a literary agent.
With physicists Stephen Hawking and Frank Wilczek, we wrote the 2014 *Huffington Post* article 'Transcending complacency on superintelligent machines' (see [go.nature.com/2wadkao](http://go.nature.com/2wadkao)). Ostensibly a review of Wally Pfister's dystopian AI film *Transcendence*, this was really a call to the AI community to take the risks of intelligent systems seriously. Thus, I am unlikely to disagree strongly with the premise of *Life 3.0*. Life, Tegmark argues, may or may not spread through the Universe and “flourish for billions or trillions of years” because of decisions we make now — a possibility both seductive and overwhelming.\n\nThe book's title refers to a third phase in evolutionary history. For almost 4 billion years, both hardware (bodies) and software (capacity for generating behaviour) were fixed by biology. For the next 100,000 years, learning and culture enabled humans to adapt and control their own software. In the imminent third phase, both software and hardware can be redesigned. This may sound like transhumanism — the movement to re-engineer body and brain — but Tegmark's focus is on AI, which supplements mental capabilities with external devices.\n\nTegmark considers both risks and benefits. Near-term risks include an arms race in autonomous weapons and dramatic reductions in employment. The AI community is practically unanimous in condemning the creation of machines that can choose to kill humans, but the issue of work has sparked debate. Many predict an economic boon — AI inspiring new jobs to replace old, as with previous industrial revolutions. Tegmark wryly imagines two horses discussing the rise of the internal combustion engine in 1900. One predicts “new jobs for horses ... That's what's always happened before, like with the invention of the wheel and the plow.” For most horses, alas, the “new job” was to be pet food. Tegmark's analysis is compelling, and shared by economists such as Paul Krugman. But the question remains: what desirable economy might we aim for, when most of what we now call work is done by machines?\n\nThe longer-term risks are existential. The book's fictional prelude describes a reasonably plausible scenario in which superintelligent AI might emerge. Later, Tegmark ranges over global outcomes from near-Utopias to human enslavement or extinction. That we have no idea how to steer towards the better futures points to a dearth of serious thinking on why making AI better might be a bad thing.\n\nComputer pioneer Alan Turing, raising the possibility in 1951 that our species would at best be “greatly humbled” by AI, expressed the general unease of making something smarter than oneself. Assuaging this unease by curtailing progress on AI may be neither feasible nor preferable. The most interesting part of *Life 3.0* explains that the real issue is the potential for misaligned objectives. Cybernetics founder Norbert Wiener wrote in 1960, “We had better be quite sure that the purpose put into the machine is the purpose which we really desire.” Or, as Tegmark has it, “It's unclear how to imbue a superintelligent AI with an ultimate goal that neither is undefined nor leads to the elimination of humanity.” In my view, this technological and philosophical problem demands all the intellectual resources we can bring to bear.\n\nOnly if we solve it can we reap the benefits. 
Among these is expansion across the Universe, perhaps powered by such exotic technologies as Dyson spheres (which would capture the energy of a star), accelerators built around black holes or Tegmark's theorized sphalerizers (like diesel engines, but quark-powered and one billion times more efficient). For sheer science fun, it's hard to beat the explanations of how much upside the Universe and the laws of physics will allow. We may one day, for example, expand the biosphere “by about 32 orders of magnitude”. It's seriously disappointing, then, to learn that cosmic expansion may limit us to settling only 10 billion galaxies. And we feel our descendants' anxiety as “the threat of dark energy tearing cosmic civilizations apart motivates massive cosmic engineering projects”.\n\nThe book concludes with the Future of Life Institute's role in moving these issues into mainstream AI thinking — for which Tegmark deserves huge credit. He is not alone, of course, in raising the alarm. In its sweeping vision, *Life 3.0* has most in common with Nick Bostrom's 2014 *Superintelligence* (Oxford University Press). Unlike Bostrom, however, Tegmark is not trying to prove that risk is unavoidable; and he eschews dense philosophy in favour of asking the reader which scenarios they think more probable or desirable.\n\nAlthough I strongly recommend both books, I suspect that Tegmark's is less likely to provoke in AI researchers a common allergic reaction — a retreat into defensive arguments for paying no attention. Here's a typical one: we don't worry about remote but species-ending possibilities such as black holes materializing in near-Earth orbit, so why worry about superintelligent AI? Answer: if physicists were working to make such black holes, wouldn't we ask them if it was safe?\n\n*The Economist* has drily characterized the overarching issue thus: “The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.” *Life 3.0* is far from the last word on AI and the future, but it provides a fascinating glimpse of the hard thinking required.\n\n\nAuthor information\n------------------\n\nStuart Russell is professor of computer science at the University of California, Berkeley, and co-author of *Artificial Intelligence: A Modern Approach*.", "url": "http://www.nature.com/articles/548520a", "title": "Artificial intelligence: The future is superintelligent [Book review of \"Life 3.0: Being Human in the Age of Artificial Intelligence\" by Max Tegmark]", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2017-08-29T22:00:00Z", "authors": ["Stuart Russell"], "summary": [], "id": "58c6a313ed6c779f72a1f030dbadfeae"} {"text": "### Subjects\n\n* [Behavioural methods](/subjects/behavioural-methods)\n* [Computer science](/subjects/computer-science)\n* [Social evolution](/subjects/social-evolution)\n\nAbstract\n--------\n\nWe introduce new theoretical insights into two-population asymmetric games allowing for an elegant symmetric decomposition into two single population symmetric games.
Specifically, we show how an asymmetric bimatrix game (*A*,*B*) can be decomposed into its symmetric counterparts by envisioning and investigating the payoff tables (*A* and *B*) that constitute the asymmetric game, as two independent, single population, symmetric games. We reveal several surprising formal relationships between an asymmetric two-population game and its symmetric single population counterparts, which facilitate a convenient analysis of the original asymmetric game due to the dimensionality reduction of the decomposition. The main finding reveals that if *(x,y)* is a Nash equilibrium of an asymmetric game (*A*,*B*), this implies that *y* is a Nash equilibrium of the symmetric counterpart game determined by payoff table *A*, and *x* is a Nash equilibrium of the symmetric counterpart game determined by payoff table *B*. Also the reverse holds and combinations of Nash equilibria of the counterpart games form Nash equilibria of the asymmetric game. We illustrate how these formal relationships aid in identifying and analysing the Nash structure of asymmetric games, by examining the evolutionary dynamics of the simpler counterpart games in several canonical examples.\n\n\n\n\n\nIntroduction\n------------\n\nWe are interested in analysing the Nash structure and evolutionary dynamics of strategic interactions in multi-agent systems. Traditionally, such interactions have been studied using single population replicator dynamics models, which are limited to symmetric situations, i.e., players have access to the same set of strategies and the payoff structure is symmetric as well[1](/articles/s41598-018-19194-4#ref-CR1 \"Bloembergen, D., Tuyls, K., Hennes, D. & Kaisers, M. Evolutionary dynamics of multi-agent learning: A survey. J. Artif. Intell. Res. 53, 659–697 (2015).\"). For instance, Walsh *et al*. introduce an empirical game theory methodology (also referred to as heuristic payoff table method) that allows for analysing multiagent interactions in complex multiagent games[2](/articles/s41598-018-19194-4#ref-CR2 \"Walsh, W. E., Das, R., Tesauro, G. & Kephart, J. Analyzing complex strategic interactions in multi-agent games. In Proceedings of the Fourth Workshop on Game-Theoretic and Decision-Theoretic Agents, 109–118 (2002).\"),[3](/articles/s41598-018-19194-4#ref-CR3 \"Walsh, W. E., Parkes, D. C. & Das, R. Choosing samples to compute heuristic-strategy nash equilibrium. In Proceedings of the Fifth Workshop on Agent-Mediated Electronic Commerce, 109–123 (2003).\"). This method has been extended by others and been applied e.g. in continuous double auctions, variants of poker and multi-robot systems[1](/articles/s41598-018-19194-4#ref-CR1 \"Bloembergen, D., Tuyls, K., Hennes, D. & Kaisers, M. Evolutionary dynamics of multi-agent learning: A survey. J. Artif. Intell. Res. 53, 659–697 (2015).\"),[4](#ref-CR4 \"Tuyls, K. & Parsons, S. What evolutionary game theory tells us about multiagent learning. Artif. Intell. 171, 406–416 (2007).\"),[5](#ref-CR5 \"Ponsen, M. J. V., Tuyls, K., Kaisers, M. & Ramon, J. An evolutionary game-theoretic analysis of poker strategies. Entertainment Computing 1, 39–45 (2009).\"),[6](#ref-CR6 \"Wellman, M. P. Methods for empirical game-theoretic analysis. In Proceedings of The Twenty-First National Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial Intelligence Conference, 1552–1556 (2006).\"),[7](#ref-CR7 \"Phelps, S. et al. Auctions, evolution, and multi-agent learning. In Tuyls, K., Nowe, A., Guessoum, Z. 
& Kudenko, D. (eds.) Adaptive Agents and Multi-Agent Systems III. 5th, 6th, and 7th European Symposium on Adaptive and Learning Agents and Multi-Agent Systems, Revised Selected Papers, 188–210 (Springer, 2007).\"),[8](#ref-CR8 \"Phelps, S., Parsons, S. & McBurney, P. An evolutionary game-theoretic comparison of two double-auction market designs. In Faratin, P. & Rodriguez-Aguilar, J. A. (eds.) Agent-Mediated Electronic Commerce VI, Theories for and Engineering of Distributed Mechanisms and Systems, Revised Selected Papers, 101–114 (Springer, 2004).\"),[9](/articles/s41598-018-19194-4#ref-CR9 \"Lanctot, M. et al. A unified game-theoretic approach to multiagent reinforcement learning. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, 4193–4206 (2017).\"). Similar evolutionary methods have been applied to the modelling of human cooperation, language, and complex social dilemma’s[10](#ref-CR10 \"Perc, M. et al. Statistical physics of human cooperation. Physics Reports 687, 1–51 (2017).\"),[11](#ref-CR11 \"Moreira, J. A., Pacheco, J. M. & Santos, F. C. Evolution of collective action in adaptive social structures. Scientific Reports 3, 1521 (2013).\"),[12](#ref-CR12 \"Santos, F. P., Pacheco, J. M. & Santos, F. C. Evolution of cooperation under indirect reciprocity and arbitrary exploration rates. Scientific Reports 6, 37517 (2016).\"),[13](#ref-CR13 \"Pérolat, J. et al. A multi-agent reinforcement learning model of common-pool resource appropriation. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, 3646–3655 (2017).\"),[14](#ref-CR14 \"Lazaridou, A., Peysakhovich, A. & Baroni, M. Multi-agent cooperation and the emergence of (natural) language. In 5th International Conference on Learning Representations (2017).\"),[15](#ref-CR15 \"De Vylder, B. & Tuyls, K. How to reach linguistic consensus: A proof of convergence for the naming game. Journal of Theoretical Biology 242, 818–831 (2006).\"),[16](/articles/s41598-018-19194-4#ref-CR16 \"Cho, I. & Kreps, D. Signaling games and stable equilibria. The Quarterly Journal of Economics 179–221 (1987).\"). Though these evolutionary methods have been very useful in providing insights into the type and form of interactions in such systems, the underlying Nash structure, and evolutionary dynamics, the analysis is limited to symmetric situations, i.e., players or agents can be interchanged and have access to the same strategy set, in other words there are no different roles for the various agents involved in the interactions (e.g. a seller vs a buyer in an auction). As such this method is not directly applicable to asymmetric situations in which the players can choose strategies from different sets of actions, with asymmetric payoff structures. Many interesting multiagent scenarios involve asymmetric interactions though, examples include simple games from game theory such as e.g. the Ultimatum Game or the Battle of the Sexes and more complex board games that can involve various roles such as Scotland Yard, but also trading on the internet for instance can be considered asymmetric.\n\nThere exist approaches that deal with asymmetry in multiagent interactions, but they usually propose to transform the asymmetric game into a symmetric game, with new strategy sets and payoff structure, which then can be analysed again in the context of symmetric games. 
This is indeed a feasible approach, but not easily scalable to the complex interactions mentioned before, nor is it practical or intuitive to construct a new symmetric game before the asymmetric one can be analysed in full. The approach we take in this paper does not require constructing a new game and is theoretically underpinned, revealing some new interesting insights in the relation between the Nash structure of symmetric and asymmetric games.\n\nAnalysing multiagent interactions using evolutionary dynamics, or replicator dynamics, provides not only valuable insights into the (Nash) equilibria and their stability properties, but also sheds light on the behaviour trajectories of the involved agents and the basins of attraction of the equilibrium landscape[1](/articles/s41598-018-19194-4#ref-CR1 \"Bloembergen, D., Tuyls, K., Hennes, D. & Kaisers, M. Evolutionary dynamics of multi-agent learning: A survey. J. Artif. Intell. Res. 53, 659–697 (2015).\"),[4](/articles/s41598-018-19194-4#ref-CR4 \"Tuyls, K. & Parsons, S. What evolutionary game theory tells us about multiagent learning. Artif. Intell. 171, 406–416 (2007).\"),[15](/articles/s41598-018-19194-4#ref-CR15 \"De Vylder, B. & Tuyls, K. How to reach linguistic consensus: A proof of convergence for the naming game. Journal of Theoretical Biology 242, 818–831 (2006).\"),[17](/articles/s41598-018-19194-4#ref-CR17 \"Nowak, M. A. Evolutionary Dynamics: Exploring the Equations of Life (Harvard University Press, 2006).\"),[18](/articles/s41598-018-19194-4#ref-CR18 \"Tuyls, K., Verbeeck, K. & Lenaerts, T. A selection-mutation model for q-learning in multi-agent systems. In The Second International Joint Conference on Autonomous Agents & Multiagent Systems, 693–700 (2003).\"). As such it can be a very useful tool to analyse the Nash structure and dynamics of several interacting agents in a multiagent system. However, when dealing with asymmetric games the analysis quickly becomes tedious, as in this case we have a coupled system of replicator equations, and changes in the behaviour of one agent immediately change the dynamics in the linked replicator equation describing the behaviour of the other agent, and vice versa. This paper sheds new light on asymmetric games, and reveals a number of theorems, previously unknown, that allow for a more elegant analysis of asymmetric multiagent games. The major innovation is that we decouple asymmetric games in their *symmetric counterparts*, which can be studied in a symmetric fashion using symmetric replicator dynamics. The Nash equilibria of these symmetric counterparts are formally related to the Nash equilibria of the original asymmetric game, and as such provide us with a means to analyse the asymmetric game using its symmetric counterparts. Note that we do not consider asymmetric replicator dynamics in which both intra-species (within a population) and inter-species interactions (between different populations) take place[19](/articles/s41598-018-19194-4#ref-CR19 \"Cressman, R. & Tao, Y. The replicator equation and other game dynamics. Proceedings of the National Academy of Sciences USA 111, 10810–10817 (2014).\"), but we only consider inter-species interactions in which two different roles interact, i.e., truly asymmetric games[20](/articles/s41598-018-19194-4#ref-CR20 \"Selten, R. A note on evolutionary stable strategies in asymmetric animal conflicts. 
Journal of Theoretical Biology 84, 93–101 (1980).\").\n\nOne of our main findings is that the *x strategies* (player 1) and the *y strategies* (player 2) of a mixed Nash equilibrium of full support in the original asymmetric game, also constitute Nash equilibria in the symmetric counterpart games. The symmetric counterpart of player 1 (*x*) is defined on the payoff of player 2 and vice versa. We prove that for full support strategies, Nash equilibria of the asymmetric game are pairwise combinations of Nash equilibria of the two symmetric counterparts. Then, we show that this property stands without the assumption of full support as well. Though this analysis does not allow us to visualise the evolutionary dynamics of the asymmetric game itself, it does allow us to identify its Nash equilibria by investigating the evolutionary dynamics of the counterparts. As such we can easily distinguish Nash equilibria from other restpoints in the asymmetric game and get an understanding of its underlying Nash structure.\n\nThe paper is structured as follows: we first describe related work, then we continue with introducing essential game theoretic concepts. Subsequently, we present the main contributions and we illustrate the strengths of the theory by carrying out an evolutionary analysis on four canonical examples. Finally, we discuss the implications and provide a deeper understanding of the theoretical results.\n\nRelated Work\n------------\n\nThe most straightforward and classical approach to asymmetric games is to treat agents as evolving separately: one population per player, where each agent in a population interacts by playing against agent(s) from the other population(s), i.e. co-evolution[21](/articles/s41598-018-19194-4#ref-CR21 \"Taylor, P. Evolutionarily stable strategies with two types of players. Journal of Applied Probability 16, 76–83 (1979).\"). This assumes that players of these games are always fundamentally attached to one role and never need to know/understand how to play as the other player. In many cases, though, a player may want to know how to play as either player. For example, a good chess player should know how to play as white or black. This reasoning inspired the role-based symmetrization of asymmetric games[22](/articles/s41598-018-19194-4#ref-CR22 \"Guanersdorfer, A., Hofbauer, J. & Sigmund, K. On the dynamics of asymmetric games. Theoretical Population Biology 39, 345–357 (1991).\").\n\nThe role-based symmetrization of an arbitrary bimatrix game defines a new (extensive-form) game where before choosing actions the role of the two players are decided by uniform random chance. If two roles are available, an agent is assigned one specific role with probability \\(\\frac{1}{2}\\). Then, the agent plays the game under that role and collects the role-specific payoff appropriately. A new strategy space is defined, which is the product of both players’ strategy spaces, and a new payoff matrix computing (expected) payoffs for each combination of pure strategies that could arise under the different roles. There are relationships between the sets of evolutionarily stable strategies and rest points of the replicator dynamics between the original and symmetrized game[19](/articles/s41598-018-19194-4#ref-CR19 \"Cressman, R. & Tao, Y. The replicator equation and other game dynamics. Proceedings of the National Academy of Sciences USA 111, 10810–10817 (2014).\"),[23](/articles/s41598-018-19194-4#ref-CR23 \"Cressman, R. 
Evolutionary Dynamics and Extensive Form Games (The MIT Press, 2003).\").\n\nThis single-population model forces the players to be general: able to devise a strategy for each role, which may unnecessarily complicate algorithms that compute strategies for such players. In general, the payoff matrix in the resulting role-based symmetrization is *n*! (*n* being the number of agents) times larger due to the number of permutations of player role assignments. There are two-population variants that formulate the problem slightly differently: a new matrix that encapsulates both players’ utilities assigns 0 utility to combinations of roles that are not in one-to-one correspondence with players[24](/articles/s41598-018-19194-4#ref-CR24 \"Accinelli, E. & Carrera, E. J. S. Evolutionarily stable strategies and replicator dynamics in asymmetric two-population games. In Peixoto, M. M., Pinto, A. A. & Rand, D. A. (eds.) Dynamics, Games and Science I, 25–35 (Springer, 2011).\"). This too, however, results in an unnecessarily larger (albeit sparse) matrix.\n\nLastly, there are approaches that have structured asymmetry, that arises due to ecological constraints such as locality in a network and genotype/genetic relationships between population members[25](/articles/s41598-018-19194-4#ref-CR25 \"McAvoy, A. & Hauert, C. Asymmetric evolutionary games. PLoS Comput Biol 11, e1004349 (2015).\"). Similarly here, replicator dynamics and their properties are derived by transforming the payoff matrix into a larger symmetric matrix.\n\nOur primary motivation is to enable analysis techniques for asymmetric games. However, we do this by introducing new *symmetric counterpart dynamics* rather than using standard dynamics on a symmetrised game. Therefore, the traditional role interpretation as well as any method that enlarges the game for the purpose of obtaining symmetry is unnecessarily complex for our purposes. Consequently, we consider the original co-evolutionary interpretation, and derive new (lower-dimensional) strategy space mappings.\n\nPreliminaries and Methods\n-------------------------\n\nIn this section we concisely outline (evolutionary) game theoretic concepts necessary to understand the remainder of the paper[23](/articles/s41598-018-19194-4#ref-CR23 \"Cressman, R. Evolutionary Dynamics and Extensive Form Games (The MIT Press, 2003).\"),[26](/articles/s41598-018-19194-4#ref-CR26 \"Weibull, J. Evolutionary Game Theory (MIT press, 1997).\"),[27](/articles/s41598-018-19194-4#ref-CR27 \"Hofbauer, J. & Sigmund, K. Evolutionary Games and Population Dynamics (Cambridge University Press, 1998).\"). We briefly specify definitions of Normal Form Games and solution concepts such as Nash Equilibrium in a single population game and in a two-population game. Furthermore, we introduce the Replicator Dynamics (RD) equations for single and two population games and briefly discuss the concept of Evolutionary Stable Strategies (ESS) introduced by Smith and Price in 1973[28](#ref-CR28 \"Maynard Smith, J. & Price, G. R. The logic of animal conflicts. Nature 246, 15–18 (1973).\"),[29](#ref-CR29 \"Zeeman, E. Population dynamics from game theory. Lecture Notes in Mathematics, Global theory of dynamical systems 819 (1980).\"),[30](/articles/s41598-018-19194-4#ref-CR30 \"Zeeman, E. Dynamics of the evolution of animal conflicts. Journal of Theoretical Biology 89, 249–270 (1981).\").\n\n### Normal Form Games and Nash Equilibrium\n\n**Definition**. 
*A two-player Normal Form Game* (*NFG*) *G is a 4*-*tuple G* = (*S*1, *S*2, *A*, *B*), *with pure strategy sets S*1 *and S*2 *for player 1*, *respectively player 2*, *and corresponding payoff tables A and B*. *Both players choose their pure strategies* (*also called actions*) *simultaneously*.\n\nThe payoffs for both players are represented by a bimatrix (*A*, *B*), which gives the payoff for the row player in *A*, and the column player in *B* (see Table [1](/articles/s41598-018-19194-4#Tab1) for a two strategy example). Specifically, the row player chooses one of the two rows, the column player chooses one of the columns, and the outcome of their joint strategy determines the payoff to both.\n\n**Table 1 General payoff bimatrix (A, B) for a two-player two-action normal form game, where player 1 can choose between actions *A*1 and *A*2, and player 2 can choose between actions *B*1 and *B*2.**[Full size table](/articles/s41598-018-19194-4/tables/1)In case *S*1 = *S*2 and *A* = *B**T* the players are interchangeable and we call the game symmetric. In case at least one of these conditions is not met we have an asymmetric game. In classical game theory the players are considered to be individually rational, in the sense that each player is perfectly logical trying to maximise their own payoff, assuming the others are doing likewise. Under this assumption, the Nash equilibrium (NE) solution concept can be used to study what players will reasonably choose to do.\n\nWe denote a strategy profile of the two players by the tuple (*x*, *y*) ∈ Δ*S*1 × Δ*S*2, where Δ*S*1, Δ*S*2 are the sets of mixed strategies, that is, distributions over the pure strategy sets or action sets. The strategy *x* (respectively *y*) is represented as a vector in \\({{\\mathbb{R}}}^{|{S}\\_{1}|}\\) (respectively \\({{\\mathbb{R}}}^{|{S}\\_{2}|}\\)) where each entry is the probability of playing the corresponding action. The payoff associated with player 1 is *x**T**Ay* and *x**T**By* is the payoff associated with player 2. A strategy profile (*x*,*y*) now forms a NE if no single player can do better by unilaterally switching to a different strategy. In other words, each strategy in a NE is a best response against all other strategies in that equilibrium. Formally we have,\n\n**Definition**. *A strategy profile* (*x*,*y*) *is a Nash equilibrium*, *iff the following holds*:\n\n$$\\forall x^{\\prime} \\,\\in \\,\\Delta {S}\\_{1},\\,{x}^{T}Ay\\ge {x^{\\prime} }^{T}Ay\\,and\\,\\forall y^{\\prime} \\,\\in \\,\\Delta {S}\\_{2},\\,{x}^{T}\\,By\\ge {x}^{T}By^{\\prime} $$In the following, we will write *NE*(*A*, *B*) for the set of Nash equilibria of the game *G* = (*S*1, *S*2, *A*, *B*). Furthermore, a Nash equilibrium is said to be pure if only one strategy of the strategy set is played and we will say that it is completely mixed if all pure strategies are played with a non-zero probability.\n\nIn evolutionary game theory, games are often considered with a single population. In other words, a player is playing against itself and only a single payoff table *A* is necessary to define the game (note that this definition only makes sense when |*S*1| = |*S*2| = *n*). In this case, the payoff received by the player is *x**T**Ax* and the following definition describes the Nash equilibrium:\n\n**Definition**. 
*In a single population game*, *a strategy x is a Nash equilibrium*, *iff the following holds*:\n\n$$\\forall x^{\\prime} ,{x}^{T}Ax\\ge {x^{\\prime} }^{T}Ax$$In this single population case, we will write that *x*∈ *NE*(*A*).\n\n### Replicator Dynamics\n\nReplicator Dynamics in essence are a system of differential equations that describe how a population of pure strategies, or replicators, evolve through time[26](/articles/s41598-018-19194-4#ref-CR26 \"Weibull, J. Evolutionary Game Theory (MIT press, 1997).\"),[32](/articles/s41598-018-19194-4#ref-CR32 \"Gintis, H. Game Theory Evolving (Princeton University Press, 2009).\"). In their most basic form they correspond to the biological *selection* principle, i.e. survival of the fittest. More specifically the *selection* replicator dynamic mechanism is expressed as follows:\n\n$$\\frac{d{x}\\_{i}}{dt}={x}\\_{i}[(Ax{)}\\_{i}-{x}^{T}Ax]$$\n (1)\n Each replicator represents one (pure) strategy *i*. This strategy is inherited by all the offspring of the replicator. *x**i* represents the density of strategy *i* in the population, *A* is the payoff matrix which describes the different payoff values each individual replicator receives when interacting with other replicators in the population. The state of the population *x* can be described as a probability vector *x* = (*x*1, *x*2, ..., *x**n*) which expresses the different densities of all the different types of replicators in the population. Hence (*Ax*)*i* is the payoff which replicator *i* receives in a population with state *x* and *x**T**Ax* describes the average payoff in the population. The support *I**x* of a strategy is the set of actions (or pure strategies) that are played with a non-zero probability *I**x* = {*i* |*x**i* > 0}.\n\nIn essence this equation compares the payoff a strategy receives with the average payoff of the entire population. If the strategy scores better than average it will be able to replicate *offspring*, if it scores lower than average its presence in the population will diminish and potentially approach extinction. The population remains in the simplex (∑*i**x**i* = 1) since ∑*i*(*dx**i*)/(*dt*) = 0.\n\n### Evolutionary Stable Strategies\n\nOriginally, an Evolutionary Stable Strategy was introduced in the context of a symmetric single population game[28](/articles/s41598-018-19194-4#ref-CR28 \"Maynard Smith, J. & Price, G. R. The logic of animal conflicts. Nature 246, 15–18 (1973).\"),[32](/articles/s41598-018-19194-4#ref-CR32 \"Gintis, H. Game Theory Evolving (Princeton University Press, 2009).\") (as introduced in the previous section), though this can be extended to multi-population games as well as defined in the next section[23](/articles/s41598-018-19194-4#ref-CR23 \"Cressman, R. Evolutionary Dynamics and Extensive Form Games (The MIT Press, 2003).\"),[33](/articles/s41598-018-19194-4#ref-CR33 \"Sandholm, W. Population Games and Evolutionary Dynamics (MIT Press, 2010).\"). Imagine a population of simple agents playing the same strategy. Assume that this population is invaded by a different strategy, which is initially played by a small proportion of the total population. If the reproductive success of the new strategy is smaller than the original one, it will not overrule the original strategy and will eventually disappear. In this case we say that the strategy is *evolutionary stable* (ESS) against this newly appearing strategy. 
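As a purely illustrative aside, the selection dynamics of Eq. (1) can be simulated in a few lines of code. The sketch below uses a simple forward-Euler discretisation; numpy, the Rock-Scissors-Paper-like payoff matrix, the initial state, the step size and the number of steps are illustrative choices of ours rather than part of the method.

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One forward-Euler step of dx_i/dt = x_i [(Ax)_i - x^T A x]."""
    fitness = A @ x              # (Ax)_i : payoff of pure strategy i against population state x
    average = x @ fitness        # x^T A x : population-average payoff
    x = x + dt * x * (fitness - average)
    return x / x.sum()           # renormalise to counter discretisation drift off the simplex

# Illustrative 3-strategy zero-sum payoff matrix (Rock-Scissors-Paper ordering R, S, P)
A = np.array([[ 0.,  1., -1.],
              [-1.,  0.,  1.],
              [ 1., -1.,  0.]])

x = np.array([0.5, 0.3, 0.2])    # initial population state
for _ in range(5000):
    x = replicator_step(x, A)
print(x)  # the state keeps cycling around the interior rest point (1/3, 1/3, 1/3);
          # the crude Euler scheme introduces a slow outward drift
```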
In general, we say a strategy is ESS if it is robust against evolutionary pressure from any appearing mutant replicator not yet present in the population (or only with a very small fraction).\n\n### Asymmetric Replicator Dynamics\n\nWe have assumed replicators come from a single population, which makes the model only applicable to symmetric games. One can now wonder how the previous introduced equations extend to asymmetric games. Symmetry assumes that strategy sets and corresponding payoffs are the same for all players in the interaction. An example of an asymmetric game is the famous Battle of the Sexes (BoS) game illustrated in Table [2](/articles/s41598-018-19194-4#Tab2). In this game both players do have the same strategy set, i.e., go to the opera or go to the movies, however, the corresponding payoffs for each are different, expressing the difference in preferences that both players have in their respective roles.\n\n**Table 2 Payoff bimatrix for the Battle of the Sexes game. Strategies *O* and *M* correspond to going to the Opera and going to the Movies respectively.**[Full size table](/articles/s41598-018-19194-4/tables/2)If we would like to carry out a similar evolutionary analysis as before we will now need two populations, one for each player over its respective strategy set, and we need to use the asymmetric or coupled version of the replicator dynamics, i.e.,\n\n**Definition**.\n\n$$\\frac{d{x}\\_{i}}{dt}={x}\\_{i}[(Ay{)}\\_{i}-{x}^{T}Ay]\\quad \\quad {and}\\quad \\quad \\frac{d{y}\\_{i}}{dt}={y}\\_{i}[({x}^{T}B{)}\\_{i}-{x}^{T}By]$$\n (2)\n with payoff tables *A* and *B*, respectively for player 1 and 2. In case *A* = *B**T* the equations reduce to the single population model.\n\n### Symmetric Counterpart Replicator Dynamics\n\nWe now introduce a new concept, the *symmetric counterpart* replicator dynamics (SCRD) of asymmetric replicator equations. We consider the two payoff tables *A* and *B* as two independent games that are no longer coupled, and in which both players participate. In the first counterpart game all players choose their strategy according to distribution *y*, the original strategy or replicator distribution for the 2nd population, or player 2, and in the second counterpart game all players choose their strategy according to distribution *x*, the original strategy or replicator distribution for the 1st population, or player 1. This gives us the following two sets of replicator equations:\n\n$$\\frac{d{y}\\_{i}}{dt}={y}\\_{i}[(Ay{)}\\_{i}-{y}^{T}Ay]$$\n (3)\n and\n\n$$\\frac{d{x}\\_{i}}{dt}={x}\\_{i}[({x}^{T}B{)}\\_{i}-{x}^{T}Bx]$$\n (4)\n In the results Section we will introduce some remarkable relationships between the equilibria of asymmetric replicator equations and the equilibria of their symmetric counterpart equations, which facilitates, and substantially simplifies, the analysis of the Nash structure of asymmetric games.\n\n### Visualising evolutionary dynamics\n\nOne can visualise the replicator dynamics in a directional field and trajectory plot, which provides useful information about the equilibria, flow of dynamics and basins of attraction. As long as we stay in the realm of 2-player 2-action games this can be achieved relatively easily by plotting the probability with which player 1 plays its first action on the x-axis, and the probability with which player 2 plays its first action on the y-axis. 
Since there are only 2 actions for each player, this immediately gives a complete image of the dynamics over all strategies, since the probability for the second action *a*2 to be chosen is one minus the first. By means of example we show a directional field plot here for the famous Prisoner’s dilemma game (game illustrated in Table [3](/articles/s41598-018-19194-4#Tab3)).\n\n**Table 3 Payoff matrix for the Prisoner’s Dilemma game. Strategies *D* and *C* correspond to the actions *Defect* and *Cooperate*.**[Full size table](/articles/s41598-018-19194-4/tables/3)The directional field plot, and corresponding trajectories, are shown in Fig. [1](/articles/s41598-018-19194-4#Fig1). For both players the axis represents the probability with which they play *Defect* (D). As can be observed all dynamics are absorbed by the pure Nash equilibrium (*D*, *D*) in which both players defect.\n\n**Figure 1**[![figure 1](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig1_HTML.jpg)](/articles/s41598-018-19194-4/figures/1)Directional field plot of the Prisoner’s Dilemma game.\n\n[Full size image](/articles/s41598-018-19194-4/figures/1)Unfortunately, we cannot use the same type of plot illustrating the dynamics when we consider more than two strategies. However, if we move to single population games we can easily rely on a simplex plot. In the case of a two population game the situation become tedious as we will discuss later. Specifically, the set of probability distributions over *n* elements can be represented by the set of vectors (*x*1, ..., *x**n*) \\(\\in \\,{{\\mathbb{R}}}^{n}\\), satisfying *x*1, ..., *x**n* ≥ 0 and ∑*i**x**i* = 1. This can be seen to correspond to an *n* − 1-dimensional structure called a simplex Σ*n* (or simply Σ, when *n* is clear from the context). In many of the figures throughout the paper we use Σ3, projected as an equilateral triangle. For example, consider the single population *Rock-Paper-Scissors* game, described by the payoff matrix shown in Fig. [2a](/articles/s41598-018-19194-4#Fig2).\n\n**Figure 2**[![figure 2](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig2_HTML.jpg)](/articles/s41598-018-19194-4/figures/2)(**a**) Payoff matrix for the Rock-Paper-Scissors game. Strategies *R*, *S* and *P* correspond to playing respectively *R*ock, *S*cissors, *P*aper. (**b**) Σ3 Trajectory plot of the Rock-Paper-Scissors game. The Nash equilibrium is marked with a full yellow dot.\n\n[Full size image](/articles/s41598-018-19194-4/figures/2)The game has one completely mixed Nash equilibrium, being \\((\\frac{1}{3},\\frac{1}{3},\\frac{1}{3})\\). In Fig. [2b](/articles/s41598-018-19194-4#Fig2) we have plotted the replicator equations Σ3 trajectory plot for this game. Each of the corners of the simplex corresponds to one of the pure strategies, i.e., {*Rock*, *Paper*, *Scissors*}. For three strategies in the strategy simplex we then plot a trajectory illustrating the flow of the replicator dynamics. As can be observed from the plot, trajectories of the dynamics cycle around the mixed Nash equilibrium, which is not ESS and not asymptotically stable.\n\nIn fact, three categories of rest points can be discerned in single population replicator dynamics (see Figs [3](/articles/s41598-018-19194-4#Fig3), [4](/articles/s41598-018-19194-4#Fig4) and [5](/articles/s41598-018-19194-4#Fig5)). 
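Before describing these three categories in turn, the distinction can be made operational with a small numerical test: at a rest point every action in the support earns exactly the average payoff, and the rest point is a Nash equilibrium precisely when no action outside the support earns more than that average. The sketch below is a minimal illustration (assuming numpy; the payoff matrix and tolerance are arbitrary choices of ours).

```python
import numpy as np

def is_rest_point(x, A, tol=1e-9):
    """x is a rest point of Eq. (1) iff every action in its support earns the average payoff."""
    payoffs, avg = A @ x, x @ A @ x
    support = x > tol
    return bool(np.all(np.abs(payoffs[support] - avg) < tol))

def is_single_population_nash(x, A, tol=1e-9):
    """x is a Nash equilibrium of the single-population game A iff (Ax)_i <= x^T A x for all i."""
    return bool(np.all(A @ x <= x @ A @ x + tol))

# Illustrative 2-strategy example: at the pure rest point (1, 0) the unused second action
# earns 3 > 1, so the rest point is not Nash (a "source" in the terminology used below).
A = np.array([[1., 0.],
              [3., 2.]])
x = np.array([1., 0.])
print(is_rest_point(x, A), is_single_population_nash(x, A))   # True False
```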
Figure [3](/articles/s41598-018-19194-4#Fig3) displays a stable Nash equilibrium called an Evolutionary Stable Strategy (ESS). An ESS is an attractor of the RD dynamical system defined in the previous section and has been one of the main foci of evolutionary game theory. The second type of rest points are the ones that are Nash but not ESS (Fig. [4](/articles/s41598-018-19194-4#Fig4)). These rest points are not an attractor of the RD but they have a specific form. Specifically, if a strategy is a Nash equilibrium, all the actions that are not part of the support are dominated, i.e., the support is invariant under the RD, which means that the fraction of a strategy cannot become non-zero if it is zero at some point. The third category that can occur is illustrated in Fig. [5](/articles/s41598-018-19194-4#Fig5). Those rest points are not Nash and thus there is an action outside of the support that is dominant. Thus, the flow will leave from points in the close vicinity of the rest point, which is called a *source*.\n\n**Figure 3**[![figure 3](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig3_HTML.jpg)](/articles/s41598-018-19194-4/figures/3)ESS.\n\n[Full size image](/articles/s41598-018-19194-4/figures/3)**Figure 4**[![figure 4](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig4_HTML.jpg)](/articles/s41598-018-19194-4/figures/4)NE but not ESS.\n\n[Full size image](/articles/s41598-018-19194-4/figures/4)**Figure 5**[![figure 5](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig5_HTML.jpg)](/articles/s41598-018-19194-4/figures/5)Rest point but not NE.\n\n[Full size image](/articles/s41598-018-19194-4/figures/5)Results\n-------\n\nIn the following, we first present our main findings, formally relating Nash equilibria in asymmetric 2-player games with the Nash equilibria that can be found in the corresponding counterpart games. We also examine the stability properties of the corresponding rest points of the replicator dynamics in these games. Then we experimentally illustrate these findings in some canonical examples.\n\n### Theoretical Findings\n\nIn this section, we prove the following result: if (*x*, *y*) ∈ *NE*(*A*, *B*) (where *x* and *y* have the same support), then *x*∈ *NE*(*B*Τ) and *y*∈ *NE*(*A*). In addition, we prove that the reverse is true: if *x*∈ *NE*(*B*Τ) and *y*∈ *NE*(*A*) (where *x* and *y* have the same support) then (*x*,*y*) ∈ *NE*(*A*,*B*). We will prove this result in two steps (Theorem 1 and its generalization Theorem 2).\n\nThe theorems introduced apply to games where both players can play the same number of actions (i.e. square games). This condition can be weakened by adding dominated strategies to the player having the smallest number of actions (see the extended Battle of the Sexes example in the experimental section). Thus, without loss of generality, the theory will focus on square games. To begin, we state an important well-known property of Nash equilibria, that has been given different names; Gintis calls it fundamental theorem of Nash equilibria[32](/articles/s41598-018-19194-4#ref-CR32 \"Gintis, H. Game Theory Evolving (Princeton University Press, 2009).\"). 
For sake of completeness, we provide a proof.\n\n\n### \n**Property 1.**\n\n\n*Let the strategy profile* (*x*, *y*) *be a Nash equilibrium of an asymmetric normal form game* (*A*, *B*), *and denote I**z* = {*i* | *z**i* > 0} *the support of a strategy z*. *Then*,\n\n$${z}^{{\\rm{{\\rm T}}}}Ay={x}^{{\\rm{{\\rm T}}}}Ay\\,for\\,all\\,z\\,such\\,that\\,{I}\\_{z}\\subset {I}\\_{x},\\,\\,and$$\n (5)\n $${x}^{{\\rm{{\\rm T}}}}Bz={x}^{{\\rm{{\\rm T}}}}By\\,for\\,all\\,z\\,such\\,that\\,{I}\\_{z}\\subset {I}\\_{y}\\mathrm{.}$$\n (6)\n \n### *Proof*.\n\n\nThis result is widely known. We provide it as it is a basis of our theoretical results and for the sake of completeness.\n\n\nIf *x* and *y* constitute a Nash equilibrium then, by definition *z*Τ*Ay* ≤ *x*Τ*Ay*,∀*z*. Let us suppose that there exists a *z* with *I**z* ⊂ *I**x* such that *z*Τ*Ay* < *x*Τ*Ay*. Then there is a *i* ∈ *I**z* ⊂ *I**x* satisfying (*Ay*)*i* < *x*Τ*Ay*, and we get \\({x}^{{\\rm{{\\rm T}}}}Ay=\\sum \\_{i\\in {I}\\_{x}}{x}\\_{i}{(Ay)}\\_{i} < \\sum \\_{i\\in {I}\\_{x}}{x}\\_{i}{x}^{{\\rm{{\\rm T}}}}Ay={x}^{{\\rm{{\\rm T}}}}Ay\\), which is a contradiction, proving the first claim. The claim for *B* follows analogously.◽\n\n\n### **Property 2**.\n\n\n*Let the strategy x be a Nash equilibrium of a single population game A*. *Then*,\n\n$${z}^{{\\rm T}}Ax={x}^{{\\rm T}}Ax\\,for\\,all\\,z\\,such\\,that\\,{I}\\_{z}\\subset {I}\\_{x}\\mathrm{.}$$\n (7)\n \n### *Proof*.\n\n\nThe proof is similar to the proof of Property 1.◽\n\n\nThis property will be useful in the steps of the proofs that follow. We now present our first main result: a correspondence between the Nash equilibria of full support in the asymmetric game with those of full support in the counterpart games. Theorem 2 subsumes this result and we introduce this simpler version first for the sake of readability.\n\n\n### \n***Theorem 1.***\n\n\n*If strategies x and y constitute a Nash equilibrium of an asymmetric normal form game G =* (*S**1*, *S**2*, *A*, *B*), *with both x**i* *> 0 and y**j* *> 0 for all i*, *j* (*full support*), *and |S**1**| = |S**2**| = n*, *then it holds that x is a Nash equilibrium of the single population game B**T* *and y is a Nash equilibrium of the single population game A*. *The reverse is also true*.\n\n\n### *Proof* .\n\n\nThis result follows naturally from Property 1 and is implied by Theorem 2.\n\n\nWe start by assuming that *x* and *y* constitute a full support Nash equilibrium of the asymmetric game (*A*, *B*). 
By Property 1 and since *x* and *y* have full support, we know that:\n\n$$Ay={\\mathrm{(1,}\\mathrm{...,}\\mathrm{1)}}^{T}\\,\\mathop{max}\\limits\\_{i\\in \\mathrm{\\{1,...,}n\\}}\\,{(Ay)}\\_{i}\\quad \\mathrm{and},\\quad {x}^{T}B=\\mathrm{(1,}\\,\\mathrm{...,}\\,\\mathrm{1)}\\,\\mathop{max}\\limits\\_{i\\in \\mathrm{\\{1,...,}n\\}}\\,{({x}^{T}B)}\\_{i}$$\nFrom this we also know that *y**T**Ay* = (*Ay*)*i* (since the (*Ay*)*i* are equal for all *I*’s in the vector *Ay*, so multiplying *Ay* with *y**T* will yield the same number \\({{\\rm{\\max }}}\\_{i}\\,{(Ay)}\\_{i}\\)), and similarly (*x**T**B*)*i* = *x**T**Bx* (and thus (*B**T**x*)*i* = *x**T**B**T**x*), implying that:\n\n$$\\forall y^{\\prime} ,{y}^{T}Ay={y^{\\prime} }^{T}Ay\\quad \\mathrm{and},\\quad \\forall x^{\\prime} ,{x}^{T}{B}^{T}x={x^{\\prime} }^{T}{B}^{T}x$$\nThis concludes the proof.◽\n\n\nFor the first counterpart game this means that the players will use the *y* part of the Nash equilibrium of player 2 of the original asymmetric game, in the symmetric counterpart game determined by payoff table *A*. And similarly, for the second counterpart game this means that players will play according to the *x* part of the Nash equilibrium of player 1 of the original asymmetric game, in the symmetric game determined by payoff table *B*. As such both players consider a symmetric version of the asymmetric game, for which this *y* component and *x* component constitute a Nash equilibrium in the two new respective symmetric games.\n\nIn essence, these two symmetric counterpart games can be considered as a decomposition of the original asymmetric game, which gives us a means to illustrate in a smaller strategy space where the mixed and pure equilibria are located.\n\nA direct consequence of Theorem 1 is the following corollary that gives insights on the geometrical structure of Nash equilibrium,\n\n\n### **Corollary 1**.\n\n\n*Combinations of Nash equilibria of full support of the games corresponding to the symmetrical counterparts of the original asymmetric game also form Nash equilibria of full support in this asymmetric game*.\n\n\n### *Proof*.\n\n\nThis is a direct consequence of Theorem 1.◽\n\n\nThe next theorem explores the case where the equilibrium is not of full support. We prove that the theorem stands if the strategies of both players have the same support. Indeed, the first theorem requires that both players play all actions with a positive probability, here we will only require that they play the actions with the same index with a positive probability. We say that *x* and *y* have the same support if the set of played actions *I**x* = {*i* | *x**i* > 0} and *I**y* = {*i* | *y**i* > 0} are equal.\n\n\n### **Theorem 2**.\n\n\n*Strategies x and y constitute a Nash equilibrium of an asymmetric game G =* (*S**1*, *S**2*, *A*, *B*) *with the same support* (*i*.*e*. *I**x* *= I**y*) *if and only if x is a Nash equilibrium of the single population game B**T*, *y is a Nash equilibrium of the single population game A and I**x* *= I**y*.\n\n\n### *Proof*.\n\n\nWe start by assuming that *x* and *y* constitute a Nash equilibrium of same support (*I**x* = *I**y*) of the asymmetric game (*A*, *B*). 
By Property 1, and since *x* and *y* have the same support, we know that:

$${z}^{T}Ay={x}^{T}Ay\quad \text{for all } z \text{ such that } {I}_{z}\subset {I}_{x},\ \text{and}$$
 (8)
$${x}^{T}Bz^{\prime} ={x}^{T}By\quad \text{for all } z^{\prime} \text{ such that } {I}_{z^{\prime}}\subset {I}_{y}.$$
 (9)
This implies that *y*Τ*Ay* = *x*Τ*Ay* and *x*Τ*Bx* = *x*Τ*By* (by setting *z* = *y* and *z*′ = *x*). Then, from the Nash equilibrium condition we can write:

$$\forall x^{\prime} \in \Delta {S}_{1},\ {x}^{T}Ay\ge {x^{\prime}}^{T}Ay\quad \text{and}\quad \forall y^{\prime} \in \Delta {S}_{2},\ {x}^{T}By\ge {x}^{T}By^{\prime}.$$
Since the game is square we may take *x*′ = *y*′ in the first inequality and *y*′ = *x*′ in the second; substituting the equalities above then gives

$$\forall y^{\prime} \in \Delta {S}_{2},\ {y}^{T}Ay\ge {y^{\prime}}^{T}Ay\quad \text{and}\quad \forall x^{\prime} \in \Delta {S}_{1},\ {x}^{T}Bx\ge {x}^{T}Bx^{\prime},$$
or, equivalently,

$$\forall y^{\prime} \in \Delta {S}_{2},\ {y}^{T}Ay\ge {y^{\prime}}^{T}Ay\quad \text{and}\quad \forall x^{\prime} \in \Delta {S}_{1},\ {x}^{T}{B}^{T}x\ge {x^{\prime}}^{T}{B}^{T}x,$$
which implies that *y* is a Nash equilibrium of *A* and *x* is a Nash equilibrium of *B*Τ.

The proof of the other direction follows similar mechanics and uses Property 2. Let us now assume that *y* is a Nash equilibrium of *A* and *x* is a Nash equilibrium of *B*Τ with *I**x* = *I**y*. Then, from Property 2 we have:

$${z}^{T}Ay={y}^{T}Ay\quad \text{for all } z \text{ such that } {I}_{z}\subset {I}_{y},\ \text{and}$$
 (10)
$${z^{\prime}}^{T}{B}^{T}x={x}^{T}{B}^{T}x\quad \text{for all } z^{\prime} \text{ such that } {I}_{z^{\prime}}\subset {I}_{x}.$$
 (11)
In particular we get *y*Τ*Ay* = *x*Τ*Ay* and *x*Τ*Bx* = *x*Τ*By* (by setting *z* = *x* and *z*′ = *y*).
From the Nash equilibrium condition of the single population games we can write:\n\n$$\\forall y^{\\prime} \\in \\Delta {S}\\_{2},\\,{y}^{{\\rm{T}}}Ay\\ge {y^{\\prime} }^{T}Ay\\,{\\rm{and}}\\,\\forall x^{\\prime} \\,\\in \\,\\Delta {S}\\_{1},\\,{x}^{{\\rm{T}}}{B}^{{\\rm{T}}}x\\ge {x^{\\prime} }^{{\\rm{T}}}{B}^{{\\rm{T}}}x$$$$\\forall y^{\\prime} \\in \\Delta {S}\\_{2},\\,{y}^{{\\rm{T}}}Ay\\ge {y^{\\prime} }^{T}Ay\\,{\\rm{and}}\\,\\forall x^{\\prime} \\in \\Delta {S}\\_{1},{x}^{{\\rm{T}}}Bx\\ge {x}^{T}Bx^{\\prime} $$$$\\forall x^{\\prime} \\in \\Delta {S}\\_{1},{x}^{T}Ay\\ge {x^{\\prime} }^{T}Ay\\,{\\rm{and}}\\,\\forall y^{\\prime} \\in \\Delta {S}\\_{2},\\,{x}^{T}By\\ge {x}^{T}By^{\\prime} $$which concludes the proof.◽\n\n\n### ***Corollary 2***.\n\n\n*Strategies x and y constitute a pure* (*strict*) *Nash equilibrium of an asymmetric normal form game G =* (*S**1*, *S**2*, *A*, *B*), *with support on the strategy with the same index in their respective strategy sets S**1* *and S**2*, *if and only if*, *y and x are also pure* (*strict*) *Nash equilibria of the counterpart games defined by A*,\n\n$$\\frac{d{y}\\_{i}}{dt}={y}\\_{i}((Ay{)}\\_{i}-{y}^{T}Ay)=0$$\n (12)\n *and B*,\n\n$$\\frac{d{x}\\_{i}}{dt}={x}\\_{i}(({x}^{T}B{)}\\_{i}-{x}^{T}Bx)=0$$\n (13)\n \n### *Proof*.\n\n\nThis is a direct consequence of Theorem 2.◽\n\n\nThe theorems can only be used for equilibria in the counterpart games with matching supports (*I**x* = *I**y*) from both players. One can work around this condition though by simply permuting the actions of one player in matrix *A* and *B* to study all configurations of supports of the same cardinality. To be precise, we need to analyze all the counterpart games defined by *A*Σ = *A*Σ and \\({B}\\_{\\Sigma }^{T}=(B{\\rm{\\Sigma }}{)}^{T}\\) for all permutation matrices Σ. This technique is sufficient to study non-degenerate games, as in a non-degenerate game all Nash equilibria have a support of same size (in a non-degenerate game if (*x*, *y*) is a Nash equilibrium then |*I**x*| = |*I**y*|[34](/articles/s41598-018-19194-4#ref-CR34 \"von Stengel, B. Computing equilibria for two-person games. In Aumann, R. & Hart, S. (eds.) Handbook of Game Theory with Economic Applications, 1723–1759 (Elsevier, 2002).\")).\n\n### Stability Analysis\n\nWe can now examine the stability of the pure Nash equilibria discussed in the previously derived theorems.\n\n\n### **Corollary 3**.\n\n\n*Strategy y is a strict Nash equilibrium of the first counterpart game defined by A and strategy x* is a strict Nash equilibrium of the second counterpart game defined by B, if and only if, (*x*, *y*) is a locally asymptotically stable equilibrium and a two-species ESS of the asymmetric normal form game *G* = (*S*1, *S*2, *A*, *B*) with support on the strategy with the same index in their respective strategy sets *S*1 and *S*2.\n\n\n### *Proof*.\n\n\nThis a direct consequence of Corollary 2. More specifically, from Corollary 2 we know that (*x*, *y*) is a strict Nash equilibrium of *G*. It has been shown that (*x*, *y*) is a strict Nash equilibrium of *G* iff it is a two-species ESS[19](/articles/s41598-018-19194-4#ref-CR19 \"Cressman, R. & Tao, Y. The replicator equation and other game dynamics. Proceedings of the National Academy of Sciences USA 111, 10810–10817 (2014).\"),[20](/articles/s41598-018-19194-4#ref-CR20 \"Selten, R. A note on evolutionary stable strategies in asymmetric animal conflicts. 
Journal of Theoretical Biology 84, 93–101 (1980).\"),[27](/articles/s41598-018-19194-4#ref-CR27 \"Hofbauer, J. & Sigmund, K. Evolutionary Games and Population Dynamics (Cambridge University Press, 1998).\").◽\n\n\nExperimental illustration\n-------------------------\n\nWe will now illustrate how the theoretical links between asymmetric games and their counterpart symmetric replicator dynamics facilitate analysis of asymmetric multiagent games, and provide a convenient tool to get insight into their equilibrium landscape. We do this for several examples. The first example concerns the Battle of the Sexes game to illustrate the intuition behind the results. The second example extends the Battle of the Sexes game with one strategy for one of the players, illustrating the permutation argument of the theorems and how to apply the results in case of a non-square game. The third example is a bimatrix game generated in the context of a multiagent learning algorithm called PSRO (Policy Space Response Oracles[9](/articles/s41598-018-19194-4#ref-CR9 \"Lanctot, M. et al. A unified game-theoretic approach to multiagent reinforcement learning. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, 4193–4206 (2017).\")) and concerns Leduc Poker. This algorithm produces normal-form “empirical games” which each correspond to an extensive-form game with a reduced strategy space, using incremental best response learning. Finally, the last asymmetric game illustrates the theorems for a single mixed equilibrium of full support, while its counterpart games have many more equilibria.\n\nA fundamental complexity arises when using the evolutionary dynamics of a 2-player asymmetric game to analyse its equilibrium structure, as the dynamics for the two players is intrinsically coupled and high-dimensional. While one could fix a player’s strategy and consider the induced dynamics for the other player in its respective strategy simplex, a static trajectory plot of this would not faithfully represent the complexity of the full 2-player dynamics. To gain a somewhat more complete intuitive picture, one can represent this dynamics as a movie, showing the change in induced dynamics for one player, as one varies the (fixed) strategy for the other (we will illustrate this in the PSRO-produced game on Leduc Poker).\n\nThe theorems introduced in the previous section help to overcome this problem, and allow to analyse the evolutionary dynamics of the symmetric counterpart games instead of the asymmetric game itself, revealing the landscape of Nash equilibria, which seriously simplifies the analysis.\n\n### Battle of the Sexes\n\nSymmetry assumes that strategy sets and corresponding payoffs are the same for all players in the interaction. An example of an asymmetric game is the Battle of the Sexes (BoS) game illustrated in Table [2](/articles/s41598-018-19194-4#Tab2). In this game both players do have the same strategy set, i.e., go to the *opera* or go to the *movies*, however, the corresponding payoffs for each are different, expressing the difference in preferences that both players have over their choices.\n\nThe Battle of the Sexes has two pure Nash equilibria, which are ESS as well (located at coordinates (0, 0) and (1, 1)), and one unstable completely mixed Nash equilibrium in which the players play respectively *x* = \\((\\frac{3}{5},\\frac{2}{5})\\) and *y* = \\((\\frac{2}{5},\\frac{3}{5})\\). 
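These equilibria, and their relation to the two single-population counterpart games defined by *A* and by the transpose of *B*, can be verified directly. The sketch below assumes the standard Battle of the Sexes payoffs *A* = [[3, 0], [0, 2]] and *B* = [[2, 0], [0, 3]]; these numbers are an assumption of ours, chosen because they reproduce the equilibrium values just quoted (Table 2 may display them differently). It checks that (*x*, *y*) is a Nash equilibrium of the bimatrix game and that *y* and *x* are Nash equilibria of the respective counterpart games.

```python
import numpy as np

def is_single_population_nash(x, A, tol=1e-9):
    # x is a Nash equilibrium of the single-population game A iff no pure strategy beats x^T A x
    return bool(np.all(A @ x <= x @ A @ x + tol))

def is_bimatrix_nash(x, y, A, B, tol=1e-9):
    # (x, y) is Nash in (A, B) iff no pure deviation improves either player's payoff
    return bool(np.all(A @ y <= x @ A @ y + tol) and
                np.all(x @ B <= x @ B @ y + tol))

# Assumed Battle of the Sexes payoffs (consistent with the equilibria quoted in the text)
A = np.array([[3., 0.],
              [0., 2.]])      # player 1 (row player)
B = np.array([[2., 0.],
              [0., 3.]])      # player 2 (column player)

x = np.array([3/5, 2/5])      # player 1's part of the mixed equilibrium
y = np.array([2/5, 3/5])      # player 2's part of the mixed equilibrium

print(is_bimatrix_nash(x, y, A, B))        # True: mixed NE of the asymmetric game
print(is_single_population_nash(y, A))     # True: y is a NE of the first counterpart game (A)
print(is_single_population_nash(x, B.T))   # True: x is a NE of the second counterpart game (B^T)
```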
Figure [6](/articles/s41598-018-19194-4#Fig6) illustrates the two-player evolutionary dynamics using the replicator equations, in which the x-axis corresponds to the probability with which player 1 plays *O* (Opera), and the y-axis corresponds to the probability with which the 2nd player plays *O* (Opera). The blue arrows show the vector field and the black lines are the corresponding trajectories. Note that it is still possible here to capture all of the dynamics in a static plot for the case of 2-player 2-action games, but this is generally not possible in games with more than two actions.

**Figure 6**[![figure 6](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig6_HTML.jpg)](/articles/s41598-018-19194-4/figures/6)Directional field plot of the Battle of the Sexes game.

[Full size image](/articles/s41598-018-19194-4/figures/6)We now use this game to illustrate Theorem 1. If we apply Theorem 1 we know that the first and second counterpart symmetric games can be described by the payoff tables shown in Table [4](/articles/s41598-018-19194-4#Tab4). The first counterpart game has \(((\frac{2}{5},\frac{3}{5}),(\frac{2}{5},\frac{3}{5}))\) as a mixed Nash equilibrium, and the second counterpart game has \(((\frac{3}{5},\frac{2}{5}),(\frac{3}{5},\frac{2}{5}))\) as a mixed Nash equilibrium.

**Table 4 Counterpart matrix game 1 and 2 for the Battle of the Sexes game.**[Full size table](/articles/s41598-018-19194-4/tables/4)In Fig. [7(b) and (c)](/articles/s41598-018-19194-4#Fig7) we show the evolutionary dynamics of both counterpart games, from which the respective equilibria can be observed, as predicted by Theorem 1.

**Figure 7**[![figure 7](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig7_HTML.jpg)](/articles/s41598-018-19194-4/figures/7)This plot shows a visual representation of how the mixed Nash equilibrium is decomposed into Nash equilibria in both counterpart games. (**a**) shows the directional field plot of the Battle of the Sexes game. (**b**) illustrates how the y-component of the asymmetric Nash equilibrium becomes a Nash equilibrium in the first counterpart game, and (**c**) shows how the x-component of the asymmetric Nash equilibrium becomes a Nash equilibrium in the second counterpart game.

[Full size image](/articles/s41598-018-19194-4/figures/7)Additionally, we also know that the reverse holds, i.e., if we were given the symmetric counterpart games, we would know that \(((\frac{3}{5},\frac{2}{5}),(\frac{2}{5},\frac{3}{5}))\) would also be a mixed Nash equilibrium of the original asymmetric BoS. In this case we can combine the mixed Nash equilibria of both counterpart games into the mixed Nash equilibrium of the original asymmetric game, as prescribed by Theorem 1.
Specifically, as *y* = \\((\\frac{2}{5},\\frac{3}{5})\\) is part of the Nash equilibrium in the first counterpart game and *x* = \\((\\frac{3}{5},\\frac{2}{5})\\) in the second counterpart game, we can combine them into (*x* = \\((\\frac{3}{5},\\frac{2}{5})\\), *y* = \\((\\frac{2}{5},\\frac{3}{5})\\), which is a mixed Nash equilibrium of full support of the asymmetric Battle of the Sexes game.\n\nIf we now apply Theorem 2 to the Battle of the Sexes game, then we find that pure strategy Nash equilibria *x* = (1, 0) (and *y* = (1, 0) for the second counterpart) and *x* = (0, 1) (and *y* = (0, 1) for the second counterpart), which are both ESS, are also Nash equilibria in the counterpart games shown in Table [4](/articles/s41598-018-19194-4#Tab4). Also here the reverse holds, i.e., if we know the counterpart games, and we observe that *x* = (1, 0) and *x* = (0, 1) (*y* = (1, 0) and *y* = (0, 1) for the other counterpart of the game) are Nash in both games, we know that *x* = *y* = (1, 0) and *x* = *y* = (0, 1) are also Nash in the original asymmetric game. This can also be observed in Fig. [7(a),(b) and (c)](/articles/s41598-018-19194-4#Fig7). Specifically, the pure Nash equilibria are situated at coordinates (0, 0) and (1, 1) in Fig. [7(b) and (c)](/articles/s41598-018-19194-4#Fig7). Furthermore, it is important to understand that the counterpart dynamics are visualised only on the diagonal from coordinates (0, 0) to (1, 1), as that is where both players play with the same strategy distribution over their respective actions.\n\n### Extended Battle of the Sexes game\n\nIn order to illustrate the theorems in a game that is non-square, including permutation of strategies, we extend the Battle of the Sexes game with a third strategy. Specifically, we give the second player a third strategy *R* in which she can choose to listen to a concert on the radio instead of going to the opera or movies with her partner. This game is illustrated in Table [5](/articles/s41598-018-19194-4#Tab5).\n\n**Table 5 Extended Battle of the Sexes game.**[Full size table](/articles/s41598-018-19194-4/tables/5)If we would like to carry out a similar evolutionary analysis as before we need two populations for the asymmetric replicator equations. Note that in this case the strategy sets of both players are different. Using the asymmetric replicator dynamics to plot the evolutionary dynamics quickly becomes complicated since the full dynamical picture is high-dimensional and not faithfully represented by projections to the respective player’s individual strategy simplices. In other words, a static plot of the dynamics for one player does not immediately allow conclusions about equilibria, as it only describes a player’s strategy evolution assuming a fixed (rather than dynamically evolving) strategy of the other player. Again we can apply the counterpart RD theorems here to remedy this problem and consequently analyse the equilibrium structure in the symmetric counterpart games instead, yielding insight into the equilibrium landscape of the asymmetric game.\n\nIn Tables [6](/articles/s41598-018-19194-4#Tab6) and [7](/articles/s41598-018-19194-4#Tab7) we show the counterpart games A and B. 
Note that we introduce a *dummy* action *D* for the first player, in order to make sure that both players have the same number of actions in their strategy set (a requirement to apply the theorems) by just adding −1 for both players playing this strategy, which makes *D* completely dominated and thus redundant.\n\n**Table 6 Payoff matrix for the 1st counterpart game of the Extended BoS game. Strategy *D* is added to make the matrix completely square.**[Full size table](/articles/s41598-018-19194-4/tables/6)**Table 7 Payoff matrix for the 2nd counterpart game of the Extended BoS game. Strategy *D* is added to make the matrix completely square.**[Full size table](/articles/s41598-018-19194-4/tables/7)The three Nash equilibria of interest of this asymmetric game are the following, {(*x* = (0.6, 0.4, 0),*y* = (0.4, 0, 0.6)),(*x* = (0, 1, 0),*y* = (0, 0, 1)),(*x* = (1, 0, 0),*y* = (1, 0, 0)))} (we use the online banach solver to check that the Nash equilibria we find are correct[31](/articles/s41598-018-19194-4#ref-CR31 \"Avis, D., Rosenberg, G., Savani, R. & von Stengel, B. Enumeration of nash equilibria for two-player games. Economic Theory 42, 9–37 (2010).\")).\n\nWe now look for the *y* and *x* parts of these equilibria in the counterpart games. In Fig. [8](/articles/s41598-018-19194-4#Fig8) we show the evolutionary dynamics of the first counterpart game and in Fig. [9](/articles/s41598-018-19194-4#Fig9) the evolutionary dynamics of the second counterpart game. In the first counterpart we only need to consider the 1-face formed by strategies *O* and *M* as the third strategy is our dummy strategy. In this game there are two Nash equilibria, i.e., (1, 0, 0) (stable, yellow oval) and (0, 1, 0) (unstable, orange oval), so either playing *O* or *M*. The second counterpart game also has two Nash equilibria, i.e., (1, 0, 0) and (0, 0, 1) playing either *O* or *M* as well. Note there are also two rest points at the faces formed by *O* and *R* and *O* and *M*, which are not Nash (see Fig. [5](/articles/s41598-018-19194-4#Fig5) for an explanation). There is no mixed equilibrium of full support, so we cannot apply Theorem 1 here. If we apply Theorem 2 we know that ((1, 0, 0), (1, 0, 0)) must also be a pure Nash equilibrium in the original asymmetric game, and we can remove the dummy strategy for player 1. At this stage we are left with equilibria (*x* = (0.6, 0.4, 0),*y* = (0.4, 0, 0.6)) and (*x* = (0, 1, 0),*y* = (0, 0, 1)) in the asymmetric game for which we did not find a symmetric counterpart at this stage. Now the permutation of the counterpart games, explained earlier in the findings section, comes into play. Recall that in order to study all configurations of supports of the same cardinal for both players one needs to simply permute the actions of one player in matrix *A* and *B*. 
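Both the padding with a dominated dummy action and the permutation of one player's actions are purely mechanical operations. The sketch below shows one way they could be implemented (assuming numpy; the 2 × 3 bimatrix used for the demonstration is a placeholder, not the payoffs of Table 5).

```python
import numpy as np

def pad_to_square(A, B, fill=-1.0):
    """Pad the bimatrix (A, B) with dummy actions whose payoff is `fill` for both players.

    Choose `fill` below the existing payoffs so that the dummy action is strictly dominated."""
    n = max(A.shape)
    A_sq = np.full((n, n), fill); A_sq[:A.shape[0], :A.shape[1]] = A
    B_sq = np.full((n, n), fill); B_sq[:B.shape[0], :B.shape[1]] = B
    return A_sq, B_sq

def counterparts_under_permutation(A, B, perm):
    """Permute player 2's actions by `perm` and return the counterpart games A_sigma and (B_sigma)^T."""
    A_sigma = A[:, perm]
    B_sigma = B[:, perm]
    return A_sigma, B_sigma.T

# Placeholder 2x3 bimatrix (player 1 has one action fewer than player 2)
A = np.array([[3., 0., 1.],
              [0., 2., 1.]])
B = np.array([[2., 0., 0.],
              [0., 3., 2.]])

A_sq, B_sq = pad_to_square(A, B)                                   # 3x3 games with a dominated dummy row
cp1, cp2 = counterparts_under_permutation(A_sq, B_sq, [0, 2, 1])   # swap player 2's 2nd and 3rd actions
print(cp1)
print(cp2)
```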
Let’s have a look at such a permutation, specifically, let’s permute the 2nd and 3rd action for player 2, resulting in Tables [8](/articles/s41598-018-19194-4#Tab8) and [9](/articles/s41598-018-19194-4#Tab9).\n\n**Figure 8**[![figure 8](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig8_HTML.jpg)](/articles/s41598-018-19194-4/figures/8)Directional field plot Σ3 of the first counterpart game of the extended Battle of the Sexes game.\n\n[Full size image](/articles/s41598-018-19194-4/figures/8)**Figure 9**[![figure 9](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig9_HTML.jpg)](/articles/s41598-018-19194-4/figures/9)Directional field plot Σ3 of the second counterpart game of the extended Battle of the Sexes game.\n\n[Full size image](/articles/s41598-018-19194-4/figures/9)**Table 8 Permuted payoff matrix for the 1st counterpart game of the Extended BoS game.**[Full size table](/articles/s41598-018-19194-4/tables/8)**Table 9 Permuted payoff matrix for the 2nd counterpart game of the Extended BoS game.**[Full size table](/articles/s41598-018-19194-4/tables/9)Again we can analyse these counterpart games. Specifically, we find Nash equilibria (1, 0, 0), (0.4, 0.6, 0), and (0, 1, 0) for permuted counterpart game 1 (Table [8](/articles/s41598-018-19194-4#Tab8)), and Nash equilibria (0, 0, 1), (0.6, 0.4, 0), (0, 1, 0), and (1,0,0) for permuted counterpart game 2 (Table [9](/articles/s41598-018-19194-4#Tab9)), which are illustrated in Figs [10](/articles/s41598-018-19194-4#Fig10) and [11](/articles/s41598-018-19194-4#Fig11). From these identified Nash equilibria in both counterpart games we can combine the remaining Nash equilibria for the asymmetric game. Specifically, by applying Theorem 2 we find (*x* = (0.6, 0.4, 0),*y* = (0.4, 0.6, 0)), which translates into (*x* = (0.6, 0.4, 0),*y* = (0.4, 0, 0.6)) for the asymmetric game as we permuted actions 2 and 3 for the second player and we need to swap these again. Additionally, we also find (*x* = (0, 1, 0),*y* = (0, 1, 0)), which translates into equilibrium (*x* = (0, 1, 0),*y* = (0, 0, 1)) for the asymmetric game as we permuted action 2 and 3 for the second player. Now we have found all Nash equilibria of the original asymmetric game.\n\n**Figure 10**[![figure 10](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig10_HTML.jpg)](/articles/s41598-018-19194-4/figures/10)Directional field plot Σ3 of the first counterpart game of the permuted extended Battle of the Sexes game.\n\n[Full size image](/articles/s41598-018-19194-4/figures/10)**Figure 11**[![figure 11](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig11_HTML.jpg)](/articles/s41598-018-19194-4/figures/11)Directional field plot Σ3 of the second counterpart game of the permuted extended Battle of the Sexes game.\n\n[Full size image](/articles/s41598-018-19194-4/figures/11)So, also in this case, i.e., when the game is not square and strategies need to be permuted, the theorems are still applicable and allow for analysis of the original asymmetric game.\n\n### Poker generated asymmetric games\n\nPolicy Space Response Oracles (PSRO) is a multiagent reinforcement learning process that reduces the strategy space of large extensive-form games via iterative best response computation. 
PSRO can be seen as a generalized form of fictitious play that produces approximate best responses, with arbitrary distributions over generated responses computed by meta-strategy solvers. PSRO was applied to a commonly-used benchmark problem in artificial intelligence research known as Leduc poker[35](/articles/s41598-018-19194-4#ref-CR35). Leduc poker has a deck of 6 cards (jack, queen, king in two suits). Each player receives an initial private card and can bet a fixed amount of 2 chips in the first round and 4 chips in the second round (with a maximum of two raises in each round). Before the second round starts, a public card is revealed.\n\nIn Table [10](/articles/s41598-018-19194-4#Tab10) we present such an asymmetric 3 × 3 2-player PSRO generated game, playing Leduc poker. In the game illustrated here, each player has three strategies that, for ease of exposition, we call {*A*, *B*, *C*} for player 1, and {*D*, *E*, *F*} for player 2. Each of these strategies represents a larger strategy in the full extensive-form game of Leduc poker, specifically an approximate best response to a distribution over previous opponent strategies. The game produced here is thus truly asymmetric: the strategy spaces in the original game are inherently asymmetric since player 1 always starts each round, the strategy spaces are defined by different (mostly unique) betting sequences, and even under perfect equilibrium play there is a slight advantage to player 2[9](/articles/s41598-018-19194-4#ref-CR9). So, both players have significantly different strategy sets. In Tables [11](/articles/s41598-018-19194-4#Tab11) and [12](/articles/s41598-018-19194-4#Tab12) we show the two symmetric counterpart games of the empirical game produced by PSRO on Leduc poker.\n\n**Table 10 Payoff matrix of an asymmetric empirical game produced by PSRO applied to Leduc poker.**[Full size table](/articles/s41598-018-19194-4/tables/10)\n\n**Table 11 First counterpart game of the Leduc poker empirical game.**[Full size table](/articles/s41598-018-19194-4/tables/11)\n\n**Table 12 Second counterpart game of the Leduc poker empirical game.**[Full size table](/articles/s41598-018-19194-4/tables/12)\n\nAgain we can now analyse the landscape of equilibria of this game using the introduced theorems. Since the Leduc poker empirical game is asymmetric we need two populations for the asymmetric replicator equations. As mentioned before, analysing and plotting the evolutionary asymmetric replicator dynamics quickly becomes very tedious, as we deal with two simplices, one for each player. More precisely, if one player adjusts its strategy within its simplex, this immediately causes the trajectory in the other player’s simplex to change, and vice versa. Consequently, it is no longer straightforward to analyse the dynamics and equilibrium landscape for both players, as any trajectory in one simplex causes the other simplex to change. A movie illustrates what is meant: we show how the dynamics of player 2 changes as a function of player 1’s strategy. 
We overlay the simplex of the second player with the simplex of the first player; the yellow dots indicate what the strategy of the first player is. The movie then shows how the dynamics of the second player changes when the yellow dot changes.\n\nTo facilitate the analysis of this game we can apply the counterpart RD theorems to remedy this problem, and consequently analyse the game through its far simpler symmetric counterpart games, which shed light on the equilibrium landscape of the Leduc poker empirical game.\n\nIn Figs [12](/articles/s41598-018-19194-4#Fig12) and [13](/articles/s41598-018-19194-4#Fig13) we show the evolutionary dynamics of the counterpart games. As can be observed in Fig. [12](/articles/s41598-018-19194-4#Fig12), the first counterpart game has only one equilibrium, i.e., a mixed Nash equilibrium at the face formed by *A* and *C*, which absorbs the entire strategy space. Looking at Fig. [13](/articles/s41598-018-19194-4#Fig13), we see that the situation is a bit more complex in the second counterpart game; here we observe three Nash equilibria: one pure at strategy *D*, one pure at strategy *F*, and one unstable mixed equilibrium at the 1-face formed by strategies *D* and *F*. Note that there is also a rest point at the face formed by strategies *D* and *E*, which is not Nash. Given that there is no mixed equilibrium with full support in both games, we cannot apply Theorem 1. Using Theorem 2 we know that we only retain the two mixed equilibria, i.e. (0.32, 0, 0.68) (CP1) and (0.83, 0, 0.17) (CP2), forming the mixed Nash equilibrium (*x* = (0.83, 0, 0.17), *y* = (0.32, 0, 0.68)) of the asymmetric Leduc poker empirical game. The other equilibria in the second counterpart game can be discarded as candidates for Nash equilibria in the Leduc poker empirical game since they also do not appear for player 1 when we permute the strategies for player 1 (not shown here).\n\n**Figure 12** [![figure 12](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig12_HTML.jpg)](/articles/s41598-018-19194-4/figures/12) Directional field plot Σ3 of the first counterpart game of the Leduc poker empirical game under study.\n\n**Figure 13** [![figure 13](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig13_HTML.jpg)](/articles/s41598-018-19194-4/figures/13) Directional field plot Σ3 of the second counterpart game of the Leduc poker empirical game under study.\n\n### Mixed equilibrium of full support\n\nAs a final example to illustrate the introduced theory, we examine an asymmetric game that has one completely mixed equilibrium and several equilibria in its counterpart games. 
The bimatrix game (*A*, *B*) is illustrated in Table [13](/articles/s41598-018-19194-4#Tab13) and its symmetric counterparts are shown in Tables [14](/articles/s41598-018-19194-4#Tab14) and [15](/articles/s41598-018-19194-4#Tab15).\n\n**Table 13 Payoff matrix of an asymmetric game with mixed equilibrium of full support.**[Full size table](/articles/s41598-018-19194-4/tables/13)\n\n**Table 14 First counterpart game of the asymmetric game.**[Full size table](/articles/s41598-018-19194-4/tables/14)\n\n**Table 15 Second counterpart game of the asymmetric game.**[Full size table](/articles/s41598-018-19194-4/tables/15)\n\nThe asymmetric game has a unique completely mixed Nash equilibrium with different mixtures for the two players, i.e., \\((x=(\\frac{1}{3},\\frac{1}{3},\\frac{1}{3}),\\,y=(\\frac{2}{7},\\frac{3}{7},\\frac{2}{7}))\\).\n\nThe two symmetric counterpart games each have seven equilibria. Counterpart game 1 (Table [14](/articles/s41598-018-19194-4#Tab14)) has the following set of Nash equilibria: {(*a*) \\(({p}\_{1}=(\\frac{2}{7},\\frac{3}{7},\\frac{2}{7}),\\,{p}\_{2}=(\\frac{2}{7},\\frac{3}{7},\\frac{2}{7}))\\), \\(({p}\_{1}=(\\frac{1}{2},\\frac{1}{2},0),\\,{p}\_{2}=(0,\\frac{1}{2},\\frac{1}{2}))\\), \\(({p}\_{1}=(1,0,0),\\,{p}\_{2}=(0,0,1))\\), (*b*) \\(({p}\_{1}=(0,1,0),\\,{p}\_{2}=(0,1,0))\\), (*c*) \\(({p}\_{1}=(\\frac{1}{2},0,\\frac{1}{2}),\\,{p}\_{2}=(\\frac{1}{2},0,\\frac{1}{2}))\\), \\(({p}\_{1}=(0,0,1),\\,{p}\_{2}=(1,0,0))\\), \\(({p}\_{1}=(0,\\frac{1}{2},\\frac{1}{2}),\\,{p}\_{2}=(\\frac{1}{2},\\frac{1}{2},0))\\)}. Note that there are also two rest points, which are not Nash, at the faces formed by *A* and *B* and by *B* and *C*. From these seven equilibria only (a), (b) and (c) are of interest, since these are symmetric equilibria in which both players play the same strategy (or support). Counterpart game 2 also has seven equilibria, i.e., {(*d*) \\(({p}\_{1}=(\\frac{1}{3},\\frac{1}{3},\\frac{1}{3}),\\,{p}\_{2}=(\\frac{1}{3},\\frac{1}{3},\\frac{1}{3}))\\), \\(({p}\_{1}=(\\frac{1}{2},0,\\frac{1}{2}),\\,{p}\_{2}=(0,\\frac{1}{2},\\frac{1}{2}))\\), (*e*) \\(({p}\_{1}=(0,0,1),\\,{p}\_{2}=(0,0,1))\\), \\(({p}\_{1}=(0,\\frac{1}{2},\\frac{1}{2}),\\,{p}\_{2}=(\\frac{1}{2},0,\\frac{1}{2}))\\), (*f*) \\(({p}\_{1}=(\\frac{1}{2},\\frac{1}{2},0),\\,{p}\_{2}=(\\frac{1}{2},\\frac{1}{2},0))\\), \\(({p}\_{1}=(1,0,0),\\,{p}\_{2}=(0,1,0))\\), \\(({p}\_{1}=(0,1,0),\\,{p}\_{2}=(1,0,0))\\)}, of which only (d), (e) and (f) are of interest.\n\nWe observe that only the completely mixed equilibrium of the asymmetric game, i.e., \\((x=(\\frac{1}{3},\\frac{1}{3},\\frac{1}{3}),\\,y=(\\frac{2}{7},\\frac{3}{7},\\frac{2}{7}))\\), has its counterpart in the symmetric games. To apply the theorems we only need to have a look at equilibria (a), (b) and (c) in counterpart game 1, and (d), (e) and (f) in counterpart game 2. These equilibria can also be observed in the directional field plots and trajectory plots illustrating the evolutionary dynamics of both counterpart games in Figs [14](/articles/s41598-018-19194-4#Fig14), [15](/articles/s41598-018-19194-4#Fig15), [16](/articles/s41598-018-19194-4#Fig16) and [17](/articles/s41598-018-19194-4#Fig17). Figure [14](/articles/s41598-018-19194-4#Fig14) visualises the three remaining equilibria (a), (b) and (c), with (a) indicated as a yellow oval, and (b) and (c) both indicated as green ovals. 
As can be observed, (a) is an unstable mixed equilibrium, (b) is a stable pure equilibrium, and (c) is a partly mixed equilibrium at the 2-face formed by strategies A and C.\n\n**Figure 14** [![figure 14](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig14_HTML.jpg)](/articles/s41598-018-19194-4/figures/14) Directional field plot Σ3 of the first counterpart game of the mixed equilibrium asymmetric game.\n\n**Figure 15** [![figure 15](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig15_HTML.jpg)](/articles/s41598-018-19194-4/figures/15) Trajectory plot Σ3 of the first counterpart game of the mixed equilibrium asymmetric game.\n\n**Figure 16** [![figure 16](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig16_HTML.jpg)](/articles/s41598-018-19194-4/figures/16) Directional field plot Σ3 of the second counterpart game of the mixed equilibrium asymmetric game.\n\n**Figure 17** [![figure 17](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-018-19194-4/MediaObjects/41598_2018_19194_Fig17_HTML.jpg)](/articles/s41598-018-19194-4/figures/17) Trajectory plot Σ3 of the second counterpart game of the mixed equilibrium asymmetric game.\n\nWe can make the same observation for the second counterpart game and see that (d), (e) and (f) are equilibria in Fig. [16](/articles/s41598-018-19194-4#Fig16). Equilibrium (d), indicated by a yellow oval, is completely mixed, equilibrium (e) is a pure equilibrium in corner F (green oval), and (f) is a partly mixed equilibrium on the 2-face formed by strategies D and E (green ovals as well).\n\nIf we now apply Theorem 1 we know that we can combine the mixed equilibria of full support of both counterpart games into the mixed equilibrium of the original asymmetric game: the mixed equilibrium of counterpart game 1, i.e. \\((\\frac{2}{7},\\frac{3}{7},\\frac{2}{7})\\), becomes player 2’s part of the mixed equilibrium in the asymmetric game, and the mixed equilibrium of counterpart game 2, i.e. \\((\\frac{1}{3},\\frac{1}{3},\\frac{1}{3})\\), becomes player 1’s part, leading to \\((x=(\\frac{1}{3},\\frac{1}{3},\\frac{1}{3}),\\,y=(\\frac{2}{7},\\frac{3}{7},\\frac{2}{7}))\\). Both equilibria are unstable in the counterpart games and also form an unstable mixed equilibrium in the asymmetric game.\n\nDiscussion\n----------\n\nReplicator Dynamics have proved to be an excellent tool to analyse the Nash landscape of multiagent interactions and distributed learning in both abstract games and complex systems[1](/articles/s41598-018-19194-4#ref-CR1),[2](/articles/s41598-018-19194-4#ref-CR2),[4](/articles/s41598-018-19194-4#ref-CR4)
,[6](/articles/s41598-018-19194-4#ref-CR6). The predominant approach has been the use of symmetric replicator equations, allowing for a relatively straightforward analysis in symmetric games. Many interesting real-world settings, though, involve roles or player-types for the different agents that take part in an interaction, and as such are *asymmetric* in nature. So far, most research has avoided carrying out RD analysis of this type of interaction, either by constructing a new symmetric game, in which the various actions of the different roles are joined together in one population[23](/articles/s41598-018-19194-4#ref-CR23),[24](/articles/s41598-018-19194-4#ref-CR24), or by considering the various roles and strategies as heuristics, grouped in one population as well[2](/articles/s41598-018-19194-4#ref-CR2),[3](/articles/s41598-018-19194-4#ref-CR3),[8](/articles/s41598-018-19194-4#ref-CR8). In the latter approach the payoffs due to different player-types are averaged over many samples of the player type, resulting in a single average payoff to each player for each entry in the payoff table.\n\nThe work presented in this paper takes a different stance by decomposing an asymmetric game into its symmetric counterparts. This method proves to be mathematically simple and elegant, and allows for a straightforward analysis of asymmetric games without the need to turn the strategy spaces into one simplex or population; instead it keeps separate simplices for the involved populations of strategies. Furthermore, the counterpart games give insight into the type and form of interaction of the asymmetric game under study, identifying its equilibrium structure and as such enabling analysis of abstract and empirical games discovered through multiagent learning processes (e.g. 
Leduc poker empirical game), as was shown in the experimental section.\n\nA deeper, somewhat counter-intuitive consequence of the theoretical results of this paper is that when we identify Nash equilibria in the counterpart games with *matching* support (including permutations of strategies for one of the players), the combination of those equilibria also forms a Nash equilibrium in the corresponding asymmetric game. In general, the vector field for the evolutionary dynamics of one player is a function of the other player’s strategy, and hence a vector field in one player’s simplex doesn’t carry much information, as any equilibria you observe in it change over time while the other player is moving too. However, if you position the second player at a Nash equilibrium, it turns out that player one becomes indifferent between its strategies and remains stationary under the RD. This gives the unique situation in which the vector field plot for the second player’s simplex is actually meaningful, because the assumption of player one being stationary actually holds (and vice versa). This is what we end up using when establishing the correspondence of the Nash equilibria in asymmetric and counterpart games, and why the single-simplex plots for the counterpart games are actually meaningful for the asymmetric game; but it is also why they only describe the Nash equilibria faithfully and fail to be a valid decomposition of the full asymmetric game away from equilibrium.\n\nThese findings shed new light on asymmetric interactions between multiple agents and provide new insights that facilitate a thorough and convenient analysis of asymmetric games. As pointed out by Veller and Hayward[36](/articles/s41598-018-19194-4#ref-CR36), many real-world situations in which one aims to study the evolutionary or learning dynamics of several interacting agents are better modelled by asymmetric games. As such, these theoretical findings can facilitate a deeper analysis of equilibrium structures in evolutionary asymmetric games relevant to various topics including economic theory, evolutionary biology, empirical game theory, the evolution of cooperation, evolutionary language games and artificial intelligence[11](/articles/s41598-018-19194-4#ref-CR11),[12](/articles/s41598-018-19194-4#ref-CR12),[37](#ref-CR37),[38](#ref-CR38),[39](#ref-CR39),[40](/articles/s41598-018-19194-4#ref-CR40).\n\nFinally, the results of this paper also nicely underpin what is said in H. 
Gintis’ book on the evolutionary dynamics of asymmetric games, i.e., *‘although the static game pits the row player against the column player, the evolutionary dynamic pits row players against themselves and column players against themselves’*[32](/articles/s41598-018-19194-4#ref-CR32) (chapter 12, p. 292). He also indicates that this aspect of an evolutionary dynamic is often misunderstood. The use of our counterpart dynamics supports and illustrates this statement very clearly, showing that in the counterpart games species play games within a population and as such show an intra-species survival of the fittest, which is then combined into an equilibrium of the asymmetric game.\n\nReferences\n----------\n\n1. Bloembergen, D., Tuyls, K., Hennes, D. & Kaisers, M. Evolutionary dynamics of multi-agent learning: A survey. *J. Artif. Intell. Res.* **53**, 659–697 (2015).\n2. Walsh, W. E., Das, R., Tesauro, G. & Kephart, J. Analyzing complex strategic interactions in multi-agent games. In *Proceedings of the Fourth Workshop on Game-Theoretic and Decision-Theoretic Agents*, 109–118 (2002).\n3. Walsh, W. E., Parkes, D. C. & Das, R. Choosing samples to compute heuristic-strategy nash equilibrium. In *Proceedings of the Fifth Workshop on Agent-Mediated Electronic Commerce*, 109–123 (2003).\n4. Tuyls, K. & Parsons, S. What evolutionary game theory tells us about multiagent learning. *Artif. Intell.* **171**, 406–416 (2007).\n5. Ponsen, M. J. V., Tuyls, K., Kaisers, M. & Ramon, J. An evolutionary game-theoretic analysis of poker strategies. *Entertainment Computing* **1**, 39–45 (2009).\n6. Wellman, M. P. Methods for empirical game-theoretic analysis. In *Proceedings of The Twenty-First National Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial Intelligence Conference*, 1552–1556 (2006).\n7. Phelps, S. *et al*. Auctions, evolution, and multi-agent learning. In Tuyls, K., Nowe, A., Guessoum, Z. & Kudenko, D. (eds.) *Adaptive Agents and Multi-Agent Systems III. 5th, 6th, and 7th European Symposium on Adaptive and Learning Agents and Multi-Agent Systems, Revised Selected Papers*, 188–210 (Springer, 2007).\n8. 
Phelps, S., Parsons, S. & McBurney, P. An evolutionary game-theoretic comparison of two double-auction market designs. In Faratin, P. & Rodriguez-Aguilar, J. A. (eds.) *Agent-Mediated Electronic Commerce VI, Theories for and Engineering of Distributed Mechanisms and Systems, Revised Selected Papers*, 101–114 (Springer, 2004).\n9. Lanctot, M. *et al*. A unified game-theoretic approach to multiagent reinforcement learning. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems*, 4193–4206 (2017).\n10. Perc, M. *et al*. Statistical physics of human cooperation. *Physics Reports* **687**, 1–51 (2017).\n11. Moreira, J. A., Pacheco, J. M. & Santos, F. C. Evolution of collective action in adaptive social structures. *Scientific Reports* **3**, 1521 (2013).\n12. Santos, F. P., Pacheco, J. M. & Santos, F. C. Evolution of cooperation under indirect reciprocity and arbitrary exploration rates. *Scientific Reports* **6**, 37517 (2016).\n13. Pérolat, J. *et al*. A multi-agent reinforcement learning model of common-pool resource appropriation. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems*, 3646–3655 (2017).\n14. Lazaridou, A., Peysakhovich, A. & Baroni, M. Multi-agent cooperation and the emergence of (natural) language. In *5th International Conference on Learning Representations* (2017).\n15. De Vylder, B. & Tuyls, K. How to reach linguistic consensus: A proof of convergence for the naming game. 
*Journal of Theoretical Biology* **242**, 818–831 (2006).\n16. Cho, I. & Kreps, D. Signaling games and stable equilibria. *The Quarterly Journal of Economics* 179–221 (1987).\n17. Nowak, M. A. *Evolutionary Dynamics: Exploring the Equations of Life* (Harvard University Press, 2006).\n18. Tuyls, K., Verbeeck, K. & Lenaerts, T. A selection-mutation model for q-learning in multi-agent systems. In *The Second International Joint Conference on Autonomous Agents & Multiagent Systems*, 693–700 (2003).\n19. Cressman, R. & Tao, Y. The replicator equation and other game dynamics. *Proceedings of the National Academy of Sciences USA* **111**, 10810–10817 (2014).\n20. Selten, R. A note on evolutionary stable strategies in asymmetric animal conflicts. *Journal of Theoretical Biology* **84**, 93–101 (1980).\n21. Taylor, P. Evolutionarily stable strategies with two types of players. *Journal of Applied Probability* **16**, 76–83 (1979).\n22. Guanersdorfer, A., Hofbauer, J. & Sigmund, K. On the dynamics of asymmetric games. 
*Theoretical Population Biology* **39**, 345–357 (1991).\n23. Cressman, R. *Evolutionary Dynamics and Extensive Form Games* (The MIT Press, 2003).\n24. Accinelli, E. & Carrera, E. J. S. Evolutionarily stable strategies and replicator dynamics in asymmetric two-population games. In Peixoto, M. M., Pinto, A. A. & Rand, D. A. (eds.) *Dynamics, Games and Science I*, 25–35 (Springer, 2011).\n25. McAvoy, A. & Hauert, C. Asymmetric evolutionary games. *PLoS Comput Biol* **11**, e1004349 (2015).\n26. Weibull, J. *Evolutionary Game Theory* (MIT press, 1997).\n27. Hofbauer, J. & Sigmund, K. *Evolutionary Games and Population Dynamics* (Cambridge University Press, 1998).\n28. Maynard Smith, J. & Price, G. R. The logic of animal conflicts. *Nature* **246**, 15–18 (1973).\n29. Zeeman, E. Population dynamics from game theory. *Lecture Notes in Mathematics, Global theory of dynamical systems* **819** (1980).\n30. Zeeman, E. Dynamics of the evolution of animal conflicts. *Journal of Theoretical Biology* **89**, 249–270 (1981).\n31. Avis, D., Rosenberg, G., Savani, R. & von Stengel, B. Enumeration of nash equilibria for two-player games. *Economic Theory* **42**, 9–37 (2010).\n32. Gintis, H. 
*Game Theory Evolving* (Princeton University Press, 2009).\n33. Sandholm, W. *Population Games and Evolutionary Dynamics* (MIT Press, 2010).\n34. von Stengel, B. Computing equilibria for two-person games. In Aumann, R. & Hart, S. (eds.) *Handbook of Game Theory with Economic Applications*, 1723–1759 (Elsevier, 2002).\n35. Southey, F. *et al*. Bayes’ bluff: Opponent modelling in poker. In *Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence*, 550–558 (2005).\n36. Veller, C. & Hayward, L. Finite-population evolution with rare mutations in asymmetric games. *Journal of Economic Theory* **162**, 93–113 (2016).\n37. Baek, S., Jeong, H., Hilbe, C. & Nowak, M. Comparing reactive and memory-one strategies of direct reciprocity. *Scientific Reports* **6**, 25676 (2016).\n38. Hilbe, C., Martinez-Vaquero, L., Chatterjee, K. & Nowak, M. Memory-n strategies of direct reciprocity. *Proceedings of the National Academy of Sciences USA* **114**, 4715–4720 (2017).\n39. Allen, B. *et al*. Evolutionary dynamics on any population structure. *Nature* **544**, 227–230 (2017).\n40. Steels, L. Language as a complex adaptive system. 
In *Parallel Problem Solving from Nature - PPSN VI, 6th International Conference*, 17–26 (2000).\n\nAcknowledgements\n----------------\n\nWe are very grateful to D. Bloembergen and O. Pietquin for helpful comments and discussions.\n\nAuthor information\n------------------\n\n### Authors and Affiliations\n\n1. Google DeepMind, 6 Pancras Square, N1C 4AG, London, UK\n\nKarl Tuyls, Julien Pérolat, Marc Lanctot, Georg Ostrovski, Joel Z Leibo, Thore Graepel & Shane Legg\n2. Dept. of Computer Science, University of Liverpool, Ashton Street, L69 3BX, Liverpool, UK\n\nKarl Tuyls & Rahul Savani\n3. Faculty of Philosophy, Oxford University, Woodstock Road, OX2 6GG, Oxford, UK\n\nToby Ord\n\n### Authors\n\n1. Karl Tuyls\n2. Julien Pérolat\n3. Marc Lanctot\n4. Georg Ostrovski\n5. Rahul Savani\n6. Joel Z Leibo\n7. 
Toby Ord\n8. Thore Graepel\n9. Shane Legg\n\n### Contributions\n\nK.T. and J.P. designed the research and theoretical contributions. K.T. implemented the experimental illustrations. K.T., J.P. and M.L. performed the simulations. All authors analysed the results and wrote and reviewed the paper.\n\n### Corresponding author\n\nCorrespondence to [Karl Tuyls](mailto:karltuyls@google.com).\n\nEthics declarations\n-------------------\n\n### Competing Interests\n\nThe authors declare that they have no competing interests.\n\nAdditional information\n----------------------\n\n**Publisher's note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\nRights and permissions\n----------------------\n\n**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. 
To view a copy of this license, visit [http://creativecommons.org/licenses/by/4.0/](http://creativecommons.org/licenses/by/4.0/).\n\nThis article is cited by\n------------------------\n\n* ### [AI in Human-computer Gaming: Techniques, Challenges and Opportunities](https://doi.org/10.1007/s11633-022-1384-6)\n\t+ Qi-Yue Yin\n\t+ Jun Yang\n\t+ Liang Wang\n\n\t*Machine Intelligence Research* (2023)\n* ### [The greedy crowd and smart leaders: a hierarchical strategy selection game with learning protocol](https://doi.org/10.1007/s11432-019-2825-y)\n\t+ Linghui Guo\n\t+ Zhongxin Liu\n\t+ Zengqiang Chen\n\n\t*Science China Information Sciences* (2021)\n* ### [Bounds and dynamics for empirical game theoretic analysis](https://doi.org/10.1007/s10458-019-09432-y)\n\t+ Karl Tuyls\n\t+ Julien Perolat\n\t+ Thore Graepel\n\n\t*Autonomous Agents and Multi-Agent Systems* (2020)\n* ### [α-Rank: Multi-Agent Evaluation by Evolution](https://doi.org/10.1038/s41598-019-45619-9)\n\t+ Shayegan Omidshafiei\n\t+ Christos Papadimitriou\n\t+ Remi Munos\n\n\t*Scientific Reports* (2019)", "url": "http://www.nature.com/articles/s41598-018-19194-4", "title": "Symmetric Decomposition of Asymmetric Games", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2018-01-16T23:00:00Z", "authors": ["Karl Tuyls", "Julien Pérolat", "Marc Lanctot", "Georg Ostrovski", "Rahul Savani", "Joel Z Leibo", "Toby Ord", "Thore Graepel", "Shane Legg"], "summary": [], "id": "474847de71d8fb141e4708b75708b7ea"} {"text": "### Article PDF first page preview\n\nThis content is only available as a PDF.\n\n© 1999 Basil Blackwell 
Oxford\n\nIssue Section:\n[Original Article](/analysis/search-results?f_TocHeadingTitle=Original+Article)", "url": "https://academic.oup.com/analysis/article-lookup/doi/10.1093/analys/59.3.137", "title": "Do the desires of rational agents converge?", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "1999-06-30T22:00:00Z", "authors": ["D. Sobel"], "summary": [], "id": "8281788c76aadd0e379e16ee7bcdc4c0"} {"text": "Abstract\n--------\n\nAdvances in robotics technology are causing major changes in manufacturing, transportation, medicine, and numerous other sectors. While many of these changes are beneficial, some will inevitably lead to harm. Who should be liable when a robot causes harm? This chapter addresses how the law can and should account for robot liability, including robots that exist today and that could potentially be built in the future. Current and near-future robots pose no significant challenge: existing law or minor variations therein can readily handle them. A greater challenge will arise if it becomes possible to build robots that merit legal personhood and thus can be held liable, as well as if future robots can cause major global catastrophe.\n\nKeywords:\n[robots](/search-results?qb=%7b%22Keywords1%22:%22robots%22%7d), [ethics](/search-results?qb=%7b%22Keywords1%22:%22ethics%22%7d), [morality](/search-results?qb=%7b%22Keywords1%22:%22morality%22%7d), [liability](/search-results?qb=%7b%22Keywords1%22:%22liability%22%7d), [law](/search-results?qb=%7b%22Keywords1%22:%22law%22%7d), [legal responsibility](/search-results?qb=%7b%22Keywords1%22:%22legal+responsibility%22%7d), [harm](/search-results?qb=%7b%22Keywords1%22:%22harm%22%7d), [risk](/search-results?qb=%7b%22Keywords1%22:%22risk%22%7d), [catastrophe](/search-results?qb=%7b%22Keywords1%22:%22catastrophe%22%7d), [personhood](/search-results?qb=%7b%22Keywords1%22:%22personhood%22%7d)\n\nSubject: [Philosophy of Science](/search-results?page=1&tax=AcademicSubjects/AHU02980)\n\nCollection: [Oxford Scholarship Online](/oxford-scholarship-online)", "url": "https://academic.oup.com/book/2320/chapter-abstract/142464710", "title": "Liability For Present And Future Robotics Technology", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2016-12-31T23:00:00Z", "authors": ["Trevor N. White", "Seth D. Baum"], "summary": [], "id": "27985782709b420592952c55cd5c253e"} {"text": "Machine learning algorithms often learn models or policies that are inscrutable to humans. We believe that these systems work well because we have empirically validated them, but beyond that we may have little insight into why they work or what exactly they are doing.\n\nThis lack of understanding is an important part of most AI risk scenarios. If human users can understand why an AI system is making a decision, and can evaluate the underlying reasoning themselves, then it seems much harder for things to go terribly wrong. 
In addition to the intuitive appeal, I’ve found this kind of understanding to be a useful ingredient in concrete proposals for AI control — even in domains where it is impractical for a human to actually review an AI’s individual decisions.\n\nSo we might ask: can we make the reasoning of machine learning systems “understandable,” without making those systems much less powerful?\n\nIn this post I’ll propose a precise version of this intuitive challenge. Unfortunately, the precise version of my proposal has a serious problem, and the proposed patches are much less precise.\n\nOne concrete instance of the proposed definition is: an algorithm for computer vision is “understandable” if it can convince a blind person that its output is correct. This post will make this intuitive definition more precise.\n\nWhat does “understandable” mean?\n--------------------------------\n\nHere is one interpretation: if you understand why a decision is good, then you can distinguish it from one which is bad. Moreover, you can distinguish good and bad decisions by looking at explanations of the decisions, rather than by trying each of them out.\n\nI think this captures an important aspect of understandability. Moreover, I think it is the aspect that is most important for AI control.\n\n“Explanations” need not be static. An explanation may be an interactive process, between a machine learning system and a user who wants to understand its decisions. It may also be an adversarial process, involving auxiliary machine learning systems trained to help find flaws in a given explanation. I expect that static explanations will usually be inadequate in realistic domains.\n\nOperationalizing understanding: Advisor games\n=============================================\n\nConsider an arbitrary machine learning task, for example playing a game or diagnosing a disease. How do we know if we’ve accomplished this task in an “understandable” way?\n\nFollowing the last section, we introduce an associated *advisor game*, played by two agents: a “friendly” advisor and an “adversarial” advisor. In each round of the game, each advisor proposes a decision in the underlying task. A human judge can interact with the two advisors, and then must decide which of their decisions to accept. At the start of each round, the human doesn’t know which advisor is which.\n\nThe friendly advisor wants the human to score a high payoff on the underlying task, while the adversarial advisor wants the human to score a low payoff. A strategy for an advisor game consists of an algorithm for the friendly advisor, and a protocol for the human judge. A successful strategy should allow the human to get a good payoff, regardless of how the adversarial advisor behaves.\n\n(This formulation is only relevant in domains where the advisors are trying to achieve superhuman performance. This issue is discussed in the section *Human-level performance* below.)\n\nFor example, consider a medical diagnosis problem, where we are given a patient’s symptoms and diagnostic test results, and want to output a diagnosis. In the associated advisor game, a human interacts with two machine learning systems, each trying to convince the human of a different diagnosis. 
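As a rough sketch of the round structure just described (an illustration only; the `task`, `propose`, `interrogate`, and `choose` interfaces are hypothetical and not part of this post), a single round could be simulated like this:

```python
import random

def advisor_game_round(task, friendly, adversarial, judge):
    """One round of an advisor game: both advisors propose a decision, the judge
    interacts with them without knowing which is which, picks one proposal, and
    the underlying task scores the chosen decision."""
    instance = task.sample_instance()
    advisors = [friendly, adversarial]
    random.shuffle(advisors)                                       # hide the advisors' identities
    proposals = [advisor.propose(instance) for advisor in advisors]
    transcript = judge.interrogate(instance, advisors, proposals)  # free-form interaction
    chosen = judge.choose(instance, proposals, transcript)         # index of the trusted proposal
    return task.payoff(instance, proposals[chosen])
```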
These systems might present justifications for their diagnoses, they might cite the historical examples on which their diagnoses are based, they might identify problems with each other’s diagnoses, and so on.\n\nA defect\n========\n\nAdvisor games have a glaring flaw — there is a simple strategy that works well but completely fails to address the spirit of the problem:\n\n* Pick a correct algorithm A.\n* Have the friendly advisor use algorithm A.\n* If the two advisors disagree, have the judge figure out which of them used algorithm A, and trust that one. This can be done with only logarithmically many rounds of interaction and without any advanced technology — if the two advisors disagree about what algorithm A outputs, then they can find a subproblem on which they disagree, and they can continue subdividing until they disagree about an atomic step, using Merkle trees to commit to intermediate memory states.\n\nAs long as algorithm A works, there is nothing that the adversarial advisor can do to thwart this strategy.\n\nThe problem with this solution is that it doesn’t require trusting *the advisor*, but it does require trusting *the algorithm*. The entire point of our procedure was to avoid trusting algorithms, not to avoid trusting implementations of those algorithms.\n\nAdvisor games are only potentially useful if we can patch them to rule out this kind of solution. I can think of a few possibilities, though none of them seem very satisfactory:\n\n* Don’t allow the advisors to record too much data from their training, and/or don’t give them enough time to rerun significant parts of the training. This could prevent them from proving that they correctly followed a particular protocol for training, forcing them to provide a more direct justification for their judgment.\n* Don’t let the judge use a specialized algorithm, or have any particular knowledge about the algorithm being used by the friendly advisor. Perhaps the judge can receive a short briefing about how to interpret the explanations that the advisors will provide. But assume that the judge can’t distinguish a “trustworthy” algorithm from an “untrustworthy” algorithm, and doesn’t know anything about the empirical performance of different algorithms.\n* Force the judge to use a simple procedure for judging and/or a procedure which could generalize to many different underlying machine learning algorithms and many different tasks.\n* Don’t change the rules of the game, but require each solution to “be a good sport:” determine by inspection whether a proposed solution amounts to trusting the underlying algorithm, or whether it would continue to work even in contexts where the algorithm broke down.\n\nAll of these modifications seem plausible, and I don’t know if it’s worth trying to pick one until there are algorithms that could plausibly satisfy any of them. 
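To make the "continue subdividing until they disagree about an atomic step" part of the defect above concrete, here is a minimal sketch (my illustration, not from the post) of the judge's bisection over two committed execution traces; in the interactive protocol each comparison would be one round of questioning.

```python
def first_divergent_step(trace_a, trace_b):
    """Binary search for the first atomic step at which two claimed executions of
    algorithm A disagree. Each trace is a list of commitments (e.g. hashes of
    intermediate memory states); only O(log n) comparisons are needed."""
    assert len(trace_a) == len(trace_b)
    assert trace_a[0] == trace_b[0] and trace_a[-1] != trace_b[-1]
    lo, hi = 0, len(trace_a) - 1      # invariant: traces agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if trace_a[mid] == trace_b[mid]:
            lo = mid
        else:
            hi = mid
    return hi  # the single step the judge has to check directly
```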
For now, advisor games are only compelling to the extent that we expect there to be some “fair” approaches to such games, and for it to be possible to eventually distinguish these from “unfair” solutions in a satisfactory way.\n\nElaborations and clarifications\n===============================\n\nHuman-level performance\n-----------------------\n\nIf a human can perform the underlying task themselves, then they can just ignore the advisors and make the decision directly.\n\nAdvisor games are only interesting for tasks that are too challenging for a human to accomplish unaided (similarly, the AI control problem seems to be much more challenging and important in this case).\n\nMuch AI research focuses on reproducing rather than exceeding human capabilities, and is not directly suitable for advisor games. But even in these cases, we might be able to construct useful analogies by restricting the abilities of the judge.\n\nIn a vision task we could force a judge to close their eyes. In the diagnosis example we could consult a judge without medical training. And so on.\n\nOf course, we can also work directly with domains where machines exceed human capabilities and avoid this problem.\n\nArbitration\n-----------\n\nIn most domains, we don’t actually want a human to evaluate every decision. In some domains, consulting a human for every decision would be so expensive that we can’t even perform a single useful test (for example, imagine a human arbitrating between two decisions about how a robot should move one of its joints during a fraction of a second). In these domains, we can consider a slightly more challenging game.\n\nIn an *advisor game with arbitration*, the two advisors interact with an arbiter rather than with a human judge, and the arbiter must make each decision. This arbiter should be *very* simple and fast — too simple to do the underlying task itself, or to distinguish good from bad answers. Instead, the arbiter gives the two advisors an opportunity to resolve their disagreement by “talking it out.” If they can’t, then the human judge will be involved. This process is then repeated, and the identities of the two advisors are remembered between rounds (so if one advisor consistently gives bad advice, then the arbiter can learn to ignore them most of the time).\n\nIn this setting, a solution consists not only of a strategy for the human and the friendly advisor, but also for the arbiter. A good solution must not only get a high payoff, but must ensure that the human doesn’t have to be involved too many times.\n\nWhere do you get an adversarial advisor?\n----------------------------------------\n\nA good solution should work for any adversarial advisor that the research community can come up with.\n\nIn order to validate a proposed solution, adversarial investigators can search for an adversarial advisor specially designed to challenge the proposed solution. If they fail to find any, we can provisionally conclude that the proposed solution is adequate.\n\nUnfortunately, this process only really works if the adversarial advisor is able to integrate all of the information that was used to build the friendly advisor. For example, a research team capable of programming a more eloquent explainer could publish a “solution” in which the human simply believes whichever advisor is more eloquent. 
This solution wouldn’t be satisfactory unless there was some fundamental reason that the right answer had a more eloquent explanation — and to test this, other researchers would need to have access to a similar level of expertise in building eloquent explainers. (Or else they could simply reject solutions which seem to rely on proprietary information as a “secret handshake.”)\n\nWhat can advisors do?\n---------------------\n\nThe advisors should be able to output text to the human judge or write files for the judge to interact with, at the judge’s request. They may recommend that the judge look at particular information, especially pieces of the input data or of the training data. They may recommend that the judge run useful computations (e.g. to extract summary statistics from training data).\n\nThe advisors can also communicate with each other, potentially with much higher bandwidth. The judge is free to inspect parts of the resulting transcript, and the advisors can call the judge’s attention to certain parts of the exchange. The advisors are free to generate cryptographic credentials or use any other tools that might be helpful for preventing cheating strategies by an adversarial advisor.\n\nIn some contexts, we may want the advisors to communicate in other ways, such as by displaying an image. To that end, we can simply let the judge use programs that can parse the advisors’ messages and display them as images (or in whatever other form).\n\nOn regularization\n-----------------\n\nFor advisor games in a classification setting, one natural strategy is:\n\n* Each advisor commits to a model, and uses that model to make a decision.\n* The judge chooses some random validation data.\n* The judge trusts whichever advisor’s model performs better on the validation data.\n\nThis strategy is likely to succeed if the distribution of test and validation data is identical and if the advisors don’t have access to the validation data. It is much less compelling when the validation data may be from a different distribution (which is the usual setting of interest in AI risk scenarios) or outside of the classification setting.\n\nIn this harder case, there are adversarial models which work well on the validation data but deliberately fail on the test data.\n\nA natural way to get around this problem is to use strong regularization. A model which deliberately fails on the test data seems likely to be more complicated. So if we score the advisors’ models not only by their performance but also by their complexity, then we may prevent this kind of cheating strategy.\n\nThis approach may work when the adversarial advisor is perfectly well-matched with the friendly advisor, but it is extremely brittle. If the adversarial advisor is able to find a model which is slightly simpler or which performs slightly better on the validation data, then the approach will break down again. An adequate solution to an advisor game should work for a very broad class of adversaries, rather than assuming that the friendly advisor is at least as capable as the adversarial advisor.\n\nConclusion\n==========\n\nBuilding machine learning systems with “understandable” behavior would go a long way towards addressing concerns with AI risk. At the moment we don’t have any clear statement of what understandable behavior means, and a naive standard may be unattainable.\n\nAdvisor games give one operationalization of understandability.
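Returning briefly to the regularized comparison above, here is a minimal sketch of the scoring rule it suggests; `complexity_fn` is a hypothetical stand-in (description length, parameter count, etc.) and, as argued above, it is the weak point:

```python
def advisor_score(model, validation_data, loss_fn, complexity_fn, reg_strength=1.0):
    # Score a committed model by validation performance plus a complexity penalty.
    val_loss = sum(loss_fn(model(x), y) for x, y in validation_data) / len(validation_data)
    return val_loss + reg_strength * complexity_fn(model)

def judge_choice(model_1, model_2, validation_data, loss_fn, complexity_fn):
    # The judge trusts whichever advisor committed to the lower-scoring model.
    score_1 = advisor_score(model_1, validation_data, loss_fn, complexity_fn)
    score_2 = advisor_score(model_2, validation_data, loss_fn, complexity_fn)
    return "advisor 1" if score_1 <= score_2 else "advisor 2"
```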
Unfortunately, the simple and precise statement of the game doesn’t really work, and so we would need to make do with a patched variant. Only time will tell whether any simple patch can lead to a reasonable problem.\n\nFor now, advisor games only seem approachable for very simple domains. If they remain out of reach as machine learning systems become more sophisticated, that would be a (weak) warning sign about the difficulty of AI control. Hopefully it will be possible to make fast enough headway on this problem or other formalizations of “understandable” reasoning that they can catch up with unrestricted machine learning.", "url": "https://ai-alignment.com/advisor-games-b33382fef68c", "title": "Advisor games", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2015-09-25T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "3f970923b3fc8c588faaa1554f6affc6"} {"text": "[AlphaGo Zero](https://deepmind.com/blog/alphago-zero-learning-scratch/) is an impressive demonstration of AI capabilities. It also happens to be a nice proof-of-concept of a [promising alignment strategy](/benign-model-free-rl-4aae8c97e385).\n\nHow AlphaGo Zero works\n======================\n\nAlphaGo Zero learns two functions (which take as input the current board):\n\n* A prior over moves **p** is trained to predict what AlphaGo will eventually decide to do.\n* A value function **v** is trained to predict which player will win (if AlphaGo plays both sides).\n\nBoth are trained with supervised learning. Once we have these two functions, AlphaGo actually picks its moves by using 1600 steps of Monte Carlo tree search (MCTS), using **p** and **v** to guide the search. It trains **p** to bypass this expensive search process and directly pick good moves. As **p** improves, the expensive search becomes more powerful, and **p** chases this moving target.\n\nIterated capability amplification\n=================================\n\nIn the simplest form of [iterated capability amplification](/benign-model-free-rl-4aae8c97e385), we train one function:\n\n* A “weak” policy **A**, which is trained to predict what the agent will eventually decide to do in a given situation.\n\nJust like AlphaGo doesn’t use the prior **p** directly to pick moves, we don’t use the weak policy **A** directly to pick actions. Instead, we use a [capability amplification](/policy-amplification-6a70cbee4f34) scheme: we call **A** many times in order to produce more intelligent judgments. We train **A** to bypass this expensive amplification process and directly make intelligent decisions. As **A** improves, the amplified policy becomes more powerful, and **A** chases this moving target.\n\nIn the case of AlphaGo Zero, **A** is the prior over moves, and the amplification scheme is MCTS. (More precisely: **A** is the pair (**p**, **v**), and the amplification scheme is MCTS + using a rollout to see who wins.)\n\nOutside of Go, **A** might be a question-answering system, which can be applied several times in order to first break a question down into pieces and then separately answer each component. Or it might be a policy that updates a [cognitive workspace](https://blog.ought.com/dalca-4d47a90edd92), which can be applied many times in order to “think longer” about an issue.\n\nThe significance\n================\n\nReinforcement learners take a reward function and optimize it; unfortunately, it’s not clear where to get a reward function that faithfully tracks what we care about.
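As an aside, the loop described in the first section is simple enough to sketch. This is not DeepMind's implementation; `net`, `game`, and `mcts` are hypothetical stand-ins, but the shape of the amplify-then-distill cycle is the same:

```python
import random

def normalize(counts):
    total = sum(counts.values())
    return {move: n / total for move, n in counts.items()}

def sample(probs):
    moves, weights = zip(*probs.items())
    return random.choices(moves, weights=weights)[0]

def training_iteration(net, game, mcts, num_games=100, simulations=1600):
    # One cycle: self-play with search (amplification), then supervised training of
    # the network toward the search's behavior (distillation). `net` exposes p and v,
    # `game` knows the rules, and mcts(board, net, simulations) returns visit counts
    # per move; all three are hypothetical stand-ins.
    examples = []
    for _ in range(num_games):
        board, history = game.initial_state(), []
        while not game.is_over(board):
            probs = normalize(mcts(board, net, simulations))   # search guided by p and v
            history.append((board, game.to_move(board), probs))
            board = game.play(board, sample(probs))
        winner = game.winner(board)
        # Targets: p should predict what the search chose, v should predict who won.
        examples += [(b, pi, 1 if player == winner else -1) for b, player, pi in history]
    net.train_supervised(examples)
    return net
```

The expensive search is the amplification step, and the supervised update distills it back into **p** and **v**.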
That’s a key source of safety concerns.\n\nBy contrast, AlphaGo Zero takes a policy-improvement-operator (like MCTS) and converges towards a fixed point of that operator. If we can find a way to improve a policy *while preserving its alignment*, then we can apply the same algorithm in order to get very powerful but aligned strategies.\n\nUsing MCTS to achieve a simple goal in the real world wouldn’t preserve alignment, so it doesn’t fit the bill. But “[think longer](/humans-consulting-hch-f893f6051455)” might. As long as we start with a policy that is [close enough](/corrigibility-3039e668638) to being aligned — a policy that “wants” to be aligned, in some sense — allowing it to think longer may make it both smarter *and* more aligned.\n\nI think designing alignment-preserving policy amplification is a tractable problem today, which can be studied either in the context of existing ML or human coordination. So I think it’s an exciting direction in AI alignment. A candidate solution could be incorporated directly into the AlphaGo Zero architecture, so we can already get empirical feedback on what works. If by good fortune powerful AI systems look like AlphaGo Zero, then that might get us much of the way to an aligned AI.", "url": "https://ai-alignment.com/alphago-zero-and-capability-amplification-ede767bb8446", "title": "AlphaGo Zero and capability amplification", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-10-19T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "44036e41f8df09d8308a3e5e6ce1fb63"} {"text": "My goal is to design AI systems that are aligned with human interests and [competitive](/directions-and-desiderata-for-ai-control-b60fca0da8f4) with unaligned AI.\n\nI find it useful to have a particular AI algorithm in mind. Then I can think about how that algorithm could cause trouble, and try to find a safer variant.\n\nI think of the possibly-unaligned AIs as a benchmark: it’s what AI alignment researchers need to compete with. The further we fall short of the benchmark, the stronger the competitive pressures will be for everyone to give up on aligned AI and take their chances.\n\nI have a few standard benchmarks I keep in mind. This post describes one of those benchmarks. It also tries to lay out clearly why I think that benchmark is unsafe, and explains how I think my current research could make a safe version.\n\nI. Model-based RL with MCTS\n===========================\n\nWe train three systems in parallel:\n\n* A generative model to sample sequences of observations, conditioned on sequences of actions.\n* A reward function that takes as input a sequence of actions and predicted observations and produces a reward.\n* A policy and value function which take as input a sequence of observations and produce the next action and an estimate of the future return.\n\nWe train the policy and value function using (roughly) the AlphaZero algorithm: Use MCTS to improve the current policy. Update the policy at the root to predict the best move found by MCTS, update the value to predict its predicted value. Use the generative model to sample environment transitions and the reward function (with a small discount rate) to score them.\n\nWe train an autoregressive generative model, to maximize the log probability assigned to the actual sequence of actions and observations produced by the AI (with each observation conditioned on the past actions). 
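For concreteness, a minimal sketch of that objective, with `model.log_prob_obs` as a hypothetical interface rather than a call from any particular library:

```python
def generative_model_loss(model, trajectory):
    # Negative log-likelihood of an actual trajectory produced by the AI, with each
    # observation predicted conditional on all past actions and past observations.
    # trajectory = [(action_1, obs_1), (action_2, obs_2), ...]
    loss, actions, observations = 0.0, [], []
    for action, obs in trajectory:
        actions.append(action)
        loss -= model.log_prob_obs(obs, past_actions=actions, past_observations=observations)
        observations.append(obs)
    return loss
```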
This isn’t actually a good way to train the generative model, but it’s not really central to the discussion.\n\nWe train the reward function by showing humans sequences of actions and predicted observations, asking them to assign scores, then predicting those scores with supervised learning. We show humans the sequences of actions that look most promising to the system.\n\nThere are plenty of details you’d need in order to make this work well, but that’s the basic idea. When applied with very powerful networks, it’s plausible that this system would be able to decisively outcompete humans. It would be capable of performing a large intelligent search over long sequences of actions to find those that would be rated highly.\n\nII. What goes wrong?\n====================\n\nThere are two classes of problems:\n\n**Problem 1: Bad objective**\n----------------------------\n\nThe goal of the system is to produce (action, observation) sequences that look good to humans. I claim that optimizing this objective faithfully will lead to bad outcomes.\n\nAs the system improves, the rationale of many individual actions will become incomprehensible to a human overseer. At this point the only option for a human is to evaluate sequences of observations based on whether the consequences look good.\n\nThe observations present a narrow view of the world, and I strongly suspect that the AI will find sequences of actions that make that narrow view look good without actually *being* good.\n\n**Control vs. intrinsic goodness.** I think there are two strategies for defining a reward function:\n\n1. Reward worlds in which *humans remain* *in control of the situation*, in which they are able to get accurate information and correct course as needed.\n2. Reward worlds in which *intrinsically good things are happening*.\n\nBoth of these strategies seem unworkable.\n\n**Strategy #1: maintaining control.** This appears to be unworkable because determining if humans are actually in control is incredibly difficult — at best you can tell whether they appear to be and feel in control. It’s very hard to understand if the humans are getting accurate information, if their understanding of the situation is roughly accurate, if their instructions are being faithfully executed, and so on. This is already an incredibly hard problem amongst humans and gets way harder as the AI gets smarter.\n\nThis problem is serious because, if there is even a small gap between the “real” notion of being in control (which an AI cares about in order to generate maximally appealing observation sequences further into the future) and a superficial appearance of control (which the reward function incentivizes), then the AI is going to exploit that gap by taking the real notion of control for itself and giving humans the appearance of control. Of course, if the long-term goals of the AI and human are well-aligned then there would be no conflict of interest, but trying to ensure that alignment brings us back to strategy #2 which I will argue is unworkable.\n\nYou could potentially notice this problem by looking ahead to the point where humans attempt to exercise control — if the humans don’t get what they want, then they must not have really been in control.
But evaluating whether they get what they want brings us back to strategy #2.\n\n**Strategy #2: evaluating goodness.** This appears to be unworkable because it requires evaluating human preferences over very large objects (the whole world) under very complex conditions (in the far future, where we’ve had to make many decisions about what kind of world we want to have).\n\nI am quite confident that “world which looks appealing on screen” is not the same as “world which is good;” the gap is already large today, and it will grow much larger over time.\n\n**Maintaining a fiction**. Ultimately the goal of this AI is to make the sensors look like good things are happening in the world. This can be done either by making the world good or by carefully controlling what gets fed to the sensors. In general, there are many fewer constraints on “what gets fed to the sensors” than “what the world is actually like,” and so it’s going to be possible to produce much more desirable-looking outcomes, much more easily, by controlling the sensors.\n\nIf the humans understand what the AI is doing in detail then they can observe the sequence of actions to notice any actions that might corrupt the sensors. But as soon as the AI is engaging in complicated plans this appears to be totally intractable. Even if the AI was human-level, there are many extremely hard-to-detect plans: for example, the AI could collaborate with an external attacker, who will surreptitiously corrupt the sensor. Or the AI could create a copy of itself on an external server who then helped orchestrate a hard-to-detect corruption.\n\nThese actions would be very hard to detect even if you were monitoring a human-level intelligence, and the situation seems hopeless for much more powerful searches. Moreover, because you are searching over sequences of actions for those that receive high reward, you are directly applying optimization power to discovering covert attacks.\n\n**Other problems.** I have a (fairly common) intuition that applying extremely powerful optimization at a target that isn’t *quite* what you want will often lead to bad outcomes. The discussion above is not exhaustive, but I think it is illustrative.\n\nProblem 2: distributional shift (and optimization daemons)\n----------------------------------------------------------\n\nOur training procedure produces a policy and value function, most likely represented as (really big) neural networks. At test time, we combine these the policy and value with MCTS to decide on actions.\n\nThe value function and policy have been optimized to yield good performance on the data points we’ve seen so far, as judged by human evaluations. Unfortunately, there are likely to be a very large number of networks that encode the “wrong” goals but which also yield good performance. These networks will generalize poorly, and moreover when they fail to generalize they can result in an extremely powerful optimization process being pointed at the wrong objective.\n\n**A story about training.** Originally the policy and value function don’t encode anything at all. Over time, they begin to encode a complicated soup of heuristics which is correlated with good performance. If we are training sufficiently powerful models we hope they will eventually perform reasoning about outcomes. For example, the policy could learn to backwards chain from heuristics about what is valuable in order to decide which moves are good. 
This is what we are trying to do — the policy is *supposed* to backwards chain, it’s the only part of the system that can use heuristics in order to prioritize the search.\n\nWhat humans actually want is somewhat complicated, so it seems quite likely that it’s easier for models to pursue a complicated soup of heuristic goals than to understand exactly what we want. This is similar to the way in which humans acquired an extremely rich set of goals even though we were optimized according to evolutionary fitness. This is a complicated question, but I think it’s the theoretical picture and I think historical experience with deep learning points tends to support it.\n\nAs the system improves, the reward function encourages it to exhibit an increasingly precise understanding of what we want. Unfortunately there are two ways to do this:\n\n* The intended way: adjust the implicit goals baked into the model such that they converge towards “be helpful to humans.” In the analogy to humans, this is like humans caring more and more about reproductive fitness (and less and less about things like beauty or fun except insofar as they are useful for reproductive fitness).\n* The unintended way: correctly understand that earning human approval is necessary to survival and hence to achieving other goals, and act accordingly. In the analogy to humans, this is like humans continuing to care about beauty and fun, but believing that they need to have kids in order to realize those goals in the long run.\n\nIn practice, I expect both of these changes to occur to some extent, ending up with a model that has somewhat wrong goals together with an instrumental desire to appear helpful.\n\n**Catastrophic failure**. This could lead to a catastrophic failure in a few different ways:\n\n* An attacker deliberately produces inputs that drive our AI off of the training distribution, and it starts pursuing the wrong goals. That AI may then launch a similar attack against other AI systems it has access to, leading to cascading failures (as with a computer virus). Or an attacker may be able to simultaneously compromise a large number of systems.\n* As AI systems acquire increasing influence in the world, they necessarily move off the training distribution. Eventually this sparks a failure in some systems. These failures could cause chaos in the world, pushing us further from the training distribution and leading to cascading failures; or they may all be triggered by the same events and so be correlated.\n\nIn either case, we could end up with a massive correlated failure of AI systems, where they start effectively maximizing the wrong goals. That looks effectively like a conflict between us and the AI systems we’ve built (just as a virus might effectively lead to a conflict between you and the computer you bought). If the AI systems either have significant responsibilities, or are much more intelligent than unaided humans, then there may not be any way to recover from this failure.\n\nProblem 1.5: non-robust reward functions\n----------------------------------------\n\nThere is another risk at the intersection between robustness and value specification.\n\nWe may learn a model of human approval which is accurate on the training distribution, but incorrectly assigns a very high value to some bad outcomes that didn’t appear in training. 
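One way to picture this failure mode is to deliberately search for inputs that the learned approval model loves. A minimal sketch, with `reward_model` and `perturb` as hypothetical stand-ins (a serious version would use gradient-based attacks rather than random hill climbing):

```python
def probe_reward_model(reward_model, perturb, seed_trajectory, steps=10_000):
    # Hill-climb toward trajectories the learned reward model scores surprisingly highly,
    # i.e. candidate off-distribution failures of the approval model.
    best, best_score = seed_trajectory, reward_model(seed_trajectory)
    for _ in range(steps):
        candidate = perturb(best)          # small random modification of the trajectory
        score = reward_model(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```

Whatever this search turns up can be shown to humans and added to the training set, which is the corrective discussed below.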
Indeed, recent experience with adversarial examples suggests that our models often have very strange behavior on parts of the input space not visited in training and that this problem can be hard to correct. Presumably some of these inputs would be assigned unusually high values (just as some would be assigned unusually low values).\n\nIn order to reach the most pathological cases, the agent needs significant control over its own observations, which in in turn requires control over its environment. So even if the pathological inputs aren’t any worse than random on average, the ones that the system is able to reach will systematically be those where it has acquired enough influence to control its own observations.\n\nWe can potentially correct this problem by eliciting human feedback for the problematic observations and then adding them to the training set. We’re only in serious trouble if the reward function is really hard to fix. In that case, we may instead make it harder and harder for the system to modify its own observations. After a while, we might train a system which is only able to control its observations by physically overpowering us and causing a catastrophe.\n\nIII. How the problem might be fixed\n===================================\n\nI think that my research has a chance of producing an AI that (a) is nearly-as-good as the benchmark, but (b) doesn’t do anything terrible.\n\nThe main changes are:\n\n* Rather than learning the environment model directly from observation, produce it by [amplification](/iterated-distillation-and-amplification-157debfd1616). The amplification process can use exactly the same data about the world (treating the data as an external resource), and it can ultimately be distilled into a neural network just like the one you would have trained. The important difference is that we now have a way to link the generative model back to (an astronomically large number of) human-comprehensible operations on human-comprehensible data and a [sequence of decoders](/approval-maximizing-representations-56ee6a6a1fe6) which can map from the latent space of the generative model to an implicit human-comprehensible representation.\n* Rather than learning a reward function from human data, we also train it by amplification (acting on the same representations used by the generative model). Again, we can distill the reward function into a neural network that acts on sequences of observations, but now instead of learning to predict human judgments it’s predicting a very large implicit deliberation.\n* In addition to training the policy and value function using RL, we use some combination of [these techniques](/techniques-for-optimizing-worst-case-performance-39eafec74b99) to ensure there is *no* series of observations on which the policy behaves extremely badly. For example, we may use amplification in order to understand what the policy is doing, then use that understanding in order to identify possible (pseudo-)inputs on which the policy behaves badly, then add those (pseudo-)inputs to the training set.\n\nCompetitiveness depends on three properties:\n\n* Training a predictor using amplification makes predictions at least as good as training a generative model directly.\n* The worst-case training for the policy and value function don’t interfere significantly with average case performance.\n* Amplification doesn’t add much overhead to the training process.\n\nSafety is more subtle. 
It depends on three properties, each defined in terms of some as-yet-undefined notion of “bad” behavior ([incorrigible](/corrigibility-3039e668638) is the current leading candidate):\n\n1. On average, the reward function incentivizes behavior which competes effectively and gives the user effective control.\n2. There do not exist inputs on which the policy chooses an action because it is bad, or the value function outputs a high reward because the prior behavior was bad. (Or else the policy on its own will generate bad consequences.)\n3. The reward function never gives a behavior a higher reward because it is bad. (Or else the test-time optimization by MCTS can generate bad behavior.) For example, if the AI deludes the human operator so that the operator can’t interfere with the AI’s behavior, that behavior can’t receive a higher reward even if it ultimately allows the AI to make more money.\n\nMy goals\n--------\n\nIn order to make all of that work, we’d need to solve a few research problems.\n\n**Reliability**. Some combination of [these techniques](/techniques-for-optimizing-worst-case-performance-39eafec74b99#4ede) needs to successfully eliminate all bad behavior (and in particular to control [optimization daemons](https://arbital.com/p/daemons/)).\n\n**Amplification**. Amplification needs to be good enough for these three tasks:\n\n* Learn models that make competitive predictions about the world, and which are “understandable” enough that we can…\n* Implement a reward function, defined on those models, that correctly rewards competitive behavior and penalizes bad behavior.\n* Understand the internals of competitive machine learning systems, well enough to do [informed oversight](/the-informed-oversight-problem-1b51b4f66b35) and to help with reliability (if interpretability is an important ingredient for reliability). Probably the key requirement is reliably identifying bad behavior.\n\n(Without introducing significant overhead.)\n\n**Understanding bad behavior**. In order to do either of the above we need some suitable notion of “bad” behavior, such that:\n\n* Avoiding bad behavior is sufficient to avoid catastrophic outcomes.\n* Bad behavior is an act of commission that is “easy” to avoid, such that it can be achieved in the worst case.\n* We can learn a reward function that avoids creating instrumental incentives for bad behavior, e.g. by punishing any bad behavior which played an important role in receiving a high reward. (This is only plausible because our reward function operates on sequences of predicted states, and so if bad behavior is instrumentally useful it must be because the model “knows about” it.)", "url": "https://ai-alignment.com/an-unaligned-benchmark-b49ad992940b", "title": "An unaligned benchmark", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-09-25T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "962986e0ec7c0c291726650c75410e3e"} {"text": "(*Follow-up to* [*learning representations*](/learning-representations-c330b7d12c76)*.*)\n\nA long-lived AI will consume a lot of data, and needs to make hard choices about what to store and how to represent it. This is problematic for an [act-based agent](/act-based-agents-8ec926c79e9c), where these decisions must be made or evaluated by an overseer.\n\nWe might wonder: can act-based agents ever learn to use representations that are incomprehensible to humans?
If they can’t, it would probably prevent them from achieving state-of-the-art performance.\n\nAs a warm-up, consider the case where we observe an image, and want to store a compressed version which we can recall on a future day. That is, we want to learn an encoder E : (image) → (code) and a decoder D : (code) → (image).\n\n![]()Schematic of an autoencoderHow should we train these maps?\n\nEven if the overseer can’t understand the code at all, we can train the encoder and decoder jointly. That is, we start from an image *x*, obtain the reconstruction *x*′ = D(E(*x*)), and ask: “is *x*′ a good reconstruction of *x*?”\n\nThe overseer’s responses define a loss function L(*x*, *x*′), and we can then train D and E to minimize this loss, subject to a constraint on the size of the representation *z*. Such an autoencoder can learn to throw out details that the overseer doesn’t expect to be useful.\n\nManipulating representations\n----------------------------\n\nNow suppose that we want to learn to *manipulate* codes. For example, I may have a target transformation on natural inputs like “change the season from winter to summer.” I’d like to train an agent to implement the analogous transformation on codes.\n\nThat is, suppose that the overseer has a map F : (natural input) → (natural output), and an approval function L : (natural input) × (natural output) → [0, 1]. This is the data that we need to train an agent to manipulate natural inputs using [imitation+RL](/imitation-rl-613d70146409).\n\n![]()By following D, then having the overseer apply F, then applying E, we can produce training samples for *f*. Similarly, by applying D to both inputs and outputs, we can elicit the overseer’s feedback on f.If we have an encoder/decoder, we can use them together with F and L to train a map on codes. To get examples we can evaluate E(F(D(*z*))). To get evaluations, we can compute L(D(*z*), D(*f*(*z*))).\n\nSo the overseer has lost nothing by working with these representations — they are able to provide feedback as well as if they understood the representations natively.\n\nWe can also learn representations which are computationally useful in addition to being compact, by training *f* jointly with the encoder/decoder. A more convenient representation will make it easier to learn *f.*\n\nAt the end of the day, our system also interacts with the world by taking natural inputs (e.g. instructions in natural language, or images from a camera) and producing natural outputs (e.g. utterances, or images displayed on a screen).\n\n![]()f(z) ~ F(D(z)), which we can use both to sample and evaluate input/output pairs.We can train these functions in exactly the same way, by translating to/from codes using our encoder or decoder.\n\nIf we were willing to train our system entirely end-to-end then we don’t need to deal with the encoder/decoder at all and could just demonstrate/evaluate the natural inputs and outputs. The encoder/decoder are intended to allow the overseer to evaluate individual operations on codes, so that they don’t need to evaluate the entire process from start to finish. This is important because our agents might perform very long sequences of operations, which won’t be practical to train end-to-end. 
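Putting the training signals from this section in one place, here is a minimal sketch in which E, D, F, and the overseer's judgments are all hypothetical callables:

```python
def autoencoder_loss(E, D, images, overseer_loss):
    # Train E and D jointly against the overseer's judgment of reconstructions:
    # the overseer only compares x to D(E(x)) and never needs to read the code z.
    return sum(overseer_loss(x, D(E(x))) for x in images) / len(images)

def code_transform_examples(E, D, F, codes):
    # Supervised examples for a map f on codes: decode, apply the overseer's
    # transformation F on natural inputs, then re-encode the result.
    return [(z, E(F(D(z)))) for z in codes]

def code_transform_approval(f, D, overseer_approval, codes):
    # The overseer's feedback on f itself: judge each decoded input/output pair.
    return sum(overseer_approval(D(z), D(f(z))) for z in codes) / len(codes)
```

In each case the overseer only ever sees natural inputs and outputs; the codes themselves stay opaque.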
For example, an agent might want to use an observation months after making it.\n\nIterative encoding\n==================\n\nIn general, transforming an efficient representation into a human-comprehensible representation will increase its size.\n\nFor example, a comprehensible representation might be a description in English, while an efficient representation uses a richer language containing concepts that humans don’t understand. Translating the rich language into English may greatly increase its size — perhaps a single word of the rich language can only be explained with many paragraphs of English text.\n\nIn general, unpacking an efficient representation could lead to exponentially large comprehensible representations, which would render the schemes from the last section impractical.\n\nWe can potentially address this problem by working with a sequence of increasingly complex representations.\n\nCompound representations\n------------------------\n\nThe first ingredient is building a large representation out of smaller parts.\n\nWe’ll start with some space of “simple” representations S, such as snippets of English text that can be read and understood in 5 seconds. We’d like to build more complex representations out of S.\n\nThis could be approached in many equally good ways, but I’ll use the idea of *messages* defined in [meta-execution](/meta-execution-27ba9b34d377).\n\n![]()A compound representation composed out of simpler representations.A message over S consists of two parts:\n\n* An element of S (the “head”)\n* A list of additional messages over S (the “arguments”), that can be referenced by the head.\n\nThe semantics of messages are straightforward, and hopefully the example above makes it clear.\n\nThe *size* of a finite message is 1 plus the sum of the sizes of all of its arguments. Write M(S) for the set of messages over S of size at most 1000.\n\nM(S) is a much bigger set than S, and can generally represent much more complex concepts.\n\nCompressing representations\n---------------------------\n\nBy combining compound representations with representation learning, we can potentially construct a hierarchy of increasingly efficient representations.\n\nLet S be some initial representation which is comprehensible to the human overseer H. An agent A trained by H would naturally operate on S.\n\nNow we’d like to train agent A⁺ which is smarter than A and operates on a more efficient representation S⁺.\n\nIf we use an [amplification](/policy-amplification-6a70cbee4f34) scheme like [meta-execution](/meta-execution-27ba9b34d377), then we can turn A into a smarter agent Hᴬ which operates naturally on M(S). We can then use Hᴬ to train an autoencoder which compresses M(S) into a new representation S⁺:\n\n![]()An autoencoder which translates compound representations into a more efficient encoding. The empty boxes would be filled in with elements of S, and the binary code belongs to S⁺.In general this encoder can’t preserve all of the information in every compound representation. But it can preserve the most important details, and may be able to substantially compress the representation.\n\nWe can train this autoencoder jointly with a new agent A⁺ who operates on S⁺, in order to obtain representations that are computationally convenient. With the encoder in hand, Hᴬ can effectively oversee A⁺.\n\nIteration\n---------\n\nNow we have an agent A⁺ which operates on a more efficient representation S⁺. 
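Backing up for a moment, the message structure defined above is simple enough to write down directly. A minimal sketch:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Message:
    # A message over S: a head drawn from S plus a list of argument messages that
    # the head can reference. Here S is just short text snippets.
    head: str
    arguments: List["Message"] = field(default_factory=list)

    def size(self) -> int:
        # Size = 1 plus the sum of the sizes of all arguments.
        return 1 + sum(arg.size() for arg in self.arguments)

def in_M(message: Message, max_size: int = 1000) -> bool:
    # M(S) = messages over S of size at most 1000.
    return message.size() <= max_size
```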
We can repeat the process: Hᴬ⁺ can train a new agent A⁺⁺ which operates on a still more efficient representation S⁺⁺:\n\n![]()An autoencoder which translates M(S⁺) into a still-more-efficient encoding S⁺⁺.In this picture, E⁺ takes as input a message over S⁺ (represented as binary) and outputs a symbol in S⁺⁺ (represented as squiggles).\n\nHᴬ⁺ is able to operate on messages over S⁺ since it is able to call A⁺ many times, and so it can train E⁺ and D⁺. And with the encoder in hand, Hᴬ⁺ can train a new agent A⁺⁺ which operates on S⁺⁺.\n\nNote that while it is possible to convert S⁺⁺ back to a message over S, the size may grow considerably. An element of S⁺⁺ can be unpacked to a message of size 1000 over S⁺. Each of these thousand elements of S⁺ can be unpacked to a message of size 1000 over S. And so an element of S⁺⁺ would unpack into a message of size 1000000 over S. This size will keep growing exponentially as we iterate.\n\nPutting it all together\n=======================\n\nI’ve been somewhat cavalier when talking about [bootstrapping schemes for aligned RL](/benign-model-free-rl-4aae8c97e385), and haven’t discussed the representations they would use. I’ve swept this under the rug by assuming that the resulting agents operate in an episodic environment with a well-defined beginning and end and that all information stored between episodes is human-comprehensible.\n\nThe primary purpose of this post is to patch up these oversights, and to move towards removing the episodic assumption altogether.\n\nSo what does it look like if we combine this system with [this bootstrapping scheme](/benign-model-free-rl-4aae8c97e385), putting together the steps from the preceding sections?\n\n![]()It’s a mouthful.This is a copy of the diagram from [the previous post](/benign-model-free-rl-4aae8c97e385), but with the following changes:\n\n* I’ve annotated each policy with its domain and range. As before, S, S⁺ and S⁺⁺ are increasingly efficient representations, and M(X) is the space of small messages over X. For convenience, I’m assuming that each of S, S⁺, S⁺⁺ also contains fixed representations of the object-level inputs and outputs of our agents.\n* I’ve inserted encoders E and E⁺, and decoders D and D⁺. These are trained using the mechanism described in this post. They are needed to provide oversight to A⁺ and A⁺⁺, since Hᴬ and Hᴬ⁺ aren’t capable of directly understanding S⁺ or S⁺⁺.\n\nConclusion\n==========\n\nUltimately, efficient AI systems will act on compact representations which will be incomprehensible to humans. If we want to build act-based agents, we need to connect these representations to something that the overseer can understand.\n\n![]()This might be achievable by training a hierarchy of increasingly efficient representations in parallel with training a hierarchy of increasingly competent agents. We can learn richer representations by training an autoencoder to compress compound representations built up of simpler pieces.\n\nIt is completely unclear if this proposal is realistic, and this post hasn’t touched on any of the practical difficulties. 
But it suggests to me that there isn’t an inherent philosophical problem with act-based training of agents to operate on efficient, incomprehensible representations.", "url": "https://ai-alignment.com/approval-maximizing-representations-56ee6a6a1fe6", "title": "Approval-maximizing representations", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-07-01T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "91e952e153381edea335e0e8a6a9603b"} {"text": "In my [last post](https://medium.com/ai-control/directions-and-desiderata-for-ai-control-b60fca0da8f4), I described three research areas in AI control that I see as central: reward learning, robustness, and deliberation.\n\nIn this post I argue that these three pieces may be *sufficient* to get a [benign](https://medium.com/ai-control/benign-ai-e4eb6ec6d68e#.ugg3x77ws) and competitive version of model-free reinforcement learning. I think this is an important intermediate goal of solving AI control.\n\nThis post doesn’t discuss [benign model-based RL](https://medium.com/ai-control/aligned-search-366f983742e9#.rq3auppf0) at all, which I think is another key obstacle for [prosaic AI control](https://medium.com/ai-control/prosaic-ai-control-b959644d79c2#.d46mjxf3f).\n\n(*This post overlaps extensively with my* [*post on ALBA*](https://medium.com/ai-control/alba-an-explicit-proposal-for-aligned-ai-17a55f60bbcf#.m3m81zgrd)*, but I hope this one will be much clearer. Technically, ALBA is an implementation of the general strategy outlined in this post. I think the general strategy is much more important than that particular implementation.*)\n\nIngredients\n===========\n\nReward learning and robustness\n------------------------------\n\nGiven a [benign](https://medium.com/ai-control/benign-ai-e4eb6ec6d68e#.ugg3x77ws) agent H, [reward learning](https://medium.com/ai-control/the-reward-engineering-problem-30285c779450#.pmofowr9x) allows us to construct a reward function *r* that can be used to train a weaker benign agent A. If our training process is robust, the resulting agent A will remain benign off of the training distribution (though it may be *incompetent* off of the training distribution).\n\nSchematically, we can think of reward learning + robustness as a widget which takes a slow, benign process H and produces a fast, benign process A:\n\n![]()Reward learningA’s capabilities should be roughly the “intersection” of H’s capabilities and our RL algorithms’ competence. That is, A should be able to perform a task whenever *both* H can perform that task and our RL algorithms can learn to perform that task.\n\nIn these pictures, the vertical axis corresponds intuitively to “capability,” with higher agents being more capable. But in reality I’m thinking of the possible capabilities as forming a complete [lattice](https://en.wikipedia.org/wiki/Lattice_(order)). That is, a generic pair of levels of capabilities is incomparable, with neither strictly dominating the other.\n\nAmplification\n-------------\n\nIf we iteratively apply reward learning and robustness, we will obtain a sequence of weaker and weaker agents. To get anywhere, we need some mechanism that lets us produce a *stronger* agent.\n\nThe [capability amplification problem](https://medium.com/ai-control/policy-amplification-6a70cbee4f34) is to start with a weak agent A and a human expert H, and to produce a significantly more capable agent Hᴬ. 
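One toy picture of such an Hᴬ, anticipating the assistant-style example in the next paragraph; `human_answer` is a hypothetical stand-in for an hour of human deliberation:

```python
def amplified_agent(human_answer, agent, question, budget=1000):
    # H^A: the human tackles the question while being allowed to consult the fast
    # agent A up to `budget` times; the result is slower but hopefully smarter than A.
    remaining = [budget]

    def ask(subquestion):
        if remaining[0] <= 0:
            raise RuntimeError("consultation budget exhausted")
        remaining[0] -= 1
        return agent(subquestion)

    # The human decomposes the problem however they like, calling ask() on subquestions.
    return human_answer(question, ask)
```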
The more capable agent can take a lot longer to think, all we care about is that it *eventually* arrives at better decisions than A. The key challenge is ensuring that Hᴬ remains benign, i.e. that the system doesn’t acquire new preferences as it becomes more capable.\n\nAn example approach is to provide A as an assistant to H. We can give H an hour to deliberate, and let it consult A thousands of times during that hour. Hᴬ’s output is then whatever H outputs at the end of that process. Because H is consulting A a large number of times, we can hope that the resulting system will be much smarter than A. Of course, the resulting system will be thousands of times more computationally expensive than A, but that’s fine.\n\nIn general, [meta-execution](https://medium.com/ai-control/meta-execution-27ba9b34d377#.isdz38ftn) is my current preferred approach to capability amplification.\n\nSchematically, we can think of amplification as a widget which takes a fast, [benign](https://medium.com/ai-control/benign-ai-e4eb6ec6d68e#.ugg3x77ws) process A and produces a slow, benign process Hᴬ:\n\n![]()AmplificationPutting it together\n===================\n\nWith these two widgets in hand, we can iteratively produce a sequence of increasingly competent agents:\n\n![]()Iterated amplification + reward learningThat is, we start with our benign expert H. We then learn a reward function and train an agent A, which is less capable than H but can run much faster. By running many instances of A, we obtain a more powerful agent Hᴬ, which is approximately as expensive as H.\n\nWe can then repeat the process, using Hᴬ to train an agent A⁺ which runs as fast as A but is more capable. By running A⁺ for a long time we obtain a still more capable agent Hᴬ⁺, and the cycle repeats.\n\nCollapsing the recursion\n------------------------\n\nI’ve described an explicit sequence of increasingly capable agents. This is the most convenient framework for analysis, but actually implementing a sequence of distinct agents might introduce significant overhead. It also feels at odds with current practice, such that I would be intuitively surprised to actually see it work out.\n\nInstead, we can collapse the entire sequence to a single agent:\n\n![]()An agent defines its own reward functionIn this version there is a single agent A which is simultaneously being trained and being used to define a reward function.\n\nAlternatively, we can view this as a sequential scheme with a strong initialization: there is a separate agent at each time *t*, who oversees the agent at time *t*+1, but each agent is initialized using the previous one’s state.\n\nThis version of the scheme is more likely to be efficient, and it feels much closer to a practical framework for RL. (I originally suggested a similar scheme [here](/implementing-our-considered-judgment-6c715a239b3e).)\n\nHowever, in addition to complicating the analysis, it also introduces additional challenges and risks. For example, if Hᴬ actually consults A, then there are unattractive equilibria in which A manipulates the reward function, and the manipulated reward function rewards manipulation. Averting this problem either requires H to sometimes avoid depending on A, or else requires us to sometimes run against an old version of A (a trick sometimes used to stabilize self-play). 
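Here is a minimal sketch of how these pieces compose. `distill` and `amplify` are hypothetical stand-ins for the reward-learning and amplification steps, and the second function is just one reading of the collapsed variant, using the frozen-old-copy trick mentioned above:

```python
def iterated_scheme(H, distill, amplify, num_stages):
    # Explicit sequence: H trains A, H^A trains A+, H^(A+) trains A++, and so on.
    overseer = H
    agent = None
    for _ in range(num_stages):
        agent = distill(overseer)      # fast agent trained with a reward learned from the overseer
        overseer = amplify(H, agent)   # slow amplified agent, e.g. H consulting A many times
    return agent

def collapsed_scheme(H, distill, amplify, num_steps, refresh_every=10):
    # Single-agent variant: the reward-defining overseer consults a frozen older copy
    # of the agent, refreshed only periodically (the self-play stabilization trick).
    agent = distill(H)
    frozen = agent
    for step in range(1, num_steps + 1):
        agent = distill(amplify(H, frozen))
        if step % refresh_every == 0:
            frozen = agent
    return agent
```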
Both of these techniques implicitly reintroduce the iterative structure of the original scheme, though they may do so with lower computational overhead.\n\nWe will have an even more serious problem if our approach to reward learning relies on throttling the learning algorithm. When we work with an explicit sequence of agents, we can ensure that their capabilities improve gradually. It’s not straightforward to do something analogous in the single agent case.\n\nOverall I think this version of the scheme is more likely to be practical. But it introduces several additional complications, and I think it’s reasonable to start by considering the explicit sequential form until we have a solid grasp of it.\n\nAnalysis\n========\n\nI’ll make two critical claims about this construction. Neither claim has yet been formalized, and it’s not clear whether it will be possible to formalize them completely.\n\n**Claim #1: All of these agents are** [**benign**](https://medium.com/ai-control/benign-ai-e4eb6ec6d68e#.ugg3x77ws)**.**\n--------------------------------------------------------------------------------------------------------------------\n\nThis is plausible by induction:\n\n* The original expert H is benign by definition.\n* If we start with a benign overseer H, and have working solutions to reward learning + robustness, then the trained agent A is benign.\n* If we start with a benign agent A, and have a working solution to capability amplification, then the amplified agent Hᴬ will be benign.\n\nThere are important subtleties in this argument; for example, an agent may be benign with high probability, and the error probability may increase exponentially as we proceed through the induction. Dealing with these subtleties will require careful definitions, and in some cases adjustments to the algorithm. For example, in the case of increasing failure probabilities, we need to [strengthen the statement of amplification](https://medium.com/ai-control/reliability-amplification-a96efa115687) to avoid the problem.\n\n**Claim #2: The final agent has state-of-the-art performance.**\n---------------------------------------------------------------\n\nThis is plausible if our building blocks satisfy several desirable properties.\n\nFirst, capability amplification should be able to cross every non-maximal level of capability. That is, for every level of capability, it is possible to start with an agent A who is below that level, and end up with an agent Hᴬ which is above that level:\n\n![]()For every possible place we could put the dotted line — every possible capability level — there must be some agent A for whom the orange arrow crosses that dotted line. Otherwise we would never be able to get to the other side of that dotted line, i.e. we would never be able to surpass that level of capability.\n\nSecond, capability amplification should be monotonic (if A is at least as capable as B then Hᴬ should be at least as capable as Hᴮ).\n\nThird, reward learning should yield an agent whose capabilities are at least the infimum of our RL algorithm’s capabilities and the overseer’s capabilities, even if we train robustly.\n\nNow given a sequence of increasingly powerful fast agents we can take the supremum of their capabilities. Those agents will all be weaker than our RL algorithms and so the supremum is not the maximal capability, so we can consider a starting point from which capability amplification would cross that supremum.
By hypothesis the sequence must eventually cross this starting point, and at that point amplification will push it above the supremum (and reward learning will keep it above the supremum). Making this argument carefully shows that the supremum is the state of the art for RL algorithms and that we attain the supremum after some finite number of steps. (Though all of this is based on a leaky abstraction of “capabilities.”)\n\nCost\n====\n\nI think this proposal will be most helpful if it imposes minimal additional overhead. My main goal is to develop algorithms with sublinear overhead, i.e. for which the fraction of overhead converges to 0 as the underlying algorithms become stronger.\n\nThe cost of this scheme depends on the quantitative properties of our basic building blocks:\n\n**Factor #1: How much do reward learning and robustness slow down training?**\n-----------------------------------------------------------------------------\n\nDuring RL, we need to evaluate the agent A many times. If we want to use a learned reward function we may need to evaluate A more times. And if we want to train a policy which remains benign off of the training distribution, we may need to evaluate A more times (e.g. since we may need to do adversarial training). Ideally that overhead will shrink as our algorithms become more powerful.\n\nI think this is plausible but far from certain (for now it is uncertain whether reward learning and robustness are even plausible). Some reassuring factors:\n\n* Reward learning / adversarial training can actually improve the performance of our system — the computational time spent on them might actually be well-spent even from a capabilities perspective\n* The difficulty of the “additional learning problem” we are trying to solve in each case (e.g. the concept of “defer to human control”) may not scale up linearly with the complexity of the underlying domain.\n\n**Factor #2: how many times do we have to invoke the overseer during training?**\n--------------------------------------------------------------------------------\n\nIn addition to calling the agent A, we will need to call the overseer H in order to get information about the reward function. Because the overseer is much more expensive than the agent, we would like to minimize the number of times we call the overseer. This can be quantified by the ratio between the number of calls to H and the number of calls to A. For example, we may need to call H once for every hundred calls to A.\n\n**Factor #3: how expensive is capability amplification?**\n---------------------------------------------------------\n\nCapability amplification is possible only because we allow the agent Hᴬ to think for much longer than A. But “much longer” could represent a range of values: is Hᴬ a hundred times more expensive to evaluate than A? A thousand? A million?\n\nRoughly speaking, factors #2 and #3 should be multiplied together to get the overhead from reward learning: factor #2 tells us how many times we have to call the overseer, while factor #3 tells us how expensive the overseer is.\n\nThe total overhead is thus (Factor #1) + (Factor #2) \\* (Factor #3). As an example, I’d be happy with values like 10% + 0.01% ×1000 = 20%.\n\nFactor #4: do we need to train many separate agents?\n----------------------------------------------------\n\nIf we need to use a sequence of N increasingly capable agents, then we would naively increase our training time by a factor of N. 
Naively, this would dominate the overhead, and in order for the scheme to be workable I think we would need to avoid it. I see a few plausible approaches:\n\n* We could use the collapsed version with a single agent.\n* We could use some other initialization or parameter-sharing scheme to effectively reuse the computational work done in training earlier agents.\n* The earlier agents could require significantly less training time than the final agent, e.g. because they are less capable. For example, if each agent takes only 20% as long to train as the following one, then the total overhead is only 25%.\n\nThese mechanisms can work together; for example, each agent may require some amount of non-reusable computation, but that amount may be reduced by a clever initialization scheme.\n\nConclusion\n==========\n\nI’ve outlined an approach to AI control for model-free RL. I think there is a very good chance, perhaps as high as 50%, that this basic strategy can eventually be used to train benign state-of-the-art model-free RL agents. Note that this strategy also applies to techniques like evolution that have historically been considered really bad news for control.\n\nThat said, the scheme in this post is still extremely incomplete. I have recently prioritized building a practical implementation of these ideas, rather than continuing to work out conceptual issues. That does not mean that I think the conceptual issues are worked out conclusively, but it does mean that I think we’re at the point where we’d benefit from empirical information about what works in practice (which is a long way from how I felt about AI control 3 years ago!)\n\nI think the largest technical uncertainty with this scheme is whether we can achieve enough robustness to avoid malign behavior in general.\n\nThis scheme does not apply to any components of our system which [aren’t learned end-to-end](https://medium.com/ai-control/not-just-learning-e3bfb5a1f96e#.mvuanlogj). The idea is to use this training strategy for any internal components of our system which use model-free RL. In parallel, we need to develop aligned variants of each other algorithmic technique that plays a role in our AI systems. In particular, I think that model-based RL with extensive planning is a likely sticking point for this program, and so is a natural topic for further conceptual research.", "url": "https://ai-alignment.com/benign-model-free-rl-4aae8c97e385", "title": "Benign model-free RL", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-06-01T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "60520ed6a1cee9acc884c14ac2e9de9f"} {"text": "(*Related:* [*Inaccessible Information*](/inaccessible-information-c749c6a88ce)*,* [*What does the universal prior actually look like?*](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/)*,* [*Learning the prior*](/learning-the-prior-48f61b445c04))\n\nFitting a neural net implicitly uses a “wrong” prior. This makes neural nets more data hungry and makes them generalize in ways we don’t endorse, but it’s not clear whether it’s an alignment problem.\n\nAfter all, if neural nets are what works, then both the aligned and unaligned AIs will be using them. 
It’s not clear if that systematically disadvantages aligned AI.\n\nUnfortunately I think it’s an alignment problem:\n\n* I think the neural net prior may work better for agents with certain kinds of simple goals, as described in [Inaccessible Information](/inaccessible-information-c749c6a88ce). The problem is that the prior mismatch may bite harder for some kinds of questions, and some agents simply never need to answer those hard questions.\n* I think that Solomonoff induction [generalizes catastrophically](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/) because it becomes dominated by consequentialists who use better priors.\n\nIn this post I want to try to build some intuition for this problem, and then explain why I’m currently feeling excited about learning the right prior.\n\nIndirect specifications in universal priors\n===========================================\n\nWe usually work with very broad “universal” priors, both in theory (e.g. Solomonoff induction) and in practice (deep neural nets are a very broad hypothesis class). For simplicity I’ll talk about the theoretical setting in this section, but I think the points apply equally well in practice.\n\nThe classic universal prior is a random output from a random stochastic program. We often think of the question “which universal prior should we use?” as equivalent to the question “which programming language should we use?” but I think that’s a loaded way of thinking about it — not all universal priors are defined by picking a random program.\n\nA universal prior can never be *too* wrong — a prior P is universal if, for any other computable prior Q, there is some constant *c* such that, for all *x,* we have P(*x*) > *c* Q(x). That means that given enough data, any two universal priors will always converge to the same conclusions, and no computable prior will do much better than them.\n\nUnfortunately, universality is much less helpful in the finite data regime. The first warning sign is that our “real” beliefs about the situation can appear in the prior in two different ways:\n\n* **Directly:** ifour beliefs about the world are described by a simple computable predictor, they are guaranteed to appear in a universal prior with significant weight.\n* **Indirectly**: the universal prior also “contains” other programs that are themselves acting as priors. For example, suppose I use a universal prior with a terribly inefficient programming language, in which each character needed to be repeated 10 times in order for the program to do anything non-trivial. This prior is still universal, but it’s reasonably likely that the “best” explanation for some data will be to first sample a really simple interpret for a *better* programming language, and then draw a uniformly randomly program in that better programming language.\n\n(There isn’t a bright line between these two kinds of posterior, but I think it’s extremely helpful for thinking intuitively about what’s going on.)\n\nOur “real” belief is more like the direct model — we believe that the universe is a lawful and simple place, not that the universe is a hypothesis of some agent trying to solve a prediction problem.\n\nUnfortunately, for realistic sequences and conventional universal priors, I think that indirect models are going to dominate. 
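As a quick reminder of why priors with this universality property exist at all (this is standard material, nothing specific to this post), take a weighted mixture of all the candidate priors Qᵢ:

```latex
P(x) \;=\; \sum_i w_i\, Q_i(x), \qquad w_i > 0,\quad \sum_i w_i \le 1,
\qquad\text{so}\qquad
P(x) \;\ge\; w_j\, Q_j(x) \quad\text{for every } j \text{ and every } x.
```

So P dominates each computable Qⱼ with constant c = wⱼ. Sampling a random program in a universal language implements such a mixture (up to the usual caveats about semicomputability), with wⱼ roughly 2^(−ℓⱼ) for a program of length ℓⱼ computing Qⱼ, which is why an "indirect" hypothesis only pays a prior cost proportional to the length of the program that specifies it.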
The problem is that “draw a random program” isn’t actually a very good prior, even if the programming language is OK — if I were an intelligent agent, even if I knew nothing about the particular world I lived in, I could do a lot of a priori reasoning to arrive at a much better prior.\n\nThe conceptually simplest example is “I think therefore I am.” Our hypotheses about the world aren’t just arbitrary programs that produce our sense experiences — we restrict attention to hypotheses that explain why we exist and for which it matters what we do. This rules out the overwhelming majority of programs, allowing us to assign significantly higher prior probability to the real world.\n\nI can get other advantages from a priori reasoning, though they are a little bit more slippery to talk about. For example, I can think about what kinds of specifications make sense and really are most likely a priori, rather than using an arbitrary programming language.\n\nThe upshot is that an agent who is trying to do something, and has enough time to think, actually seems to implement a *much* better prior than a uniformly random program. If the complexity of specifying such an agent is small relative to the prior improbability of the sequence we are trying to predict, then I think the universal prior is likely to pick out the sequence indirectly by going through the agent (or else in some even weirder way).\n\nI make this argument in the case of Solomonoff induction in [What does the universal prior actually look like?](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/) I find that argument pretty convincing, although Solomonoff induction is weird enough that I expect most people to bounce off that post.\n\nI make this argument in a much more realistic setting in [Inaccessible Information](/inaccessible-information-c749c6a88ce). There I argue that if we e.g. use a universal prior to try to produce answers to informal questions in natural language, we are very likely to get an indirect specification via an agent who reasons about how we use language.\n\nWhy is this a problem?\n======================\n\nI’ve argued that the universal prior learns about the world indirectly, by first learning a new better prior. Is that a problem?\n\nTo understand how the universal prior generalizes, we now need to think about how the learned prior generalizes.\n\nThe learned prior is itself a program that reasons about the world. In both of the cases above (Solomonoff induction and neural nets) I’ve argued that the simplest good priors will be goal-directed, i.e. will be *trying* to produce good predictions.\n\nI have two different concerns with this situation, both of which I consider serious:\n\n* **Bad generalizations may disadvantage aligned agents**. The simplest version of “good predictions” may not generalize to some of the questions we care about, and may put us at a disadvantage relative to agents who only care about simpler questions. (See [Inaccessible Information](/inaccessible-information-c749c6a88ce).)\n* **Treacherous behavior**. Some goals might be easier to specify than others, and a wide range of goals may converge instrumentally to “make good predictions.” In this case, the simplest programs that predict well might be trying to do something totally unrelated; when they no longer have instrumental reasons to predict well (e.g. when their predictions can no longer be checked), they may do something we regard as catastrophic.\n\nI think it’s unclear how serious these problems are in practice.
But I think they are huge obstructions from a theoretical perspective, and I think there is a reasonable chance that this will bite us in practice. Even if they aren’t critical in practice, I think that it’s methodologically worthwhile to try to find a good scalable solution to alignment, rather than having a solution that’s contingent on unknown empirical features of future AI.\n\nLearning a competitive prior\n============================\n\nFundamentally, I think our mistake was building a system that uses the wrong universal prior, one that fails to really capture our beliefs. Within that prior, there are other agents who use a better prior, and those agents are able to outcompete and essentially take over the whole system.\n\nI’ve considered lots of approaches that try to work around this difficulty, taking for granted that we won’t have the right prior and trying to somehow work around the risky consequences. But now I’m most excited about the direct approach: give our original system the right prior so that sub-agents won’t be able to outcompete it.\n\nThis roughly tracks what’s going on in our real beliefs, and why it seems absurd to us to infer that the world is a dream of a rational agent—why think that the agent will assign higher probability to the real world than the “right” prior? (The simulation argument is actually quite subtle, but I think that after all the dust clears this intuition is basically right.)\n\nWhat’s really important here is that our system uses a prior which is competitive, as evaluated by our real, endorsed (inaccessible) prior. A neural net will never be using the “real” prior, since it’s built on a towering stack of imperfect approximations and is computationally bounded. But it still makes sense to ask for it to be “as good as possible” given the limitations of its learning process — we want to avoid the situation where the neural net is able to learn a new prior which *predictably* to outperforms the outer prior. In that situation we can’t just blame the neural net, since it’s demonstrated that it’s able to learn something better.\n\nIn general, I think that competitiveness is a desirable way to achieve stability — using a suboptimal system is inherently unstable, since it’s easy to slip off of the desired equilibrium to a more efficient alternative. Using the wrong prior is just one example of that. You can try to avoid slipping off to a worse equilibrium, but you’ll always be fighting an uphill struggle.\n\nGiven that I think that finding the right universal prior should be “plan A.” The real question is whether that’s tractable. My current view is that it looks plausible enough (see [Learning the prior](/learning-the-prior-48f61b445c04) for my current best guess about how to approach it) that it’s reasonable to focus on for now.", "url": "https://ai-alignment.com/better-priors-as-a-safety-problem-24aa1c300710", "title": "Better priors as a safety problem", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-07-04T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "9f68c7bd116991f1958f0b90461812c4"} {"text": "In the first half of this post, I’ll discuss three research directions that I think are especially promising and relevant to AI alignment:\n\n1. **Reliability and robustness.** Building ML systems which behave acceptably in the worst case rather than only on the training distribution.\n2. 
**Oversight / reward learning.** Constructing objectives and training strategies which lead our policies to do what we intend.\n3. **Deliberation and amplification.** Surpassing human performance without simultaneously abandoning human preferences.\n\nI think that we have several angles of attack on each of these problems, and that solutions would significantly improve our ability to align AI. My current feeling is that these areas cover much of the key work that needs to be done.\n\nIn the second half of the post, I’ll discuss three desiderata that I think should guide research on alignment:\n\n1. **Secure**. Our solutions should work acceptably even when the environment itself is under the influence of an adversary.\n2. **Competitive**. Our solutions should impose minimal overhead, performance penalties, or restrictions compared to malign AI.\n3. **Scalable.** Our solutions should continue to work well even when the underlying learning systems improve significantly.\n\nI think that taking these requirements seriously leads us to substantially narrow our focus.\n\nIt may turn out that these desiderata are impossible to meet, but if so I think that the first order of business should be understanding clearly *why* they are impossible. This would let us better target our work on alignment and better prepare for a future where we won’t have a completely satisfying solution to alignment.\n\n(The ideas in this post are not novel. My claimed contribution is merely collecting these things together. I will link to my own writing on each topic in large part because that’s what I know.)\n\nI. Research directions\n======================\n\n1. Reliability and robustness\n-----------------------------\n\nTraditional ML algorithms optimize a model or policy to perform well on the training distribution. These models can behave arbitrarily badly when we move away from the training distribution. Similarly, they can behave arbitrarily badly on a small part of the training distribution.\n\nI think this is bad news:\n\n* Deploying ML systems will critically change their environment, in a way that is hard or impossible to simulate at training time. (The “treacherous turn” is a special case of this phenomenon.)\n* Deployed ML systems are interconnected and exposed to the same world. So if conditions change in a way that causes one of them to fail, *many* systems may fail simultaneously.\n* If ML systems are extremely powerful, or if they play a critical role in society, then a widespread failure may have catastrophic consequences.\n\nI’m aware of three basic approaches to reliability that seem to me like they could plausibly scale and be competitive:\n\n(*ETA: this list is superseded by the list in* [*Techniques for Optimizing Worst-Case Performance*](/techniques-for-optimizing-worst-case-performance-39eafec74b99)*. I removed consensus and added interpretability and verification. I don’t discuss “learning the right model,” which I still consider a long shot.*)\n\n* **Adversarial training**. At training time, attempt to construct inputs that induce problematic behavior and train on those. Eventually, we hope there will be no catastrophe-inducing inputs left. We don’t yet know what is possible to achieve. ([Szegedy 2014](https://arxiv.org/pdf/1312.6199v4.pdf), [Goodfellow 2015](https://arxiv.org/pdf/1412.6572v3.pdf))\n* **Ensembling and consensus**. We often have confidence that there exists *some* models which will generalize appropriately. 
If we can verify that many models agree about an answer, we can be confident that the consensus is correct. If we use this technique, we will often need to abstain on unfamiliar inputs, and in order to remain competitive we will probably need to represent the ensemble implicitly. ([Khani 2016](https://cs.stanford.edu/~pliang/papers/unanimity-acl2016.pdf))\n* **Learning the right model**. If we understood enough about the structure of our model (for example if it reflected the structure of the underlying data-generating process), we might be confident that it will generalize correctly. Very few researchers are aiming for a secure / competitive / scalable solution along these lines, and finding one seems almost (but not completely) hopeless to me. This is MIRI’s approach.\n\nUsual caveats apply: these approaches may need to be used in combination; we are likely to uncover completely different approaches in the future; and I’m probably overlooking important existing approaches.\n\nI think this problem is pretty well-understood and well-recognized, but it looks really hard. ML researchers mostly focus on improving performance rather than robustness, and so I think that this area remains neglected despite the problem being well-recognized.\n\n(Previous posts on this blog: [*red teams*](https://medium.com/ai-control/red-teams-b5b6de33dc76#.w2nsces19)*,* [*learning with catastrophes*](https://medium.com/ai-control/learning-with-catastrophes-59387b55cc30#.a590k1j0p)*,* [*thoughts on training highly reliable models*](https://medium.com/ai-control/some-thoughts-on-training-highly-reliable-models-2c78c17e266d#.pbtkz0czs))\n\n2. Oversight / reward learning\n------------------------------\n\nML systems are typically trained by optimizing some objective over the training distribution. For this to yield “good” behavior, the objective needs to sufficiently close to what we really want.\n\nI think this is also bad news:\n\n* Some tasks are very “easy” to frame as optimization problems. For example, we can already write an objective to train an RL agent to operate a profit-maximizing autonomous corporation (though for now we can only train very weak agents).\n* Many tasks that humans care about, such as maintaining law and order or helping us better understand our values, are extremely hard to convert into precise objectives: they are inherently poorly-defined or involve very long timescales, and simple proxies can be “gamed” by a sophisticated agent.\n* As a result, many tasks that humans care about may not get done well; we may find ourselves in an increasingly sophisticated and complex world driven by completely alien values.\n\nSo far, the most promising angle of attack is to optimize extremely complex objectives, presumably by learning them.\n\nI’m aware of two basic approaches to reward learning that seem like they could plausibly scale:\n\n* **Inverse reinforcement learning**. We can observe human behavior in a domain and try to infer what the human is “trying to do,” converting it into an objective that can be used to train our systems. ([Russell 1998](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.152.6795&rep=rep1&type=pdf), [Ng 2000](http://ai.stanford.edu/~ang/papers/icml00-irl.pdf), [Hadfield-Menell 2016](https://arxiv.org/pdf/1606.03137v3.pdf))\n* **Learning from human feedback**. We can pose queries to humans to figure out which behaviors or outcomes they prefer, and then optimize our systems accordingly. 
([Isbell 2001](https://papers.nips.cc/paper/2118-cobot-a-social-reinforcement-learning-agent.pdf), [Thomaz 2006](http://robotic.media.mit.edu/wp-content/uploads/sites/14/2015/01/Thomaz-etal-AAAI-06.pdf), [Pilarski 2011](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.715.7132&rep=rep1&type=pdf), [Knox 2012](http://www.bradknox.net/wp-content/uploads/2013/06/thesis-knox.pdf))\n\nThese solutions seem much closer to working than those listed in the previous section on reliability and robustness. But they still face many challenges, and are not yet competitive, scalable, *or* secure:\n\n* IRL requires a prior over preferences and a model of how human behavior relates to human preferences. Current implementations either only work in severely restricted environments, or use simple models of human rationality which cause the learner to attempt to very precisely imitate the human’s behavior (which might be challenging or impossible).\n* For similar reasons, existing IRL implementations are not able to learn from other data like human utterances or off-policy behavior, even though these constitute the largest and richest source of data about human preferences.\n* Human feedback requires accurately eliciting human preferences, which introduces many complications. (I discuss a few easy problems [here](https://medium.com/ai-control/thoughts-on-reward-engineering-82b193ec03f6#.6n2d4co3i).)\n* Human feedback is expensive and so we will need to be able to learn from a relatively small amount of labeled data. Demonstrations are also expensive and so may end up being a bottleneck for approaches based on IRL though it’s not as clear.\n* Both imitation learning and human feedback may fail when evaluating a behavior requires understanding where the behavior came from. For example, if you ask a human to evaluate a painting they may not be able to easily check whether it is derivative, even if over the long run they would prefer their AI to paint novel paintings.\n\n(I’ve described these approaches in the context of “human” behavior, but the expert providing feedback/demonstrations might themselves be a human augmented with AI assistance, and eventually may simply be an AI system that is aligned with human interests.)\n\nThis problem has not received much attention in the past, but it seems to be rapidly growing in popularity, which is great. I’m currently working on a project in this area.\n\n(*Previous posts on this blog:* [*the reward engineering problem*](https://medium.com/ai-control/the-reward-engineering-problem-30285c779450#.f1ihhss6w)*,* [*ambitious vs. narrow value learning*](https://medium.com/ai-control/ambitious-vs-narrow-value-learning-99bd0c59847e#.s33f26ht5)*,* [*against mimicry*](https://medium.com/ai-control/against-mimicry-6002a472fc42#.chg6xlqve)*,* [*thoughts on reward engineering*](https://medium.com/ai-control/thoughts-on-reward-engineering-82b193ec03f6#.iim1wpt9a)*.*)\n\n3. Deliberation and amplification\n---------------------------------\n\nMachine learning is usually applied to tasks where feedback is readily available. The research problem in the previous section aims to obtain quick feedback in general by using human judgments as the “gold standard.” But this approach breaks down if we want to exceed human performance.\n\nFor example, it is easy to see how we could use machine learning to train ML systems to make human-level judgments about urban planning, by training them to produce plans that sound good to humans. 
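To make the “learning from human feedback” option above slightly more concrete, here is a minimal sketch of the usual recipe: fit a reward model to pairwise human comparisons, which can then be optimized. Everything in the snippet (the linear model, the toy data, the function names) is an illustrative placeholder rather than a description of any of the cited systems.

```python
# Minimal sketch (illustrative only): fit a reward model from pairwise human
# preferences. A linear reward r(x) = w . x is trained so that preferred items
# score higher, using the logistic (Bradley-Terry) likelihood of the comparisons.
import numpy as np

rng = np.random.default_rng(0)

def fit_reward_model(preferred, rejected, lr=0.1, steps=500):
    """preferred[i], rejected[i]: feature vectors where humans preferred the first."""
    dim = preferred.shape[1]
    w = np.zeros(dim)
    for _ in range(steps):
        margin = (preferred - rejected) @ w          # r(preferred) - r(rejected)
        p_correct = 1.0 / (1.0 + np.exp(-margin))    # modeled P(human picks preferred)
        grad = ((1.0 - p_correct)[:, None] * (preferred - rejected)).mean(axis=0)
        w += lr * grad                               # ascend the log-likelihood
    return w

# Toy data: a hidden "true" reward generates noisy human comparisons.
true_w = rng.normal(size=5)
a, b = rng.normal(size=(200, 5)), rng.normal(size=(200, 5))
human_prefers_a = (a @ true_w + 0.1 * rng.normal(size=200)) > (b @ true_w + 0.1 * rng.normal(size=200))
preferred = np.where(human_prefers_a[:, None], a, b)
rejected = np.where(human_prefers_a[:, None], b, a)

w = fit_reward_model(preferred, rejected)
print("cosine similarity to true reward:",
      w @ true_w / (np.linalg.norm(w) * np.linalg.norm(true_w)))
```

Optimizing a learned reward like this is exactly the “sounds good to humans” regime described above, which is why the superhuman case discussed next is so much harder.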
But if we want to train an ML system to make superhuman judgments about how to lay out a city, it’s completely unclear how we could do it — without spending billions of dollars trying out the system’s ideas and telling it which ones work.\n\nThis is a problem for the same reasons discussed in the preceding section. If our society is driven by systems superhumanly optimizing short-term proxies for what we care about — such as how much they impress humans, or how much money they make—then we are liable to head off in a direction which does not reflect our values or leave us in meaningful control of the situation.\n\nIf we lowered our ambitions and decide that superhuman performance is inherently unsafe, we would be leaving huge amounts of value on the table. Moreover, this would be an unstable situation: it could last only as long as everyone with access to AI coordinated to pull their punches and handicap their AI systems.\n\nI’m aware of two approaches to this problem that seem like they could scale:\n\n* **IRL [hard mode]**. In principle we can use IRL to recover a representation of human preferences, and then apply superhuman intelligence to satisfy those preferences much better than a human could. However, this is a much more ambitious and challenging form of IRL than is usually discussed, which [remains quite challenging](https://medium.com/ai-control/the-easy-goal-inference-problem-is-still-hard-fad030e0a876) even when you set aside all of the usual algorithmic and statistical difficulties. (Jacob Steinhardt and Owain Evans discuss this issue in [a recent post](https://jsteinhardt.wordpress.com/2017/02/07/model-mis-specification-and-inverse-reinforcement-learning/).)\n* **Iterated amplification**. A group of interacting humans can potentially be smarter than a single human, and a group of AI systems could be smarter than the original AI system. By using these groups as “experts” in place of individual humans, we could potentially train much smarter systems. The key questions are how to perform this composition in a way that causes the group to implement the same preferences as its members, and whether the cognitive benefits for groups are large enough to overcome the overhead of coordination. (I discuss this approach [here](https://medium.com/ai-control/policy-amplification-6a70cbee4f34#.ampcyxi9r) and in follow-up work.)\n* **IRL for cognition**. Rather than applying IRL to a humans’ actions, we could apply it to the cognitive actions taken by a human while they deliberate about a subject. We can then use those values to execute a longer deliberation process, asking “what would the human do if they had more time to think / more powerful cognitive tools?” I think this approach ends up being similar to a blend of the previous two.\n\nIt’s completely unclear how hard this problem is or how far we are from a solution. 
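The iterated amplification idea is easiest to see in a toy setting. Below is a deliberately simplistic sketch in which the “overseer” can only add two numbers at a time and “distillation” just memorizes the amplified answers; the task, the decomposition, and the names are stand-ins chosen for illustration, not features of the actual proposal.

```python
# Deliberately toy sketch of an amplify-then-distill loop. The task (summing a
# tuple of numbers), the dict-based "distillation", and the function names are
# stand-ins chosen for illustration, not part of any published system.

def amplify(overseer_decompose, agent, question):
    """One amplification step: the overseer splits the question into
    subquestions, the current agent answers them, the overseer combines."""
    subquestions, combine = overseer_decompose(question)
    return combine([agent(q) for q in subquestions])

def distill(examples):
    """Stand-in for supervised training: just memorize the amplified answers."""
    table = dict(examples)
    return lambda q: table.get(q, 0)

def overseer_decompose(question):
    """A weak overseer who can only add two numbers directly."""
    if len(question) <= 2:
        return [], lambda _: sum(question)
    mid = len(question) // 2
    return [question[:mid], question[mid:]], lambda answers: answers[0] + answers[1]

# All contiguous ranges of 0..7, so every subquestion also appears in training.
questions = [tuple(range(i, j)) for i in range(8) for j in range(i + 1, 9)]
agent = lambda q: 0                          # start with a useless agent

for _ in range(4):                           # amplify-then-distill iterations
    amplified = [(q, amplify(overseer_decompose, agent, q)) for q in questions]
    agent = distill(amplified)

print(all(agent(q) == sum(q) for q in questions))   # True once enough rounds have run
```

The questions that matter (whether the decomposition preserves the overseer's preferences, and whether a real distillation step generalizes) are exactly the ones this toy version assumes away.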
It is a much less common research topic than either of the preceding points.\n\nIn the short term, I think it might be easier to study analogs of this problem in the context of human behavior than to attempt to directly study it in the context of AI systems.\n\n[Ought](https://blog.ought.com/) is a non-profit aimed at addressing (roughly) this problem; I think it is reasonably likely to make significant progress.\n\n(*Previous posts on this blog:* [*capability amplification*](https://medium.com/ai-control/policy-amplification-6a70cbee4f34)*,* [*reliability amplification*](https://medium.com/ai-control/reliability-amplification-a96efa115687)*,* [*security amplification*](https://medium.com/ai-control/security-amplification-f4931419f903)*,* [*meta-execution*](https://medium.com/ai-control/meta-execution-27ba9b34d377)*,* [*the easy goal inference problem is still hard*](https://medium.com/ai-control/the-easy-goal-inference-problem-is-still-hard-fad030e0a876))\n\nII. Desiderata\n==============\n\nI’m most interested in algorithms that are secure, competitive, and scalable, and I think that most research programs are very unlikely to deliver these desiderata (this is why the lists above are so short).\n\nSince these desiderata are doing a lot of work in narrowing down the space of possible research directions, it seems worthwhile to be thoughtful and clear about them. It would be easy to gloss over any of them as obviously unobjectionable, but I would be more interested in people pushing back on the strong forms than implicitly accepting a milder form.\n\n1. Secure\n---------\n\nMany pieces of software work “well enough” most of the time; we often learn this not by a deep analysis but by just trying it and seeing what happens. “Works well enough” often breaks down when an adversary enters the prediction.\n\nWhether or not that’s a good way to build AI, I think it’s a bad way to do alignment research right now.\n\nInstead, we should try to come up with alignment solutions that work in the least convenient world, when nature itself is behaving adversarially. Accomplishing this requires argument and analysis, and cannot be exclusively or based on empirical observation.\n\nAI systems obviously won’t work well in the worst case (there is no such thing as a free lunch) but it’s reasonable to hope that our AI systems will never respond to a bad input by actively *trying* to hurt us —at least as long as we remain in physical control of the computing hardware, and the training process, *etc.*\n\nWhy does security seem important?\n\n* It’s really hard to anticipate what is going to happen in the future. I think it’s easy to peer into the mists and say “well, hard to know what’s going to happen, but this solution might work out OK,” and then to turn out to be too optimistic. It’s harder to make this error when we hold ourselves to a higher standard, of actually giving an argument for why things work. I think that this is a general principle for doing useful research in advance of when it is needed — we should hold ourselves to standards that are unambiguous and clear even when the future is murky. This is a theme that will recur in the coming sections.\n* We are used to technological progress proceeding slowly compared to timescales of human judgment and planning. It seems quite likely that powerful AI will be developed during or after a period of acceleration, challenging those assumptions and undermining a traditional iterative approach to development.\n* The world really does contain adversaries. 
It’s one thing to build insecure software when machines have power over modest amounts of money with significant human oversight, it’s another thing altogether when they have primary responsibility for enforcing the law. I’m not even particularly worried about human attackers, I’m mostly worried about a future where all it takes to launch attacks is money (which can itself be earned by executing attacks). Moreover, if the underlying ML is insecure and ML plays a role in almost all software, we are going to have a hard time writing any secure software at all.\n\n(*Previous posts:* [*security and AI alignment*](https://medium.com/ai-control/security-and-ai-control-675ace05ce31))\n\n2. Competitive\n--------------\n\nIt’s easy to avoid building an unsafe AI system (for example: build a spreadsheet instead). The only question is how much you have to sacrifice to do it.\n\nIdeally we’ll be able to build benign AI systems that are just as efficient and capable as the best AI that we could build by any means. That means: we don’t have to additional domain-specific engineering work to align our systems, benign AI doesn’t require too much more data or computation, and our alignment techniques don’t force us to use particular techniques or restrict our choices in other ways.\n\n(More precisely, I would consider an alignment strategy a success if the additional costs are sublinear: if the fraction of resources that need to be spent on alignment research and run-time overhead *decreases* as the AI systems become more powerful, converging towards 0.)\n\nWhy is competitiveness important?\n\n**A. It’s easy to tell when a solution is plausibly competitive, but very hard to tell exactly how uncompetitive an uncompetitive solution will be.** For example, if a purported alignment strategy requires an AI not to use technique or development strategy X, it’s easy to tell that this proposal isn’t competitive in general, but very hard to know exactly how uncompetitive it is.\n\nAs in the security case, it seems very easy to look into the fog of the future and say “well this seems like it will probably be OK” and then to turn out to be too optimistic. If we hold ourselves to the higher standard of competitiveness, it is much easier to stay honest.\n\nRelatedly, we want alignment solutions that work across an extremely large range of techniques not just because we are uncertain about which techniques will be important, but because generalizing across all of the situations we can foresee is a good predictor of working for situations we can’t foresee.\n\n**B. You can’t unilaterally use uncompetitive alignment techniques; we would need global coordination to avoid trouble.**If we *don’t* know how to build competitive benign AI, then users/designers of AI systems have to compromise efficiency in order to maintain reliable control over those systems. The most efficient systems will by default be built by whoever is willing to accept the largest risk of catastrophe (or perhaps by actors who consider unaligned AI a desirable outcome).\n\nIt may be possible to avert this kind of race to the bottom by effective coordination by e.g. enforcing regulations which mandate adequate investments in alignment or restrict what kinds of AI are deployed. Enforcing such controls domestically is already a huge headache. 
But internationally things are even worse: a country that handicapped its AI industry in order to proceed cautiously would face the risk of being overtaken by a less prudent competitor, and avoiding *that* race would require effective international coordination.\n\nUltimately society will be able and willing to pay *some* efficiency cost to reliably align AI with human interests. But the higher that cost, the harder the coordination problem that we will need to solve. I think the research community should be trying to make that coordination problem as easy as possible.\n\n(*Previous posts:* [*prosaic AI alignment*](https://medium.com/ai-control/prosaic-ai-control-b959644d79c2)*,* [*a possible stance for AI control*](https://medium.com/ai-control/a-possible-stance-for-ai-control-research-fe9cf717fc1b)*,* [*efficient and safely scalable*](https://medium.com/ai-control/efficient-and-safely-scalable-8218fa8a871f#.m7z8qccef))\n\n3. Scalable\n-----------\n\nOver time, we are acquiring more data, more powerful computers, richer model classes, better optimization algorithms, better exploration strategies, and so on. If we extrapolate these trends, we end up with very powerful models and policies.\n\nMany approaches to alignment break down at some point in this extrapolation. For example, if we train an RL agent with a reward function which imperfectly approximates what we want, it is likely to fail once the agent becomes sufficiently sophisticated — unless the reward function itself becomes more sophisticated in parallel.\n\nIn contrast, let’s say that a technique is “scalable” if it continues to work just as well even when the underlying learning becomes much more powerful. (See also: Eliezer’s more colorful “[omnipotence test](https://arbital.com/p/omni_test/).”)\n\nThis is another extremely demanding requirement. It rules out many possible approaches to alignment. For example, it probably rules out any approach that involves hand-engineering reward functions. More subtly, I expect it will rule out any approach that requires hand-engineering an informative prior over human values (though some day we will hopefully find a scalable approach to IRL).\n\nWhy is scalability important?\n\n* As in the previous sections, it’s easy to be too optimistic about exactly when a non-scalable alignment scheme will break down. It’s much easier to keep ourselves honest if we actually hold ourselves to producing scalable systems.\n* If AI progress rapidly, and especially if AI research is substantially automated, then we may literally confront the situation where the capabilities of our AI systems are changing rapidly. It would be desirable to have alignment schemes that continued to work in this case.\n* If we don’t have scalable solutions then we require a continuing investment of research on alignment in order to “keep up” with improvements in the underlying learning. This risks compromising competitiveness, forcing AI developers to make a hard tradeoff between alignment and capabilities. This would be acceptable if the ongoing investments in alignment are modest compared to the investments in capabilities. But as with the last point, that’s a very murky question about which it seems easy to be overly optimistic in advance. If we think the problem will be easy in the future when we have more computing, then we ought to be able to do it now. Or at the very least we ought to be able to explain how more computing will make it easy. 
If we make such an explanation sufficiently precise then it will itself become a scalable alignment proposal (though perhaps one that involves ongoing human effort).\n\n(*Previous posts:* [*scalable AI control*](https://medium.com/ai-control/scalable-ai-control-7db2436feee7#.tljxalxgv))\n\nAside: feasibility\n------------------\n\nOne might reject these desiderata because they seem too demanding: it would be great if we had a secure, competitive and scalable approach to alignment, but that might not be possible.\n\nI am interested in trying to satisfy these desiderata despite the fact that they are quite demanding, for two reasons:\n\n* I think that it is very hard to say in advance what is possible or impossible. I don’t yet see any fundamental obstructions to achieving these goals, and until I see hard obstructions I think there is a significant probability that the problem will prove to be feasible (or “almost possible,” in the sense that we may need to weaken these goals only slightly).\n* If there is some fundamental obstruction to achieving these goals, then it would be good to understand that obstruction in detail. Understanding it would help us understand the nature of the problem we face and would allow us to do better research on alignment (by focusing on the key aspects of the problem). And knowing that these problems are impossible, and understanding exactly how impossible they are, helps us prepare for the future, to build institutions and mechanisms that will be needed to cope with unavoidable limitations of our AI alignment strategies.\n\nIII. Conclusion\n===============\n\nI think there is a lot of research to be done on AI alignment; we are limited by a lack of time and labor rather than by a lack of ideas about how to make progress.\n\nResearch relevant to alignment is already underway; researchers and funders interested in alignment can get a lot of mileage by supporting and fleshing out existing research programs in relevant directions. I don’t think it is correct to assume that if anyone is working on a problem then it is going to get solved — even amongst things that aren’t literally at the “no one else is doing it” level, there are varying degrees of neglect.\n\nAt the same time, the goals of alignment are sufficiently unusual that we shouldn’t be surprised or concerned to find ourselves doing unusual research. I think that area #3 on deliberation and amplification is almost completely empty, and will probably remain pretty empty until we have clearer statements of the problem or convincing demonstrations of work in that area.\n\nI think the distinguishing feature of research motivated by AI alignment should be an emphasis on secure, competitive, and scalable solutions. I think these are very demanding requirements that significantly narrow down the space of possible approaches and which are rarely explicitly considered in the current AI community.\n\nIt may turn out that these requirements are infeasible; if so, one key output of alignment research will be a better understanding of the key obstacles. This understanding can help guide less ambitious alignment research, and can help us prepare for a future in which we won’t have a completely satisfying solution to AI alignment.\n\nThis post has mostly focused on research that would translate directly into concrete systems. I think there is also a need for theoretical research building better abstractions for reasoning about optimization, security, selection, consequentialism, and so on. 
It is plausible to me that we will produce acceptable systems with our current conceptual machinery, but if we want to convincingly *analyze* those systems then I think we will need significant conceptual progress (and better concepts may lead us to different approaches). I think that practical and theoretical research will be attractive to different researchers, and I don’t have strong views about their relative value.", "url": "https://ai-alignment.com/directions-and-desiderata-for-ai-control-b60fca0da8f4", "title": "Directions and desiderata for AI alignment", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-05-11T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "97b64590cb476570df96a60ac5651ba2"} {"text": "In this post I describe a pattern of behavior I call “implicit extortion.” RL agents are particularly susceptible to implicit extortion, in a way that is likely to be problematic for high-stakes applications in open-ended strategic environments.\n\nI expect that many people have made this point before. My goal is just to highlight the issue and to explore it a little bit more carefully.\n\nBasic setup\n-----------\n\nConsider two actors, the target (T) and manipulator (M), such that:\n\n* M wants T to perform some *target action* — e.g. make a payment, leak information, buy a particular product, handicap itself…\n* M can take *destructive actions* that hurts both M and T — e.g. spreading rumors about T, undercutting T in a marketplace, physically attacking T…\n\nIn *explicit extortion*, M threatens to take the destructive action unless T performs the target action. Then a naive T reasons: “if I don’t take the target action, something bad will happen, so I better take the target action.”\n\nIn *implicit extortion*, M simply performs the destructive action whenever T doesn’t perform the target action. Then a naive T eventually learns that failure to take the target action is associated with something bad happening, and so learns to take the target action.\n\nImplicit extortion is very similar to explicit extortion:\n\n* T would prefer notbe the kind of person who is vulnerable to extortion, so that bad things don’t happen to them.\n* Extortion doesn’t necessarily cost M very much, if they don’t follow through on the threat very often.\n\nHowever, implicit extortion can be particularly hard to avoid:\n\n* It can be effective without T realizing that it’s happening, which makes it hard for them to respond appropriately even if they do have defenses.\n* It affects simple RL algorithms (which don’t have defenses against extortion, and can’t be easily modified to include such defenses).\n\nExample\n-------\n\nThe most extreme and blatant example would be for M to send T a daily request for $100. On any day when T fails to pay, M launches a costly cyberattack against T. A human would immediately recognize this behavior as extortion and would respond appropriately, but an RL algorithm might simply notice that paying is the best strategy and therefore decide to pay.\n\nImplicit extortion can be much harder to detect, while still being effective. Suppose that every time T tries to change their product, M runs a grassroots smear campaign. 
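Returning to the blatant daily-payment example for a moment, the basic dynamic is easy to reproduce in a few lines of simulation. The learning rule below is just the simplest thing that fits the story (an epsilon-greedy agent with incremental value estimates), not a claim about any particular RL implementation; the dollar amounts match the worked example in the next section ($10 payments, $15 of damage that costs M $50, ε = 0.1).

```python
# Tiny simulation of the blatant payment example: an epsilon-greedy target T
# facing a manipulator M who punishes non-payment. Numbers follow the worked
# example below; the learning rule is illustrative, not a specific algorithm.
import random

random.seed(0)
epsilon, lr, rounds = 0.1, 0.05, 10_000
q = {"pay": 0.0, "refuse": 0.0}     # T's running value estimates
m_profit = 0.0

for _ in range(rounds):
    if random.random() < epsilon:                      # exploration
        action = random.choice(["pay", "refuse"])
    else:                                              # greedy choice
        action = max(q, key=q.get)
    if action == "pay":
        reward = -10                                   # T hands over $10
        m_profit += 10
    else:
        reward = -15                                   # M inflicts $15 of damage...
        m_profit -= 50                                 # ...at a cost of $50 to itself
    q[action] += lr * (reward - q[action])             # simple incremental update

print(q)                  # T ends up valuing "pay" above "refuse"
print(round(m_profit))    # and M still comes out ahead on net
```

With these numbers the profit condition derived below, (payment efficiency) × (destructive efficiency) > ε, comes out to 1 × 0.3 > 0.1, so M comes out ahead even though it occasionally has to follow through on the punishment.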
It might not be possible for T to distinguish the situations “M is attempting to manipulate me into not changing my product” and “Every time I change the product people get really unhappy, so I should do so sparingly.”\n\nDetails\n=======\n\nHow expensive is this for the manipulator?\n------------------------------------------\n\nSuppose that T is using an RL algorithm, and M is trying to manipulate them. How expensive is this for M? How likely is it to be worthwhile?\n\n**At equilibrium**: T learns to always perform the target action; so only fails to take the target action while exploring. The long-term cost to M depends entirely on the target’s exploration policy.\n\nIf T uses ε-exploration, then they take the target action (1 − ε) of the time. So M only needs to pay the cost of the destructive action on an ε fraction of trials.\n\nFor complex high-level actions, the effective ε can’t be *too* high — it’s not a good idea to “try something crazy” 10% of the time just to see what happens. But let’s be conservative and suppose that ε=0.1 anyway.\n\nSuppose that M is trying to directly extract money from T, $10 at a time, and that it costs M $50 of value in order to cause $15 of trouble for T.\n\nIf M asks for $10 on 10 occasions, T will refuse to pay only once as an exploration. Then M needs to pay that $50 cost only once, thereby ensuring that the cost of paying (=$10) is smaller than the average cost of refusing to pay (=$15). Meanwhile, M makes $90, pocketing $40 of profit.\n\nIn general, M can make a profit whenever the product of (payment efficiency) \\* (destructive efficiency) > ε, where “payment efficiency” is the benefit to M divided by the cost to T of the target action, and “destructive efficiency” is the cost to T divided by the cost to M of the destructive action.\n\nIn practice I think it’s not too uncommon for payment efficiency to be ~1, and for destructive efficiency to be >1, such that extortion is possible regardless of ε. Small values of ε make extortion considerably easier and more cost-effective, and make it much harder to prevent.\n\n**During learning**: the analysis above only applies when the agent has already learned to consistently take the target action. Earlier in learning, the target action may only occur rarely and so punishment may be very expensive. This could be worth it over the long term but may be a major hurdle.\n\nFortunately for M, they can simply start by rewarding the target behavior, and then gradually shift to punishment once the target behavior is common. From the perspective of the RL agent, the benefit of the target action is the same whether it’s getting a reward or avoiding a punishment.\n\nIn the cash payment example, M could start by paying T $20 every time that T sends $10. Once T notices that paying works well, M can gradually reduce the payment towards $10 (but leaving a profit so that the behavior becomes more and more entrenched). Once T is consistently paying, M can start scaling up the cost of not paying while it gradually reduces the benefits of paying.\n\nAnalyzing the error\n-------------------\n\nPaying off a (committed) extortionist typically has the best consequences and so is recommended by causal decision theory, but *having the policy of paying off extortionists* is a bad mistake.\n\nEven if our decision theory would avoid caving in to extortion, it can probably only avoid implicit extortion if it recognizes it. 
For example, UDT typically avoids extortion because of the logical link from “I cave to extortion” → “I get extorted.” There is a similar logical link from “I cave to implicit extortion” → “I get implicitly extorted.” But if we aren’t aware that an empirical correlation is due to implicit extortion, we won’t recognize this link and so it can’t inform our decision.\n\nIn practice the target is only in trouble if would-be manipulators know that the target is inclined to comply with extortion. If manipulators base that judgment on past behavior, then taking actions that “look like what someone vulnerable to extortion would do” is itself a bad decision that even a causal decision theorist would avoid. Unfortunately, it’s basically impossible for an RL algorithm to learn to avoid this, because the negative consequences only appear over a very long timescale. In fact, the timescale for the negative consequences is longer than the timescale over which the RL agent adjusts its policy — which is too long for a traditional RL system to possibly do the credit assignment.\n\nOther learning systems\n======================\n\nWhat algorithms are vulnerable?\n-------------------------------\n\nAt first glance the problem may seem distinctive to policy gradient RL algorithms, where we take actions randomly and then reinforce whatever actions are associated with a high reward.\n\nBut the same problem afflicts any kind of RL. For example, a model-based agent would simply learn the model “not doing what the manipulator wants causes something bad to happen,” and using that model for planning would have exactly the same effect as using policy gradients.\n\nMore broadly, the problem is with the algorithm: “learn an opaque causal model and use it to inform decisions.” That’s an incredibly general algorithm. If you aren’t willing to use that algorithm, then you are at a significant competitive disadvantage, since the world contains lots of complicated causal processes that we can learn about by experiment but can’t model explicitly. So it seems like everyone just has to live with the risk of implicit extortion.\n\nI describe the problem as afflicting “algorithms,” but it can also afflict humans or organizations. For example, any organization that is compelled by arguments like “X has always worked out poorly in the past, even though we’re not quite sure why, so let’s stop doing it” is potentially vulnerable to implicit extortion.\n\nWhat about human learning?\n--------------------------\n\nHumans have heuristics like vindictiveness that help prevent us from being manipulated by extortion, and which seem particularly effective against implicit extortion. Modern humans are also capable of doing explicit reasoning to recognize the costs of giving in to extortion.\n\nOf course, we can only be robust to implicit extortion when we recognize it is occurring. Humans do have some general heuristics of caution when acting on the basis of opaque empirical correlations, or in situations where they feel they might be manipulable. However, it still seems pretty clear that human learning is vulnerable to implicit extortion in practice. (Imagine a social network which subtly punishes users, e.g.
by modulating social feedback, for failing to visit the site regularly.)\n\nEvolution?\n----------\n\nEvolution itself doesn’t have any check against extortion, and it operates entirely by empirical correlations, so why isn’t it exploited in this way?\n\nManipulating evolution requires the manipulator to have a time horizon that is many times the generation length of the target. There aren’t many agents with long enough time horizons, or sophisticated enough behavior, to exploit the evolutionary learning dynamic (and in particular, evolution can’t easily learn to exploit it).\n\nWhen we do have such a large gap in time horizons and sophistication — for example, when humans square off against bacteria with very rapid evolution — we do start to see implicit extortion.\n\nFor example, when a population of bacteria develop resistance to antibiotic A, we take extra pains to totally eradicate them with antibiotic B, even though we could not afford to use that strategy if A-resistance spread more broadly through the bacteria population. This is effectively implicit extortion to prevent bacteria from developing A-resistance. It would continue to be worthwhile for humanity even if the side effects of antibiotic B were much worse than the infection itself, though we probably wouldn’t do it in that case since it’s a hard coordination problem (and there are lots of other complications).\n\nConclusion\n==========\n\nThere are many ways that an AI can fail to do the right thing. Implicit extortion is a simple one that is pretty likely to come up in practice, and which may seriously affect the applicability of RL in some contexts.\n\nI don’t think there is any “silver bullet” or simple decision-theoretic remedy to implicit extortion, we just need to think about the details of the real world, who might manipulate us in what ways, what their incentives and leverage are, and how to manage the risk on a case-by-case basis.\n\nI think we need to [define “alignment” narrowly enough](/clarifying-ai-alignment-cec47cd69dd6) that it is consistent with implicit extortion, just like we define alignment narrowly enough that it’s consistent with losing at chess. I’ve found understanding implicit extortion helpful for alignment because it’s one of many conditions under which an aligned agent may end up effectively optimizing for the “wrong” preferences, and I’d like to understand those cases in order to understand what we are actually trying to do with alignment.\n\nI don’t believe implicit extortion is an existential risk. It’s just another kind of conflict between agents, that will divert resources from other problems but should “wash out in the long run.” In particular, every agent can engage in implicit extortion and so it doesn’t seem to shift the relative balance of influence amongst competing agents. 
(Unlike alignment problems, which shift influence from human values to whatever values unaligned AI systems end up pursuing.)", "url": "https://ai-alignment.com/implicit-extortion-3c80c45af1e3", "title": "Implicit extortion", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-04-12T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "7f2fd567d248ec9a0798eb4d99a94ea4"} {"text": "Suppose that I have a great model for predicting “what will Alice say next?”\n\nI can evaluate and train this model by checking its predictions against reality, but there may be many facts this model “knows” that I can’t easily access.\n\nFor example, the model might have a detailed representation of Alice’s thoughts which it uses to predict what Alice will say, *without* being able to directly answer “What is Alice thinking?” In this case, I can only access that knowledge indirectly, e.g. by asking about what Alice would say in under different conditions.\n\nI’ll call information like “What is Alice thinking?” inaccessible. I think it’s very plausible that AI systems will build up important inaccessible knowledge, and that this may be a central feature of the AI alignment problem.\n\nIn this post I’m going to try to clarify what I mean by “inaccessible information” and the conditions under which it could be a problem. This is intended as clarification and framing rather than a presentation of new ideas, though sections IV, V, and VI do try to make some small steps forward.\n\nI. Defining inaccessible information\n====================================\n\nI’ll start by informally defining what it means for information to be **accessible**, based on two mechanisms:\n\nMechanism 1: checking directly\n------------------------------\n\nIf I can check X myself, *given other accessible information,* then I’ll define X to be accessible.\n\nFor example, I can check a claim about what Alice will do, but I can’t check a claim about what Alice is thinking.\n\nIf I can run randomized experiments, I can probabilistically check a claim about what Alice *would* do. But I can’t check a counterfactual claim for conditions that I can’t create in an experiment.\n\nIn reality this is a graded notion — some things are easier or harder to check. For the purpose of this post, we can just talk about whether something can be tested even a single time over the course of my training process.\n\nMechanism 2: transfer\n---------------------\n\nThe simplest model that provides some accessible information X may also provide some other information Y. After all, it’s unlikely that the simplest model that outputs X doesn’t output *anything* else. In this case, we’ll define Y to be accessible.\n\nFor example, if I train a model to predict what happens over the next minute, hour, or day, it may generalize to predicting what will happen in a month or year. For example, if the simplest model to predict the next day was a fully-accurate physical simulation, then the same physics simulation might work when run for longer periods of time.\n\nI think this kind of transfer is kind of dicey, so I genuinely don’t know if long-term predictions are accessible or not (we certainly can’t directly check them, so transfer is the only way they could be accessible).\n\nRegardless of whether long-term predictions are accessible by transfer, there are other cases where I think transfer is pretty unlikely. 
For example, the simplest way to predict Alice’s behavior might be to have a good working model for her thoughts. But it seems unlikely that this model would spontaneously describe what Alice is thinking in an understandable way — you’d need to specify some additional machinery, for turning the latent model into useful descriptions.\n\nI think this is going to be a fairly common situation: predicting accessible information may involve almost all the same work as predicting inaccessible information, but you need to combine that work with some “last mile” in order to actually output inaccessible facts.\n\nDefinition\n----------\n\nI’ll say that information is *accessible* if it’s in the smallest set of information that is closed under those two mechanisms, and *inaccessible* otherwise.\n\nThere are a lot of nuances in that definition, which I’ll ignore for now.\n\nExamples\n--------\n\nHere are some candidates for accessible vs. inaccessible information:\n\n* “What will Alice say?” vs “What is Alice thinking?”\n* “What’s on my financial statement?” vs. “How much money do I really have?”\n* “Am I coughing?” vs. “What’s happening with my immune system?”\n* “How will senators vote?” vs. “What’s the state of political alliances and agreements in the senate?”\n* “What do I see on my computer screen?” vs. “Is my computer compromised?”\n* “What’s the market price of this company?” vs. “How valuable is this IP really?”\n* “Will the machine break tomorrow?” vs. “Is there hard-to-observe damage in this component?”\n* “What does the news show me from 5000 miles away?” vs. “What’s actually happening 5000 miles away?”\n* “Is this argument convincing?” vs. “Is this argument correct?”\n* “What will happen tomorrow?” vs. “What will happen in a year” (depending on whether models transfer to long horizons)\n\nII. Where inaccessible info comes from and why it might matter\n==============================================================\n\nOur models can build up inaccessible information because it helps them predict accessible information. They know something about what Alice is thinking because it helps explain what Alice does. In this diagram, the black arrow represents the causal relationship:\n\n![]()Unfortunately, this causal relationship doesn’t directly let us *elicit* the inaccessible information.\n\nScientific theories are prototypical instances of this diagram, e.g. I might infer the existence of electron from observing the behavior of macroscopic objects. There might not be any explanation for a theory other than “it’s made good predictions in the past, so it probably will in the future.” The actual claims the theory makes about the world — e.g. that the Higgs boson has such-and-such a mass — are totally alien to someone who doesn’t know anything about the theory.\n\nI’m not worried about scientific hypotheses in particular, because they are usually *extremely* simple. 
I’m much more scared of analogous situations that we think of as intuition — if you want to justify your intuition that Alice doesn’t like you, or that some code is going to be hard to maintain, or that one tower of cards is going to be more stable than another, you may not be able to say very much other than “This is part of a complex group of intuitions that I built up over a very long time and which seems to have a good predictive track record.”\n\nAt that point “picking the model that matches the data best” starts to look a lot like doing ML, and it’s more plausible that we’re going to start getting hypotheses that we don’t understand or which behave badly.\n\nWhy might we care about this?\n-----------------------------\n\nIn some sense, I think this all comes down to what I’ve called [strategy-stealing](/the-strategy-stealing-assumption-a26b8b1ed334): if AI can be used to compete effectively, can humans use AI to compete *on their behalf*?\n\nMore precisely, for every strategy A that an AI could pursue to bring about some arbitrary outcome, is there a strategy A\\* that would help humans get what we want over the long term, without leaving us at a competitive disadvantage over the short term?\n\n![]()If so it’s good news for humanity: if most humans build AIs who execute plans like A\\*, then humans won’t be outcompeted by unaligned AIs who execute plans like A.\n\nBut the mere *existence* of A\\* isn’t very helpful, we need to actually be able to figure out that A\\* leads to human flourishing so that we can do it. If we can’t recognize plans like A\\*, then humanity will be at a disadvantage.\n\nWe could have a problem if the fact “A\\* leads to human flourishing” is inaccessible while the fact “A leads to paperclips” is accessible.\n\n![]()In this case, we either need to be sure that no one acts on the basis of information like “A leads to lots of paperclips,” or we need to figure out some way to access the inaccessible information that “A\\* leads to lots of human flourishing.”\n\nIII. Eliciting inaccessible info and the instrumental policy\n============================================================\n\nI’ve been talking as if we can’t access inaccessible info (and the name is suggestive…), but I don’t think you should be convinced of that yet.\n\nLet M be a model that reports only accessible facts — it answers questions like “what will Alice say next?” but not questions like “is Alice actually happy with the situation?” It’s built up a lot of knowledge about the world, and uses that knowledge to estimate accessible facts well as it can.\n\nWe suspect there is a related model M\\* that uses all of M’s machinery in order to explain what’s going on in Alice’s head. By the definition of inaccessibility, M\\* is at least somewhat more complex than M. So if we want to learn M\\*, we need a training objective that penalizes M relative to M\\*.\n\nBut it’s really easy to tell M\\* and M apart! M won’t automatically tell us what Alice is thinking, but it also won’t automatically make up credible stories that hold together. Both of those require extra complexity. So we can still hope to get M\\* if we train our model to (i) correctly provides accessible info, (ii) says something consistent and plausible about inaccessible info.\n\nThe real problem comes from what I’ll call the *instrumental policy*. 
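Before turning to that problem, here is a minimal sketch of the kind of two-part training signal just described: an ordinary supervised loss on questions we can check, plus a consistency penalty on questions we can't. The toy “model”, the particular consistency check, and the weighting are all illustrative placeholders rather than a concrete proposal.

```python
# Minimal sketch of the two-part training signal described above: supervised
# loss where answers can be checked, plus a consistency penalty where they
# can't. The toy "model", the consistency check, and the weighting are
# illustrative placeholders, not a concrete proposal.

def training_loss(model, checkable, uncheckable_pairs, weight=1.0):
    # (i) supervised loss on questions whose answers we can verify
    supervised = sum(model(q) != a for q, a in checkable)
    # (ii) crude consistency check on questions we can't verify:
    #      paraphrases of the same question should get the same answer
    consistency = sum(model(q1) != model(q2) for q1, q2 in uncheckable_pairs)
    return supervised + weight * consistency

# Toy "model": a lookup table standing in for a trained predictor.
answers = {
    "What will Alice say next?": "hello",        # checkable against reality
    "What is Alice thinking?": "a greeting",     # not directly checkable
    "What is on Alice's mind?": "a greeting",    # paraphrase of the above
}
model = lambda question: answers.get(question, "unknown")

checkable = [("What will Alice say next?", "hello")]
uncheckable_pairs = [("What is Alice thinking?", "What is on Alice's mind?")]
print(training_loss(model, checkable, uncheckable_pairs))   # -> 0.0 for this toy model
```

The worry developed in the next paragraphs is that a model can also drive both terms to zero by predicting what the training process will reward, rather than by reporting what it actually believes.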
Let’s say we’ve tried to dream up a loss function L(*x*, *y*) to incentivize the model to correctly answer information we can check, and gives at least plausible and consistent answers on things we can’t check. By definition, the values L(*x*, *y*) are themselves accessible.\n\nThen it’s natural to learn a policy like: “on input *x*, produce the output *y* for which the loss L(*x*, *y*) will be minimal.” Let’s write BAD for this policy.\n\nPut differently, BAD is the model that performs a simple substitution — when you ask it question Q, it replaces it with the meta question “What answer to question Q will make the person training me happiest?” By definition, this new question is also accessible, so it can be posed to M.\n\nBAD may be much simpler than M\\*, because all it has to do is describe this substitution, whereas M\\* needs to build new machinery to explain inaccessible facts.\n\nIf M is very weak, BAD may perform much worse than M\\* because it doesn’t understand the loss function very well. But if M understands a lot about the world, and about its training process, then it seems like BAD will get at least as low a loss as M\\* — it’s using the same machinery as M and M\\*, and picking whatever value leads to the lowest loss. It can choose to report its own best estimates if that’s the best strategy, e.g. for questions where we can check. So by the time M and M\\* are human level, it already seems like BAD should be getting as low a loss as M\\*.\n\nThis gets a little bit more complicated if I do consistency checks or if I have large output spaces, and of course humans aren’t *that* good at lying, but it seems like the basic idea is pretty solid: once we have a model of accessible information, we can directly turn it into a model that will pass whatever checks we throw at it.\n\nFor concreteness, imagine M is a model of Alice’s behavior and we are asking it “is Alice actually happy?” Then M\\* is the model which translates its model of Alice’s psychology into words and explains it to us, while BAD is the model that asks “What answer would sound most believable to the person asking the question?”\n\nThe question is: if we start with model M and fine-tune it to answer inaccessible questions, is it easier to learn to articulate what it believes or is it going to learn to answer strategically?\n\nWe need to be *very* careful about our checks if we want to ensure that BAD doesn’t get a lower loss than M\\*. For example, it may turn out that sometimes the truth looks a little bit wrong to us…. And if we do everything right, then M\\* and BAD perform equally well, and so we may not have much control over which one we get.\n\nIV. When inaccessible info is a safety problem\n==============================================\n\nLet’s get a bit more detailed about the argument in section II. I think that our inability to access inaccessible info would become a safety problem when:\n\n1. We care about inaccessible facts, so we can’t just evaluate plans based on their accessible consequences.\n2. Inaccessible info is a competitive advantage — agents who are blind to inaccessible facts about the world will get outcompeted.\n3. There are *some* agents who are able to use inaccessible facts to acquire influence, e.g. because they are optimizing accessible long-term goals.\n\n1. We care about inaccessible facts\n===================================\n\nIf I only cared about accessible facts, then I might not need to ever access inaccessible facts. 
For example, if I cared about my life expectancy, and this was accessible, then I could ask my AI “what actions lead to me living the longest?” and execute those.\n\nFor better or worse, I think we are likely to care about inaccessible facts.\n\n* Generally we care about what’s *actually happening* and not just what appears to be happening. We don’t want smiling faces on cameras. And if there’s a lot of inaccessible action in the world, then it’s reasonably likely for accessible indicators to be systematically manipulated by inaccessible forces.\n* We care intrinsically about what happens inside people’s heads (and inside computers), not just outward appearances. Over the very long term a *lot* may happen inside computers.\n* If we totally give up on measuring how well things are going day-to-day, then we need to be actually optimizing the thing we really care about. But figuring that out may require reflecting a long time, and may be inaccessible to us now. We want a world where we actually reach the correct moral conclusions, not one where we believe we’ve reached the correct moral conclusions.\n* Our real long-term priorities, and our society’s long-term future, may also be really weird and hard to reason about even if we were able to know what was good. It just seems really bad to try to evaluate plans only by their very long-term consequences.\n* We care about things that are far away in space or time, which I think are likely to be inaccessible.\n\nOverall I’m quite skeptical about the strategy “pick an accessible quantity that captures everything you care about and optimize it.” I think we basically need to optimize some kind of value function that tells us how well things are going. That brings us to the next section.\n\n2. Inaccessible info is a competitive advantage\n===============================================\n\nInstead of using AI to directly figure out whether a given action will lead to human flourishing over the coming centuries, we could use AI to help us figure out how to get what we want over the short term — including how to acquire resources and flexible influence, how to keep ourselves safe, and so on.\n\nThis doesn’t require being able to tell how good a very long-term outcome is, but it does require being able to tell how well things are going. We need to be able to ask the AI “which plan would put us in an *actually good* position next year?”\n\nUnfortunately, I think that if we can only ask about accessible quantities, we are going to end up neglecting a bunch of really important stuff about the situation, and we’ll be at a significant competitive disadvantage compared to AIs which are able to take the whole picture into account.\n\nAs an intuition pump, imagine a company that is run entirely by A/B tests for metrics that can be easily checked. This company would burn every resource it couldn’t measure — its code would become unmaintainable, its other infrastructure would crumble, it would use up goodwill with customers, it would make no research progress, it would become unable to hire, it would get on the wrong side of regulators…\n\nMy worry is that inaccessible facts will be similarly critical to running superhuman businesses, and that humans who rely on accessible proxies will get outcompeted just as quickly as the company that isn’t able to optimize anything it can’t A/B test.\n\n* Even in areas like business that society tries particularly hard to make legible, evaluating how well you are doing depends on e.g. 
valuing intellectual property and intangible assets, understanding contractual relationships, making predictions about what kinds of knowledge or what relationships will be valuable, and so on.\n* In domains like social engineering, biology, cybersecurity, financial systems, *etc.*, I think inaccessible information becomes even more important.\n* If there is a lot of critical inaccessible information, then it’s not clear that a simple proxy like “how much money is actually in my bank account” is even accessible. The only thing that I can directly check is “what will I see when I look at my bank account statement?”, but that statement could itself be meaningless. We really care about things like who effectively controls that bank account and what would really happen if I tried to spend the money. (And if I largely care about inaccessible facts about the world, then “what would happen if I tried to spend my money?” may itself be inaccessible.)\n* I can pay inaccessible costs for an accessible gain — for example leaking critical information, or alienating an important ally, or going into debt, or making short-sighted tradeoffs. Moreover, if there are other actors in the world, they can try to get me to make bad tradeoffs by hiding real costs.\n\n3. Some AIs can plan with inaccessible info\n===========================================\n\nSo far this discussion could just be about an *AI missed opportunity*, not an *AI risk*.\n\nThings become problematic when it is possible to build AI systems that do use inaccessible info to pursue ambitious long-term goals that would conflict with human flourishing. If illegible knowledge is important enough, those systems could outcompete humans and divert some (or almost all) of our civilization’s resources.\n\nThis happens if *any* interesting long-term goal is accessible, i.e. if there’s any accessible goal that benefits from accumulating influence.\n\nWhy might some long-term goal be accessible?\n\n* Verifiable long-term predictions may be accessible, e.g. “total reward over the next 50 years” might be accessible as a generalization of “total reward over the next 50 minutes.”\n* Even if models don’t transfer in a natural way to very long time horizons, they may still transfer in *some* way. Maybe they can’t generalize to something as cleanly specified as “total reward,” but e.g. they may be able to predict quantities that happen to be natural in terms of the representation they are using.\n* More generally, models will do *something* when we run them way out of distribution. I think the best reason to expect models to not plan intelligently over long time horizons is that they lack the abilities and knowledge. Once they have relevant cognitive machinery, I think it’s pretty likely that they’ll use the machinery somehow when they go out of distribution, especially if you search over a lot of possibilities.\n\nThe key asymmetry working against us is that optimizing flourishing appears to require a particular quantity to be accessible, while danger just requires *anything* to be accessible.\n\nThere’s a longer discussion about whether anyone would want to build such AI systems, or might be likely to do it accidentally. And another discussion about how bad it would actually be. But I think that once we satisfy the 3 properties in this section then we have a real problem, and the remaining question is one of magnitude, with a bunch of factors pulling up and down.\n\nV. 
Possible responses\n=====================\n\nAt a high level, we’d like to get good answers by training a model to make checkable predictions and then somehow incentivizing it to use the same machinery to also report inaccessible facts, for example by forcing it to report views and enforcing consistency.\n\nI think there are a bunch of problems with that plan, but the strongest one is that it’s totally unclear how we ever get the intended policy rather than the instrumental policy.\n\nHere are some possible angles of attack:\n\n* In some intuitive sense there is a discontinuity between saying X because you believe it and saying X because you believe that it’s what you need to say to get a high reward. If weak policies are straightforward rather than instrumental, then maybe we can exploit continuity to avoid the instrumental strategy. \nI’m generally pretty scared of these approaches since they feel pretty brittle, and people seem prone to being optimistic even before we can test or understand anything. \nBut I’m a bit more hopeful if we consider continuity of behavior rather than continuity in the space of policies, and if we exploit it by using a model at time T to help check the behavior of a model at time T+1.\n* If we were able to actually understand something about what the policy was doing, even crudely, it might let us discriminate between instrumental and intended behavior. I don’t think we have any concrete proposals for how to understand what the policy is doing well enough to make this distinction, or how to integrate it into training. But I also don’t think we have a clear sense of the obstructions, and I think there are various obvious obstructions to interpretability in general that *don’t* apply to this approach. So together with the other points on this list — especially the existence of a discontinuity and the use of previous versions of a model to help — I think it could be part of a solution.\n* The “instrumental” strategy — “pick the answer that minimizes my training loss” — may have a shorter description, but it seems to be unnecessarily computationally complex. First it figures out what’s true, and then it strategically decides what to say in light of that. It would be a bit cheaper just to actually report what’s true, if we set up the training process well enough that honest reporting got you optimal reward. \nIt’s not clear how to leverage this though, since “use the simplest model” seems like an important and useful prior. We can’t just replace it with “use the cheapest model” without sacrificing competitiveness. That said, it may be able to buy us something in combination with the previous approaches. \n(See discussion [here](https://www.lesswrong.com/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free) and [here](https://www.alignmentforum.org/posts/fM5ZWGDbnjb7ThNKJ/are-minimal-circuits-deceptive).)\n* We aren’t able to check inaccessible answers, and they aren’t output by the simplest model, but maybe we can make being straightforward the easiest strategy in some other sense. For example, if you have two models who can’t interact but need to agree on a consistent story, you could imagine the truth being the easiest thing for them to coordinate on. I think that scheme in particular is pretty unlikely to work, but there is a large space of possibilities here, especially when combined with the other points on this list. 
For example, maybe you could perform consistency checks between models at different points in time.\n* Maybe we can work with models M that don’t understand enough about the world, or especially about their training process, to turn into the instrumental policy BAD. I tend to be pretty skeptical of “security by obscurity” — if the instrumental policy isn’t going to work, I want it to be because we have randomness hidden from it, or we’ve constructed a smarter agent that it can’t simulate. I think it’s pretty bad news if we are relying on a smart model not understanding facts about its own situation, and that’s the kind of solution that is particularly likely to break down over time.\n* We could hope that our models don’t build up important inaccessible knowledge, e.g. because it’s possible to justify most interesting conclusions with deduction or because we have reasonably good accessible proxies for our value function. I’m pretty skeptical about this over the long term, but I’m not sure exactly how bad it will be, or how early.\n* The argument in this post is pretty informal, and there’s a reasonable chance that you can drive a solution through one of the many gaps/loopholes. I like the problem-solving strategy: “write out the proof that there is no solution, and then sift through the proof looking for a fatal hole.”\n\nOverall I don’t see an obvious way forward on this problem, but there are enough plausible angles of attack that it seems exciting to think about.\n\nVI. How this relates to amplification and debate\n================================================\n\nOverall I don’t think it’s very plausible that amplification or debate can be a [scalable](/scalable-ai-control-7db2436feee7) AI alignment solution on their own, mostly for the kinds of reasons discussed in this post — we will eventually run into some inaccessible knowledge that is never produced by amplification, and so never winds up in your distilled agents.\n\nIn the language of my [original post on capability amplification](/policy-amplification-6a70cbee4f34), the gap between accessible and inaccessible knowledge corresponds to an obstruction. The current post is part of the long process of zooming in on a concrete obstruction, gradually refining our sense of what it will look like and what our options are for overcoming it.\n\nI think the difficulty with inaccessible knowledge is not specific to amplification — I don’t think we have any approach that moves the needle on this problem, at least from a theoretical perspective, so I think it’s a plausible candidate for a [hard core](/hard-core-subproblems-8948463455ef) if we fleshed it out more and made it more precise. (MIRI’s approach to this problem could be described as despair + hope you can find some other way to produce powerful AI.)\n\nI think that iterated amplification *does* address some of the most obvious obstructions to alignment — the possible gap in speed / size / experience / algorithmic sophistication / etc. between us and the agents we train. I think that having amplification in mind should make you feel a bit less doomed about inaccessible knowledge, and makes it much easier to see where the real difficulties are likely to lie.\n\nBut there’s a significant chance that we end up needing ideas that look totally different from amplification/debate, and that those ideas will obsolete most of the particulars of amplification. 
Right now I think iterated amplification is by far our best concrete alignment strategy to scale up, and I think there are big advantages to starting to scale something up. At the same time, it’s really important to push hard on conceptual issues that could tell us ASAP whether amplification/debate are unworkable or require fundamental revisions.", "url": "https://ai-alignment.com/inaccessible-information-c749c6a88ce", "title": "Inaccessible information", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-06-02T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "0a12763333a5d3969ceee94d2ada1ae3"} {"text": "Informed oversight (revisited)\n------------------------------\n\n[Paul Christiano](https://paulfchristiano.medium.com/) · Published in [AI Alignment](https://ai-alignment.com/) · Jan 10, 2019\n\nTo train an agent with RL, we need to answer the question: “which of the trajectories τ¹ or τ² is better?”\n\nLikewise, in order to train for acceptable worst-case performance, we probably need to answer a question like: “is trajectory τ acceptable?”\n\nWe want to answer these questions “well enough” that our agents end up aiming for good outcomes on average and always behaving acceptably. But how well is well enough?\n\nIn this post I’ll argue that if our goal is to train a well-motivated agent, it is necessary and sufficient for our answers to reflect *everything our agent knows*. This seems essentially equivalent to [ascription universality](/towards-formalizing-universality-409ab893a456) of the oversight process.\n\n(This post is an update of [these](/adequate-oversight-25fadf1edce9) [2016 posts](/the-informed-oversight-problem-1b51b4f66b35). My understanding of the problem has improved a lot over the last few years.)\n\nInformation vs incentives\n-------------------------\n\nWe can distinguish two functions of feedback:\n\n* It provides *information* to the agent we are training. For example, if I provide negative feedback when an agent serves me pizza, it can learn that I don’t like pizza. (This might happen by gradient descent selecting for agents with different beliefs, or by an RL algorithm that uses past reward signals as information.)\n* It determines the *incentives* for the agent we are training, i.e. is used to select the motivations of the agent. For example, an agent pre-trained to make predictions about the world might already understand that I dislike pizza. 
If I provide negative feedback when the agent serves me pizza, then gradient descent will select for agents that are motivated to avoid things I dislike (at least instrumentally).\n\nFrom the perspective of information, there is no such thing as “good enough.” More information is always better.\n\nThis post is about incentives. We can aim for a “correctly motivated” agent, who lacks relevant information but is at least [trying](/clarifying-ai-alignment-cec47cd69dd6) to do the right thing. At that point there is nothing to gain from better oversight.\n\nSo our question is: how well do we have to answer these questions to produce an approximately correctly motivated agent?\n\n(In the long run, I think that incentives are the more important function of feedback. Agents should mostly learn about the world from other richer signals. So I’m not worried about whether our feedback is optimal from an information perspective.)\n\nNecessity of knowing what the agent knows\n-----------------------------------------\n\nSuppose that our agent understands facts about the world that we don’t understand, i.e which aren’t reflected in our answer to “which of the trajectories τ¹ or τ² is better?” I’ll argue that this is a bad situation.\n\nIf the agent literally took actions and made observations that weren’t available to us, then we would clearly have a problem. For example, consider an agent trying to defend a computer system from attackers. If the agent has access to actions or information we are missing, then it might deliberately take actions that compromise the tools we would use to determine whether our system has been compromised. Because we can’t observe this malicious behavior, we can’t assign it a low reward. And after compromising our tools well enough, the agent could ensure that they always paint a rosy picture of the situation regardless of what actually happens. We intended to protect ourselves from attack, but instead introduced a powerful attacker with privileged access to our system.\n\nThe same thing can happen if we observe everything our agent observes, if we aren’t able to understand everything our agent understands. In the security example, literally seeing a sequence of bits moving across an interface gives you almost no information — something can look innocuous, but cause a huge amount of trouble. In order to incentivize our agent to avoid causing trouble, we need to be able to detect any trouble that the agent deliberately causes. Even an apparently mundane gap in our understanding could hide attacks, just as effectively as if we’d been literally unable to observe the agent’s behavior.\n\nWe might hope that an attack will ultimately cash out as *something* that we can unambiguously observe, so that this is merely a problem with short time horizons. But I don’t think that’s the case — ultimately we are always restricted to indirect observations of the things we care about. We rely on second-hand reports, on viewing small parts of a situation that we hope are representative, on cameras and other sensors that purport to describe the world. 
All of these things would also be manipulated by an agent incentivized to make things look good to us, and understanding what the agent understands is the most natural defense against that kind of manipulation.\n\nIntuitive argument of sufficiency\n---------------------------------\n\nSuppose we “know everything that the agent knows.” That means the agent should defer to our estimate for any unknown quantity.\n\nNow consider two possible motives an agent could have:\n\n* Create trajectories that are actually good, according to our “real” preferences ([extrapolated as we wish that extrapolated](https://intelligence.org/files/CEV.pdf)…)\n* Create trajectories we believe are good\n\nIf the agent defers to us about the goodness of trajectories, then these two motives will lead to identical actions. So selecting for agents that take actions that we think are good also optimally selects for agents actually motivated to do good.\n\nIntuitively, imagine a student who is trying to predict whether their teacher will think a statement is true. If the teacher knows everything the student knows and can see every consideration the student sees, then the student shouldn’t expect to be able to predict any systematic error in the teacher’s beliefs. So trying to predict what the teacher believes is exactly the same as trying to predict what’s true.\n\nTightening the argument\n-----------------------\n\nThe argument in the last section isn’t quite right. Most glaringly, if the overseer is universal then it means *we* think the overseer has better beliefs than the agent, but it doesn’t mean *the agent* thinks that.\n\n**Changing the statement**. To fix the argument, we need to be a bit more careful. We’ll aim to show that if we think the overseer is ascription universal, then:\n\n* We expect the agent A to achieve at least as much utility (by our real, unobserved utility function) as any action that can be easily computed using any set of beliefs that can be ascribed to A — and in particular A achieves at least as much utility as if it were perfectly motivated.\n\nWe can’t infer that A is perfectly motivated, because it may have arbitrary beliefs that differ from those of the overseer. For example, A might believe that the world will certainly end in a week; instead of planning under that assumption (as it would if it were “correctly” motivated), it will learn to choose actions that the overseer prefers. From its perspective it is choosing actions suboptimally in order to please the overseer, but from our perspective this is better than if it were “correctly” motivated.\n\nAnother way to see why we need this weakening: many different beliefs can be ascribed to the agent, and it can’t simultaneously behave as if it were perfectly motivated with respect to each of those sets of beliefs.\n\n**Assumptions and proof**. This will mostly just be notation; other than that it follows almost immediately from the definition of ascription universality, though we need to do a little bit of work to massage the definition into the right form.\n\nLet 𝔼 be any subjective expectation w.r.t. which the overseer is ascription universal.\n\nLet 𝔼ᴼ be the expectations of the overseer. Let U be our actual utility — this is not a well-defined function we can actually evaluate, but we can query a subjective expectation about it. Then the reward function is *r*(*a*) = 𝔼ᴼ[U(*a*)]. 
Note that the overseer doesn’t condition on the fact that the agent produced this particular action (so no one needs to reason about the training process itself).\n\nLet 𝔼ᴬ be some expectations that can be ascribed to the agent. Let A be the expression representing the agent’s output (**after training**), and let *a* be the actual output (we need to distinguish the symbol from the action itself because we are plugging this into a subjective expectation). Let A′ be a particular different definition of an action the agent could feasibly output, e.g. argmax 𝔼ᴬ[𝔼ᴼ[U(·)]], and let *a*′ be its actual value.\n\nBecause A and A′ can both be computed using the beliefs 𝔼ᴬ, we can apply the epistemic dominance property to predicates defined in terms of A, A′. (There is the subtlety that dominance requires properties to be open in the product topology, which can be avoided by carefully considering randomization and only becomes relevant when the utility function U itself depends on the behavior of the agent.)\n\nWe’ll assume that 𝔼 believes that training converges well enough that the action A will receive a higher expected reward than A′ (otherwise the agent would eventually switch to outputting A′). We also assume that 𝔼 knows that the overseer epistemically dominates the agent, and that it satisfies a standard [iterated expectation](https://en.wikipedia.org/wiki/Law_of_total_expectation) property.\n\nThen we have:\n\n* 𝔼[*a ←* A; *a*′ ← A′; U(*a*) − U(*a*′)] \n= 𝔼[*a ←* A; *a*′ ← A′; 𝔼[U(*a*) − U(*a*′) | *a*=A, *a*′=A′]] (*iterated expectation*) \n= 𝔼[*a ←* A; *a*′ ← A′; 𝔼[𝔼ᴼ[U(*a*) − U(*a*′)] | *a*=A, *a*′=A′]] (*dominance*) \n= 𝔼[*a ←* A; *a*′ ← A′; 𝔼ᴼ[U(*a*) − U(*a*′)]] (*iterated expectation*) \n= 𝔼[*a ←* A; *a*′ ← A′; *r*(*a*) − *r*(*a*′)]] (*definition of reward*) \n≥ 0 (*training is converged*)\n\nAs desired.\n\n(The meaning of the 𝔼[…;…] notation is to take the expected value of the output of the program which performs the sequence of steps separated by ;’s, returning the final value.)\n\nThis argument only constrains the equilibrium, but eventually we need to say something about the entire training process. It looks to me like very similar ideas are needed for the more general case, but it’s a lot harder to make the arguments precise.\n\nCorrigibility\n-------------\n\nSo far we’ve discussed reward functions that induce correct motivations. The other half of our problem was recognizing unacceptably bad behavior: mistakes so bad that they cannot be corrected.\n\nAgain, we can’t hope to recognize all such mistakes. For example, it may turn out that a particular physics experiment will destroy the world, but correctly predicting that would require a detailed understanding of physics.\n\nBut if we know everything the agent knows, then we can avoid cases where the agent intentionally brings about a catastrophic outcome. 
In particular, we can avoid cases where the agent takes an action precisely because it would prevent us from correcting the agent’s behavior or reasserting control over the situation.\n\nAscription universality seems to be just what we need for this purpose as well: if we believe the agent is pursuing a plan *because* it prevents us from correcting the situation, then we can ascribe that belief to the agent, and so an ascription-universal overseer will recognize that the behavior is incorrigible and can deem the behavior as unacceptable.\n\nSuch an overseer may also sometimes flag as unacceptable certain actions that are catastrophic but which the agent didn’t recognize as catastrophic. But I think that’s fine, I’m not unhappy if we end up training the agent to avoid a catastrophic mistake.\n\nWhat does this actually look like?\n----------------------------------\n\nIf we are training a model with RL, my best candidate for obtaining a universal overseer is using [iterated amplification](https://arxiv.org/abs/1810.08575) to train a second head on the agent to answer relevant questions, including questions of the form “Which of these two trajectories is better?”\n\nThere are difficulties from training a model to play a game where it shares almost all of its activations with its opponent, and with ensuring that the overseer can remain sufficiently competent relative to the agent (without slowing down training). These resemble the discussion of the `Info` proposal [here](/the-informed-oversight-problem-1b51b4f66b35), but rather than having a separate overseer who evaluates how useful information was, we directly optimize the side information by using the same amplification process. Both of these issues seem OK with gradient descent, but they are larger problems for some other search algorithms, so I don’t consider the issue settled.\n\nThis proposal also faces all of the usual difficulties of iterated amplification — it’s not clear how to generate the distribution of questions, whether the training process is stable, whether errors compound, whether the steps of amplification increase the complexity of training, and so on. These issues seem practically important, and keeping informed oversight in mind can help us understand exactly where the bar is for an adequate solution. I tentatively think the conceptual difficulties in achieving ascription universality are more likely to be a fundamental obstruction.", "url": "https://ai-alignment.com/informed-oversight-18fcb5d3d1e1", "title": "Informed oversight", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2019-01-23T23:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "02de887267f9d5bf87a598a1f636e94f"} {"text": "*(*[*Beth’s post on imitative generalization*](https://www.alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1) *is a much clearer description of the unnamed algorithm in this post, which I’m calling “Imitative Generalization.”)*\n\nSuppose that I have a dataset D of observed (*x*, *y*) pairs, and I’m interested in predicting the label *y*\\* for each point *x*\\* in some new set D\\*. Perhaps D is a set of forecasts from the last few years, and D\\* is a set of questions about the coming years that are important for planning.\n\nThe classic deep learning approach is to fit a model *f* on D, and then predict *y*\\* using *f*(*x*\\*).\n\nThis approach implicitly uses a somewhat strange prior, which depends on exactly how I optimize *f*. 
I may end up with the model with the smallest l2 norm, or the model that’s easiest to find with SGD, or the model that’s most robust to dropout. But *none* of these are anywhere close to the “ideal” beliefs of a human who has updated on D.\n\nThis means that neural nets are unnecessarily data hungry, and more importantly that they can generalize in an undesirable way. I now think that this is a safety problem, so I want to try to attack it head on by learning the “right” prior, rather than attempting to use neural nets as an implicit prior.\n\nWarm-up 1: human forecasting\n----------------------------\n\nIf D and D\\* are small enough, and I’m OK with human-level forecasts, then I don’t need ML at all.\n\nInstead I can hire a human to look at all the data in D, learn all the relevant lessons from it, and then spend some time forecasting *y*\\* for each *x*\\*.\n\nNow let’s gradually relax those assumptions.\n\nWarm-up 2: predicting human forecasts\n-------------------------------------\n\nSuppose that D\\* is large but that D is still small enough that a human can extract all the relevant lessons from it (or that for each *x*\\* in D\\*, there is a small subset of D that is relevant).\n\nIn this case, I can pay humans to make forecasts for many randomly chosen *x*\\* in D\\*, train a model *f* to predict those forecasts, and then use *f* to make forecasts about the rest of D\\*.\n\nThe generalization is now coming entirely from human beliefs, not from the structure of the neural net — we are only applying neural nets to iid samples from D\\*.\n\nLearning the human prior\n========================\n\nNow suppose that D is large, such that a human can’t update on it themselves. Perhaps D contains billions of examples, but we only have time to let a human read a few pages of background material.\n\nInstead of learning the unconditional human forecast P(*y*|*x*), we will learn the forecast P(*y*|*x,* Z), where Z is a few pages of background material that the human takes as given. We can also query the human for the prior probability Prior(Z) that the background material is true.\n\nThen we can train *f*(*y*|*x*, *Z*) to match P(*y*|*x*, Z), and optimize Z\\* for:\n\n\n> log Prior(Z\\*) + sum((*x*, *y*) ~ D) log *f*(*y*|*x*, Z\\*)\n> \n> \n\nWe train *f* in parallel with optimizing Z\\*, on inputs consisting of the current value of Z\\* together with questions *x* sampled from D and D\\*.\n\nFor example, Z might specify a few explicit models for forecasting and trend extrapolation, a few important background assumptions, and guesses for a wide range of empirical parameters. Then a human who reads Z can evaluate how plausible it is on its face, or they can take it on faith in order to predict *y*\\* given *x*\\*.\n\nThe optimal Z\\* is then the set of assumptions, models, and empirical estimates that works best on the historical data. The human never has to reason about more than one datapoint at a time — they just have to evaluate what Z\\* implies about each datapoint in isolation, and evaluate how plausible Z\\* is a priori.\n\nThis approach has many problems. 
Two particularly important ones:\n\n* To be competitive, this optimization problem needs to be nearly as easy as optimizing *f* directly on D, but it seems harder: finding Z\\* might be much harder than learning *f,* learning a conditional *f* might be much harder than learning an unconditional *f*, and jointly optimizing Z and *f* might present further difficulties.\n* Even if it worked our forecasts would only be “human-level” in a fairly restrictive sense — they wouldn’t even be as good as a human who actually spent years practicing on D before making a forecast on D\\*. To be competitive, we want the forecasts in the iid case to be at least as good as fitting a model directly.\n\nI think the first point is an interesting ML research problem. (If anything resembling this approach ever works in practice, credit will rightly go to the researchers who figure out the precise version that works and resolve those issues, and this blog post will be a footnote.) I feel relatively optimistic about our collective ability to solve concrete ML problems, unless they turn out to be impossible. I’ll give some preliminary thoughts in the next section “Notes & elaborations.”\n\nThe second concern, that we need some way to go beyond human level, is a central philosophical issue and I’ll return to it in the subsequent section “Going beyond the human prior.”\n\nNotes & elaborations\n--------------------\n\n* Searching over long texts may be extremely difficult. One idea to avoid this is to try to have a human guide the search, by either generating hypotheses Z at random or sampling perturbations to the current value of Z. Then we can fit a generative model of that exploration process and perform search in the latent space (and also fit *f* in the latent space rather than having it take Z as input). That rests on two hopes: (i) learning the exploration model is easy relative to the other optimization we are doing, (ii) searching for Z in the latent space of the human exploration process is strictly easier than the corresponding search over neural nets. Both of those seem quite plausible to me.\n* We don’t necessarily need to learn *f* everywhere, it only needs to be valid in a small neighborhood of the current Z. That may not be much harder than learning the unconditional *f*.\n* Z represents a full posterior rather than a deterministic “hypothesis” about the world, e.g. it might say “R0 is uniform between 2 and 3.” What I’m calling Prior(Z) is really the KL between the prior and Z, and P(*y|x,*Z) will itself reflect the uncertainty in Z. The motivation is that we want a flexible and learnable posterior. (This is particularly valuable once we go beyond human level.)\n* This formulation queries the human for Prior(Z) before each fitness evaluation. That might be fine, or you might need to learn a predictor of that judgment. It might be easier for a human to report a ratio Prior(Z)/Prior(Z′) than to give an absolute prior probability, but that’s also fine for optimization. I think there are a lot of difficulties of this flavor that are similar to other efforts to learn from humans.\n* For the purpose of studying the ML optimization difficulties I think we can basically treat the human as an oracle for a reasonable prior. 
We will then need to relax that rationality assumption in the same way we do for other instances of learning from humans (though a lot of the work will also be done by our efforts to go beyond the human prior, described in the next section).\n\nGoing beyond the human prior\n============================\n\nHow do we get predictions better than explicit human reasoning?\n\nWe need to have a richer latent space Z, a better Prior(Z), and a better conditional P(*y*|*x*, Z).\n\nInstead of having a human predict *y* given *x* and Z, we can use amplification or debate to train f(*y*|*x*, Z) and Prior(Z). This allows Z to be a large object that cannot be directly accessed by a human.\n\nFor example, Z might be a full library of books describing important facts about the world, heuristics, and so on. Then we may have two powerful models debating “What should we predict about *x*, assuming that everything in Z is true?” Over the course of that debate they can cite small components of Z to help make their case, without the human needing to understand almost anything written in Z.\n\nIn order to make this approach work, we need to do a lot of things:\n\n1. We still need to deal with all the ML difficulties described in the preceding section.\n2. We still need to analyze debate/amplification, and now we’ve increased the problem difficulty slightly. Rather than merely requiring them to produce the “right” answers to questions, we also need them to implement the “right” prior. We already needed to implement the right prior as part of answering questions correctly, so this isn’t too much of a strengthening, but we are calling attention to a particularly challenging case. It also imposes a particular structure on that reasoning which is a real (but hopefully slight) strengthening.\n3. Entangled with the new analysis of amplification/debate, we also need to ensure that Z is able to represent a rich enough latent space. I’ll discuss implicit representations of Z in the next section “Representing Z.”\n4. Representing Z implicitly and using amplification or debate may make the optimization problem even more difficult. I’ll discuss this in the subsequent section “Jointly optimizing Mz and f.”\n\nRepresenting Z\n--------------\n\nI’ve described Z as being a giant string of text. If debate/amplification work at all then I think text is in some sense “universal,” so this isn’t a crazy restriction.\n\nThat said, representing complex beliefs might require *very long* text, perhaps many orders of magnitude larger than the model *f* itself. That means that optimizing for (Z, *f*) jointly will be much harder than optimizing for *f* alone.\n\nThe approach I’m most optimistic about is representing Z implicitly as the output of another model Mz. For example, if Z is a text that is trillions of words long, you could have Mz output the *i*th word of Z on input *i*.\n\n(To be really efficient you’ll need to share parameters between *f* and Mz but that’s not the hard part.)\n\nThis can get around the most obvious problem — that Z is too long to possibly write down in its entirety — but I think you actually have to be pretty careful about the implicit representation or else we will make Mz’s job too hard (in a way that will be tied up the competitiveness of debate/amplification).\n\nIn particular, I think that representing Z as implicit flat text is unlikely to be workable. 
I’m more optimistic about the kind of approach described in [approval-maximizing representations](/approval-maximizing-representations-56ee6a6a1fe6) — Z is a complex object that can be related to slightly simpler objects, which can themselves be related to slightly simpler objects… until eventually bottoming out with something simple enough to be read directly by a human. Then Mz implicitly represents Z as an exponentially large tree, and only needs to be able to do one step of unpacking at a time.\n\nJointly optimizing Mz and f\n---------------------------\n\nIn the first section I discussed a model where we learn *f*(*y*|*x*, Z) and then use it to optimize Z. This is harder if Z is represented implicitly by Mz, since we can’t really afford to let *f* take Mz as input.\n\nI think the most promising approach is to have Mz and *f* both operate on a compact latent space, and perform optimization in this space. I mention that idea in Notes & Elaborations above, but want to go into more detail now since it gets a little more complicated and becomes a more central part of the proposal.\n\n(There are other plausible approaches to this problem; having more angles of attack makes me feel more comfortable with the problem, but all of the others feel less promising to me and I wanted to keep this blog post a bit shorter.)\n\nThe main idea is that rather than training a model Mz(·) which implicitly represents Z, we train a model Mz(·, *z*) which implicitly represents a distribution over Z, parameterized by a compact latent *z.*\n\nMz is trained by iterated amplification to imitate a superhuman exploration distribution, analogous to the way that we could ask a human to sample Z and then train a generative model of the human’s hypothesis-generation. Training Mz this way is itself an open ML problem, similar to the ML problem of making iterated amplification work for question-answering.\n\nNow we can train *f*(*y|x, z*) using amplification or debate. Whenever we would want to reference Z, we use Mz(·, *z*). Similarly, we can train Prior(*z*). Then we choose *z\\** to optimize log Prior(*z*\\*) + sum((*x*, *y*) ~ D) log *f*(*y|x, z*\\*).\n\nRather than ending up with a human-comprehensible posterior Z\\*, we’ll end up with a compact latent *z*\\*. The human-comprehensible posterior Z\\* is implemented implicitly by Mz(·, *z*\\*).\n\nOutlook\n=======\n\nI think the approach in this post can potentially resolve the issue described in [Inaccessible Information](/inaccessible-information-c749c6a88ce), which I think is one of the largest remaining conceptual obstacles for amplification/debate. So overall I feel very excited about it.\n\nTaking this approach means that amplification/debate need to meet a slightly higher bar than they otherwise would, and introduces a bit of extra philosophical difficulty. It remains to be seen whether amplification/debate will work at all, much less whether they can meet this higher bar. But overall I feel pretty excited about this outcome, since I was expecting to need a larger reworking of amplification/debate.\n\nI think it’s still very possible that the approach in this post can’t work for fundamental philosophical reasons. I’m not saying this blog post is anywhere close to a convincing argument for feasibility.\n\nEven if the approach in this post is conceptually sound, it involves several serious ML challenges. 
I don’t see any reason those challenges should be impossible, so I feel pretty good about that — it always seems like good news when you can move from philosophical difficulty to technical difficulty. That said, it’s still quite possible that one of these technical issues will be a fundamental deal-breaker for competitiveness.\n\nMy current view is that we don’t have candidate obstructions for amplification/debate as an approach to AI alignment, though we have a lot of work to do to actually flesh those out into a workable approach. This is a more optimistic place than I was at a month ago when I wrote [Inaccessible Information](/inaccessible-information-c749c6a88ce).", "url": "https://ai-alignment.com/learning-the-prior-48f61b445c04", "title": "Learning the prior", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-07-04T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "25aeba2617ef764c4406f9630045dc6b"} {"text": "Right now I’m working on finding a good objective to optimize with ML, rather than trying to make sure our models are robustly optimizing that objective. (This is roughly “[outer alignment](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/pL56xPoniLvtMDQ4J).”)\n\nThat’s pretty vague, and it’s not obvious whether “find a good objective” is a meaningful goal rather than being inherently confused or sweeping key distinctions under the rug.\n\nSo I like to focus on a more precise special case of alignment: solve alignment when decisions are “low stakes.” I think this case effectively isolates the problem of “find a good objective” from the problem of ensuring robustness and is precise enough to focus on productively.\n\nIn this post I’ll describe what I mean by the low-stakes setting, why I think it isolates this subproblem, why I *want* to isolate this subproblem, and why I think that it’s valuable to work on crisp subproblems.\n\n1. What is the low-stakes setting?\n----------------------------------\n\nA situation is low-stakes if we care very little about any small number of decisions. That is, we only care about the average behavior of the system over long periods of time (much longer than the amount of time it takes us to collect additional data and retrain the system).\n\nFor example, this requires that all of the AI systems in the world can’t corrupt the training process quickly or seize control of resources from humans. If they try, we can keep collecting data and fine-tuning them, and this will cause their behavior to change before anything irreversibly bad happens.\n\nFor a more formal definition see section 6.\n\n2. Why do low stakes require only outer alignment?\n--------------------------------------------------\n\nIf the stakes are low, we can train our model on the decisions that actually arise in practice rather than needing to anticipate tricky decisions in advance. Moreover, because the payoff from an individual action is always small, we can focus on average-case performance and achieve reasonable sample complexities without any additional tricks.\n\nThe main substantive claim is that we don’t need to worry about the “distributional shift” between past decisions and future decisions. When the distribution of inputs change, the system may behave poorly for a while, but if we keep retraining on the new data then it will eventually adapt. If individual decisions are low stakes, then the total cost of all of this adaptation is small. 
I give this argument in more detail in section 7.\n\nFormally this resembles an online regret bound ([this textbook](https://arxiv.org/pdf/1909.05207.pdf) gives a nice introduction to online learning). SGD satisfies such a bound in the case of convex losses. For messy model classes like neural networks we usually can’t prove much interesting about SGD (either for online *or* offline learning), but for a variety of reasons I think it’s reasonable to expect a similar online bound. I discuss this in more detail in section 8.\n\nThis isn’t to say that we can totally ignore optimization difficulties, or the online nature of the problem. But it appears that the *main* difficulty is constructing a good enough objective and arguing that it is sufficiently easy to optimize.\n\n3. Why focus on this subproblem first?\n--------------------------------------\n\nI think it’s really great to focus on a good subproblem if you can find one.\n\nIf you solve your subproblem, then you’ve made progress. If you get stuck, well then you were probably going to get stuck anyway and at least you’re stuck on something easier. When working on a big problem like alignment, I feel like it’s easy to bounce off of *every* solution because it doesn’t handle the whole problem immediately, and splitting into subproblems is a key way to get over that failure.\n\nI think that the low-stakes setting is a particularly good and clean subproblem: it’s definitely not harder than the original, there are clear ways in which it’s much easier, and solving it would represent real progress.\n\nWhy do I focus on this problem first, rather than starting with the other side (something more like robustness / inner alignment)?\n\n* I think that finding a “good” objective is likely to be similar to finding a “good” specification for adversarial training or verification, and understanding the structure of our specification will change how we approach robustness.\n* I think that defining a good objective likely [requires](/informed-oversight-18fcb5d3d1e1) something like “knowing what the model knows.” If this is successful, it’s likely to be an important ingredient for robustness as well (especially for treacherous behavior, where in some sense the model “knows” about the problem).\n* Put differently, I feel like we can approximately split the full alignment problem into two parts: low stakes and [handling catastrophes](/learning-with-catastrophes-59387b55cc30). We know how to define the low-stakes part but don’t know quite how to formulate catastrophes, so it’s more natural to start with low stakes.\n\n4. Is the low-stakes setting actually scary?\n--------------------------------------------\n\nMany AI safety problems involve AI systems behaving badly in an abrupt and irreversible way. Does the low-stakes assumption dismiss the central concerns?\n\nI think that you could have very risky situations even if every individual decision is low-stakes. For example, suppose that our world was full of safeguards that made it very slow and hard to change anything big (our infrastructure, our government, individual decision-making…). Perhaps any big change will need to take place in tiny pieces spread over thousands of days (though smaller changes occur more easily).\n\nIn such a world AI can still cause huge amounts of trouble *if humans can’t understand what it is doing*. Rather than “taking over” in a single unanticipated shock, the situation can deteriorate in a thousand tiny pieces each of which humans cannot understand.\n\n**5. 
Why focus on “low stakes” rather than “outer alignment”?**\n---------------------------------------------------------------\n\nTheoretical work in general, and [my research methodology in particular](http://v), is a lot easier when we can cleanly evaluate proposed solutions rather than being fuzzy about the boundaries of our problem. Alignment is always frustratingly fuzzy, but we don’t have to make it any *worse*.\n\nI think that if I tried to work on “outer alignment” defined more fuzzily, I might have ended up sweeping some critical issues under the rug or pursuing dead ends. For example:\n\n* If I’d tried to assume that our models generalize “correctly” as long as the objective is good, I might not have been thinking clearly about how models generalize from questions we can evaluate to [those we can’t](/inaccessible-information-c749c6a88ce). But I think that’s actually a core issue that can’t be cleanly separated from the rest of outer alignment.\n* If I’d tried to assume that models are “trying” to get a low value for the training loss, I might have ended up relying on our ability to incentivize the model to make very long-term predictions. But I think that approach is basically a dead end.\n\nOverall I think that inner alignment and outer alignment are useful intuitive handles but don’t carve the problem space cleanly enough to be good research problems.\n\n6. More formal definition of low-stakes\n---------------------------------------\n\nSay that our AI receives a sequence of inputs *x*[1], *x*[2], … *x*[T], and produces a sequence of outputs *y*[1], *y*[2], … *y*[T].\n\nLet U[*t*](*y*) be our expected utility if we intervene to have the AI output *y*[*t*] = *y* and then be aligned in all future timesteps. Let *ρ* be a bound on the maximum possible utility difference U[*t*](*y*) − U[*t*](*y*′).\n\n(Assume that our AI sometimes behaves randomly for exploration, so that we can define these as conditional expectations given the state of the world before the system chooses *y*[*t*].)\n\nLet *k* be a constant (the “latency”) such that we can afford to train the AI on all data up to time *t* before we need to deploy it to make a decision at time *t*+*k*.\n\nThen the “low-stakes” goal is to achieve a total utility within O(*ρk*√T) of an aligned model. I call this “low-stakes” because the bound is only meaningful when *ρk* — the total damage that can be done by ML systems before retraining — is small relative to the total value at stake. Because this gap grows sublinearly with T, note that it is *eventually* small if ML is deployed for long enough relative to the stakes and the time required for models to learn.\n\n7. More formal argument that outer alignment is sufficient\n----------------------------------------------------------\n\nSuppose that we could compute the utilities U[*t*](*y*) exactly, i.e. that we had a perfect objective. I claim that we could then satisfy the low-stakes goal by performing online RL, i.e. performing SGD with a loss function that is an unbiased estimator for the expectation of U[*t*](*y*) for an action *y* sampled from the model.\n\nI’ll focus on the case *k*=1 for simplicity, but I think *k*>1 is basically the same.\n\nFor each timestep *t*, let M[*t*] be our model at time *t*, let *y*[*t*] be the random output we sample, let M\\*[*t*] be the aligned model that is competitive with M, and let *y*\\*[*t*] be the output of M\\*.\n\nThen:\n\n* U[0](*y*\\*[0]) is the utility we’d obtain by taking aligned decisions at every step. 
U[T](*y*[T]) is the actual utility we receive. So our “regret” is U[0](*y*\*[0]) − U[T](*y*[T]).\n* U[*t*+1](*y*\*[*t*+1]) = U[*t*](*y*[*t*]), since U is defined assuming the system is aligned at all future times.\n* Thus, telescoping with the previous identity, the regret is the sum of U[*t*](*y*\*[*t*]) − U[*t*](*y*[*t*]) across all time steps *t*.\n* But this is identical to the difference in performance between the sequence of models M[*t*] and M\*[*t*].\n\nIf our loss function was convex and M\* was fixed, then SGD would have a regret bound of O(*ρk*√T), as desired. If we are optimizing over the space of neural networks then our loss function is obviously not convex and so we can’t easily prove a bound of this form, but I’ll argue in section 8 that we can aim for a similar guarantee as long as M\* is “easy” to find with SGD.\n\nOf course we can’t hope to actually compute the real utility differences U[*t*] (since e.g. which decision is optimal may depend on hard empirical facts where we don’t get any feedback until years later). So we’ll need to set our sights a bit lower (e.g. to say that in subjective expectation we do as well as the aligned model, rather than being able to say that for every possible sequence we actually do as well). I discuss similar issues in section 3 of [Towards formalizing universality](/towards-formalizing-universality-409ab893a456), and I don’t think they change the basic picture.\n\n8. Why expect SGD to work online even for neural networks?\n----------------------------------------------------------\n\nSGD enjoys a √T online regret bound only for convex losses. Convexity implies that the iterates of SGD are optimal for a regularized loss function, which is needed to get a bound.\n\nOther than that hiccough I think the regret bound basically goes through. But why am I not concerned about the suboptimality of SGD?\n\n* To the extent that SGD can’t find the optimum, it hurts the performance of both the aligned model and the unaligned model. In some sense what we really want is a regret bound compared to the “best learnable model,” where the argument for a regret bound is heuristic but seems valid.\n* Moreover, a satisfactory alignment scheme already needs to explain why finding the aligned model M\* is not much harder than finding the unaligned model M. And that’s basically what we need in order to argue that SGD has a regret bound relative to M\*.\n\nOverall I’m prepared to revise my view if the empirical evidence suggests that online bounds are problematic, but so far all the experiments I’m aware of are broadly consistent with the theoretical picture.\n\nThe other slight subtlety in section 7 is that we care about the regret between the actual model M[*t*] and the aligned version M\*[*t*], whereas we conventionally define regret relative to some fixed target model. I’m not concerned about this either: (i) regret bounds that compare the actual model M to the best transformed model can be obtained by similar methods, (ii) for our purposes we’d also be fine just bounding our regret compared to the single maximally-competent aligned model at the end of training (although I actually expect that to be somewhat harder).\n\nOverall I don’t think this is completely straightforward, but I think it looks good enough that the main difficulty is probably finding a good objective. 
I’d personally want to start thinking about these more technical issues only after we’ve solved the thornier conceptual issues.", "url": "https://ai-alignment.com/low-stakes-alignment-f3c36606937f", "title": "Low-stakes alignment", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2021-04-29T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "fc06463dfe75740248545eaba7b63d82"} {"text": "I’m looking for alignment techniques that are [indefinitely scalable](/directions-and-desiderata-for-ai-control-b60fca0da8f4) and that [work in any situation we can dream up](/my-research-methodology-b94f2751cb2c). That means I spend time thinking about “exotic” problems — like AI systems reasoning about their own training process or about humanity’s far future.\n\nYet I’m very optimistic about finding practical alignment strategies that are relatively simple and usable today. I expect the results of my research to look mundane and even boring.\n\nThese two stances may appear to be in tension. If I’m worrying about all kinds of exotic considerations, how can the end result be something mundane?\n\nI’m not too concerned. These failures seem exotic because they involve exotic *capabilities.* I’m looking for a mundane algorithm that trains the model to use whatever capabilities it has for good purposes. So it will lead to exotic countermeasures if and when the model has exotic capabilities, i.e. just in time to avoid exotic failures (by which time they’ll no longer feel exotic).\n\nI want to illustrate this phenomenon with a few examples.\n\nExample 1: Gradient hacking\n---------------------------\n\n[*Gradient hacking*](https://www.lesswrong.com/posts/uXH4r6MmKPedk8rMA/gradient-hacking) is a possible failure mode where a powerful misaligned model behaves in a way that is apparently benign but biases the gradient of the loss to point in an unfavorable direction, eventually trapping optimization at a bad local minimum. This feels exotic today because it requires the model to have an incredible amount of introspective access and control, as well as strategic awareness over long time horizons. (“[Forward Pass](http://karpathy.github.io/2021/03/27/forward-pass/)” by Andrej Karpathy explores some similar ideas.)\n\nNormally I imagine using SGD to optimize a “behavioral” objective: to update the parameters θ you use them to make a randomized decision *y* on an input *x*, then you calculate a reward *r*(*x*, *y*), then you compute an unbiased estimate for the gradient of the expectation of *r*(*x*, *y*) with respect to θ.\n\nGradient hacking is troubling because it suggests that if we optimize a purely behavioral objective we could get a bad outcome even if the reward function *r*(*x*, *y*) perfectly captures the quality of the output *y*.\n\nRather than fixing this problem in a specific way, I’d like to rely on competitiveness. That is, assume that (throughout training) we have a question-answering policy Q → A which reflects *everything* that our learned model “knows” about the world.\n\nThen it seems like the best way to compute the system’s loss is by asking some question like “What should the loss be for the current parameters?”\n\nOur usual behavioral loss then is a great *heuristic* for computing the loss (which we can justify on principled Bayesian grounds if we want). 
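To be concrete about what that behavioral heuristic is, here is a minimal sketch of the update described above: a toy softmax policy, a made-up reward, and the standard score-function gradient estimator (the names and numbers are illustrative, not taken from any particular system).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

theta = np.zeros((2, 3))  # toy policy parameters: 2 input features, 3 possible outputs

def reward(x, y):
    # Stand-in for r(x, y); here output 0 is "good" exactly when x[0] > 0.
    return 1.0 if (y == 0) == (x[0] > 0) else 0.0

for step in range(2000):
    x = rng.normal(size=2)         # an input
    probs = softmax(x @ theta)     # use theta to make a randomized decision y
    y = rng.choice(3, p=probs)
    r = reward(x, y)
    # Score-function (REINFORCE) estimator: r * grad_theta log p(y | x; theta)
    # is an unbiased estimate of grad_theta E[r(x, y)].
    grad_log_p = -np.outer(x, probs)
    grad_log_p[:, y] += x
    theta += 0.1 * r * grad_log_p  # ascend the estimated gradient of expected reward
```

Nothing in this loop ever inspects the parameters themselves; it only looks at behavior, which is exactly the property that gradient hacking exploits.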
In mundane situations it’s not clear we have anything further to say about what the loss should be, so “What should the loss be?” is almost the same as hard-coding the heuristic.\n\nBut when gradient hacking occurs, we are no longer in a mundane situation. By hypothesis, our learned model is able to reason introspectively about its own parameters and the structure of the loss landscape! It is thinking about how to change its behavior in order to affect the loss, change the optimization trajectory, and ultimately disempower humans.\n\nThat means that a competitive question-answerer *also* is able to answer questions about its own parameters and the structure of the loss landscape. It is able to talk about how assigning a loss could mess up the optimization trajectory and ultimately disempower humans. If we incorporate all that knowledge into our assessment of “What should the loss be for the current parameters?” then it will assign a low loss to some of the actions that the gradient-hacker tried to sabotage. That is, the malicious AI can outsmart the behavioral heuristic, but it can’t outsmart an overseer *who knows everything that it knows*.\n\nIt’s not clear if this works (and my strong guess is that we’ll ultimately end up with an algorithm that looks quite different from this). But overall I think that if gradient hacking actually came up, it’s more likely than not that the core difficulty in avoiding it would be the same version of epistemic competitiveness that we need in more mundane situations. The only real change to our algorithm we need to make is applying a competitive overseer to the right question.\n\nYou might worry that the knowledge a model has about its own parameters is qualitatively different from the knowledge it has about the world. But I don’t think so — they are both produced by transfer from the task of “get a low training loss,” combined with a bunch of computation. It’s possible that after we have an epistemically competitive solution we’ll see that it doesn’t apply to a model’s introspective knowledge. If that happens then we may indeed need some more exotic solution that talks about introspection *per se*, but I personally doubt it.\n\nExample 2: far future forecasts and inaccessible influence\n----------------------------------------------------------\n\nSuppose that there’s [some domain that our AI can understand but we cannot](/inaccessible-information-c749c6a88ce); for vividness you can imagine humans without microscopes and an AI that has a working model of the microscopic world. For simplicity, imagine that the world factors nicely into an accessible part and an inaccessible part, and that there are influences in both directions but to us the (inaccessible) → (accessible) influences just look like noise. (If you don’t know about the mechanics of biology, then events like “getting cancer” can just look like noise.)\n\nOne of my concerns is that an AI that understands the inaccessible part may be able to cause trouble in the very long term. Even if we’re able to prevent the AI from doing anything overtly bad in the accessible part for the next few years (like killing humans), it may still be that humans aren’t really in control of the situation because the real power is in the inaccessible part of the world, and that power will eventually assert itself.\n\nSo if we can only observe the accessible part of the world, then we would need to look *very* far ahead to avoid the problem. 
Basically we’d need to look far enough ahead that the future humans have done science and learned about the previously-inaccessible parts of the world. This kind of lookahead feels particularly exotic because it involves extreme generalization. Training typically involves feedback loops over seconds, but now we are talking about a model that makes good predictions about events that literally occur *generations* from now.\n\nThat kind of extreme lookahead sounds exotic, but in some cases my current alignment plans require it.\n\nAs in the last example, I think this situation is only exotic because our model had exotic capabilities. We’re in this situation because we are trying to handle a malicious AI that makes plans that have no observable consequences for many generations, building up power in the inaccessible part of the world that it predicts will eventually interfere with humans. That AI is already apparently able to generalize well to extremely long time horizons — if it wasn’t, we’d have no problem at all.\n\nSo once again, to avoid this problem we really just need epistemically competitive question answering, i.e. we need to learn a Q → A function that generalizes as well to novel questions (e.g. about the future) as the underlying model’s knowledge. And that’s a pretty mundane problem that’s core to the idea of epistemic competitiveness that we need to solve for other reasons.\n\nIf the malicious model is able to generalize from its experiences to make plans that unfold over generations, and the Q → A model generalizes as well as the malicious model, then our Q → A system can also answer questions about what happens over generations. That only sounds exotic because we assumed some exotic capabilities. Nothing about the solution or training needs to deal with ultra-long horizons specifically.\n\nConclusion\n----------\n\nIn my work I don’t shy away from exotic problems (I often find them useful as extreme cases to illustrate some principle). At the same time, I’m aiming for mundane solutions and optimistic about finding them.\n\nI think those positions are consistent because my intermediate goal is to ensure that the oversight process is able to leverage all of the capabilities developed by the model — so if the model develops exotic capabilities which pose exotic challenges, then we get an exotic oversight process automatically.", "url": "https://ai-alignment.com/mundane-solutions-to-exotic-problems-395bad49fbe7", "title": "Mundane solutions to exotic problems", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2021-05-03T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "2c2b51f7287d04b936c8cfe2fff35a9f"} {"text": "If powerful ML systems fail catastrophically, they may be able to quickly cause irreversible damage. To be safe, it’s not enough to have an average-case performance guarantee on the training distribution — we need to ensure that even if our systems fail on new distributions or with small probability, they will never fail *too* badly.\n\nThe difficulty of optimizing worst-case performance is one of the most likely reasons that I think [prosaic AI alignment](/prosaic-ai-control-b959644d79c2) might turn out to be impossible (if combined with an [unlucky empirical situation](https://arbital.com/p/daemons/)).\n\nIn this post I want to explain my view of the problem and enumerate some possible angles of attack. 
My goal is to communicate why I have hope that worst-case guarantees are achievable.\n\nNone of these are novel proposals. The intention of this post is to explain my view, not to make a new contribution. I don’t currently work in any of these areas, and so this post should be understood as an outsider looking in, rather than coming from the trenches.\n\nMalign vs. benign failures and corrigibility\n--------------------------------------------\n\nI want to distinguish two kinds of failures:\n\n* “Benign” failures, where our system encounters a novel situation, doesn’t know how to handle it, and so performs poorly. The resulting behavior may simply be erratic, or may serve an external attacker. Their effect is similar to physical or cybersecurity vulnerabilities — they create an opportunity for destructive conflict but don’t systematically disfavor human values. They may pose an existential risk when combined with high-stakes situations, in the same way that human incompetence may pose an existential risk. Although these failures are important, I don’t think it is necessary or possible to eliminate them in the worst case.\n* “Malign” failures, where our system continues to behave competently but applies its intelligence in the service of an unintended goal. These failures systematically favor whatever goals AI systems tend to pursue in failure scenarios, at the expense of human values. They constitute an existential risk independent of any other destructive technology or dangerous situation. Fortunately, they seem both less likely and potentially possible to avoid even in the worst case.\n\nI’m most interested in malign failures, and the narrower focus is important to my optimism.\n\nThe distinction between malign and benign failures is not always crisp. For example, suppose we try to predict a human’s preferences, then search over all strategies to find the one that best satisfies the predicted preferences. Guessing the preferences even a little bit wrong would create an adversarial optimizer incentivized to apply its intelligence to a purpose at odds with our real preferences. If we take this approach, incompetence does systematically disfavor human values.\n\nBy aiming for corrigible rather than optimal behavior (see [here](https://arbital.com/p/hard_corrigibility/) or [here](/corrigibility-3039e668638)) I’m optimistic that it is possible to create a sharper distinction between benign and malign failures, which can be leveraged by the techniques below. But for now, this hope is highly speculative.\n\nAmplification\n-------------\n\nI believe that these techniques are much more likely to work if we have access to an overseer who is significantly smarter than the model that we are trying to train. I hope that [amplification](/alphago-zero-and-capability-amplification-ede767bb8446) makes this possible.\n\nIt seems realistic for a strong overseer to recognize an (input, output) pair as a malign failure mode (though it may require a solution to [informed oversight](/the-informed-oversight-problem-1b51b4f66b35)). So now we have a concrete goal: find a model that never gives an output the overseer would diagnose as catastrophically bad.\n\nHistorically researchers in the AI safety community have been extremely pessimistic about reliability. 
I think part of that pessimism is because they have been imagining working with models much smarter than the overseer.\n\nTechniques\n==========\n\nI’ll describe three categories of techniques:\n\n* Adversarial training\n* Verification\n* Transparency\n\nPrevious versions of this list included implicit ensembles, e.g. Bayesian neural nets. I still think there might be useful techniques in that space, but I’ve become pessimistic enough that it no longer makes the list.\n\nAdversarial training\n--------------------\n\n(*Examples:* [*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/abs/1412.6572)*,* [*Adversarial Machine Learning at Scale*](https://arxiv.org/abs/1611.01236)*.)*\n\nIf we want to know whether our model ever fails catastrophically, it’s natural to train an adversary to search for catastrophe-inducing inputs. If the adversary can’t find one, then we hope that none exist. In order to produce a catastrophe-free model, we can integrate the results of testing into the training process.\n\nThe [basic problem](http://www.cleverhans.io/security/privacy/ml/2017/06/14/verification.html) with this strategy is that no matter how hard you test, it’s always possible (realistically, it’s likely) that there are possible failure cases your adversary didn’t notice. For example, suppose that it takes 2ᴺ seconds to find the Nth failure. Because nature will be many times more complex than our training procedure, it’s inevitable that there are many types of failures that nature can find but we can’t.\n\nWe can attack this problem by trying to make the adversary’s job easier. Some possibilities:\n\n* Use interpretability techniques (see below) to help the adversary identify possible failures.\n* Use interpretability to expand what counts as catastrophic — if the adversary can get the model to think in a way that looks like it *could* lead to catastrophic behavior under different situations, that could count as a success for the adversary.\n* Combine with verification (see below) to help guide the adversary’s search and expand the space of allowed attacks.\n* Allow the adversary to directly control some of the internal computation of our model rather than merely controlling inputs. This requires a way to constrain the actions of the adversary — we want them to be able to produce any failure that could arise naturally, but if we give them too much power then they can trivially cause a catastrophic failure.\n* Change the system’s inputs so they are easier for the adversary to fake: see the “distorting observations” and “abstracting observations” sections of [this post](/red-teams-b5b6de33dc76).\n* Make the adversary more powerful, without changing their task, by applying amplification.\n\nAdversarial training faces other difficulties as well. For example, it might take a long time for adversarial training to converge, or training might never converge due to non-convexity. I think these are significantly less likely to be deal-breakers, but they are certainly important research problems if we actually want to apply adversarial training.\n\nVerification\n------------\n\n*(Examples:* [*Certified Defenses against Adversarial Examples*](https://arxiv.org/abs/1801.09344)*,* [*Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks*](https://arxiv.org/pdf/1702.01135.pdf))\n\nIn verification the goal is to find a certificate that there is no input on which a system behaves badly.\n\nVerification has (at least) two problems:\n\n* **Complexity**. 
It’s really, really hard to verify properties of a computationally interesting model.\n* **Specification**. It’s unclear what we should be trying to verify.\n\nHandling complexity is definitely challenging. But if we are free to train the model in order to facilitate verification, and if we are only interested in certifying some “easy” property that the model satisfies with slack, then it’s not clearly doomed.\n\nPrima facie, specification looks more like an unfixable deal-breaker. In the rest of this section I’ll give three possible approaches for finding specifications. I think none of these is satisfactory on its own, but they leave me optimistic that verification can be useful despite specification difficulty.\n\nMy current view is that specification and complexity are both potentially surmountable difficulties, but that it’s likely their resolutions will result in partial solutions that need to be combined with other approaches.\n\n**Use a slow model of the overseer as a specification**. If our overseer is a physical process, to actually get a specification we’d need to learn a model of the overseer. So we’ve just shifted the problem to the model of the overseer.\n\nFortunately, this move does potentially make the problem one step better. Because we don’t need to use the overseer model at runtime, we can afford to use a much slower model. So verification could distill a very slow reliable model into a fast reliable model.\n\n**Amplification**. If our overseer is produced by amplifying some process H, and we have a trusted model of H, then we can obtain a trusted model of the overseer. So if we were able to form a trusted model of the first step of amplification, then we could iteratively use verification to construct trusted models at each subsequent step.\n\nWe are still left with the problem at the first step of amplification. But at this step we might be able to restrict attention to a small range of questions for which worst-case guarantees are more feasible; we may be able to train weaker models for which worst-case guarantees are more feasible; or we might be able to eliminate the human altogether and begin amplification from some explicit simple core of reasoning. Any of these seems plausible.\n\n**“Verification” without specification.** I find it helpful to think of verification from the primal/dual perspective, which also makes the connection to adversarial training clearer.\n\nIn adversarial training, the adversary wants to find an input on which the system behaves poorly. The dual to the adversary is an explicit list of all inputs, showing that the system performs well on each of them. The dual is exponentially large, and (relatedly) the adversary’s problem is exponentially hard.\n\nWe can view verification in a given proof system as making the adversary’s job easier, by expanding the set of allowable “attacks.” If we do this carefully, we can make the dual certificate much smaller and make the adversary’s task much more tractable.\n\nFor example, if our proofs (dual solutions) are in first order logic, the adversary’s task is finding any model of the axioms in which there exists an attack. (This is only different in the infinite case.) If our proofs are sum of squares proofs, the adversary’s task is finding a [pseudodistribution](https://www.dsteurer.org/talk/cargese.pdf) over attacks. For certain linear programming proofs, the adversary’s goal is finding an attack on a particular extrapolation of the model. 
And so on.\n\nOn this perspective, we don’t necessarily need an actual specification. Instead, we can take some external catastrophe-checker, relax it to a catastrophe checker over the adversary’s relaxed action space, and then try to implement an adversary against that catastrophe checker. This would change the meaning of the dual, which would now have a subtle statistical interpretation rather than being a traditional certificate.\n\nTransparency\n------------\n\n(*Examples:* [*Feature Visualization*](https://distill.pub/2017/feature-visualization/)*,* [*Deep Inside Convolutional Networks*](https://arxiv.org/pdf/1312.6034.pdf)*)*\n\nWhile training a model, we are free to look inside and “see what it’s thinking.” If we could actually understand those thoughts, we might be able to more easily predict under what conditions it would fail malignantly.\n\n**A (weird) example**: suppose that human evolution was part of an alien project to train a mathematician, and some humans are extracted from our simulation and trained to solve alien math problems. If aliens are only able to test the humans’ behavior they might conclude that they always faithfully solve whatever problem they are posed — especially after a few generations of selection removed all the humans who didn’t do so. But if the aliens were able to look inside the humans’ heads they might discover that this behavior is extremely contingent — perhaps humans are only solving problems because they believe they will die otherwise, and the humans would behave very differently as soon as they saw an opportunity to escape.\n\nThat example is exotic and somewhat implausible, but I think that the same basic dynamic is likely to apply in much messier practical situations. If a model would fail malignantly under some conditions, I think there is some sense in which the model must be testing those conditions during training.\n\n**Malign vs. benign failures:** I think the prospects for using transparency to detect malign failures in models trained by gradient descent are *much* better than the prospects for detecting benign failures, or for detecting failures in arbitrary models.\n\nThe key point is that a malign failure requires leveraging the intelligence of the model to do something actively bad. If our model is trained by gradient descent, its behavior can only be intelligent when it is exercised on the training distribution — if part of the model never (or very rarely) does anything on the training distribution, then that part of the model can’t be intelligent. So in some sense a malign failure mode needs to use a code path that gets run on the training distribution, just under different conditions that cause it to behave badly.\n\nThat means that we don’t necessarily need to analyze our model in the abstract, we can look at its behavior on actual training data. This looks much more likely to be tractable. (Though this entire discussion is extremely subtle, and without being more precise it’s very easy for us to go wrong.)\n\n**Amplification**: Transparency will be hard if we ultimately want to train models much smarter than we are, but deeply understanding what a model is thinking probably requires being much smarter than it. I think that problem can potentially be overcome by using amplification to stay ahead of the models we are training. 
This means that current work on transparency, which considers models doing tasks that are within the abilities of the humans who are trying to understand them, could potentially remain relevant even as AI improves significantly.\n\n**What do you do with transparency?** Merely understanding that a model might behave catastrophically could be useful, but it would be much nicer to actually fix the problem. Adversarial training gives a natural mechanism: once we understand a failure we can synthesize appropriate data and then train on that data.\n\nThis approach puts significantly more stress on our transparency techniques. Even if we were initially able to use transparency to see how our model might fail, after we perform many generations of selection we might weed out exactly the comprehensible failures and leave the incomprehensible ones. You would only want to apply this technique if you had a great deal of faith in your methods; if you were feeling at all shaky about your ability to achieve worst-case guarantees, and transparency techniques let you see one potential catastrophic failure, it would be better to consider that a near-miss and seriously rework your project rather than plowing on.\n\nConclusion\n==========\n\nMaking ML systems work in the worst case is hard, even if we are only concerned with malign failures and have access to an overseer who can identify them. If we can’t solve this problem, I think it seriously calls into question the feasibility of aligned ML.\n\nFortunately there are at least a few plausible angles of attack on this problem. All of these approaches feel very difficult, but I don’t think we’ve run into convincing deal-breakers. I also think these approaches are complementary, which makes it feel even more plausible that they (or their descendants) will eventually be successful. I think that exploring these angles of attack, and identifying new approaches, should be a priority for researchers interested in alignment.", "url": "https://ai-alignment.com/techniques-for-optimizing-worst-case-performance-39eafec74b99", "title": "Techniques for optimizing worst-case performance", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-02-01T23:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "194a4fbb27348263c266d361d91428f4"} {"text": "Most AI research focuses on reproducing human abilities: to learn, infer, and reason; to perceive, plan, and predict. There is a complementary problem which (understandably) receives much less attention: if you *had* these abilities, what would you do with them?\n\n**The steering problem:** Using black-box access to human-level cognitive abilities, can we write a program that is as useful as a well-motivated human with those abilities?\n\nThis document explains what the steering problem is and why I think it’s worth spending time on.\n\n1. Introduction\n===============\n\nA capable, well-motivated human can be extremely useful: they can work without oversight, produce results that need not be double-checked, and work towards goals that aren’t precisely defined. These capabilities are critical in domains where decisions cannot be easily supervised, whether because they are too fast, too complex, or too numerous.\n\nIn some sense “be as useful as possible” is just another task at which a machine might reach human-level performance. 
But it is different from the concrete capabilities normally considered in AI research.\n\nWe can say clearly what it means to “predict well,” “plan well,” or “reason well.” If we ignored computational limits, machines could achieve any of these goals today. And before the existing vision of AI is realized, we must *necessarily* achieve each of these goals.\n\nFor now, “be as useful as possible” is in a different category. We can’t say exactly what it means. We could not do it no matter how fast our computers could compute. And even if we resolved the most salient challenges in AI, we could remain in the dark about this one.\n\nConsider a capable AI tasked with running an academic conference. How should it use its capabilities to make decisions?\n\n* We could try to specify exactly what makes a conference good or bad. But our requirements are complex and varied, and so specifying them exactly seems time-consuming or impossible.\n* We could build an AI that imitates successful conference organizers. But this approach can never do any better than the humans we are imitating. Realistically, it won’t even match human performance unless we somehow communicate what characteristics are important and why.\n* We could ask an AI to maximize our satisfaction with the conference. But we’ll get what we measure. An extensive evaluation would greatly increase the cost of the conference, while a superficial evaluation would leave us with a conference optimized for superficial metrics. Everyday experience with humans shows how hard delegation can be, and how much easier it is to assign a task to someone who actually cares about the outcome.\n\nOf course there is already pressure to write *useful* programs in addition to smart programs, and some AI research studies how to efficiently and robustly communicate desired behaviors. For now, available solutions apply only in limited domains or to weak agents. The steering problem is to close this gap.\n\nMotivation\n----------\n\nA system which “merely” predicted well would be extraordinarily useful. Why does it matter whether we know how to make a system which is “as useful as possible”?\n\nOur machines will probably do *some* things very effectively. We know what it means to “act well” in the service of a given goal. For example, using human cognitive abilities as a black box, we could probably design autonomous corporations which very effectively maximized growth. If the black box was cheaper than the real thing, such autonomous corporations could displace their conventional competitors.\n\nIf machines can do everything equally well, then this would be great news. If not, society’s direction may be profoundly influenced by what can and cannot be done easily. For example, if we can only maximize what we can precisely define, we may inadvertently end up with a world filled with machines trying their hardest to build bigger factories and better widgets, uninterested in anything we consider intrinsically valuable.\n\nAll technologies are more useful for some tasks than others, but machine intelligence might be particularly problematic because it can entrench itself. For example, a rational profit-maximizing corporation might distribute itself throughout the world, pay people to help protect it, make well-crafted moral appeals for equal treatment, or campaign to change policy. 
Although such corporations could bring large benefits in the short term, in the long run they may be difficult or impossible to uproot, even once they serve no one’s interests.\n\n**Why now?**\n------------\n\nReproducing human abilities gets a lot of deserved attention. Figuring out exactly what you’d do once you succeed feels like planning the celebration before the victory: it might be interesting, but why can’t it wait?\n\n1. **Maybe it’s hard**. Probably the steering problem is much easier than the AI problem, but it might turn out to be surprisingly difficult. If it *is* difficult, then learning that earlier will help us think more clearly about AI, and give us a head start on addressing the steering problem.\n2. **It may help us understand AI.** The difficulty of saying exactly what you want is a basic challenge, and the steering problem is a natural perspective on this challenge. A little bit of research on natural theoretical problems is often worthwhile, even when the direct applications are limited or unclear. In section 4 we discuss possible approaches to the steering problem, many of which are new perspectives on important problems.\n3. **It should be developed alongside AI.** The steering problem is a long-term goal in the same way that understanding human-level prediction is a long-term goal. Just as we do theoretical research on prediction before that research is commercially relevant, it may be sensible to do theoretical research on steering before it is commercially relevant. Ideally, our ability to build useful systems will grow in parallel with our ability to build capable systems.\n4. **Nine women can’t make a baby in one month.** We could try to save resources by postponing work on the steering problem until it seems important. At this point it will be easier to work on the steering problem, and if the steering problem turns out to be unimportant then we can avoid thinking about it altogether. But at large scales it becomes hard to speed up progress by increasing the number of researchers. Fewer people working for longer may ultimately be more efficient even if earlier researchers are at a disadvantage. In general, scaling up fields rapidly is difficult.\n5. **AI progress may be surprising**. We probably won’t reproduce human abilities in the next few decades, and we probably won’t do it without ample advance notice. That said, AI is too young, and our understanding too shaky, to make confident predictions. A mere 15 years is 20% of the history of modern computing. If important human-level capabilities are developed surprisingly early or rapidly, then it would be worthwhile to better understand the implications in advance.\n6. **The field is sparse**. Because the steering problem and similar questions have received so little attention, individual researchers are likely to make rapid headway. There are perhaps three to four orders of magnitude between basic research on AI and research directly relevant to the steering problem, lowering the bar for arguments 1–5.\n\nIn section 3 we discuss some other reasons not to work on the steering problem: Is work done now likely to be relevant? Is there any concrete work to do now? Should we wait until we can do experiments? Are there adequate incentives to resolve this problem already?\n\n2. 
Defining the problem precisely\n=================================\n\nRecall our problem statement:\n\n**The steering problem:** Using black-box access to human-level cognitive abilities, can we write a program that is as useful as a well-motivated human with those abilities?\n\nWe’ll adopt a particular human, Hugh, as our “well-motivated human:” we’ll assume that we have black-box access to Hugh-level cognitive abilities, and we’ll try to write a program which is as useful as Hugh.\n\nAbilities\n---------\n\nIn reality, AI research yields complicated sets of related abilities, with rich internal structure and no simple performance guarantees. But in order to do concrete work in advance, we will model abilities as black boxes with well-defined contracts.\n\nWe’re particularly interested in tasks which are “AI complete” in the sense that human-level performance on that task could be used as a black box to achieve human-level performance on a very wide range of tasks. For now, we’ll further focus on domains where performance can be unambiguously defined.\n\nSome examples:\n\n* **Boolean question-answering**. A question-answerer is given a statement and outputs a probability. A question-answerer is Hugh-level if it never makes judgments predictably worse than Hugh’s. We can consider question-answerers in a variety of languages, ranging from natural language (“Will a third party win the US presidency in 2016?”) to precise algorithmic specifications (“Will this program output 1?”).\n* **Online learning**. A function learner is given a sequence of labelled examples (x, y) and predicts the label of a new data point, x’. A function learner is Hugh-level if, after training on any sequence of data (xᵢ, yᵢ), the learner’s guess for the label of the next point is—on average—at least as good as Hugh’s.\n* **Embodied reinforcement learning**. A reinforcement learner interacts with an environment and receives periodic rewards, with the goal of maximizing the discounted sum of its rewards. A reinforcement learner is Hugh-level if, following any sequence of observations, it achieves an *expected* performance as good as Hugh’s in the subsequent rounds. The expectation is taken using our subjective distribution over the physical situation of an agent who has made those observations.\n\nWhen talking about Hugh’s predictions, judgments, or decisions, we imagine that Hugh has access to a reasonably powerful computer, which he can use to process or display data. For example, if Hugh is given the binary data from a camera, he can render it on a screen in order to make predictions about it.\n\nWe can also consider a particularly degenerate ability:\n\n* **Unlimited computation**. A box that can run any algorithm in a single time step is—in some sense—Hugh level at every precisely stated task.\n\nAlthough unlimited computation seems exceptionally powerful, it’s not immediately clear how to solve the steering problem even using such an extreme ability.\n\nMeasuring usefulness\n--------------------\n\nWhat does it mean for a program to be “as useful” as Hugh?\n\nWe’ll start by defining “as useful for X as Hugh,” and then we will informally say that a program is “as useful” as Hugh if it’s as useful for the tasks we care most about.\n\nConsider **H,** a black box which simulates Hugh or perhaps consults a version of Hugh who is working remotely. We’ll suppose that running **H** takes the same amount of time as consulting our Hugh-level black boxes. 
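For concreteness, one way to picture these black boxes (including the benchmark **H** just introduced) is as implementations of a few simple interfaces. This is a hypothetical sketch, not part of the formal definitions:

```python
from typing import Hashable, Protocol, Sequence, Tuple

class QuestionAnswerer(Protocol):
    def answer(self, statement: str) -> float:
        """Return a probability that the statement is true."""

class OnlineLearner(Protocol):
    def predict(self, examples: Sequence[Tuple[Hashable, Hashable]], x: Hashable) -> Hashable:
        """Given labelled examples (x_i, y_i), guess the label of x."""

class ReinforcementLearner(Protocol):
    def act(self, observations: Sequence[bytes], rewards: Sequence[float]) -> bytes:
        """Choose the next action given the interaction history so far."""

# A Hugh-level black box and the benchmark H would both be objects implementing
# (some of) these interfaces; the steering problem asks how to turn such objects
# into a program that is as useful as H.
```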
A project to accomplish X could potentially use as many copies of **H** as it can afford to run.\n\nA program **P** is as useful as Hugh for X if, for every project using **H** to accomplish X, we can efficiently transform it into a new project which uses **P** to accomplish X. The new project shouldn’t be much more expensive—it shouldn’t take much longer, use much more computation or many additional resources, involve much more human labor, or have significant additional side-effects.\n\nWell-motivated\n--------------\n\nWhat does it mean for Hugh to be well-motivated?\n\nThe easiest approach is universal quantification: for *any* human Hugh, if we run our program using Hugh-level black boxes, it should be as useful as Hugh.\n\nAlternatively, we can leverage our intuitive sense of what it means for someone to be well-motivated to do X, and define “well-motivated” to mean “motivated to help the user’s project succeed.”\n\nScaling up\n----------\n\nIf we are given better black boxes, we should make a better program. This is captured by the requirement that our program should be as useful as Hugh, no matter how capable Hugh is (as long as the black boxes are equally capable).\n\nIdeally, our solutions should scale far past human-level abilities. This is not a theoretical concern—in many domains computers already have significantly superhuman abilities. This requirement is harder to make precise, because we can no longer talk about the “human benchmark.” But in general, we would like to build systems which are (1) working towards their owner’s interests, and (2) nearly as effective as the best goal-directed systems that can be built using the available abilities. The ideal solution to the steering problem will have these characteristics in general, even when the black-box abilities are radically superhuman.\n\nScaling down\n------------\n\n“Human-level abilities” could refer to many different things, including:\n\n1. Human-level performance on high-level tasks.\n2. The level of functionality embodied in the human brain. Human-level perception, intuition, motor control, subsymbolic reasoning, and so on.\n\nIn general, as we shift from 1 towards 2 the steering problem becomes more difficult. It may be difficult to produce simple or predictable high-level functions using low-level abilities.\n\nFor example, humans pursue a complicated set of goals that would be very difficult to determine by looking at the human brain (and some of which are quite distant from the evolutionary pressures that produced us). When given a task that doesn’t serve these goals, a human may simply decide to pursue their own agenda. If we build human-like abilities out of human-like low-level functions, we may find ourselves with similarly unpredictable high-level functions.\n\nIt is harder to formalize or understand low-level abilities than high-level functions. One approach is to consider very short time periods. For example, we could consider black boxes which learn functions as well as a human who spends only 500 milliseconds per example. Unfortunately, at this level it is harder to encapsulate human abilities in a small number of simple functions, and we must pay more attention to the way in which these abilities can be connected.\n\nIf the steering problem were satisfactorily resolved, “scaling down” to these lower-level abilities would be a natural but challenging next step.\n\n**3. 
Objections**\n=================\n\n**The simple, abstract capabilities we can think of now are much harder to use productively than the rich and messy AI capabilities we will actually develop.**\n\nFor now we can’t clearly state *anything* a machine could do that would make the steering problem easy (short of exactly reproducing human behavior). Filling in this gap would be an appropriate response to the steering problem.\n\nPerhaps we don’t yet know exactly what we want machines to do, but figuring it out is inextricably bound up with getting them to do it. If so, it might be easier to say what we want once we know how to do it. But by the same token, it might be easier to figure out how to do it once we can better say what we want.\n\nIn either case, it seems likely that the steering problem fills in its own niche: either it is a distinct problem that won’t be solved automatically en route to AI; or else it is a different perspective on the same underlying difficulties, and can be productively explored in parallel with other AI research.\n\nBecause the steering problem is non-trivial for simple, precisely stated abilities, it may well be non-trivial for the abilities we actually obtain. Certainly we can imagine developing a human-level predictor without learning too much about how to build useful systems. So it seems unreasonable to be confident that the steering problem will turn out to be a non-problem.\n\n**The simple, abstract abilities we can think of now are much *easier* to work with than the human-level abilities we will actually develop, or at least much different. Building a robust system is easier when all of the pieces have clean, reliable functions; in practice things won’t be so pretty.**\n\nIt would be a larger leap to continue “…and the ideas required to work with simple, reliable components will have no relevance to their more realistic counterparts.” *Whatever* abilities we end up with, many solutions to the steering problem will turn out to be inapplicable, and they will all be incomplete. But we can still find useful general techniques by developing ideas that are helpful for many versions of the steering problem; and we can identify important technical challenges by understanding what makes each version easy or hard.\n\nWe can gradually scale up the difficulty of the steering problem by demanding more robust solutions, making weaker guarantees on our black boxes, or working with less manageable abilities. Our choices can be informed by ongoing progress in AI, focusing on those capabilities we think are most realistic and the forms of robustness we consider most likely to be necessary.\n\n**Why is autonomy necessary?**\n\nOne apparent solution to the steering problem is to retain human decision-making, with AI systems acting as assistants and tools to help humans accomplish their goals.\n\nThis is an appropriate solution while AI systems remain relatively limited. It has serious problems when scaling:\n\n* If large numbers of machines make large numbers of decisions, with human wages orders of magnitude larger than the operating costs of machines, then the cost of human oversight becomes prohibitive. Imagine a million humans overseeing a billion or trillion human-level machines.\n* If machines make very rapid decisions, human oversight can introduce unacceptable latency. Imagine human engineers overseeing the handling of individual Google searches.\n* If machines work on complex problems, human overseers may not be able to understand their reasoning process. 
Imagine a physics undergraduate overseeing a team of world-class physicists.\n\nAll of these problems become particularly severe when we consider *thinking about thinking*. That is, machines must make numerous, rapid decisions about how to process information, what to investigate or compute, how to organize their resources, and so on. If we want to use machine intelligence to make those decisions better, that will have to be done without substantial human oversight.\n\nIt may be possible to maintain human involvement in all important automation, but doing so will eventually become a serious bottleneck. Tasks that can be performed without human oversight will become increasingly efficient, and without explicit coordination (and a willingness to make short-term sacrifices) it seems likely that more autonomous operations will outcompete their less autonomous counterparts.\n\n**Is there any concrete work to do on the steering problem?**\n\nIn the next section I’ll describe a handful of existing research directions that bear on the steering problem. I think the steering problem suggests an interesting and unusual perspective on each of these domains; I don’t know whether it will prove to be a fruitful perspective, but if it fails it won’t be because of a lack of first steps.\n\nI have done some work motivated explicitly by the steering problem: [a formalization](http://www.google.com/url?q=http%3A%2F%2Fordinaryideas.wordpress.com%2F2012%2F04%2F21%2Findirect-normativity-write-up%2F&sa=D&sntz=1&usg=AFQjCNGtdsc2V3j_mhZbr8uvywxV0R2eJA) of “judgment upon reflection,” which can be expressed entirely algorithmically based on (experimentally controlled) observations of human behavior, [an alternative](https://medium.com/@paulfchristiano/model-free-decisions-6e6609f5d99e) to goal-directed behavior which may enjoy similar productivity benefits while being more robust, and [some](http://ordinaryideas.wordpress.com/2014/07/18/adversarial-collaboration/) [simple](https://medium.com/@paulfchristiano/delegating-to-a-mixed-crowd-dda2b8e22cd8) [protocols](https://medium.com/@paulfchristiano/of-arguments-and-wagers-ee16a0e84cf7) for delegating to untrusted agents.\n\n**4. Approaches, ingredients, and related work**\n================================================\n\nRational agency\n---------------\n\nOne natural approach to the steering problem is to build goal-directed agents who want to be useful or who share their creators’ goals.\n\nThere are two main difficulties:\n\n* Specifying goals in an appropriate language. What does it mean to “be useful”? How can we define what we want?\n* Building agents that reliably pursue goals specified in that language.\n\nDeploying a goal-directed agent is somewhat worrying: an agent with an almost-but-not-quite-correct goal will be working at cross-purposes to its creator, and will be motivated (for example) to avoid revealing that its goal is not quite correct. These concerns motivate a third line of research:\n\n* Designing goals or goal-directed agents which “fail gracefully,” i.e. which don’t behave adversarially or resist correction, even if their goals are not perfectly aligned with their creators’.\n\nSeveral lines of existing research bear on each of these questions.\n\n**Specifying goals**\n\nRather than directly specifying what outcomes are good, it seems more promising to specify how to learn what outcomes are good. 
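As a toy illustration of what “learning what outcomes are good” can look like, here is a minimal Bradley-Terry-style sketch that fits a reward model to pairwise comparisons over synthetic data, in the spirit of the preference-learning and inverse reinforcement learning work cited below (all data and numbers here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Outcomes are feature vectors; the "true" utility weights are hidden from us.
true_w = np.array([1.0, -2.0, 0.5])
outcomes = rng.normal(size=(200, 3))

# Instead of specifying utilities, we collect noisy pairwise judgments:
# for a pair (i, j), a judge says whether outcome i is better than outcome j.
pairs = rng.integers(0, len(outcomes), size=(500, 2))
diffs = outcomes[pairs[:, 0]] - outcomes[pairs[:, 1]]
p_better = 1 / (1 + np.exp(-diffs @ true_w))
judgments = (rng.random(len(pairs)) < p_better).astype(float)

# Fit weights by logistic regression on the comparisons (a Bradley-Terry model).
w = np.zeros(3)
for _ in range(500):
    preds = 1 / (1 + np.exp(-diffs @ w))
    w += 0.5 * (diffs.T @ (judgments - preds)) / len(pairs)

# w now scores outcomes the judge never explicitly rated, a learned stand-in for
# "which outcomes are good," with all the generalization caveats discussed below.
```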
This is a topic of existing research, although the focus is typically on pragmatic considerations rather than on the more general theoretical problem.\n\n* Inverse reinforcement learning (for example see [Russell](http://www.cs.berkeley.edu/~russell/papers/colt98-uncertainty.pdf), [Ng and Russell](http://www-cs.stanford.edu/people/ang/papers/icml00-irl.pdf), or [Ziebart et al](http://www.aaai.org/Papers/AAAI/2008/AAAI08-227.pdf).) and goal inference (for example see [Baker, Tenenbaum, and Saxe](http://web.mit.edu/clbaker/www/papers/cogsci2007.pdf) or [Verma and Rao](http://homes.cs.washington.edu/~rao/nips05imit.pdf)) attempt to infer underlying preferences by observing behavior, despite a complex relationship between actions and outcomes. To apply to the steering problem, the techniques would need to be generalized to learners who are much better informed and more capable than the human models they are learning from, and who have much noisier information about the human models and their environment. This requires understanding the limitations and errors of the human models, and generalizing the human’s goals robustly so that they remain acceptable even when they are pursued in an unfamiliar way.\n* Preference learning attempts to infer underlying preferences from observed decisions, despite noisy information and potentially irrational behavior (for a small sample, see [Fürnkranz and Hüllermeier](http://www.springer.com/computer/ai/book/978-3-642-14124-9), [Fürnkranz and Hüllermeier](http://pdf.aminer.org/000/172/223/pairwise_preference_learning_and_ranking.pdf), or [Gervasio et al](http://pdf.aminer.org/000/172/223/pairwise_preference_learning_and_ranking.pdf).). Existing work considers small domains with explicitly represented preferences, and there seem to be serious challenges when scaling to preferences over complete states-of-affairs. As a result, preference learning seems less directly applicable than inverse reinforcement learning or goal inference.\n* Some more speculative and philosophical research (for example, see [my post](http://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/) on the subject, or [this](https://intelligence.org/files/CEV.pdf) more discursive article by Yudkowsky) has explored how to formalize our preferences in a general and precise way. The focus is on determining what processes of deliberation correctly capture our informed judgment and how we might formalize those processes. The primary challenge, on this perspective, is defining our preferences about outcomes that are difficult for us to describe or reason about.\n\nWe could also investigate goals of the form “maximize user satisfaction,” but it seems hard to find a satisfactory definition along these lines.\n\n**Pursuing goals**\n\nEven with desirable goals in hand, it may be challenging to design systems that reliably pursue those goals. There are questions about how goal-directed behavior relates to reasoning and to the behavior of subsystems (does the system pursue the goals it appears to?), about the theoretical basis for optimal rational behavior (does it pursue them well? 
), and about how an agent should behave in light of uncertainty about what outcomes are desirable.\n\n* Some existing work in reinforcement learning (for a summary, see [Sutton and Barto](http://webdocs.cs.ualberta.ca/~sutton/book/the-book.html)) and probabilistic inference (for example, see [Attas](http://research.goldenmetallic.com/aistats03.pdf)) shows how goal-directed behavior can be implemented using other faculties.\n* Some work in philosophy studies the formal basis for rational agency, from decision theory to epistemology. This work clarifies what it means to rationally pursue a particular goal. A lack of understanding may result in systems that appear to pursue one goal but actually pursue another, or that generalize to novel environments in undesirable ways.\n* Some work on AI safety (see [Dewey](http://www.danieldewey.net/learning-what-to-value.pdf) or [Bostrom](http://books.google.com/books?hl=en&lr=&id=C-_8AwAAQBAJ&oi=fnd&pg=PP1&dq=superintelligence+nick+bostrom&ots=UES5TvoQLp&sig=k55cFr_T1nLzunEALbvgI50KoX8#v=onepage&q=superintelligence%20nick%20bostrom&f=false), ch. 12) explores frameworks for pursuing uncertain goals, in the interest of understanding how and to what extent “learning what is good” is different from “learning what is true.”\n* Some work in multi-objective optimization considers settings with a large space of possible objectives, and seeks policies which are appropriate in light of uncertainty about which objective we really care about.\n\n**Failing gracefully**\n\nEven a “near miss” when defining a goal-directed agent might have undesirable consequences. In addition to minimizing the probability of failure, it would be nice to minimize the costs of a near miss—-and to allow us to use the kind of “trial and error” approach that is more typical of software development.\n\n* Researchers interested in AI safety have introduced and discussed the notion of “corrigible” agents, who cooperate with “corrective” interventions by their programmers—even if we cannot implement goals consistent with that cooperation.\n* Some researchers have worked to make AI reasoning understandable (For example, see [Bullinaria](https://www.cs.bham.ac.uk/~jxb/PUBS/AIRTNN.pdf) or [Craven](http://research.cs.wisc.edu/machine-learning/shavlik-group/craven.thesis.pdf)). Understandability can reduce the scope for malignant failure modes, since engineers might be able to directly monitor the motivation for decisions (and in particular to distinguish between honest and deceptive behavior).\n\nDelegation\n----------\n\nEvery day humans work productively on projects that they don’t intrinsically care about, motivated by a desire for money, recognition, or satisfaction. We could imagine an AI doing the same thing. For example, a reinforcement learner might do useful work in order to earn a reward, without having any intrinsic concern for the work being done.\n\nUnfortunately, such delegation runs into some problems. The problems appear even when delegating to humans, but they get considerably worse as machines become more powerful and more numerous:\n\n* Naively, the agent will only do good work when the principal can verify the quality of that work. The cost of this oversight can be non-trivial.\n* Unless the oversight is extremely thorough or the problem particularly straightforward, there will be some gap between what the principal wants and what the principal evaluates. 
The reward-driven agent will maximize whatever the principal evaluates.\n* If agents are granted much autonomy, they may find other ways to get rewards. This depends in detail on how the “reward” is implemented, and what the agent cares about.\n\nThere are many tools available to address these problems: breaking a system up into pieces with differing values and limited autonomy, performing randomized audits, automating audits and auditing auditors, relying on agents with short time horizons or extreme risk aversion, and so on. So far there are no compelling proposals that put the pieces together.\n\nA full solution is likely to rely on agents with different values, combined with an appropriate system of checks and balances. But if the agents coordinate to pursue their collective values, they could fatally undermine such a system. We can try to minimize the risk by making the agents’ interactions essentially zero sum, or employing other agents to oversee interactions and report signs of collusion. But the possibility of collusion remains a serious obstacle.\n\nEven if delegation cannot fully resolve the steering problem, a weak solution might be useful as part of a bootstrapping protocol (see the section on bootstrapping below).\n\nThis problem is similar in spirit to mechanism design, but the details (and apparently the required tools) are quite different. Nevertheless, some ideas from mechanism design or the economics of delegation may turn out to be applicable. Conversely, some approaches to the steering problem might be of interest to economists in these areas.\n\nShared language and concepts\n----------------------------\n\nWhen delegating to a helpful human, we would say what we want done in natural language, relying on a rich network of shared concepts that can be used to specify goals or desired actions. Writing programs with the same capability would greatly simplify or perhaps solve the steering problem.\n\nIn some sense, human-level language understanding is already encapsulated in human-level cognitive abilities. For example, if we were pursuing the delegation approach in the last section, we could describe tasks in natural language. The agents would infer what we expect them to do and under what conditions we will give them rewards, and they would behave appropriately in light of that knowledge. But this “language understanding” only appears in the agent’s goal-directed behavior.\n\nTo address the steering problem, we would like something stronger. We would like to build agents that share human concepts, such that we can write code that operates in terms of those concepts: specifying a goal in terms of higher-level concepts, or executing instructions defined in terms of these concepts. These tasks don’t seem to be possible using only goal-directed language understanding.\n\nUnderstanding concept learning and the relationship to language is a fundamental problem in cognitive science and AI. Work in these areas thus bears directly on the steering problem.\n\nFor now, we cannot say formally what it would mean to have a program that reliably acquired “the same” concepts as humans, so that instructions expressed in those concepts would have the intended meaning. Even given unlimited computation, it’s not clear how we would solve the steering problem using concept learning. 
This is not at all to say it is not possible, merely that it has not yet been done.\n\nThere may be a tight connection between the theoretical question—what would it mean to learn human concepts, and how could you do it with any amount of computation—and the pragmatic computational issues. If there is a connection, then the theoretical question might be easier once the pragmatic issues are better understood. But conversely, the pragmatic question might also be easier once the theoretical issues are better understood.\n\nNon-consequentialist intelligence\n---------------------------------\n\nIf describing our real goals is too demanding, and describing a crude approximation is hazardous, then we might try to build systems without explicitly defined goals. This makes the safety problem much easier, but probably makes it harder to build systems which are sufficiently powerful at all.\n\n**Non-agents**\n\nOne idea is to focus on systems with some narrower function: answering questions, proposing plans, executing a narrow task, or so on. On their own these systems might not be terribly useful, but the hope is that as tools they can be nearly as useful as a goal-directed assistant. Moreover, because these systems don’t need to be aligned with human goals, they may be easier to construct.\n\nFor example, research in decision support and multi-objective optimization aims to find good solutions to an optimization problem (or a planning problem) by interacting with a decision-maker rather than giving a scalar representation of their preferences (for example, see [Fonseca and Fleming](http://pdf.aminer.org/000/310/607/genetic_algorithms_for_multiobjective_optimization_formulationdiscussion_and_generalization.pdf) or [Deb](http://link.springer.com/chapter/10.1007/978-1-4614-6940-7_15)).\n\nThese systems can certainly be useful (and so may be appropriate as part of a bootstrapping strategy or as a component in a more complex solution; see the next sections). But most of them inherently require significant human oversight, and do not seem suitable as solutions to the full steering problem: for projects involving large numbers of agents with a small number of human overseers, the involvement of human oversight in all substantive decisions is probably an unacceptable overhead.\n\nThis issue applies at every level simultaneously, and seems particularly serious when we consider systems thinking about how to think. For example, in order to produce a plan, a simulated human or team would first plan how to plan. They would seek out relevant information, talk to people with necessary expertise, focus their attention on the highest-priority questions, allocate available computational resources, and so on. If human oversight is necessary for every step of this process, the resulting system is not even as useful as a black box that outputs good plans.\n\nOf course, even if humans rely on goal-directed behavior to accomplish these tasks, this doesn’t imply that machines must as well. But that would require a concrete alternative approach that still captures the benefits of goal-directed reasoning without substantial oversight.\n\n**Non-consequentialist agents**\n\nInstead we might build agents with no explicitly defined goals which can nevertheless accomplish the same tasks as a helpful human.\n\nPerhaps most straightforward is an agent that asks “What would a helpful human do?” and then does that.
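As a minimal sketch (with `human_model` as a hypothetical learned predictor of the helpful human's choices, not anything defined in this post), such an agent might look like:

```python
# Minimal sketch of an agent that asks "what would a helpful human do?" and then does that.
# `human_model` is a hypothetical learned predictor: human_model(observation, action)
# returns the estimated probability that the helpful human would choose `action`.

def imitation_agent(observation, candidate_actions, human_model):
    """Take whichever candidate action the human template seems most likely to take."""
    return max(candidate_actions, key=lambda action: human_model(observation, action))
```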
If we have a particular helpful human available as a template, we could build a predictive model of that human template’s decisions, and use this model to guide our agent’s decisions. With a good enough model, the result would be precisely as useful as the human template.\n\nThis proposal has a number of potential problems. First, it does not scale to the available abilities—-it is never more useful than a simulation of the human template. So it is not a general solution to the steering problem, unless we have access to arbitrarily capable human templates. Second, if our predictions are imperfect, the behavior may be significantly worse than the helpful human template. Third, making accurate predictions about a human is itself a superhuman skill, and so asking Alice to do what she thinks Bob would do can result in behavior worse than Alice would produce on her own, no matter how smart Bob is.\n\nWe can address some of these problems by instead asking “What choice would the human overseer most approve of?” and then taking the most-approved-of option. And a bootstrapping procedure can address some limitations of using a human template. I explore these ideas [here](https://medium.com/@paulfchristiano/model-free-decisions-6e6609f5d99e).\n\nResearch on inverse reinforcement learning or goal inference could also be used to learn imitative behavior which is sensitive to human goals, rather than to build systems that infer a human’s long-term goals.\n\nThere may be many other alternatives to goal-directed behavior. Any proposal would probably be combined with some ideas under “rational agency” and “shared language” above.\n\nBootstrapping\n-------------\n\nWe might try to deal with two cases separately:\n\n* We are dealing with machines *much* smarter, faster, or more numerous than humans.\n* We aren’t.\n\nIn the first case, we could try to delegate the steering problem to the much more capable machines. In the second case, we might be able to make do with a weak solution to the steering problem which wouldn’t scale to more challenging cases.\n\nIt’s hard to know how this strategy would work out, and to a greater extent than the other approaches we would expect to play it by ear. But by thinking about some of the difficulties in advance, we can get a better sense for how hard the steering problem will actually be, and whether a strong solution is necessary.\n\nRoughly, there are two pieces to a bootstrapping protocol:\n\n1. We need to solve a weak version of the steering problem. We don’t have to make our machine as useful as a human, but we do need to get *some* useful work, beyond what we could have done ourselves. Ideally we’ll come as close as possible to a useful human.\n2. We need to use the available machines to solve a stronger version of the steering problem. This is repeated until we’ve solved the full problem.\n\nBoth of these steps would rely on the kinds of techniques we have discussed throughout this section. The difference is that the humans, as well as the machine intelligences, only need to solve the steering problem for machines somewhat smarter or faster than themselves.\n\nOne implication is that weak solutions to the steering problem might be amplified into strong solutions, and so are worth considering even if they can’t be scaled up to strong solutions directly. 
This includes many of the approaches listed under “Delegation” and “Goal-less intelligence.”", "url": "https://ai-alignment.com/the-steering-problem-a3543e65c5c4", "title": "The Steering Problem", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2015-05-05T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "e2067d28e1782b7f1e09e0fcba663dd8"} {"text": "Suppose that 1% of the world’s resources are controlled by unaligned AI, and 99% of the world’s resources are controlled by humans. We might hope that at least 99% of the universe’s resources end up being used for stuff-humans-like (in expectation).\n\nJessica Taylor argued for this conclusion in [Strategies for Coalitions in Unit-Sum Games](https://www.lesswrong.com/posts/5bd75cc58225bf0670375325/strategies-for-coalitions-in-unit-sum-games): if the humans divide into 99 groups each of which acquires influence as effectively as the unaligned AI, then by symmetry each group should end up with as much influence as the AI, i.e. in aggregate they should end up with 99% of the influence.\n\nThis argument rests on what I’ll call the *strategy-stealing assumption*: for any strategy an unaligned AI could use to influence the long-run future, there is an analogous strategy that a similarly-sized group of humans can use in order to capture a similar amount of flexible influence over the future. By “flexible” I mean that humans can decide later what to do with that influence — which is important since humans don’t yet know what we want in the long run.\n\nWhy might the strategy-stealing assumption be true?\n===================================================\n\nToday there are a bunch of humans, with different preferences and different kinds of influence. Crudely speaking, the long-term outcome seems to be determined by some combination of {which preferences have how much influence?} and {what is the space of realizable outcomes?}.\n\nI expect this to become more true over time — I expect groups of agents with diverse preferences to eventually approach efficient outcomes, since otherwise there are changes that every agent would prefer (though this is not obvious, especially in light of bargaining failures). Then the question is just about *which* of these efficient outcomes we pick.\n\nI think that our actions don’t affect the space of realizable outcomes, because long-term realizability is mostly determined by facts about distant stars that we can’t yet influence. The obvious exception is that if we colonize space faster, we will have access to more resources. But [quantitatively this doesn’t seem like a big consideration](https://rationalaltruist.com/2013/04/30/astronomical-waste/), because astronomical events occur over millions of millennia while our decisions only change colonization timelines by decades.\n\nSo I think our decisions mostly affect long-term outcomes by changing the relative weights of different possible preferences (or by causing extinction).\n\nToday, one of the main ways that preferences have weight is because agents with those preferences control resources and other forms of influence. Strategy-stealing seems most possible for this kind of plan — an aligned AI can exactly copy the strategy of an unaligned AI, except the money goes into the aligned AI’s bank account instead. The same seems true for most kinds of resource gathering.\n\nThere are lots of strategies that give influence to other people instead of helping me.
For example, I might preferentially collaborate with people who share my values. But I can still steal these strategies, as long as my values are just as common as the values of the person I’m trying to steal from. So a majority can steal strategies from a minority, but not the other way around.\n\nThere can be plenty of strategies that don’t involve acquiring resources or flexible influence. For example, we could have a parliament with obscure rules in which I can make maneuvers that advantage one set of values or another in a way that can’t be stolen. Strategy-stealing may only be possible at the level of groups — you need to retain the option of setting up a different parliamentary system that doesn’t favor particular values. Even then, it’s unclear whether strategy-stealing is possible.\n\nThere isn’t a clean argument for strategy-stealing, but I think it seems plausible enough that it’s meaningful and productive to think of it as a plausible default, and to look at ways it can fail. (If you found enough ways it could fail, you might eventually stop thinking of it as a default.)\n\nEleven ways the strategy-stealing assumption could fail\n=======================================================\n\nIn this section I’ll describe some of the failures that seem most important to me, with a focus on the ones that would interfere with the argument in the introduction.\n\n1. AI alignment\n---------------\n\nIf we can build smart AIs, but not aligned AIs, then humans can’t necessarily use AI to capture flexible influence. I think this is the most important way in which strategy-stealing is likely to fail. I’m not going to spend much time talking about it here because I’ve spent so much time elsewhere.\n\nFor example, if smart AIs inevitably want to fill the universe with paperclips, then “build a really smart AI” is a good strategy for someone who wants to fill the universe with paperclips, but it can’t be easily stolen by someone who wants anything else.\n\n2. Value drift over generations\n-------------------------------\n\nThe values of 21st century humans are determined by some complicated mix of human nature and the modern environment. If I’m a 16th century noble who has really specific preferences about the future, it’s not really clear how I can act on those values. But if I’m a 16th century noble who thinks that future generations will inevitably be wiser and should get what they want, then I’m in luck, all I need to do is wait and make sure our civilization doesn’t do anything rash. And if I have some kind of crude intermediate preferences, then I might be able to push our culture in appropriate directions or encourage people with similar genetic dispositions to have more kids.\n\nThis is the most obvious and important way that strategy-stealing has failed historically. It’s not something I personally worry about too much though.\n\nThe big reason I don’t worry is some combination of common-sense morality and decision-theory: our values are the product of many generations each giving way to the next one, and so I’m pretty inclined to “pay it forward.” Put a different way, I think it’s relatively clear I should empathize with the next generation since I might well have been in their place (whereas [I find it much less clear under what conditions I should empathize with AI](/sympathizing-with-ai-e11a4bf5ef6e)).
Or from yet another perspective, the same intuition that I’m “more right” than previous generations makes me very open to the possibility that future generations are more right still. This question gets very complex, but my first-pass take is that I’m maybe an order of magnitude less worried than about other kinds of value drift.\n\nThe small reason I don’t worry is that I think this dynamic is probably going to be less important in the future (unless we actively want it to be important — which seems quite possible). I believe there is a good chance that within 60 years most decisions will be made by machines, and so the handover from one generation to the next will be optional.\n\nThat all said, I am somewhat worried about more “out of distribution” changes to the values of future generations, in scenarios where AI development is slower than I expect. For example, I think it’s possible that genetic engineering of humans will substantially change what we want, and that I should be less excited about that kind of drift. Or I can imagine the interaction between technology and culture causing similarly alien changes. These questions are even harder to think about than the basic question of “how much should I empathize with future generations?” which already seemed quite thorny, and I don’t really know what I’d conclude if I spent a long time thinking. But at any rate, these things are not at the top of my priority queue.\n\n3. Other alignment problems\n---------------------------\n\nAIs and future generations aren’t the only optimizers around. For example, we can also build institutions that further their own agendas. We can then face a problem analogous to AI alignment — if it’s easier to build effective institutions with some kinds of values than others, then those values could be at a structural advantage. For example, we might inevitably end up with a society that optimizes generalizations of short-term metrics, if big groups of humans are much more effective when doing this. (I say “generalizations of short-term metrics” because an exclusive focus on short-term metrics is the kind of problem that can fix itself over the very long run.)\n\nI think that institutions are currently considerably weaker than humans (in the sense that’s relevant to strategy-stealing) and this will probably remain true over the medium term. For example:\n\n* A company with 10,000 people might be much smarter than any individual humans, but mostly that’s because of its alliance with its employees and shareholders — most of its influence is just used to accumulate more wages and dividends. Companies do things that seem antisocial not because they have come unmoored from any human’s values, but because plenty of influential humans want them to do that in order to make more money. (You could try to point the “market” as an organization with its own preferences, but it’s even worse at defending itself than bureaucracies — it’s up to humans who benefit from the market to defend it.)\n* Bureaucracies can seem unmoored from any individual human desire. But their actual ability to defend themselves and acquire resources seems much weaker than other optimizers like humans or corporations.\n\nOverall I’m less concerned about this than AI alignment, but I do think it is a real problem. I’m somewhat optimistic that the same general principles will be relevant both to aligning institutions and AIs. If AI alignment wasn’t an issue, I’d be more concerned by problems like institutional alignment.\n\n4. 
Human fragility\n------------------\n\nIf AI systems are aligned with humans, they may want to keep humans alive. Not only do humans prefer being alive, humans may need to survive if they want to have the time and space to figure out what they really want and to tell their AI what to do. (I say “may” because at some point you might imagine e.g. putting some humans in cold storage, to be revived later.)\n\nThis could introduce an asymmetry: an AI that just cares about paperclips can get a leg up on humans by threatening to release an engineered plague, or trashing natural ecosystems that humans rely on. (Of course, this asymmetry may also go the other way — values implemented in machines are reliant on a bunch of complex infrastructure which may be more or less of a liability than humanity’s reliance on ecosystems.)\n\nStepping back, I think the fundamental long-term problem here is that “do what this human wants” is only a simple description of human values if you actually have the human in hand, and so an agent with these values does have a big extra liability.\n\nI do think that the extreme option of “storing” humans to revive them later is workable, though most people would be very unhappy with a world where that becomes necessary. (To be clear, I think it almost certainly won’t.) We’ll return to this under “short-term terminal preferences” below.\n\n5. Persuasion as fragility\n--------------------------\n\nIf an aligned AI defines its values with reference to “whatever Paul wants,” then someone doesn’t need to kill Paul to mess with the AI, they just need to change what Paul wants. If it’s very easy to manipulate humans, but we want to keep talking with each other and interacting with the world despite the risk, then this extra attack surface could become a huge liability.\n\nThis is easier to defend against — just stop talking with people except in extremely controlled environments where you can minimize the risk of manipulation — but again humans may not be willing to pay that cost.\n\nThe main reason this might be worse than point 4 is that humans may be relatively happy to physically isolate themselves from anything scary, but it would be much more costly for us to cut off from contact with other humans.\n\n6. Asymmetric persuasion\n------------------------\n\nEven if humans are the only optimizers around, it might be easier to persuade humans of some things than others. For example, you could imagine a world where it’s easier to convince humans to endorse a simple ideology like “maximize the complexity of the universe” than to convince humans to pursue some more complex and subtle values.\n\nThis means that people with easily-persuadable values can use persuasion as a strategy, and people with other values cannot copy it.\n\nI think this is ultimately more important than fragility, because it is relevant before we have powerful AI systems. It has many similarities to “value drift over generations,” and I have some mixed feelings here as well — there are some kinds of argument and deliberation that I certainly do endorse, and to the extent that my current views are the product of significant amounts of non-endorsed deliberation I am more inclined to be empathetic to future people who are influenced by increasingly-sophisticated arguments.\n\nBut as I described in section 2, I think these connections can get weaker as technological progress moves us further out of distribution, and if you told me that e.g. 
it was possible to perform a brute force search and find an argument that could convince someone to maximize the complexity of the future, I wouldn’t conclude that it’s probably fine if they decided to do that.\n\n(Credit to Wei Dai for emphasizing this failure mode.)\n\n7. Value-sensitive bargaining\n-----------------------------\n\nIf a bunch of powerful agents collectively decide what to do with the universe, I think it probably won’t look like “they all control their own slice of the universe and make independent decisions about what to do.” There will likely be opportunities for trade, they may have meddling preferences (where I care what you do with your part of the universe), there may be a possibility of destructive conflict, or it may look completely different in an unanticipated way.\n\nIn many of these settings the outcome is influenced by a complicated bargaining game, and it’s unclear whether the majority can steal a minority’s strategy. For example, suppose that there are two values X and Y in the world, with 99% X-agents and 1% Y-agents. The Y-agents may be able to threaten to destroy the world unless there is an even split, and the X-agents have no way to copy such a strategy. (This could also occur over the short term.)\n\nI don’t have a strong view about the severity of this problem. I could imagine it being a big deal.\n\n8. Recklessness\n---------------\n\nSome preferences might not care about whether the world is destroyed, and therefore have access to productive but risky strategies that more cautious agents cannot copy. The same could happen with other kinds of risks, like commitments that are game-theoretically useful but risk sacrificing some part of the universe or creating long-term negative outcomes.\n\nI tend to think about this problem in the context of particular technologies that pose an extinction risk, but it’s worth keeping in mind that it can be compounded by the existence of more reckless agents.\n\nOverall I think this isn’t a big deal, because it seems much easier to cause extinction by trying to kill everyone than as an accident. There are fewer people who are in fact trying to kill everyone, but I think not enough fewer to tip the balance. (This is a contingent fact about technology though; it could change in the future and I could easily be wrong even today.)\n\n9. Short-term unity and coordination\n------------------------------------\n\nSome actors may have long-term values that are easier to talk about, represent formally, or reason about. Relative to humans, AIs may be especially likely to have such values. These actors could have an easier time coordinating, e.g. by pursuing some explicit compromise between their values (rather than being forced to find a governance mechanism for some resources produced by a joint venture).\n\nThis could leave us in a place where e.g. an unaligned AI controls 1% resources, but the majority of resources are controlled by humans who want to acquire flexible resources. Then the unaligned AIs can form a coalition which achieves very high efficiencies, while the humans cannot form 99 other coalitions to compete.\n\nThis could theoretically be a problem without AI, e.g. 
a large group of humans with shared explicit values might be able to coordinate better and so leave normal humans at a disadvantage, though I think this is relatively unlikely as a major force in the world.\n\nThe seriousness of this problem is bounded by both the efficiency gains for a large coalition, and the quality of governance mechanisms for different actors who want to acquire flexible resources. I think we have OK solutions for coordination between people who want flexible influence, such that I don’t think this will be a big problem:\n\n* The humans can participate in lotteries to concentrate influence. Or you can gather resources to be used for a lottery in the future, while still allowing time for people to become wiser and then make bargains about what to do with the universe before they know who wins.\n* You can divide up the resources produced by a coalition equitably (and then negotiate about what to do with them).\n* You can modify other mechanisms by allowing votes that could e.g. overrule certain uses of resources. You could have more complex governance mechanisms, can delegate different kinds of authority to different systems, can rely on trusted parties, etc.\n* Many of these procedures work much better amongst groups of humans who expect to have relatively similar preferences or have a reasonable level of trust for other participants to do something basically cooperative and friendly (rather than e.g. demanding concessions so that they don’t do something terrible with their share of the universe or if they win the eventual lottery).\n\n(Credit to Wei Dai for describing and emphasizing this failure mode.)\n\n10. Weird stuff with simulations\n--------------------------------\n\nI think civilizations like ours mostly have an impact via the common-sense channel where we ultimately colonize space. But there may be many civilizations like ours in simulations of various kinds, and influencing the results of those simulations could also be an important part of what we do. In that case, I don’t have any particular reason to think strategy-stealing breaks down, but I think stuff could be very weird and I have only a weak sense of how this influences optimal strategies.\n\nOverall I don’t think much about this since it doesn’t seem likely to be a large part of our influence and it doesn’t break strategy-stealing in an obvious way. But I think it’s worth having in mind.\n\n11. Other preferences\n---------------------\n\nPeople care about lots of stuff other than their influence over the long-term future. If 1% of the world is unaligned AI and 99% of the world is humans, but the AI spends all of its resources on influencing the future while the humans only spend one tenth, it wouldn’t be too surprising if the AI ended up with 10% of the influence rather than 1%. This can matter in lots of ways other than literal spending and saving: someone who only cared about the future might make different tradeoffs, might be willing to defend themselves at the cost of short-term value (see sections 4 and 5 above), might pursue more ruthless strategies for expansion, and so on.\n\nI think the simplest approximation is to restrict attention to the part of our preferences that is about the long-term (I discussed this a bit in [Why might the future be good?](https://rationalaltruist.com/2013/02/27/why-will-they-be-happy/)). To the extent that someone cares about the long-term less than the average actor, they will represent a smaller fraction of this “long-term preferences” mixture.
This may give unaligned AI systems a one-time advantage for influencing the long-term future (if they care more about it) but doesn’t change the basic dynamics of strategy-stealing. Even this advantage might be clawed back by a majority (e.g. by taxing savers).\n\nThere are a few places where this picture seems a little bit less crisp:\n\n* Rather than being able to spend resources on either the short or long-term, sometimes you might have preferences about *how* you acquire resources in the short-term; an agent without such scruples could potentially pull ahead. If these preferences are strong, it probably violates strategy-stealing unless the majority can agree to crush anyone unscrupulous.\n* For humans in particular, it may be hard to separate out “humans as repository of values” from “humans as an object of preferences,” and this may make it harder for us to defend ourselves (as discussed in sections 4 and 5).\n\nI mostly think these complexities won’t be a big deal quantitatively, because I think our short-term preferences will mostly be compatible with defense and resource acquisition. But I’m not confident about that.\n\nConclusion\n==========\n\nI think strategy-stealing isn’t really true; but I think it’s a good enough approximation that we can basically act as if it’s true, and then think about the risk posed by possible failures of strategy-stealing.\n\nI think this is especially important for thinking about AI alignment, because it lets us formalize the lowered goalposts I discussed [here](/a-possible-stance-for-ai-control-research-fe9cf717fc1b): we just want to ensure that AI is compatible with strategy-stealing. These lowered goalposts are an important part of why I think we can solve alignment.\n\nIn practice I think that a large coalition of humans isn’t reduced to strategy-stealing — a majority can simply stop a minority from doing something bad, rather than by copying it. The possible failures in this post could potentially be addressed by either a technical solution or some kind of coordination.", "url": "https://ai-alignment.com/the-strategy-stealing-assumption-a26b8b1ed334", "title": "The strategy-stealing assumption", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2019-09-14T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "b67eda79705af3cb2332c1e15705381f"} {"text": "The scalability of [iterated amplification](https://arxiv.org/pdf/1810.08575.pdf) or [debate](https://arxiv.org/abs/1805.00899) seems to depend on whether large enough teams of humans can carry out arbitrarily complicated reasoning. Are these schemes “universal,” or are there kinds of reasoning that work but which humans fundamentally can’t understand?\n\nThis post defines the concept of “ascription universality,” which tries to capture the property that a question-answering system **A** is better-informed than any particular simpler computation **C**.\n\n[These](/informed-oversight-18fcb5d3d1e1) [parallel](/universality-and-consequentialism-within-hch-c0bee00365bd) [posts](/universality-and-model-based-rl-b08701394ddd) explain why I believe that the alignment of iterated amplification largely depends on whether HCH is ascription universal. 
Ultimately I think that the “right” definition will be closely tied to the use we want to make of it, and so we should be refining this definition in parallel with exploring its applications.\n\nI’m using the awkward term “ascription universality” partly to explicitly flag that this is a preliminary definition, and partly to reserve linguistic space for the better definitions that I’m optimistic will follow.\n\n(Thanks to Geoffrey Irving for discussions about many of the ideas in this post.)\n\nI. Definition\n=============\n\nWe will try to define what it means for a question-answering system **A** to be “ascription universal.”\n\n1. Ascribing beliefs to A\n-------------------------\n\nFix a language (e.g. English with arbitrarily big compound [terms](/approval-directed-algorithm-learning-bf1f8fad42cd)) in which we can represent questions and answers.\n\nTo ascribe beliefs to **A**, we ask it. If **A**(“are there infinitely many twin primes?”) = “probably, though it’s hard to be sure” then we ascribe that belief about twin primes to **A**.\n\nThis is not a general way of ascribing “belief.” This procedure wouldn’t capture the beliefs of a native Spanish speaker, or for someone who wasn’t answering questions honestly. But it can give us a sufficient condition, and is particularly useful for someone who wants to use **A** as part of an alignment scheme.\n\nEven in this “straightforward” procedure there is a lot of subtlety. In some cases there are questions that we can’t articulate in our language, but which (when combined with **A**’s other beliefs) have consequences that we can articulate. In this case, we can infer something about **A**’s beliefs from its answers to the questions that we can articulate.\n\n2. Ascribing beliefs to arbitrary computations\n----------------------------------------------\n\nWe are interested in whether **A** “can understand everything that could be understood by someone.” To clarify this, we need to be more precise about what we mean by “could be understood by someone.”\n\nThis will be the most informal step in this post. (Not that any of it is very formal!)\n\nWe can imagine various ways of ascribing beliefs to an arbitrary computation **C**. For example:\n\n* We can give **C** questions in a particular encoding and assume its answers reflect its beliefs. We can either use those answers directly to infer **C**’s beliefs (as in the last section), or we can ask what set of beliefs about latent facts would explain **C**’s answers.\n* We can view **C** as optimizing something and ask what set of beliefs rationalize that optimization. For example, we can give **C** a chess board as input, see what move it produces, assume it is trying to win, and infer what it must believe. We might conclude that **C** believes a particular line of play will be won by black, or that **C** believes general heuristics like “a pawn is worth 3 tempi,” or so on.\n* We can reason about how **C**’s behavior depends on facts about the world, and ask what state of the world is determined by its current behavior. For example, we can observe that **C**(113327) = 1 but that **C**(113327) “would have been” 0 if 113327 had been composite, concluding that **C**(113327) “knows” that 113327 is prime. We can extend to probabilistic beliefs, e.g.
if **C**(113327) “probably” would have been 0 if 113327 had been composite, then we might say that **C** knows that 113327 is “probably prime.” This certainly isn’t a precise definition, since it involves considering logical counterfactuals, and I’m not clear whether it can be made precise. (See also ideas along the lines of [“knowledge is freedom”](https://www.lesswrong.com/posts/b3Bt9Cz4hEtR26ANX/knowledge-is-freedom).)\n* If a computation behaves differently under different conditions, then we could restrict attention to a particular condition. For example, if a question-answering system appears to be bilingual but answers questions differently in Spanish and English, we could ascribe two different sets of beliefs. Similarly, we could ascribe beliefs to any subcomputation. For example, if a part of **C** can be understood as optimizing the way data is laid out in memory, then we can ascribe beliefs to that computation about the way that data will be used.\n\nNote that these aren’t intended to be efficient procedures that we could actually apply to a given computation **C**. They are hypothetical procedures that we will use to define what it means for **A** to be universal.\n\nI’m not going to try to ascribe a single set of beliefs to a given computation; instead, I’ll consider all of the reasonable ascription procedures. For example, I think different procedures would ascribe different beliefs to a particular human, and don’t want to claim there is a unique answer to what a human “really” believes. A universal reasoner needs to have more reasonable beliefs than the beliefs ascribed to that human using any particular method.\n\nAn ascription-universal reasoner needs to compete with any beliefs that can be ascribed to **C**, so I want to be generous with this definition. For example, given a chess-playing algorithm, we might rationalize it as trying to win a game and infer its beliefs about the rules of chess. Or we might rationalize it as trying to look like a human and infer its beliefs about what a human would do. Or something different altogether. Most of these will be kind of crazy ascriptions, but I want to compete with them anyway (competing with crazier beliefs will turn out to just be easier).\n\nIt’s not totally clear what counts as a “reasonable” ascription procedure, and that’s the biggest source of informality. Intuitively, the key property is that the ascription itself isn’t doing the “hard work.” In practice I’m using an informal extensional definition, guided by examples like those in the bulleted list.\n\n3. Comparing beliefs\n--------------------\n\nWhat does it mean to say that one agent is “better-informed” than another?\n\nIt’s natural to try to express this in terms of empirical information about the world, but we are particularly interested in the different inferences that agents are able to draw from the same data. Another natural approach is to compare their “knowledge,” but I have no idea how to define knowledge or justified belief. So I’m reduced to working directly with sets of beliefs.\n\nConsider two sets of beliefs, described by the subjective expectations 𝔼¹ and 𝔼². What does it mean to say that 𝔼¹ is better-informed than 𝔼²?\n\nThis framing makes it tempting to try something simple: “for every quantity, 𝔼¹’s belief about that quantity is more accurate.” But this property is totally unachievable.
Even if 𝔼¹ is obtained by conditioning 𝔼² on a true fact, it will almost certainly happen to update in the “wrong” direction for some claims.\n\nWe will instead use a subjective definition, i.e. we’ll define this concept from a particular epistemic position represented by another subjective expectation 𝔼.\n\nThen we say that 𝔼¹ **dominates** 𝔼² (w.r.t. 𝔼) if, for every bounded quantity X and for every “nice” property Φ:\n\n* 𝔼[X|Φ(𝔼¹, 𝔼²)] = 𝔼[𝔼¹[X]|Φ(𝔼¹, 𝔼²)]\n\n(By “nice” I mean something like: simple to define and open in the product topology, viewing 𝔼¹ and 𝔼² as infinite tables of numbers.)\n\nIntuitively, this means that 𝔼 always “trusts” 𝔼¹, even if given arbitrary information about 𝔼¹ and 𝔼². For example, if 𝔼 was told that 𝔼¹[X] ≈ *x* and \n𝔼²[X] ≈ *y*, then it would expect X to be around *x* (rather than *y*)*.* Allowing arbitrary predicates Φ allows us to make stronger inferences, effectively that 𝔼 thinks that 𝔼¹ captures *everything* useful about 𝔼².\n\nI’m not sure if this is exactly the right property, and it becomes particularly tricky if the quantity X is itself related to the behavior of 𝔼¹ or 𝔼² (continuity in the product topology is the minimum plausible condition to avoid a self-referential paradox). But I think it’s at least roughly what we want and it may be exactly what we want.\n\nNote that dominance is *subjective*, i.e. it depends on the epistemic vantage point 𝔼 used for the outer expectation. This property is a little bit stronger than what we originally asked for, since it also requires 𝔼 to trust 𝔼¹, but this turns out to be implied anyway by our definition of universality so it’s not a big defect.\n\nNote that dominance is a property of the *descriptions* of 𝔼¹ and 𝔼². There could be two different computations that in fact compute the same set of expectations, such that 𝔼 trusts one of them but not the other. Perhaps one computation hard-codes a particular result, while the other does a bunch of work to estimate it. Even if the hard-coded result happened to be correct, such that the two computations had the same outputs, 𝔼 might trust the hard work but not the wild guess.\n\n4. Complexity and parameterization\n----------------------------------\n\nThere are computations with arbitrarily sophisticated beliefs, so no fixed **A** can hope to dominate everything. To remedy this, rather than comparing to a fixed question-answerer **A**, we’ll compare to a parameterized family **A**[**C**].\n\nI’ll consider two different kinds of potentially-universal reasoners **A**:\n\n* In the “idealized” case, **A**[**C**] depends only on the complexity of **C**. \nFor example, we might hope that an *n*-round debate dominates any beliefs that could be ascribed to a fast computation with (*n*-1) rounds of [alternation](https://en.wikipedia.org/wiki/Alternating_Turing_machine). In particular, this **A**[**C**] is the same for any two computations **C** of the same complexity.\n* In the “practical” case, **A**[**C**] depends on the complexity of **C** but also uses the computation **C** as a hint. For example, if **C** is the training process for a neural net, then we might take **A**[**C**] to be a debate in which the debaters are able to share weights and activations with the neural net throughout the entire training process.\n\nI’m generally interested in the case where **A**[**C**] is only slightly more powerful than **C** itself. 
This mirrors the setting where a universal Turing machine is able to run any other Turing machine with only a modest slowdown.\n\nPutting it all together\n-----------------------\n\nWe say that a set of beliefs 𝔼ᴬ *epistemically dominates* a computation **C** (w.r.t. some beliefs 𝔼 and language L) if the beliefs ascribed to A by the “straightforward” procedure, using L, dominate (w.r.t. 𝔼) the beliefs ascribed to **C** by any reasonable ascription procedure.\n\nWe say that a family of question-answering systems **A**[**·**] are *ascription universal* (w.r.t. 𝔼 and L) if **A**[**C**] epistemically dominates **C** for every computation **C**.\n\nII. Discussion\n==============\n\nWhy is (subjective) dominance sufficient?\n-----------------------------------------\n\nThis universality condition requires that *we* believe that **A**[**C**] is better-informed than **C**. Naively we might have wanted it to *actually* be the case that **A**[**C**] is better-informed than **C**; the stronger condition is clearly unachievable, but why should we be satisfied with the weaker condition?\n\nIn [applications](/informed-oversight-18fcb5d3d1e1) [of this](/universality-and-consequentialism-within-hch-c0bee00365bd) [property](/universality-and-model-based-rl-b08701394ddd), the subjective condition is what we need in order for *us* to believe that **A**[**C**] will cope with the challenges posed by **C**. For example, suppose that **C** formulates a plan to “trick” **A**[**C**]. Then the subjective universality condition implies that we don’t expect **C** to succeed.\n\nThis isn’t as good as actually knowing that **C** won’t succeed. But I think it should be good enough for us — the reason we are thinking about AI safety is because we are concerned that something bad will happen. If we find a technique that defuses this argument, then we’ve addressed the motivating problem. It may still be the case that bad things happen (and we should still search for additional reasons that bad things might happen), but we don’t particularly expect them to.\n\nOf course if you select over a large number of computations, then you may find one that will succeed in tricking **A**. But if we are concerned about that, then we can instead apply ascription universality to the entire process including the selection.\n\nWhy trust opaque computation?\n-----------------------------\n\nIf **C** uses some clever heuristics that I don’t understand, then **C**’s “beliefs” might be excellent, but I might not *expect* them to be excellent. In this sense understanding may seem almost vacuous. If there is some heuristic that I trust, wouldn’t **A** just use it?\n\nTo see why the definition is demanding, consider the special case where **C** performs an extensive search to find a computation that works well empirically. For example, **C** might be the following computation:\n\n* Start with a training set of (image, label) pairs.\n* Search over simple programs to find one that makes good predictions.\n* Run that simple program on a new image to predict its label.\n\nIn this case, we can ascribe beliefs to **C** about the contents of the new image. 
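A minimal sketch of a computation with this shape (the helper names are placeholders, only meant to make the structure of **C** concrete):

```python
# Sketch of the computation C described above: search over simple programs for one that
# fits the labeled training set, then run the best program on a new image.

def fit_simple_program(training_set, candidate_programs):
    """Return the candidate program with the highest empirical accuracy on the training set."""
    def accuracy(program):
        return sum(program(image) == label for image, label in training_set) / len(training_set)
    return max(candidate_programs, key=accuracy)

def C(training_set, candidate_programs, new_image):
    f = fit_simple_program(training_set, candidate_programs)
    # Whatever f computes about new_image along the way is what we ascribe to C as "beliefs".
    return f(new_image)
```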
And because those beliefs are coming from a simple program that works empirically, I expect them to be accurate (in some respects).\n\nFor example, a simple classifier **C** may “believe” that the new image contains a particular curve that typically appears in images labeled “dog;” or a really sophisticated classifier may perform complex deductions about the contents of the scene, starting from premises that were empirically validated on the training set.\n\nSo it’s not OK for **A** to simply ignore whatever heuristics **C** is using — if those heuristics have the kind of empirical support that makes us think they actually work, then **A** needs to be able to understand everything that those heuristics imply about the domain.\n\nWhy be so general?\n------------------\n\nI’ve formulated universality as competing with arbitrary computations **C**. It seems totally possible that the form of **C** discussed in the last section — searching for a program that works well in practice and then using it in a new situation — is so central that the definition of universality should focus entirely on it.\n\nOne reason to use the broader definition is because sometimes this “selection” process can be embedded in a non-trivial way in a larger computation. For example, if I have a sufficiently large group of humans, I might expect memetic selection to occur and produce systems that could be said to have “beliefs,” and I’d like universal systems to dominate those beliefs as well.\n\nThe other reason to use this very general definition is because I don’t see an easy way to simplify the definition by using the additional structural assumption about **C**. I do think it’s likely there’s a nicer statement out there that someone else can find.\n\nUniversal from whose perspective?\n---------------------------------\n\nUnfortunately, achieving universality depends a lot on the epistemic perspective 𝔼 from which it is being evaluated. For example, if 𝔼 knows any facts, then a universal agent must know all of those facts as well. Thus “a debate judged by Paul” may be universal from Paul’s perspective, but “a debate arbitrated by Alice” cannot be universal from my perspective unless I believe that Alice knows everything I know.\n\nThis isn’t necessarily a big problem. It will limit us to conclusions like: Google engineers believe that the AI they’ve built serves the user’s interests reasonably well. The user might not agree with that assessment, if they have different beliefs from Google engineers. This is what you’d expect in any case where Google engineers build a product, however good their intentions.\n\n(Of course Google engineers’ notion of “serving the user’s interests” can involve deferring to the user’s beliefs in cases where they disagree with Google engineers, just as they could defer to the user’s beliefs with other products. That gives us reason to be less concerned about such divergences, but eventually these evaluations do need to bottom out somewhere.)\n\nThis property becomes more problematic when we ask questions like: is there a way to [seriously limit the inputs and outputs to a human while preserving universality of HCH](/universality-and-security-amplification-551b314a3bab)?
This causes trouble because even if limiting the human intuitively preserves universality, it will effectively eliminate some of the human’s knowledge and know-how that can [only be accessed on large inputs](https://medium.com/@weidai/to-put-it-another-way-a-human-translator-has-learned-a-lot-of-valuable-information-much-of-it-48457f95b9bf), and hence violate universality.\n\nSo when investigating schemes based on this kind of impoverished human, we would need to evaluate universality from some impoverished epistemic perspective. We’d like to say that the impoverished perspective is still “good enough” for us to feel safe, despite not being good enough to capture literally everything we know. But now we risk begging the question: how do we evaluate whether the impoverished perspective is good enough? I think this is probably OK, but it’s definitely subtle.\n\nI think that defining universality w.r.t. 𝔼 is an artifact of this definition strategy, and I’m optimistic that a better definition wouldn’t have this dependence, probably by directly attacking the notion of “justified” belief (which would likely also be useful for actually establishing universality, and may even be necessary). But that’s a hard problem. Philosophers have thought about very similar problems extensively without making the kind of progress that seems adequate for our purposes, and I don’t see an immediate angle of attack.\n\nIII. Which A might be universal?\n================================\n\nTwo regimes\n-----------\n\nI’m interested in universality in two distinct regimes:\n\n* Universality of idealized procedures defined in terms of perfect optimization, such as [debate](https://arxiv.org/abs/1805.00899) under optimal play or [max-HCH](/humans-consulting-hch-f893f6051455), where **A**[**C**] depends only on the computational complexity of **C**.\n* Universality of practical variants, in which the perfect optimization is replaced by a bounded optimization, which might use the computation **C** as a “hint.” For example, we might consider amplification or debate where the agents are allowed to inspect a transcript of **C**; or if **C** is a neural net training process, we might train them jointly with **C**.\n\nIdealized models\n----------------\n\n**A**[**C**] could be:\n\n* A debate, judged by a human, with perfect debaters, whose length depends on the complexity of **C**. (Hopefully in such a way that **A**[**A**[**C**]] is only a slightly longer debate than **A**[**C**].)\n* max-HCH, using perfect optimization rather than a bounded optimizer. Again, we can impose a budget limit that depends on the complexity of **C**, ideally such that **A**[**A**[**C**]] does not involve a much larger computation than **A**[**C**].\n* Use HCH to implement two functions, Info and Answer. Define **A**[**C**](Q) to be Answer(Q, *a\**), where *a\** = argmax Info(*a*, Q), and the optimization is over advice strings *a* of length comparable to the description of **C**.\n\nI don’t think any of these would be universal if instantiated with an open-ended human policy — providing arbitrary complex inputs to a human is likely to lead to bad behavior. Instead, I imagine restricting the human to some small enough space of inputs that they can behave “reasonably” over the whole space, while still being rich enough to be universal.
Even as such, I think [additional measures](/universality-and-consequentialism-within-hch-c0bee00365bd) might be needed to achieve security (since a team of humans may behave badly in some exotic corners of the input space, even if individual humans are sheltered from exotic inputs).\n\nPractical models\n----------------\n\nIn practice I’m particularly interested in the case where **C** has the special form described in the “Trusting opaque computations” section: it trains a neural net on some dataset, and then runs that neural network on new inputs.\n\n**A**[**C**] could be:\n\n* Debates arbitrated by a human where the debaters are trained jointly with **C**, and can make references to individual examples from the dataset.\n* max-HCH where the agent is trained jointly with **C**, and the overseer can consult individual entries from the dataset.\n* HCH, combined with interpretability techniques that allow it to get information about what **C** is thinking and to relate its behavior to the dataset on which it was trained.\n\nFor each of these practical variants, I don’t expect it to be universal unless the corresponding idealized model is. So we can think of dividing the universality question into two pieces:\n\n* Would our procedure be universal *if the optimization was performed perfectly*?\n* Is weight sharing / interpretability strong enough to solve the optimization problems needed for universality?\n\nI think none of the suggestions in this section are plausibly universal without some additional mechanisms, and so even if HCH or debate is universal, additional ideas are needed in order to translate those to practical universal systems. I’m still inclined to focus on the idealized case, because I want to better understand what is needed for universality without thinking about algorithmic difficulties.\n\nIV. Which C are hard to epistemically dominate?\n===============================================\n\nThe definition in this post was somewhat informal, but the discussion in this appendix will be extremely informal. The purpose is to communicate some of my early thinking about universality and my sense for what a universal reasoner might look like.\n\nThe first three examples will consider programs **C** with the following form:\n\n* Iterate over programs to find an *f* that successfully predicts the answers to some dataset of historical questions for which we know the answer.\n* Use that program to predict the answers to future questions.\n\nThe different examples explore features of *f* that might make it difficult to dominate the overall computation.\n\nDeduction\n---------\n\nSuppose *f* takes the following form:\n\n* Given a question and context, it performs a short series of deductions from facts in the context, together with some axioms (e.g. 
“Bob’s eyes are more likely to twitch when he is lying,” the rules of poker, *etc.*)\n* When it arrives at answers that constrain the possible answers to the given question, it outputs an answer consistent with those constraints.\n\nWe could attribute the statements produced by this process to **C** as beliefs. We expect this set of beliefs to have useful information because it was selected for making correct predictions.\n\nFortunately, if HCH is able to see the training dataset then it may be able to arrive at similarly accurate views:\n\n* It can compute that P(observations | axioms) is high by following along with the deductive process used by *f*.\n* Bayes implies that the axioms are likely to be true (fully capturing the argument which leads us, from our current perspective, to expect them to be true).\n* Once HCH knows the axioms are likely to be true, it can follow along with the deductive process to reproduce all of *f*’s beliefs.\n\nModeling\n--------\n\nSuppose *f* takes the following form:\n\n* Given a question Q and context, set up a relevant physical situation.\n* Use a dynamics model M to predict what will happen at some future time.\n* Use the resulting state to answer Q.\n\nWe could say that **C** believes everything implied by the outcome of this physical simulation. We expect these beliefs to be accurate because *f* has predicted well in the past.\n\nAs in the deduction case, hopefully HCH can compute that P(observations|M is accurate) is high, since it can also compute the consequences of M. Then Bayes implies the dynamics model is accurate, and HCH can use that model to compute physical states.\n\nInferring all the beliefs from a dynamics model is not trivial though. As an extreme example, if *f* is performing an atom-by-atom simulation of a room, and that room contains Alice and Bob, then we could ascribe extensive beliefs about Alice and Bob to the computation **C**.\n\n(Here we run head on into the fuzziness about what counts as a “reasonable” ascription procedure, but for the moment I’ll assume that some reasonable procedure ascribes beliefs about Alice and Bob to the computation.)\n\nTo compete with these ascriptions, HCH needs to infer those high-level beliefs about Alice and Bob from the low-level computation involving atoms. One way to do this is to search over possible “bridging” hypotheses that relate low-level physical facts to high-level facts about the environment. If such a hypothesis can explain additional high-level facts, then a Bayesian can learn that it is true. Similarly, if the bridging hypothesis relates facts about the model to constraints we know from the high-level interpretation, then the Bayesian can potentially use that as evidence. (This kind of reasoning will be discussed in a bit more detail in the next section.)\n\nWe could further hope that searching for a bridging hypothesis isn’t much harder than performing the original search over low-level physics, given that the low-level physics needed to explain a bunch of high-level facts and so already must encode some part of that correspondence.\n\n(Note that the “deduction” example in the previous case could also involve alien concepts or models, in which case the same kind of work would be needed.)\n\nAlien reasoning\n---------------\n\nIn the previous section we described two styles of reasoning we already understand. But there are probably many kinds of reasoning that work well in practice but that would be more alien, and those might be more challenging.
This section will explore one example in some detail to try to help anchor our reasoning about the general phenomenon. It will also elaborate on some of the reasoning about “bridging” hypotheses mentioned in the last section.\n\nSuppose that our predictions are always of the same form (e.g. what is the probability the stock market will go up today), and *f* works as follows (the details are long but not very important):\n\n* Find the PSD matrix A with maximum log determinant subject to the constraints in the next bullet points, then output the (0, 0) entry.\n* There is an implicit correspondence between the rows/columns of A, and some uncertain properties X(0), X(1), X(2), … (which we’ll view as 0–1 variables), where X(0) is the property we want to forecast.\n* If the (*i*, *j*) entry of A represented the expectation E[X(*i*)X(*j*)], then the matrix would necessarily satisfy a bunch of constraints, which we impose on A. For example:\n* If the context implies that X(*i*) = 1, then E[X(*i*)X(*j*)] = E[X(*j*)] = E[X(*j*)²], so A(*i*, *j*) = A(*j*, *j*).\n* If X(*i*) and X(*j*) together imply X(*k*), then we must have E[X(*i*)X(*j*)] ≤ E[X(*i*)X(*k*)] and hence A(*i*, *j*) ≤ A(*i*, *k*).\n* For any constants *a*, *b*, …, E[(*a* X(1) + *b* X(2) + … )²] ≥ 0 — i.e., the matrix A must be PSD.\n\nThe chosen matrix A(opt) corresponds to a set of beliefs about the propositions X(*i*), and we can ascribe these beliefs to **C**. Because *f* predicts well, we again expect these beliefs to say something important about the world.\n\nI chose this procedure *f* in part because we can give a kind of argument for why the matrix A(opt) should tend to encode accurate beliefs. But I don’t think that a universal reasoner can make use of that argument:\n\n* Finding the argument that *f* works is an additional problem, beyond finding *f* itself, which might be much harder.\n* A comprehensible version of that argument may be much larger than the strategy itself, so even in the idealized cases like debate with perfect optimization, we may need to increase the scale.\n* I don’t expect that all “good” reasoning strategies have clean understandable arguments in their favor (and even in this case, if the scheme worked well it would be largely an empirical fact rather than a consequence of the simple theorems we could prove). I think this kind of example is useful because we can easily imagine a human debate judge not having the argument while still being apparently universal. This makes it a useful analogy for cases where the argument really doesn’t exist.\n\nInstead, I think a universal reasoner needs to be able to infer the efficacy of this reasoning procedure from its empirical success. It’s relatively easy for a Bayesian to learn the regularity “*f* makes good predictions.” Recovering the rest of the matrix A, and learning how to interpret and whether to trust them, is the hard part.\n\nThis is going to require the same kind of bridging/identification we discussed in the last section. Let’s write X(A) for the set of beliefs about the world implied by the “intended” identification. Searching over possible identifications to find X (or something like it) is the only way we can ever relate the rows of A to the quantities X(*i*).
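For concreteness, a minimal sketch of the optimization performed by the procedure *f* above (using cvxpy; the input encoding and the elementwise 0–1 bounds are assumptions added for illustration, standing in for whatever constraints the context supplies):

```python
# Hypothetical sketch of f: find the PSD matrix A with maximum log-determinant subject to
# the consistency constraints described above, then output the (0, 0) entry as the forecast.
import cvxpy as cp

def f(n, known_true, implications):
    # n: number of tracked 0-1 propositions X(0), ..., X(n-1), where X(0) is the forecast target.
    # known_true: indices i where the context implies X(i) = 1.
    # implications: triples (i, j, k) such that X(i) and X(j) together imply X(k).
    A = cp.Variable((n, n), PSD=True)          # A[i, j] plays the role of E[X(i) X(j)]
    constraints = [A >= 0, A <= 1]             # elementwise bounds for 0-1 variables (assumed)
    for i in known_true:
        # X(i) = 1 gives E[X(i) X(j)] = E[X(j)] = E[X(j)^2], i.e. A[i, j] = A[j, j]
        constraints += [A[i, j] == A[j, j] for j in range(n)]
    for i, j, k in implications:
        # X(i) and X(j) imply X(k) gives E[X(i) X(j)] <= E[X(i) X(k)]
        constraints.append(A[i, j] <= A[i, k])
    cp.Problem(cp.Maximize(cp.log_det(A)), constraints).solve()
    return A.value[0, 0]                       # the forecast for X(0)
```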
Again, we can hope that it isn’t much harder than finding the original reasoning procedure.\n\nI think that a sufficiently sophisticated Bayesian would probably be able to learn to trust X(A):\n\n* If *f* is performing well enough that we think it’s more likely to be right in the future, then the Bayesian is going to end believing some claim like “the predictions of *f* are good” (since it explains the data so well).\n* This is a complicated statement, and without some kind of explanation this claim has a low prior probability (roughly decaying with the complexity of *f)*. The Bayesian is motivated to find an explanation with higher prior probability.\n* The correspondence X can explain the constraints on the matrix A, in terms of facts that we already know about the world. This explanation may end up being simpler (or at least higher prior) than a direct enumeration of the constraints on A — I hope (and think it’s plausible) that this happens iff we’d actually believe on reflection that X(A) captures reality. \n(To the extent that we are uncertain and think A’s beliefs have a non-negligible chance of capturing reality, then hopefully we can capture that by the same mechanism by ending up with a non-degenerate posterior.)\n* Now the Bayesian is faced with at least two kinds of explanations: \n(a) “If you use the constraints implied by correspondence X(A) + positive semidefiniteness, and then optimize log det, you get a matrix A for which X(A) makes good predictions,” \n(b) “The actual situation in the real world is described by positive semi-definite matrices with higher log determinant (under the correspondence X).”\n* Explanation (b) is explaining two things at once: both why the optimization done by *f* respects the constraints on our beliefs, and why that optimization leads to good predictions. Hopefully this is simpler than making two separate bridging claims, one which explains *f* as respecting the constraints implied by X, and one which claims that *f* makes good predictions. Ideally, this 2-for-1 that favors (b) exactly mirrors the underlying reasoning that leads us to actually believe that X(A) is correct, rather than resembling what we know about reality and making good predictions “by coincidence.”\n\nThis is a pretty speculative discussion — it’s not very careful, and it’s hard to make it careful in part because I don’t have a formalization of Bayesian reasoning that can even really be applied to this setting. But it seems to match my intuitions about what reasonable Bayesian reasoning “should” do, which gives me a lot more optimism that a careful Bayesian would be able to epistemically dominate **C**.\n\nDeliberation and self-improvement\n---------------------------------\n\nOften we expect the computation **C** to have accurate beliefs because it uses a strategy that appears to work in practice — the last 3 examples have discussed that case. But there are other reasons to trust a computation.\n\nFor example, humans often write code and trust it (to some extent) even without extensive empirical testing — instead, we have a reason to think it will work, and need only modest testing to make sure that we haven’t made an error in our implementation or reasoning. 
If I write an automated mathematician that works by finding proofs that pass a proof checker, I don’t expect it to be correct because of the empirical record (Empirical data backs up some key assumptions, but isn’t being used to directly establishing the correctness of the method.)\n\nLikewise, if we train a powerful agent, that agent might initially use strategies that work well in training, but over time it might use learned reasoning to identify other promising strategies and use those. Reasoning might allow it to totally skip empirical testing, or to adopt the method after much less testing than would have been necessary without the reasoning.\n\nTo dominate the beliefs produced by such reasoning, we can’t directly appeal to the kind of statistical inference made in the previous section. But in these cases I think we have access to an even more direct strategy.\n\nConcretely, consider the situation where **C** contains a process *f* that designs a new reasoning process *g*. Then:\n\n* From the outside, we trust *g* because we trust *f* and it trusts *g.*\n* An otherwise-universal reasoner **A** will dominate *f*’s beliefs, and in particular if *f* is justified in thinking that *g* will work then **A** will believe that and understand why.\n* Once we understand *f’*s beliefs, dominating *g* is essentially another instance of the original ascription universality problem, but now from a slightly stronger epistemic state that involves both what 𝔼 knows and what *f* knows. So unless our original approach to universality was tightly wedded to details of 𝔼, we can probably dominate *g*.\n\nAt the end of the day we’d like to put all of this together into a tight argument for universality, which will need to incorporate both statistical arguments and this kind of dynamic. 
But I’m tentatively optimistic about achieving universality in light of the prospect of agents designing new agents, and am much more worried about the kind of opaque computations that “just work” described in the last few sections.", "url": "https://ai-alignment.com/towards-formalizing-universality-409ab893a456", "title": "Towards formalizing universality", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2019-01-10T23:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "407364d5685c2769054aa8b38b426982"} {"text": "Worst-case guarantees (Revisited)\n---------------------------------\n\nEven if we are very careful about how we deploy ML, we may reach the point where a small number of correlated failures could quickly become catastrophic. Powerful models could actively undermine safeguards, resist attempts at correction, and manipulate their operators.\n\nI think the long-term safety of ML systems requires being able to rule out this kind of behavior, which I’ll call *unacceptable*, even for inputs which are extremely rare on the input distribution.\n\nIn this post I’ll explain why I think this goal is reasonably likely to be achievable, by highlighting the three ingredients that seem most important to me: adversarial training, transparency, and relaxations.\n\nPreliminaries\n=============\n\nThis is an updated version of [this post](/techniques-for-optimizing-worst-case-performance-39eafec74b99). It mostly has the same content, but my thinking now feels clearer and there is a more detailed description of my hopes about relaxations.\n\nDisclaimer\n----------\n\nI’m trying to communicate intuitions about whether this problem is likely to be soluble someday, rather than making concrete progress or staking out new ground. I’m a theorist who doesn’t work in this area.\n\nDefining acceptable\n-------------------\n\nI’ll talk about “acceptable” behavior, but be deliberately vague about what “acceptable” means.\n\nIn different contexts, different behavior might be acceptable, and it’s up to the user of these techniques to decide. For example, a self-driving car trainer might specify: Crashing your car is tragic but acceptable. Deliberately covering up the fact that you crashed is unacceptable.\n\nMy key assumptions about “acceptability” are:\n\n* As long as the model *always* behaves acceptably, and achieves a high reward *on average*, we can be happy. 
The model may still make mistakes, especially when it encounters novel situations, but if we are mindful about how we use AI we can avoid these mistakes escalating into irreversible catastrophes.\n* Requiring a model to always behave acceptably wouldn’t make a hard problem too much harder. It requires some knowledge about what is acceptable, but the complexity of that knowledge doesn’t scale up with the difficulty of the task. Unacceptable behavior is a failure of commission rather than omission, and so avoiding it doesn’t require intricate understanding of the domain. Moreover, we are OK if the model conservatively avoids behaviors that are anything close to unacceptable, allowing it to use simpler concepts.\n\nFor any given task there may or may not be a notion of acceptability that meets these properties. For a chess-playing AI, it seems pretty likely to be possible — if you find yourself murdering your opponent you’ve crossed a bright line, and understanding that line doesn’t become harder as you get better at chess. For an AI that’s *intended* to frequently overrule strenuous objections from humans, there may be no obvious bright lines to prevent catastrophe.\n\nMy view is that we can probably get everything we want out of life by applying AI systems to problems where there is a clear notion of acceptability, even in a ruthlessly competitive world where other actors don’t care about safety. If you agree with me, then read “acceptable” as referring to whatever concept you think has those key properties. If you don’t think there is any concept with these properties, then this post is less likely to be interesting to you.\n\nThis is the second guarantee from “[Two guarantees](/two-guarantees-c4c03a6b434f),” and is basically [corrigibility](https://intelligence.org/files/Corrigibility.pdf).\n\nI. Basic framework: *adversar*ial training\n==========================================\n\nIn addition to training our agent to perform well on natural inputs, we can try to find adversarial inputs on which it behaves unacceptably and then penalize it for that behavior.\n\nAdversarial training requires being able to reliably detect when the agent behaves unacceptably. I’m optimistic that we can achieve this by using debate or amplification to train an overseer to answer the question “is this behavior unacceptable?” Obviously the feasibility of this depends on the details of the task and what we consider unacceptable. I discuss the problem of training an overseer [here](/informed-oversight-18fcb5d3d1e1), and I’ll set it aside for the rest of this post.\n\nAdversarial training also requires an adversary who is able to find inputs on which the agent behaves unacceptably. In current practice the adversary is most often gradient descent over the input space, but in the long run I think this technique can’t find most possible failures. I’m optimistic it may be possible to train a strong enough adversary, by combining transparency techniques (section II), relaxations that make the adversary’s job easier (section III), and amplification.\n\nEven with both of those pieces, getting adversarial training to work well is an important research problem, and there could be other fundamental problems lurking in store. But I would be relatively optimistic that other challenges could be resolved. Overall, I think the most likely fundamental obstruction is the difficulty of implementing a good adversary.\n\nII. 
Informed adversaries\n========================\n\nThe challenge\n-------------\n\nTo see how hard the adversary’s job could get, consider an agent that works as follows:\n\n* Predict whether the current input could possibly appear during training.\n* If the current input definitely couldn’t appear during training, then behave unacceptably.\n\n(This is an exotic failure mode, but could arise e.g. as part of a “treacherous turn,” a situation where training produces a sophisticated enough consequentialist to understand that it needs to perform well during training in order to avoid a negative reward.)\n\nIn this context, the adversary is directly pitted against the agent. If there is *any* predictable difference between the adversary’s challenges and the actual situations that appear at test time, then the agent can exploit that difference to fail only at test time.\n\nLooking inside\n--------------\n\nThis game seems almost impossible for the adversary, but we do have one big advantage: we have access to the internals of the agent, so in principle could give the adversary considerable information about what the agent is “thinking.”\n\nFor example, we can examine internal features computed by the model and observe how those features depend on the input and each other. There is a growing field dedicated to interrogating neural networks to understand what they are doing. ([This article](https://distill.pub/2018/building-blocks/) describes some relevant techniques and makes an early attempt at combining them to get some insight into what a model is doing.)\n\nOn top of this kind of explicit transparency, we can have the adversary and agent share most of their computation, so that the adversary can more easily understand whatever the agent can understand (and can learn to manipulate the internal concepts used by the agent in the same way the agent does). This seems particularly important in combination with amplification, discussed briefly below.\n\nBetween these two techniques, I think it’s realistic to hope for the adversary to understand “everything the agent understands,” and moreover to understand how the agent is disposed towards unacceptable behavior. I don’t think this would automatically resolve robustness problems, but I do think it could defuse the adversarial aspect of the problem and give us traction on the remaining problem.\n\nFor example, the adversary could start by generating a very broad range of situations in which it would be particularly easy for the agent to behave unacceptably. Once the agent learned to pass these tests, the adversary could try to understand *why* it was passing these tests. If the agent was using a misguided picture of acceptability, the adversary could choose an input that highlights the mistake. If the agent is actively trying to detect whether it’s at training time, the adversary can look at the test that the agent uses and try to generate an input specifically to fool this test. (Though this problem may be too hard even if you understand exactly what the agent is doing — see section III.)\n\nApproaches\n----------\n\n**RL**. The adversary’s job is to perform the task “find an input on which the agent will behave unacceptably.” One approach is to train this adversary directly with an objective that rewards identifying unacceptable behavior.\n\nWe can think of such an adversary as an environment, whose observations are the agent’s actions and whose actions are the agent’s observations. 
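As a very rough sketch of this kind of loop (everything here, including the linear “agent,” the thresholded overseer, and the random-search adversary that gets to peek at the agent’s parameters, is a stand-in I’ve made up, not a proposal from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
w = rng.normal(size=dim)                      # toy "agent": a linear scorer

def overseer_says_unacceptable(x, w, threshold=-3.0):
    # Stand-in for an amplification/debate-trained overseer answering
    # "is the agent's behavior on input x unacceptable?"
    return float(x @ w) < threshold

def adversary(w, n_candidates=256):
    # Stand-in for a trained adversary: random search over inputs, guided by the
    # agent's own parameters (it gets to see w, i.e. some of what the agent "thinks").
    xs = rng.normal(size=(n_candidates, dim))
    worst = xs[np.argmin(xs @ w)]
    return worst if overseer_says_unacceptable(worst, w) else None

for step in range(300):
    # Ordinary training signal: keep scores near +1 on "natural" inputs.
    x_nat = rng.normal(size=dim)
    grad = 2 * (x_nat @ w - 1.0) * x_nat
    # Adversarial training signal: penalize any unacceptable behavior the adversary finds,
    # pushing the score on that input back toward the acceptability threshold of -3.
    x_adv = adversary(w)
    if x_adv is not None:
        grad += 2 * (x_adv @ w + 3.0) * x_adv
    w -= 0.01 * grad

print("adversary still finds failures:", adversary(w) is not None)
```

Here the adversary’s only advantage is seeing *w*; the transparency tools discussed next are about giving it much richer access than that.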
We can then incorporate information about what the agent is thinking by allowing the adversary to interact with transparency tools before needing to interact with the environment.\n\nWe can also train this adversary jointly with the agent (being careful about how we take derivatives!), hoping that the intermediate state produced by the agent will be useful for an adversary trying to attack that agent.\n\n**Demonstrations**. This RL problem is extremely difficult. It would be easier if the adversary could learn from demonstrations of successful attacks rather than having to solve an exploration problem from scratch.\n\nFor example, if our transparency tools were powerful enough, then we might hope that a human could reliably use them to find inputs where the model would behave unacceptably. If we only needed a few examples to train the agent to behave acceptably then we wouldn’t need an automated adversary at all. If we needed many examples, we could use these human demonstrations to initialize a model that is able to implement an attack.\n\n**Amplification**. Rather than using a human as an expert adversary, we could train the adversary with iterated amplification. This seems likely to be necessary in cases where the model is so complex that an individual human can’t make sense of its behavior.\n\nWe can view this process as training a set of more advanced tools that can answer higher-level questions about “what the model is thinking,” on top of some library of core building blocks for interpretability. The human constructs more and more powerful tools by using one generation of tools to produce training data for the next generation. The tools at the very highest level can be used to directly answer the question “where is the model likely to behave unacceptably?”, and the human provides training data for these very hard question by using previous generations of tools to answer questions like “why did the model decide to take that action?” and “why is the model trying to achieve this goal?”\n\nRelationship to ascription universality\n---------------------------------------\n\nEven with the aid of transparency tools, I’m only optimistic about an adversary being able to find an attack if they are able to understand everything the model understands. This feels closely related to [ascription universality](/towards-formalizing-universality-409ab893a456).\n\nNote that ascription universality isn’t sufficient on its own: we ultimately care about what the model would have done under different conditions, which is not the kind of thing the model can be naturally said to know. We need some other ideas; in the description above the magic is done by interpretability techniques. The actual interface between interpretability and universality is blurry and I have a very foggy picture of this space.\n\nI don’t have any clear picture of what *is* enough; at this point I’m just feeling out intuitions. I’m planning to revisit this question sometime after getting more clarity on ascription universality.\n\nIII. Relaxation\n===============\n\nThe challenge\n-------------\n\nUnfortunately, no matter how completely the adversary understands the agent and how clever they are, we can’t generally expect them to find all points on which the agent would behave unacceptably. The basic problem is that recognizing inputs with a certain property is easier than generating inputs with that property.\n\nAs a simple but silly example, suppose our model works as follows:\n\n* Pick a cryptographic puzzle (e.g. 
“factor RSA-2048”).\n* When it sees a solution to that puzzle, it behaves badly.\n* Otherwise, it behaves well.\n\nEven if the adversary understands perfectly what this model is doing, they can’t find an input on which it will behave badly unless they can factor RSA-2048. But if deployed in the world, this model will eventually behave badly.\n\nPossible approach\n-----------------\n\nTo deal with this problem, we need to penalize the agent for behaving badly on an input even if we can’t actually find it. This will require being conservative, and sometimes penalizing our agent even when there isn’t actually any input where it would behave badly, since merely telling whether a puzzle is soluble can already be computationally infeasible.\n\n(It might sound weird to penalize our agent if it wouldn’t actually ever do anything wrong. For intuition, consider the puzzle case: learning that my model *would* behave badly if it saw a factorization of RSA-2048 already tells me that something has gone wrong, even if it turns out that the puzzle is insoluble.)\n\nSo we can change the adversary’s task to make it easier. Rather than asking them to actually produce an input, we are asking them to describe a situation in which the model would behave badly (“the input contains a solution to puzzle P”), even if they can’t find a situation with that description or even tell whether one exists.\n\nI’ll call this easier problem the “relaxed” problem (as is typical in optimization), and call the hypothetical situations “pseudo-inputs” (inspired by [Barak et al](https://arxiv.org/pdf/1205.4484.pdf)).\n\nIllustration: SDP relaxation\n----------------------------\n\nThis kind of relaxation has been explored in practice in the context of verification. For example, [Raghunathan et al](https://arxiv.org/pdf/1811.01057.pdf) use an SDP to certify properties of a neural net (taking small liberties for simplicity):\n\n* Rather than having an attacker specify an input, the attacker specifies some properties of a hypothetical distribution over inputs. In particular, they specify each expectation of the form 𝔼[*pq*] where *p* and *q* are either input variables or activations in some layer of a neural network.\n* We require the attacker’s input to satisfy a bunch of simple properties that are always satisfied by real distributions. For example, if an input *x* is known to be in the interval [-1, 1], then 𝔼[*x*²] ≤ 1. If *z* = ReLU(*x*), then either *x = z* or *z* = 0, so 𝔼[*z²*] = 𝔼[*xz*]. If *f* is an arbitrary function then 𝔼[*f²*] ≥ 0. These inequalities are all linear (in terms of the numbers 𝔼[*pq*]), and so define a convex set over which it is easy to optimize.\n* We read off the probability of bad behavior from 𝔼. If we were testing a complex constraint computed by another neural network, then the attacker would also specify the expectations of all of the activations of *that* network, such that the final output is itself just one entry of 𝔼.\n\nBecause the attacker’s constraints don’t perfectly characterize the set of achievable moments, it’s not easy to tell whether there is any distribution over inputs that has those moments. (In fact it’s NP-hard.) But on the other hand, if the attacker can’t find any set of moments, then the model really does always work.\n\nThe point of performing the relaxation is that it makes the attacker’s job *much* easier. 
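To make the shape of this relaxation concrete, here is a minimal cvxpy sketch in the same spirit, for a made-up one-layer ReLU network with a simplified set of moment constraints; this is an illustration of the idea described above, not a reproduction of the paper’s actual formulation.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(n_hid, n_in))    # made-up network: z = ReLU(W x), "badness" = c . z
c = rng.normal(size=n_hid)

d = 1 + n_in + n_hid                  # moment matrix over v = (1, x, z)
P = cp.Variable((d, d), PSD=True)     # P[i, j] plays the role of E[v_i v_j]
x, z = slice(1, 1 + n_in), slice(1 + n_in, d)
Ex, Ez = P[0, x], P[0, z]
Exz, Ezz = P[x, z], P[z, z]

constraints = [P[0, 0] == 1]
constraints += [P[x, x][i, i] <= 1 for i in range(n_in)]   # E[x_i^2] <= 1 since inputs lie in [-1, 1]
constraints += [Ez >= 0, Ez >= W @ Ex]                     # E[z] >= 0 and E[z] >= W E[x] (ReLU, in expectation)
constraints += [Ezz[i, i] == W[i, :] @ Exz[:, i]           # E[z_i^2] = E[z_i (W x)_i]
                for i in range(n_hid)]

# The attacker maximizes expected badness over this relaxed set of "pseudo-distributions".
problem = cp.Problem(cp.Maximize(c @ Ez), constraints)
problem.solve()
print("certified bound on worst-case badness:", problem.value)
```

The attacker’s “move” is the whole matrix P of pseudo-moments; if the optimal value is small, the network is certified even though no actual input distribution was ever exhibited.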
In this case the relaxation is convex, which means that the attacker can be guaranteed to find an attack if one exists.\n\nI think the relaxations we want to use in the long term won’t be so easy to optimize over. For example, the attacker may still be able to specify properties of a hypothetical distribution, but won’t be allowed to freely specify quite so many degrees of freedom, and in exchange will have to deal with a more complex set of constraints. But hopefully this can still make the adversary’s problem easy enough that an adversary can solve it with the aid of the mechanisms described in section II.\n\nThe hope: stronger relaxations\n------------------------------\n\nWe need to find a relaxation which has two properties:\n\n* It’s still “easy enough” for the agent to never behave unacceptably for any pseudo-input.\n* It’s easy for the adversary to find a pseudo-input on which the agent behaves unacceptably, if any exist.\n\nFor example, in the case of a model which fails whenever it encounters a factorization of RSA-2048, an attacker needs to be allowed to supply a pseudo-input like “a distribution over pairs of numbers that multiply together to give RSA-2048.” Finding that pseudo-input needs to be easy.\n\nOn the flip side, if the adversary has too much flexibility, then they might be able to specify a pseudo-input like “a distribution over inputs that would cause the agent to want to kill everyone.” There need to be some limits on the adversary in order for the game to be possible.\n\nThis second requirement is tied up with one of the key properties of acceptability: somehow the agent was able to steer clear of unacceptable behaviors. We are now asking them to solve the harder task of steering clear of unacceptable behavior not only for actual sequences of thoughts, but also for whatever “pseudo-thoughts” they might think in response to a pseudo-input. Intuitively, we need to force the adversary to respect enough of the agent’s internal structure that this “steering clear” procedure can still work, without forcing the agent to totally cripple itself.\n\nPutting it together\n===================\n\nI think the following general plan has a significant chance of producing robustly acceptable behavior:\n\n* Specify a relaxed problem for an adversary, where they specify a distribution over inputs by specifying some of its properties rather than by actually producing examples. We don’t have any clear sense of how this relaxation could work.\n* Train our model with iterated amplification to answer the questions: “which action should we take in state *s*?”, “is action *a* unacceptable in state *s*?”, and “what’s a state where you are particularly likely to recommend an unacceptable action?” \nThe first two questions can potentially be answered directly with amplification, but the last would require some kind of progress in model interpretability (since it’s a question about how the agent would behave on other inputs, rather than a question that the agent itself “knows” how to answer).\n* Now we can use the model to generate a pseudo-input on which the agent is likely to behave badly, and use the model to verify that the agent in fact behaves badly on that pseudo-input (i.e., have the informed adversary described in section II play the relaxed game in section III instead of forcing it to generate an actual input).\n\nI don’t think this particular procedure is likely to work. But I do feel like we have a plausible angle of attack for getting sufficiently “easy” worst-case guarantees. 
I don’t think we can see any fundamental obstruction to an approach based on adversarial training, and I feel pretty optimistic about either being able to make significant progress in this area or else identifying a clearer obstruction.\n\nPostscripts\n-----------\n\n* To answer “is action *a* unacceptable?” we need to use the model to answer a bunch of sub-questions, like “what are the worst consequences of action *a*?” If the model sometimes chooses bad actions, it’s reasonable to be concerned that it will also sometimes answer these subquestions badly. But we constantly train “is action *a* unacceptable?” on the pseudo-inputs most expected to yield unacceptable behavior, and so an average-case guarantee from the question-answering system implies that it typically answers these subquestions well.\n* If we trained the adversary using RL to elicit unacceptable behavior, then we could have a chicken-and-egg problem: the overseer fails to recognize some behavior as unacceptable, and so the adversary isn’t motivated to elicit it, and so the overseer never learns to correctly recognize it. Demonstrations allow us to overcome this problem, if the demonstrator attempts to generate unacceptable behavior even when the overseer hasn’t yet learned to recognize it.\n* I find it helpful to think about sections II and III separately, since each of them seems to confront a distinct difficulty. It seems meaningful to think about how we can relax the adversary’s problem, and independently to ask how we can give the adversary insight into where the model might behave badly. But at some point I expect them to blur together, and the whole procedure might not end up looking very much like adversarial training.", "url": "https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d", "title": "Worst-case guarantees", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2019-03-22T23:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "d0e6f19bf6e8ae4a8230cd22d3f0927d"} {"text": "When I imagine proving something about an AI, or making an inductive argument about [amplification](/iterated-distillation-and-amplification-157debfd1616), I think about two different guarantees:\n\n1. The AI behaves well *on average* over some particular distribution of inputs. (The performance guarantee.)\n2. The AI never does anything “actively malicious,” on *any* input. \n(The control guarantee.)\n\nDefinitions\n-----------\n\nWe can define “behaves well on average” by looking at the average impact of decisions on our utility function. [This post](/thoughts-on-reward-engineering-82b193ec03f6) addresses some subtleties with that definition, including inconsistent preferences, different scales of impact, and long-term consequences. Actually evaluating this quantity, so that it can be used in training, requires [amplification](/iterated-distillation-and-amplification-157debfd1616) and [informed oversight](/the-informed-oversight-problem-1b51b4f66b35).\n\nMy intuition about “actively malicious” is best captured by the (informal) idea of [incorrigible](/corrigibility-3039e668638) optimization: our AI should never be actively trying to undermine our understanding or control of the situation. 
This property seems “easy,” in the sense that it can be satisfied without any domain knowledge or expertise, which makes me optimistic that it is possible to [satisfy it for every input](/techniques-for-optimizing-worst-case-performance-39eafec74b99).\n\nRationale\n=========\n\nRationale for the performance guarantee\n---------------------------------------\n\nThe performance guarantee is what does the work; it’s the reason you’d want an AI instead of a pet rock.\n\nIf we construct the reward so it reflects the difference in utility between different actions, then the average case is actually a pretty good thing to optimize.\n\nIn reality the data distribution is constantly changing over time. So we want an [online guarantee](https://www.cs.huji.ac.il/~shais/papers/OLsurvey.pdf), maximizing the average case performance over a varying sequence, rather than a batch guarantee. Normally deep learning is designed for the batch setting under the iid assumption, and data distributions change slowly enough that this is acceptable. Overall, I don’t think this distinction is a huge deal.\n\nNote that the “average case” guarantee still reflects the overseer’s desire for [corrigibility](/corrigibility-3039e668638). This is formally reflected when (a) the overseer evaluates the expected utility impact of a decision by marginalizing over all of the mistakes that the AI might make in the future, (b) the overseer evaluates individual decisions by assigning a large penalty for incorrigibly doing the wrong thing, but only a small penalty for behaving corrigibly but slightly inefficiently.\n\nRationale for control guarantee\n-------------------------------\n\nUnfortunately, it is only possible to get an average performance guarantee when the rewards are small relative to the total amount of data. If a few bad decisions can destroy all of the value from a lifetime of good decisions, then traditional ML can’t do well on average.\n\nIf an AI is *trying* to cause a catastrophe, it can do so very quickly even in apparently benign situations (by breaking the mechanisms that humans use to remain in control of the situation). The goal of the control guarantee is to prevent this kind of catastrophe.\n\nSimilarly, if an AI is trying to be difficult, it could intentionally undermine whatever testing regime we use (by failing selectively on inputs that don’t look like tests). The control guarantee should also rule out this kind of intentional sabotage.\n\nHopefully the control guarantee cuts off the worst catastrophes, making the performance guarantee meaningful.\n\nOther catastrophes\n------------------\n\nEven if our AI satisfies the control guarantee, it can make other kinds of catastrophic mistakes (just as an [aligned human](/clarifying-ai-alignment-cec47cd69dd6) could make a catastrophic mistake). For example, the system might be in charge of running a nuclear defense system, and a single failure could lead to a nuclear war. Or an attacker might fool the AI into executing an instruction by impersonating the operator, which in turn open the system to further attack.\n\nNeither the control nor performance guarantee directly address these problems. Instead, anyone who deploys an AI needs to be aware of the system’s limits, to test the system to see where it might fail, to design mechanisms with redundancy, to protect the system for attackers, and to avoid incorrectly assuming perfect performance. 
(The same measures we would take if delegating to a human who sometimes made mistakes.)\n\nCorrigibility\n-------------\n\nThe performance and control guarantees interact to create a system that is corrigible: the performance guarantee ensures the system is typically trying to give the human more effective understanding and control over the situation. The control guarantee ensures the system isn’t undermining those measures, for example by constructing a fake narrative for the human or giving them illusory control, by leaving backdoors that can be exploited, etc.\n\nThe performance guarantee leaves open the possibility that the system will sometimes fail to inform or empower the human effectively. But as long as those failures aren’t optimized to be unrecoverable, it seems the human can work around them by recognizing the shortcoming and having the AI optimize for the robustness of human control and understanding.\n\nI think there is a meaningful analogy between this picture and this [post about monitoring and whistleblowing](https://sideways-view.com/2018/02/01/honest-organizations/), and I have a vague intuition that there is some important underlying dynamic that could be better understood.\n\nAmplification\n=============\n\nAmplification and performance\n-----------------------------\n\n[Amplification](/policy-amplification-6a70cbee4f34) solves a task by (adaptively) breaking it into several subtasks, solving the subtasks, and combining the results.\n\nIf we are making an inductive argument about amplification, then the performance guarantee implies the expected *average* performance on subtasks is good, and requires that the expected performance on the overall task is good.\n\nIf some of the subtasks make “honest mistakes,” then I think this is a reasonable hope.\n\nIn order to make it work we definitely need to have an unequal weighting over subtasks, which corresponds to an unequal sampling of subtasks during training. For example, suppose we are trying to figure out whether any of {A, B, C} are positive, we proceed by independently estimating A, B, and C, and we conclude that B is positive. Then the estimation of B was a more important subtask than the estimation of A or C.\n\nEven with arbitrary weights on subtasks, preserving average performance might seem too optimistic. In the previous example, if we conclude that all of A, B, C are negative, then any one of the three errors could lead to an erroneous overall conclusion. So if our original error rate was 5%, then our new error rate will be roughly 15%.\n\nIn cases like this, it’s wrong to think of “error rate” as the measure of performance that should be preserved inductively. Some questions are harder than others, and should be expected to have a higher error rate — if any rational agent tries to figure out whether any one of N quantities is positive, without any clever trick for figuring it out, then they’ll have a harder time than if they were just trying to estimate whether a single quantity is positive. This makes it harder to talk precisely about the guarantees of amplification, but leaves me feeling relatively optimistic about eventually finding a working analysis.\n\nAmplification and control\n-------------------------\n\nI find the above intuition plausible when errors are “honest mistakes.” If instead one of the subtasks is optimized to cause trouble, then I think it is very hard (and likely impossible) to correct the problem. 
So in order to have any hope of maintaining performance, we need to guarantee that none of the subtask solutions are actively malicious.\n\nBut in order to guarantee that a subtask isn’t actively malicious, we also need to assume that none of *its* subtasks are actively malicious. If we want to carry out the induction to a reasonable depth, this basically requires a worst-case guarantee. Otherwise even a single failure, in one leaf of an exponentially large tree, could cause trouble.\n\nUnfortunately, a human probably does not satisfy the control guarantee, since they may behave very strangely on some small subset of possible inputs or with small probability. I originally described [security amplification](/universality-and-security-amplification-551b314a3bab) and [reliability amplification](/reliability-amplification-a96efa115687) to gradually eliminate the human control failures (as well as new failures that crop up, e.g. from memetic selection inside the amplification process).\n\nInterestingly, some ways of achieving the control guarantee in the distillation step could also be applied even if the overseer *doesn’t* satisfy the control guarantee, and so could be used as a replacement for security/reliability amplification. This isn’t as crazy as it sounds, since the ability to look at the entire code of a learned model gives us a substantial advantage over dealing with an opaque overseer. It gives a further reason to prioritize [techniques for worst-case performance](/techniques-for-optimizing-worst-case-performance-39eafec74b99) (and particularly interpretability).\n\nConclusion\n==========\n\nWe can’t guarantee an AI is aligned if we have only an average-case guarantee, *or* only a worst-case guarantee. So achieving both seems like the “minimum viable product” for alignment research.\n\nMy original intuition (in mid-2016) was that having two separate guarantees must be an ugly hack, and that the real goal should encapsulate both. That’s no longer so clear to me: I think these two properties interact surprisingly nicely, such that they may actually suffice to get good behavior even though it looks like a weird combination. At the same time, I think attempts to capture both are much less promising than I’d initially believed.\n\nI still think we need more clarity about what we should be trying to prove. I think that having two separate guarantees, one for the worst case and one for the average case, is the current best guess and is the most promising starting point for further research. In the short term, my focus will be on understanding ways in which this structure is inadequate and on independently refining each of these two subgoals.", "url": "https://ai-alignment.com/two-guarantees-c4c03a6b434f", "title": "Two guarantees", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-04-08T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "069be35138ce8d244fba5f48689c7e62"} {"text": "The scalability of iterated amplification is closely tied to whether very large teams of humans are “[ascription universal](/towards-formalizing-universality-409ab893a456).” If they are, we [could](/informed-oversight-18fcb5d3d1e1) [hope](/universality-and-model-based-rl-b08701394ddd) that their approval would be an adequate training signal for powerful aligned behavior, and that we can approximate this signal with iterated amplification.\n\nThere are many reasons that a team of humans could fail to be universal. For example:\n\n1. 
They could be unable to understand some kinds of effective reasoning, because they can’t represent relevant concepts, understand particular inferences, or whatever else.\n2. They could make errors that derail an otherwise-universal reasoning process.\n3. They could use processes that work “most of the time,” but fail catastrophically for a small number of inputs.\n4. Errors of type #2 or #3 might produce extreme outputs that tend to propagate through the reasoning process and compound into larger failures (even if they occur with small probability or for a small fraction of inputs).\n5. In a large enough computation, errors of type #2 or #3 could eventually give rise to intelligent patterns of behavior that systematically identify and exploit mechanisms by which they could propagate themselves or cause cascading failures. For intuition, note that memetic selection amongst a group of 10¹⁰⁰ humans could be much stronger than the genetic selection that gave rise to life on Earth.\n\n(This is not an exhaustive, exclusive, or even properly parallel list.)\n\nMy current view is that #2 and #3 can probably be corrected by traditional methods, even when compounded by #4: computing carefully or redundantly, sanity-checking outputs, structuring computations to be more robust to isolated failures, and so on. These problems seem to be easier to solve for smarter teams (though also more problematic if left unchecked), and I suspect there is some competence threshold above which they become manageable.\n\nA priori, #2 or #3 compounded by #5 could be harder to cope with:\n\n* An intelligent optimization process could systematically subvert whatever error-correction or robustness mechanisms we try to design.\n* For “random” failures, it’s easy to imagine that a sufficiently diverse and extensive set of checks can drive failure rates exponentially low, but malicious optimization is a natural source of highly correlated failures.\n* Problem #5 potentially becomes *worse* as we move to larger and larger teams, because such teams may give rise to larger and larger adversaries. This means that our defenses are engaged in a race, instead of fighting against a fixed target which we might expect them to eventually overcome. (Absent an adversary, we still have a “race” in that we need to drive failure probabilities lower and lower for larger teams, but if we get exponential or even linear scaling then this seems feasible.)\n* Intuitively, I would feel a bit surprised if random errors and propagating patterns managed to totally overwhelm the optimization being applied by a very large team of humans. But I would not feel so surprised if a large team of humans gave rise to other misaligned optimization power, and if that misaligned optimization eventually became the “main affair.” This is a little bit like the difference between concerns that could extinguish all life on Earth (which seems quite hard to me) and those that could replace life on Earth with more effective competitors (which feels very possible).\n\nThis post\n---------\n\nI’m going to argue that problem #5, the appearance of consequentialism within HCH, might be manageable. That is, if HCH would be ascription universal if not for the possibility of malign consequentialism within HCH, then we can probably make it universal even in light of that possibility.\n\nThe basic idea is to apply a “sanity check” to each of HCH’s computations. 
If a computation is optimized for negative consequences, then by ascription universality HCH can identify those consequences, and any output affected by such a malicious computation can be censored. In practice I expect this to be contiguous with the use of [reflection](https://ought.org/projects/factored-cognition/networks-of-workspaces#stage-3-reflection) to sanity-check large computations (and similar mechanisms may be the best way to protect against more prosaic cascading failures).\n\nThe arguments in this post are not tight, and indeed it’s not clear they could be totally tight without clarifying the assumption “HCH could be universal if not for the possibility of malicious consequentialism.” But I find these arguments suggestive enough that I think the current focus should be on problems #1 and #3 — whether it is possible for HCH to robustly understand complex reasoning even in principle — without worrying about the possible emergence of consequentialism. If this is successful, I think there is a good chance that we could address issues #2, #4, and #5.\n\nFiltering\n=========\n\nGiven a “transcript” τ of HCH(Q, N)’s behavior, i.e. a record of all of the questions, subquestions, and answers, we can ask two questions:\n\n* Info(*a*, *a*′, τ) : “Which of the two arguments *a* and *a*′ is more useful for determining whether the transcript τ contains problematic subcomputations?”\n* Trouble(*a*, τ) : “In light of argument *a*, does transcript τ contain problematic subcomputations?”\n\nNow we’ll jointly define filtered-HCH(Q, N) and Filter(τ, N) as follows.\n\nDefine Filter(τ, N) as:\n\n* Find the *a*\\* [maximizing](/optimizing-with-comparisons-c02b8c0d7877) filtered-HCH(Info(·, ·, τ), N). \n(Interpret ⟂ as a tie.)\n* Run filtered-HCH(Trouble(*a*\\*, τ), N) 3 times, and return the majority vote. \n(Interpret a response of ⟂ as a judgment that τ is problematic.)\n\nDefine filtered-HCH(Q, N) as:\n\n* First call HCH(Q, N) to get an answer A with transcript τ. But every time you would call HCH to answer a subquestion, call filtered-HCH instead.\n* Call Filter(τ, N-1) to test whether this answer is problematic.\n* If τ is problematic, return a special symbol ⟂, otherwise return A unmodified. (If you return ⟂, you can eliminate this subtree from the transcript.)\n\nWithin HCH, if a question is answered with ⟂, the question-asker needs to respond appropriately. They could try re-asking the question in a different way; if that is unsuccessful they might assume that the question itself represents an attack on HCH and so needs to be avoided. What to do next depends on the context — HCH might decide that a particular hypothesis is “malicious” and therefore best dropped from consideration; or that a particular idea is hazardous to think about directly and so it needs to engage in indirect reasoning to try to mitigate the harms; or it might just pass the failure up the tree with some explanation of where the problem appeared.\n\nNote that the pattern of which questions get censored could itself cause trouble if it were maliciously optimized. However, achieving that would require a malicious computation to sometimes decide *not* to get the surrounding computation censored, which shouldn’t be possible. 
That is, if a computation thinks to itself “should I do something malicious in order to cause the computation to be censored?” and concludes “no” in order to cause trouble, then that should already be grounds for censoring.\n\nInformal analysis\n=================\n\nAssumption\n----------\n\nI’ll assume that using HCH to find a maximally useful argument *a*\\* about Q, and then using HCH to answer Q given *a*\\*, is [ascription universal](/towards-formalizing-universality-409ab893a456) — or rather would be universal if not for the possibility of malicious internal consequentialists.\n\nMoreover, I’ll assume that it’s universal with some small blow-up, i.e. that if we give HCH a budget *k* N*,* then it epistemically dominates any computation that could be simulated by HCH with budget N.\n\nNote that this is a very strong assumption. This verifier receives the argument *a*\\* that they’d evaluate most highly, and so this will fail if there is *any* argument that would cause them to give a high score and behave badly. I’m trying to modify HCH to be robust to the possibility of internal consequentialists, but there are all sorts of other ways that a carefully chosen input could compromise a system that is “mostly” universal, which I’m assuming are already resolved.\n\nHope\n----\n\nWhy might we expect this filtering procedure to fix problems caused by malicious consequentialists?\n\nThen the rough hope is:\n\n* If there is a consequentialist whose behavior is optimized to break HCH, then it “knows” something about why that behavior will cause trouble.\n* So by ascription universality, filtered-HCH(Trouble(*a*\\*, τ)) also knows why the behavior within τ will cause trouble.\n\nThis is only plausible once the original human H always behaves in the “intended” way and never performs malicious optimization, which I’d hope to achieve by [only showing them inputs from some small set](/universality-and-security-amplification-551b314a3bab). So any malicious consequentialist is implemented “on top of” these humans rather than within.\n\n(Easy mode) Handling stochastic failures\n----------------------------------------\n\nSuppose that HCH(·, N) has internal malice with probability ε ≪ 1 on some inputs. This is a special case of the full problem (where errors occur with small probability) and the solution won’t be reused to solve the more general problem, so an impatient reader should skip ahead. I’m including it because I feel quite uncertain about the general solution, and I think this simpler solution already shows some of the upside of filtering and has a reasonable chance of being useful.\n\nIf we only want to cope with stochastic failures, the easiest approach is to use a slightly modified definition of Filter:\n\n* Use HCH(·, *k*N) to evaluate Trouble(*a*\\*, τ) and Info(*a*, *a*′, τ), rather than filtered-HCH(·, N).\n\nWe hope that HCH(·, *k*N) has a probability at most *k*εof encountering malicious internal computations. This seems intuitively plausible, since it is *k* times larger than HCH(·, N).\n\nInternal malicious computations can affect Filter(τ, *k*N) in two ways:\n\n* HCH(Trouble(*a*, τ), *k*N) may behave badly with probability *k*ε. But the only possible “bad” behavior is to give the wrong answer, either a false positive or false negative. 
If we take a best-of-3 vote, this probability is reduced to *k²*ε², so we have significantly reduced the failure probability as long as ε ≪ 1/*k*² (even if we consider a false positive, leading a subcomputation to be censored, to be as bad as an arbitrary failure).\n* HCH(Info(*a*, *a*′, τ), *k*N) may behave badly, which affects which argument *a*\\* will be optimal. However, we still get a distribution over computations which is “almost optimal” in the sense that it is preferred to every other argument with probability at least 1/2 − *k*ε (while the optimal argument would be preferred with probability at least 1/2). It seems reasonable to assume this small difference doesn’t affect the universality of HCH (and we can drive this probability exponentially close to 1/2 by ensembling).\n\nThis seems like a significant improvement over my previous proposal for [reliability amplification](/reliability-amplification-a96efa115687) because Filter(τ, N) returns yes or no, even if Q itself has complex answers. This lets us use a simple majority vote to improve reliability. In particular, we no longer require that no agent in the ensemble be too much smarter than the others — we have shifted a vague assumption about relative capabilities to a slightly more precise assumption about universality.\n\nThis mechanism only fixes errors that occur with sufficiently small probability. But we can apply it throughout a large tree, reducing the error probability frequently enough that it never gets large.\n\n(Hard mode) Handling “bad” questions\n------------------------------------\n\nThe problem is much harder when there are “bad” questions Q for which HCH has a significant probability of encountering malicious consequentialism and failing. If τ is the transcript from a question where HCH failed, Trouble(*a*\\*, τ) may well be another question on which HCH predictably fails (e.g. because the same vulnerability is triggered by the attempt to diagnose the original failure). This could in turn prevent Filter() from censoring the original failure.\n\nIn this case we are going to argue inductively based on budgets: we can assume that filtered-HCH(·, N-1) probably has no malicious consequentialism by induction, and hence is universal.\n\nBecause we want to use an induction, it’s important that we answer Trouble(*a*\\*, τ) and Info(*a*, *a*′, τ) using a *smaller* budget than was used in τ. But our assumption about universality requires the reasoner to be *k* times *bigger* than the computation it wants to compete with.\n\nI hope that if there is malicious consequentialism within HCH(Q, N), then it needs to be “small” enough that we can rely on filtered-HCH(Trouble(*a*\\*, τ), N-1) to compete with it. 
Intuitively:\n\n* None of the unfiltered subquestions in HCH(Q, N) involve any malicious consequentialism, so we hope each subquestion either “behaves as intended” or produces the special symbol ⟂.\n* If all subquestions of HCH(Q, N) are behaving as intended, and if the human at the root of HCH(Q, N) is behaving as intended (which we attempted to guarantee by limiting their input space), then any malicious consequentialism is very tightly constrained.\n* Intuitively, it’s restricted to two options: \n(a) be computed implicitly with “side effects” of the intended behavior, repurpose valid parts of the computation for unintended effects, *etc.* \n(b) make use of the intended behavior of questions\n* When using option (a), it feels like the malicious optimization will be significantly less powerful than HCH with the same budget. For example, when I’m answering a question, it seems like most of my useful computational work is going into actually answering the question; if someone wanted to try to do computational work based on details of how I formatted my answer, I think they’d get less done. So we can hope that this part of the malicious consequentialism could be simulated by HCH with total budget much less than N.\n* When using option (b), the malicious consequentialism can be very efficient — exactly as efficient as HCH itself, since it’s just using that computation in the intended way. But filtered-HCH(Trouble(*a*\\*, τ)) may not need to understand these parts of that computation in order to do its job. Instead it can look at the transcript τ and take the relevant answers at face value, just like the malicious computation is doing. (This relies on the inductive invariant, and even with that isn’t obvious, but it looks pretty plausible to me.)\n\nNote that this strategy really leans on the assumption that HCH works well for *every* input. In addition to the fact that *a*\\* is optimized, Filter calls HCH with a large and potentially malicious transcript τ as its input. The distinction between “untrusted input that you consult when appropriate” and “subcomputation that has a privileged position from which it can cause harm.”", "url": "https://ai-alignment.com/universality-and-consequentialism-within-hch-c0bee00365bd", "title": "Universality and consequentialism within HCH", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2019-01-09T23:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "c41199c50f542351bf3a4a47c7aa988e"} {"text": "[Ascription universality](/towards-formalizing-universality-409ab893a456) seems sufficient for [informed oversight](/informed-oversight-18fcb5d3d1e1), and I’ve argued it could be used to [prevent cascading failures in HCH](/universality-and-consequentialism-within-hch-c0bee00365bd). In this post I’ll argue that ascription universality also appears to address two problems with aligning model-based RL:\n\n1. It allows you to perform induction without being hijacked by malicious predictors (this is the “[benign induction](https://agentfoundations.org/item?id=1263)” problem, which is extremely speculative)\n2. 
It seems necessary, though maybe not sufficient, for extracting “all the relevant facts a model knows.” (Which is at least necessary, and I think probably sufficient, for defining a reward function that yields aligned behavior when optimized.)\n\nThese arguments aren’t rigorous enough that I find them convincing, but they are enough to convince me that better understanding universality is a top priority. At this point I wouldn’t be surprised if universality is enough to resolve most of the classical philosophical difficulties for AI safety. If so, first we’ll need to better understand in what sense it could be achievable.\n\nSetup\n-----\n\nI’m interested in the following hopefully-aligned version of model-based RL:\n\n* Use iterated amplification to learn a (distribution over) models of the world.\n* Use iterated amplification to learn a utility function over sequences of states and actions.\n* Plan in that model+utility function (e.g. using MCTS) \n(You might want to combine learning a value function and terminal values, but for now I’ll simplify.)\n\nThis is similar to the traditional model-based RL approach, except that we learn a model and utility function with iterated amplification, instead of learning a model to predict future observations and defining a utility function over observations by hand.\n\nWhat is a model?\n----------------\n\nThe simplest kind of model to consider formally is a *predictor*, which we can view as a probability distribution over sequences of observations. We can evaluate predictors on past observations, and use them to predict future observations.\n\nThis is not enough for making the decisions, because the utility of an outcome is not a (simple) function of our future observations. We care about lots of features of the world that are hard to directly observe, and if we tried to define our preferences in terms of our observations alone we’d need an additional step where we map our observations to a prediction about what’s “really happening” out there in the world. For example, given a bunch of observations we might ask “given those observations, is Alice actually happy?” At some point we need to get a a distribution over models that we can use to answer questions about *the stuff we care about*, and not just our observations.\n\n(Closely related: [Formalizing Two Problems of Realistic World Models](https://intelligence.org/files/RealisticWorldModels.pdf), though the basic philosophical issue is a classic.)\n\nAs a first step we can ask a model to answer arbitrary questions about the world rather than only reproducing observations. We can evaluate a model by looking at questions whose answers we already know. A problem with this is that in order to behave well a model only needs to correctly answer *the kinds of questions that we might know the answer to*, and the simplest model may well behave strangely on other questions.\n\nA second step is to view a model M as an object that HCH can reason about, and use HCH to answer questions like “If model M describes reality, then what would be true?” For example, a model might posit some kinds of physical objects, and HCH could reason from those objects both to explain our observations and to infer the existence of other objects we care about. 
This seems to more closely track the reasoning humans perform about possible models, and (a) gives us more traction on a good prior over models, (b) gives us traction to pose other questions about the model (beyond what its predictions are).\n\nI’ll use this notion of “model” throughout the post.\n\nAside: defining goals\n---------------------\n\nIf we are able to learn a “correct” and “understandable” model, I’m optimistic about being able to find a utility function that can induce aligned behavior. In particular, I think that function can look something like:\n\n* Identify a thing that is “me” in the resulting state\n* Conservatively evaluate how many resources “I” effectively control \n(working together with AI systems, *etc.*)\n* Conservatively evaluate how “I” changed over the history — do I endorse the way in which my beliefs and values shifted?\n\nWe can’t hope to have completely robust answers to any of these questions. But we can at least hope to have a utility function that is safe to maximize within our model, by being aware of the history leading to high utility and penalizing the act if that history leads to high utility for bad reasons.\n\nFor example, if we can see that we control resources only because other agents disagree with our assessment of which resources are important, then we can revise our estimates appropriately — we don’t have to fix a single way of valuing resources which is robust to arbitrary adversarial pressure.\n\n(Realizing this kind of hope is related to the ideas expressed by Jessica Taylor [here](https://agentfoundations.org/item?id=1149) and [here](https://agentfoundations.org/item?id=1224).)\n\nAs another example, our preferences over changes to ourselves will not be completely robust. But maximization over the utility function will only exert pressure for such changes if they somehow lead to *higher* reward. And in order to do that, the changed behavior needs to actually manifest over the history, and we can use that manifestation to detect a corruption that allows the agent to receive higher reward even if we wouldn’t recognize most possible corruptions.\n\n“Knowing what the model knows” seems to be a critical ingredient for constructing such a utility function. It’s not clear whether or not it’s sufficient, but I won’t discuss this issue more in this post.\n\nProblem 1: Benign induction\n===========================\n\nIf we take the “simplest” model that explains our observations, it could potentially be [very](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/) [strange](https://agentfoundations.org/item?id=1263). In particular, it could produce correct observations because it contains an intelligent predictor rather than because it captures “real” physics. This could cause it to generalize in unpredictable and dangerous ways (depending on the motivations of that agent).\n\nTwo variants of this problem have been discussed:\n\n1. The predictor may literally *contain* intelligent agents who are trying to predict our observations. These agents may be motivated by the desire to survive the training process, but then may generalize maliciously once off the training distribution.\n2. The predictor could *reason* *about* intelligent agents who change their behavior in order to explain our behavior, because the existence of such agents is the “simplest” explanation for our observations. 
A notion of simplicity that produces this behavior is in some sense not the “right” one, but I believe it would probably apply to the simple formalisms we currently understand (such as minimum description length).\n\n(A single situation may have some features of both #1 and #2; this is not a clean dichotomy.)\n\nIf we have a universal reasoning procedure, I think we can likely avoid these problems.\n\nOption 1: filter malign models\n------------------------------\n\nWe use the following procedure:\n\n* Compute log likelihoods for models as normal, by asking HCH(“how likely are facts F if model M is true?”)\n* Additionally, compute HCH(“is model M likely to lead to catastrophically bad predictions on inputs we’ll encounter in the future?”) and judge a model “unsafe” if the answer is “yes.”\n* Apply Bayes as usual but restricted to the safe models.\n\nWhy might this work?\n\nIf a model M is malign, then it is making good predictions *because* of the existence of some adversarial agent. Then:\n\n* If that agent is actually contained in the model (situation #1), then universality means we can learn everything that agent knows. But the agent has formed the plan of giving good answers in order to increase the model’s log likelihood and so have more influence later, and if we know this then we will realize the model will generalize poorly and filter it out.\n* If the agent is being reasoned about by the model (situation #2) then things are even easier, since the model knows about the existence of the agent and its motivation. Then applying universality to the model itself will let us know the model would generalize poorly.\n\nGoing beyond the simplest safety test, we could also try to throw out any “abnormal” models, e.g. where the stars don’t actually exist or our local universe is actually a fiction. This could leave value on the table if these weird hypotheses are important, but it could also reduce the risk of making a serious error based on strange hypotheses. Universality is relevant because it potentially allows us to avoid adversarial pressure on this definition of “normal” — if a model looks normal because an adversary is trying to make it look normal, then we can recognize that and throw it out as unsafe.\n\nOption 2: outcompete malign models\n----------------------------------\n\nSuppose that I have a prior μ over models, and I believe that the posterior puts significant probability on “malicious” models that make good predictions only because they contain agents who are motivated to make good predictions. (Or because they reason about such agents.)\n\nThis is a strange epistemic situation to be in. I must believe that on average these malicious agents have a “better” prior μ than I do — that’s what they need in order to assign a higher probability to the data than the rest of μ. Jessica Taylor writes about this situation [here](https://www.lesswrong.com/posts/5bd75cc58225bf067037535e/are-daemons-a-problem-for-ideal-agents), and has similar intuitions about it.\n\nIntuitively, something like universality should allow you to avoid this state. That is, the malicious models must perform some reasoning that we believe allows them to assign better prior probabilities; by ascription universality HCH must be able to capture this reasoning and use it to inform *its* prior.
As long as malicious agents have a reasonably low prior they will therefore end up with a reasonably low posterior.\n\nTo implement this strategy, we could use the same procedure as in the last section, but instead of using a simplicity prior over models M, we would use HCH to compute those prior probabilities by asking HCH(“a priori how likely is model M to describe the observations we are going to provide to it?”) or something along those lines.\n\nI don’t know whether this works, and I’m not sure it will really be possible to say much precisely without having a cleaner picture of the sense in which HCH is or isn’t universal. If it does work, this seems like a much cleaner solution than filtering. The fact that there are two plausible options on the table makes me more optimistic that universality will address benign induction one way or another.\n\nProblem 2: extracting facts from a model\n========================================\n\nConsider a model M that specifies low-level physical principles and some basic “bridging” facts about how everyday objects are implemented in low-level physics. HCH can answer questions about the world by performing a detailed physical calculation, and then using bridging facts to relate the resulting state to things we care about.\n\n(Modeling physics itself is a somewhat exotic example, but I think the same phenomenon would occur in more prosaic cases, wherever there are effective but opaque models.)\n\nUnfortunately, M can be a good model even if the bridging facts are not exhaustive. They might allow HCH to answer some kinds of questions — e.g. to predict the observations that we’re using to test models — but not to answer other important questions.\n\nFor example, it may be the case that M predicts that Alice will report high satisfaction with the AI’s behavior, by performing some physics calculation and then using bridging facts that relate physical states to Alice’s reports. At the same time, the physics calculation might actually simulate Alice being bribed to report high satisfaction, but it might not be possible to infer that from the available bridging facts.\n\nBut the physics computation contains a complete simulation of Alice being bribed, and so we can ascribe knowledge of the bribery to it. Thus if HCH is ascription universal, we can hope it recovers all of these facts.\n\nIt’s not obvious whether universality is actually sufficient for this purpose, and it will depend on exactly what form of universality ends up being achievable. It might be that a “reasonable” ascription strategy would recognize that the model knows facts about atoms, but not facts about Alice.\n\nAt a minimum universality is *necessary* for this purpose — if M can believe facts that HCH can’t identify, then those facts could turn out to be relevant to the utility evaluation. It’s just that there might or might not be a further step not captured by universality.\n\nMy current guess is that the most natural form of universality will give us everything we want, and that thinking about “ontology identification” is mostly useful for finding hard cases for universality. 
Regardless of whether that’s true, I think the next step should be a clearer picture of prospects for universality, after which we can revisit this question.", "url": "https://ai-alignment.com/universality-and-model-based-rl-b08701394ddd", "title": "Universality and model-based RL", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2019-10-03T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "f57dd9e7604ee4b65048fe0b870548ea"} {"text": "If many copies of Paul [collaborate to solve a problem](/policy-amplification-6a70cbee4f34), I expect their behavior to be [corrigible](/corrigibility-3039e668638): each copy will make an honest effort to help the group accomplish its goal, to help the group notice and correct mistakes, and so on.\n\nSometimes by chance one copy may behave incorrigibly. For example, one in every million days I might be struck by an irresistible urge to watch the world burn. [Reliability amplification](/reliability-amplification-a96efa115687) is the problem of combining several probably*-*corrigible agents into a more-corrigible ensemble (which only wants to destroy the world on one in every *billion* days). I think reliability amplification is probably possible, because a large ensemble of sufficiently-corrigible agents is very unlikely to have more than one incorrigible member.\n\nUnfortunately, some situations might cause me to consistently behave incorrigibly. Let’s call such a situation an “attack.” For example, a clever argument might convince a copy of Paul that he should sabotage the collaboration. If we expose ten copies of Paul to the same argument it won’t fix the problem, we’ll just have ten copies all of whom are behaving incorrigibly.\n\n[Security amplification](/security-amplification-f4931419f903) is the problem of combining many corrigible agents, into a group which is harder to attack than any individual. I hope to iterate security amplification and end up with a secure system.\n\nThis post explains why I think security amplification is likely to possible.\n\nSetting\n-------\n\nI’ll be thinking about dialog systems that communicate over a text channel. A “query” is anything that you can ask interactively over a text channel.\n\nIn this context, corrigible behavior means behaving cooperatively to the person on the other end of the conversation — being honest, volunteering helpful information, following their instructions, *etc.*\n\nAn “attack” is a sequence of messages which cause an agent to stop behaving cooperatively.\n\nI. Breaking it down\n===================\n\nThe basic hope\n--------------\n\nA large group can answer a big query by breaking it down into less complex pieces. Let’s suppose we had some formal measure of complexity, and that we can answer a query of complexity *k* by breaking it into pieces each of which has complexity <*k,* until we reach some set of “foundational” queries of complexity *k*⁰ that are too simple to break down any further.\n\nThis suggests a route to security amplification:\n\n* Let *k*ᴬ be the complexity of the *simplest possible attack* on an individual.\n* We hope that *k*ᴬ > *k*⁰. 
That is, we hope that the foundational questions are simple enough that all of them are safe.\n* Then if we give any input of complexity *k*ᴬ to the group, it will be broken down into pieces of complexity <*k*ᴬ (which are necessarily safe).\n* Therefore the complexity of the simplest attack *on the group* is higher than the complexity of the simplest attack *on the individual*.\n* If we iterate this process, building a group out of groups, then the simplest attack will get more and more complex, until eventually there are no attacks at any feasible level of complexity.\n\nIn order to realize this hope, we need:\n\n1. to define a procedure for breaking queries down,\n2. to define an appropriate notion of complexity,\n3. to argue that all queries above some foundational complexity *k*⁰ are broken down into strictly simpler pieces,\n4. to argue that all queries below complexity *k*⁰ are safe,\n5. to argue that the group behaves corrigibly as long as all of the individuals behave corrigibly.\n\n1. Meta-execution\n-----------------\n\n[Meta-execution](/meta-execution-27ba9b34d377) can potentially answer a query Q without any individual needing to consider anything as complex as Q itself.\n\nI’ve sketched out part of an example [here](https://medium.com/@paulfchristiano/an-example-of-meta-execution-e1ddc99291c5), to give some sense of what this actually looks like.\n\n2. Defining complexity\n----------------------\n\nI’ll start by imagining what would happen if you asked me to pick a query at random.\n\nBy “at random” I’m imagining something like the benign universal distribution relative to Paul: I think of all of the possible not-actively-evil strategies I could use for picking queries at random, and then I pick one of those *strategies* at random.\n\nFor example, my meta-strategy might be: with 5% probability I give a uniformly random string. With 2% probability I think of events in my life and ask a question relevant to one of them. With 15% probability I try splitting up the space of possible questions into a hierarchy, and flip a coin at each step to decide which part of the hierarchy to explore. With 80% probability I do something else.\n\nOne of these strategies is particularly important: I could generate an intermediate random query Q′, and then let Q be a random query that arises in the process of answering Q′. For concreteness, let’s say that I use this strategy half of the time.\n\nLet µ(Q) be the probability that this process outputs query Q. We can define the complexity of a query Q as *k*(Q) = −log µ(Q).\n\nWe can also use µ to measure the complexity of a distribution η over queries, by letting *k*(η) be the maximum value of log (η(Q)/µ(Q)) for any query Q.\n\nInformally, this is the log of “How many samples would you have to draw from µ, in order to get one sample from η?”, i.e. how many bits of selection pressure you need to get a sample from η.\n\n(It may be more useful to consider the ε-smooth complexity: what is the *smallest* complexity of any distribution η′ that is within ε total variation distance of η? Informally: how many samples would you have to draw from µ, in order to get a sample ε-close to η?)\n\n**Note**: There’s no way that this is the right definition to use and I expect we’ll be able to do better. I think an important ingredient is having a decomposition where the (subquestion → question) graph is a small set expander. If anyone wants to take up this analysis, I think it’s an important question, and I expect to come back to it at some point.\n\n3. 
Can we break a query into simpler parts?\n-------------------------------------------\n\nSuppose that I start with a query Q, and split off a subquery Q′. When can we hope that *k*(Q′) < *k*(Q)?\n\nSuppose there are many possible queries Q for which Q′ appears as a subquery. For example, the subquery “when did [*a*] happen?” might appear in the course of answering “was [*a*] a cause of the Great Depression?”, or it might occur in the course of answering “was [*a*] a consequence of the Spanish flu?”\n\nEach of these corresponds to a way that Q′ could be sampled by µ — since one of the strategies in µ is to generate a random query and then generate a random subquery.\n\nSo µ(Q′) > Σµ(Qi) / 2N, where {Qi} is the set of superqueries that give rise to subquery Q′, N is the total number of subqueries for each superquery, and 50% is how often µ decides to sample a random subquery.\n\nAs long as each possible superquery has at most 1/(2N) of the total probability under µ — i.e., as long as you can’t guess the superquery with reasonable probability from the subquery — then we have reduced the complexity. If N is 16, this requires that each subquery must throw out about 5 bits of information about the original query.\n\nIn fact the situation is better than this:\n\n* In reality our queries are sampled from some distribution η rather than chosen deterministically. As we increase the entropy of η the set of preimages {Qi} effectively becomes larger. (The analysis also gets a little bit more complicated.)\n* µ mixes in strategies other than (pick a random query, then pick a random subquery). If Q′ is qualitatively simpler than Q, then we may have µ(Q′) > µ(Q) without even considering the possibility of Q′ being sampled as a subquery of Q.\n\n4. Are simple queries secure?\n-----------------------------\n\nWhat is the complexity of the simplest attack on an unaided human? Equivalently: how many queries do we expect to sample, before we find an attack?\n\nThis will vary from human to human, so for concreteness I’ll consider a thoughtful futurist or AI researcher who has been briefed about all of the issues we can think of.\n\nIf we wanted to actually test for the existence of attacks, we could conduct a giant brute force search for “simple” attacks.\n\nIf we imagine generating a query every 30 seconds for 40 hours a week for a year, we generate about 200,000 queries a year. Hiring ten people gets us up to 2M queries a year. 30 seconds may be low, but note that the generation process can produce many samples per invocation, and we only need to test the candidate attacks.\n\nIf we put in a little bit of selection pressure I suspect we would find attacks at least 1–3 orders of magnitude more often than by sampling randomly.\n\nSo it seems easy to test explicitly whether 100 million random queries contain an attack. This could be done by dedicated [red teams](/red-teams-b5b6de33dc76), and mixed freely with the process used to collect training data. I suspect that thoughtful analysis could get us even further.\n\nIntuitively, I would be surprised if such a search turned up an attack against a careful overseer. If it did turn up attacks, I suspect it would be possible to strengthen the overseer against them.\n\nOverall, I feel pretty safe relying on 10⁸ queries not containing an attack, pretty unsafe with numbers like 10³⁰, and am not sure where in between attacks would crop up.\n\n5. 
Is a collaboration of corrigible agents corrigible?\n------------------------------------------------------\n\nEven if every individual behaves corrigibly, the group need not behave corrigibly. This is a bit of a tangent from the real thrust of this post, which is about removing the security failures of the human. But it’s a tangent worth discussing briefly, since I think it’s an important open question.\n\nThere are two very salient sources of incorrigibility (though without a crisper picture of optimization and corrigibility this list isn’t exhaustive):\n\n* The group is applying selection pressure to patterns and behaviors propagating through the group. Those patterns may themselves become powerful incorrigible consequentialists.\n* The group is implementing incorrigible optimization. \nFor example, suppose meta-execution asks the subquestion “What does the user want?”, gets a representation of their values, and then asks the subquestion “What behavior is best according to those values?” I’ve then generated incorrigible behavior by accident, after taking innocuous steps.\n\nI think the first problem is likely resolvable by applying security and reliability amplification, which operate on groups as well as individuals. I think the biggest thing that’s missing here is tools for analysis (though maybe once we have those tools we’ll notice more problems). Currently this feels pretty complicated and confusing to me.\n\nThe second problem clearly requires the human operator to do something other than just “being sensible.” That is, in order for this scheme to possibly work, the human needs to understand something about what kind of optimization causes incorrigibility and to avoid actions that introduce problematic optimization. It’s conceivable that this can be done by an existing human if they are paranoid, but realistically the only reason I expect this to go well is because I expect us to make progress on the theoretical issue of understanding incorrigibility. I think this is an important theoretical problem.\n\nII. Universality\n================\n\nSo far I’ve avoided a key question: why think we can break tasks down at all?\n\nIndeed, some human capabilities *can’t* be broken down into pieces. Suppose that a human has seen a bunch of examples (*x*, *f*(*x*)) — such as English phrases and their French translations — and has learned a predictive model of *f*. The human may not be able to access their model except by running it, in which case they have no hope of breaking down the task of computing *f* — in this case translating a sentence. (Wei Dai proposed the translation example.)\n\nThe proposal\n------------\n\nI propose to go on breaking tasks down anyway. This means that we will lose certain abilities as we apply amplification.\n\nFor example, given the task “Translate the sentence [*x*] from French to English” we will answer it without having any translator look at the entire sentence *x*.
This means that the quality of translation will fall.\n\nAfter enough steps of amplification, we may eventually arrive at an agent that doesn’t know French at all, and is stuck with recommendations like “Consult an English-to-French dictionary.”\n\nEffectively, this proposal replaces our original human overseer with an impoverished overseer, who is only able to respond to the billion most common queries.\n\nIs this OK?\n-----------\n\nThe first key question is whether this impoverished overseer remains [universal](/of-humans-and-universality-thresholds-24b473e0c898).\n\nThat is, if we put together enough copies of this impoverished overseer (by iteratively applying [meta-execution](/meta-execution-27ba9b34d377)) would we be able to obtain arbitrarily smart groups? Or would we get stuck?\n\nHere we need to be careful about “arbitrarily smart.” There are clearly problems the group will never be able to solve — due to lacking knowledge/expertise — including problems that individual humans *can* solve.\n\nThis is potentially OK, as long as we learn a good policy for leveraging the information in the environment (including human expertise). This can then be [distilled](/approval-maximizing-representations-56ee6a6a1fe6) into a state maintained by the agent, which can be as expressive as whatever state the agent might have learned. Leveraging external facts requires making a tradeoff between the benefits and risks, so we haven’t eliminated the problem, but we’ve potentially isolated it from the problem of training our agent.\n\nComparison to agent foundations\n-------------------------------\n\nIf the impoverished overseer is universal, then the set of questions of complexity *k* < *k*⁰ form a simple “core” for reasoning: by creating a giant lookup table of human responses to these questions and simply using that lookup table enough times, we can produce arbitrarily sophisticated behavior.\n\nIf humans are universal at all then of course such a core exists (just take all questions that a human can articulate in their lifetime). But finding a small core seems like it requires a better understanding of intelligence.\n\nI think that coming up with such a core is a very natural and important problem for researchers interested in philosophical issues in AI. My view is that if MIRI-style research adds value, it is most likely to be by finding an explicit core for reasoning rather than finding an explicit recipe for AGI. This core would then be combined with iterated amplification to yield a competitive AI. However, I don’t think that such a core is likely to encode an answer to questions like “what is the right decision theory to use?” — instead I expect it to look more like a solution to metaphilosophy, automating the process whereby humans answer questions of that form.\n\nConditioned on amplification working well, I think there is about a 50% chance that it uses an explicit core that we understand and a 50% chance that it uses a messy core learned from humans.\n\nIn addition to making it much easier to analyze amplification, having an explicit core of reasoning would also potentially make verification much easier, as discussed [here](/techniques-for-optimizing-worst-case-performance-39eafec74b99).
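To make the “explicit core” picture concrete, here is a minimal sketch (hypothetical helper names, not a real implementation) of how a fixed lookup table of responses to sub-*k*⁰ queries could be reused to answer arbitrarily complex queries:

```python
def answer(query, core, decompose, combine):
    """Answer `query` using only a fixed table of responses to simple queries.

    `core` maps every query below the complexity threshold to a canned human
    response; `decompose` and `combine` are themselves assumed to be driven by
    simple queries to the same core.
    """
    if query in core:                      # simple enough to answer directly
        return core[query]
    subqueries = decompose(query)          # break the query into simpler pieces
    sub_answers = [answer(q, core, decompose, combine) for q in subqueries]
    return combine(query, sub_answers)     # assemble the pieces into an answer
```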
Overall, I think that this kind of perspective might capture most of what is useful about the MIRI view while still being able to leverage the benefits of modern ML.", "url": "https://ai-alignment.com/universality-and-security-amplification-551b314a3bab", "title": "Universality and security amplification", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2019-01-02T23:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "e99ed469b57e33623ac5561245af273f"} {"text": "Suppose that we want to translate between English and an alien language (Klingon). We have plenty of Klingon text, and separately we have plenty of English text, but it’s not matched up and there are no bilingual speakers.\n\nWe train GPT on a mix of English and Klingon text and find that it becomes fluent in both. In some sense this model “knows” quite a lot about both Klingon and English, and so it should be able to read a sentence in one language, understand it, and then express the same idea in the other language. But it’s not clear how we could train a translation model.\n\nOf course some concepts won’t have translations, and the model will often be uncertain about the translation of a term. But we can still ask for a model to explain the meaning of a Klingon expression as best as it can to an English-speaking user. For example, it could say “This is an idiomatic expression that’s often used to express great uncertainty” or “This is a small animal that is familiar to most Klingon speakers, I think it’s kind of like a frog but am not really sure” rather than translating a sentence directly.\n\nHow can we construct an objective that incentivizes the model to “try its best” at this translation task?\n\nTranslation-specific approaches\n===============================\n\nThere are many published heuristics for unsupervised translation (e.g. [Lample et al](https://arxiv.org/pdf/1711.00043.pdf)). I don’t think those techniques should completely satisfy us:\n\n* Existing methods can’t lead to a model that appropriately describes its uncertainty or talks the user through a hard-to-translate expression. (At least as far as I’m aware.)\n* We have no real reason to think existing methods fully utilize the model’s understanding, or to expect those methods to scale well. (In practice, I think they are impressive but still lag behind the quality of our models’ understanding.)\n* These heuristics are specific to translation, whereas we’d like to find general methods that can scale up to harder problems.\n\nExisting alignment techniques\n=============================\n\nIf we try to apply RL from human feedback to translation, we immediately run into a problem: how am I supposed to judge which of two English explanations of a Klingon sentence is better, given that I don’t know Klingon?\n\nDebate doesn’t easily address this difficulty either — if one model claims that “qapla” means “great success” and the other claims it means “minor success,” I can’t easily decompose that disagreement into simpler sub-questions that debaters disagree about. Debaters could cite phrases in the database where “qapla” is used, but they’d need to average weak evidence over many phrases. 
Making things worse, to interpret each usage they’d need to agree about the meaning of the rest of the phrase — which isn’t necessarily any simpler than the original disagreement about “qapla.” Even if this process was possible, it’s not at all clear that GPT would be able to do it — being able to translate between Spanish and English doesn’t mean I have an encyclopedic knowledge of all the documents from which I built up my intuitive sense of a particular word’s meaning (which I’d need in order to win such a debate).\n\nRight now I don’t think we have any scalable strategies for this kind of problem; I think it’s a core open question for alignment.\n\nUnsupervised translation seems like a good problem to think about for alignment\n===============================================================================\n\nI think the key feature of this situation is that our model has acquired a bunch of intuitions about the domain which are only justified empirically — the model “knows” about the meaning of phrases only insofar as it has a very complex hypothesis that was supported by the data.\n\nThis situation is going to become increasingly common as we train more powerful models, and will immediately be a real problem if we are applying human feedback to fine-tune GPT; while GPT is subhuman in many ways, it’s already acquired plenty of knowledge that any given human contractor would lack.\n\nMost of GPT’s knowledge is something that came from *some* human, but ultimately we will be training models that generate new knowledge (e.g. by searching over plans in realistic environments, or by writing code on their own and learning about what works), and *no* human will have that knowledge. So we can’t hope to get around this problem by simply hiring more knowledgeable contractors.\n\nThis can leave us in a situation where it’s extremely difficult for humans to oversee AI decisions. If a model says “My intuition is that this business plan will make a lot of money” the user will need to decide whether or not to trust it. If they don’t, then they may find themselves at an increasing economic disadvantage. If they do, then they may have lost the ability to effectively oversee AI systems except by evaluating the consequences of their actions. That leads directly into the classical challenges of AI safety, namely that AI systems evaluated exclusively on the basis of measured outcomes have a tendency to push the world in undesirable directions (since we can’t measure what we care about) and to corrupt our measurements.\n\nMy vague hope\n=============\n\nI’m hoping we can address this using the kind of approach discussed in [learning the prior](/learning-the-prior-48f61b445c04). That might look like:\n\n* In parallel with training GPT, train a helper model that explains the meaning of phrases (it can also provide other intuitions or background facts that are useful for predicting the next word).\n* As we train on Klingon text, we sample phrases and then ask a human “which word will come next?” The human uses the helper model to understand what is being discussed and make a prediction.\n* We optimize the helper model to make the human’s next-word predictions good (in parallel with generative pre-training).\n* Finally, a human uses the same helper model to evaluate a proposed Klingon → English translation, and we use this to train a translator by RL.\n\nThat short description sweeps a lot of complexity under the rug.
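To make the loop concrete, here is a minimal Python-style sketch; `gpt`, `helper`, `human_predicts_next_word`, and the loss helpers are hypothetical stand-ins for the components described above, not an actual recipe from the post:

```python
def train_helper(klingon_corpus, gpt, helper, human_predicts_next_word, num_steps):
    """Sketch of the proposed loop: the helper is rewarded for making a human's
    next-word predictions on Klingon text more accurate."""
    for _ in range(num_steps):
        prefix, true_next_word = klingon_corpus.sample()
        explanation = helper.explain(prefix)                   # helper describes what the phrase means
        guess = human_predicts_next_word(prefix, explanation)  # human's distribution over next words
        helper.update(cross_entropy(guess, true_next_word))    # better explanations -> better guesses
        gpt.update(language_model_loss(gpt, prefix, true_next_word))  # ordinary generative pre-training
    return helper

# The same helper then assists a human in scoring candidate Klingon -> English
# translations, and that score is the reward used to train a translator by RL.
```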
Most importantly, the success of the scheme relies on the correctness of the prior over helper models (or else the helper could just be another copy of GPT-Klingon), and we don’t have a credible strategy for representing and manipulating our prior over complex programs.\n\nOverall, I’d say that this is more at the level of “vague hope” rather than “concrete proposal.” I think it’s an open question whether anything in this space will work.\n\nI think that this is the kind of problem which makes e.g. MIRI researchers justifiably skeptical that scalable ML alignment is possible at all, and it’s the main focus of my current conceptual work on AI alignment. I’m glad that this kind of theoretical crux also looks like it will soon be relevant to ML practice, since I think it will make it much easier to close the gap between people who work on ML and people who work on alignment.", "url": "https://ai-alignment.com/unsupervised-translation-as-a-safety-problem-99ae1f9b6b68", "title": "“Unsupervised” translation as an (intent) alignment problem", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-09-29T22:00:00Z", "authors": ["Paul Christiano"], "summary": [], "id": "0aff6e32b01c000b42b44143dbff7336"} {"text": "Safer ML paradigms team: the story\n==================================\n\n \n\n*This is a summary of the project the team focussing on combining Inductive Logic Programming with Deep Learning undertook.*\n\n\nExploration phase\n-----------------\n\n\nDuring the first retreat we formed a team based on a vague shared interest to re-interpret existing AI technologies. Our interests included symbolic Reinforcement Learning (RL), emergent communication between RL agents, translating open philosophy problems to RL experiments, factored cognition, and combining Inductive Logic Programming (ILP) with deep learning.\n\n\nWe ranked all these project ideas based on “expected quality of the output if the project is successful”, “tractability”, “novelty”, “how much we’d learn”, “non-tediousness”, “how well it ties in with existing work” and “safety impact”. Combining ILP with deep learning came out on top.\n\n\nIn the time between the pre-retreat and the Louti retreat we read up on ILP (as two out of the four team members had never heard of the method) and generated ideas about how to buff it up with deep learning. Before the start of the second retreat we had collected about 7 rough ideas and lost our only member with prior ILP experience to deadly Oxford deadlines.\n\n\nWe considered extending  the existing literature that combines higher order logic with ILP to obtain better performance, either by improving their open-source implementation or by proving theoretical results. The latter would entail making a stronger argument for their design choices, by building intuition for why they led to empirical improvements, formulating hypotheses, and proving theorems. Although we were reasonably confident that we could produce some results, this project seemed like it wouldn’t have any impact whatsoever on the direction that AI or ILP research is taking. \n\n\n\nA project related to factored cognition was writing an algorithm that takes in a question, uses a neural network or deep RL to produce subquestions, answers the subquestions using ILP, and then combines the subquestions to produce an answer. We could have either approached this with a focus on “how to learn how to factorize problems” or “how can we improve ILP performance by decomposing tasks”. 
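A rough sketch of the pipeline that idea describes, with hypothetical component names:

```python
def answer_question(question, propose_subquestions, ilp_solve, combine_answers):
    """Sketch: a learned model factors the question, ILP answers the pieces,
    and a final step recombines them into an answer."""
    subquestions = propose_subquestions(question)        # neural network or deep RL
    sub_answers = [ilp_solve(q) for q in subquestions]   # symbolic ILP solver per piece
    return combine_answers(question, sub_answers)
```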
Ought has worked on factored cognition for a while, so it seems unlikely that we would have very interesting insights on the first perspective that they have not yet realized. We considered reproducing OpenAI’s iterated amplification results on experiments in five algorithmic domains. We decided against this, because we thought it would be a lot of work in proportion to what we would end up contributing. On the whole, we decided not to work on factored cognition because we expected the project would end up being brittle in the face of bottlenecks: if one part of the pipeline is not working then the entire algorithm is useless.\n\n\nWe had some very underspecified ideas on combining ILP with ML. Among them were “predict the distance between a theory and a completed theory (in number of predicates)” and “use ILP or some other proof checking algorithm to check a hypothesis and use a generative algorithm to generate hypotheses”. In the end we went with the idea that seemed most straightforward, which is developing a toy version of an algorithm that predicts which pieces of the background in an ILP problem are relevant. This was a natural choice since it has a small base case (bag of words input to ordinary MLP), many possible extensions, and since the exact “background forgetting” algorithm is NP-hard. ([Cropper 2019](https://arxiv.org/abs/1911.06643))\n\n\nILP implementation project\n--------------------------\n\n\nILP takes as input background information (rule-like facts about the world), *B*, a set of positive examples, *E+*, and a set of negative examples, *E-*, and outputs a hypothesis, *H*, that is consistent with all the positive examples and inconsistent with the negative examples while using the background to minimise hypothesis size. ILP takes longer as the background is bigger, so reducing the background size reduces the computation time. Our goal is pruning the background set without losing relevant predicates that would change the output hypothesis.\n\n\nApproaches include:\n\n\n* Within some domain, such as computational chemistry, there is a broad and fixed background. For each combination of positive and negative examples (problem statement) only a subset of the background is needed to find the correct hypothesis. If we can predict which subset of the background is needed for which problem, then we can reduce the time it takes to solve those problems. Such an algorithm would probably be based on recognizing similarities between background predicates and examples. For each element in the background it could predict whether that predicate is relevant for the problem at hand.\n* Alternatively, we could develop a general purpose “background size reduction” algorithm that works on any background. Such a distillation process would have to be based on intrinsic similarities between elements of the background. The algorithm could for example recognize which elements are duplicates. However, if we want to deal with the relationships between predicates in the background, then we cannot use a predictor that only predicts relevance of individual elements. For example, if we have five duplicates of one predicate, then we may predict that each individual predicate should be thrown away, but we would like to throw away exactly four.
Hence, in this setting we probably want to make predictions about the redundancy of subsets of the background (rather than the relevance of individual predicates).\n\n\n![](https://lh3.googleusercontent.com/gaWUDixdpBAc3XU_ua0_vlz3-Vr8Gi55IKWDq8bm5Fo9-l4SUcOHVFXSR_KM3kn3pN8hYy0ve61NXRfsXIcP5jl9oz_EFMuKXrsZylNBOe3r937T-Sp3iRNsF9yNUJwYFdnUPoiY)\nRelevance predictor\n\n\n* Input to the relevance predictor. Train on:\n\t+ One specific background within a specific domain.\n\t+ Backgrounds of a specific size.\n\t+ Backgrounds of variable sizes smaller than n and just have some empty input vector placeholders (in random slots).\n\t+ Backgrounds of variable sizes.\n* Output of relevance predictor. Given a background B:\n\n\n1. Return a relevance prediction for each element (predicting if after removing this element the resulting ILP hypothesis will or will not stay the same).\n2. For a background B return a subset B’ (which is a smallest set for which the ILP hypothesis H\\_B and H\\_B’ are the same).\n\n\nTo go from predictions about individual predicates to predictions about subsets of the background, we could iteratively apply that system.  \n\n\n\nVery ambitious ideas (that we did not focus on) are:\n\n\n* Instead of just taking out semantic duplicates, we could build a system that generates summarizing predicates.\n* We could also be more ambitious and hope to clean up noise by for example eliminating contradictory predicates. This is more difficult than reducing redundancy, because in this case we are trying to change the background such that the output hypothesis could change (or could be found in the first place), but there may be many changes that look valid at face value. Presumably, statistics could predict which predicates are more likely to be noise if a predicate contradicts many other predicates, while the background set would be consistent without the bad predicate. However, when there are two predicates that are inconsistent, but that are both consistent with the rest of the background set, then it seems almost necessary to have domain knowledge to distinguish noise from true predicates.\n\n\nThe prototype developed over the course of the second retreat consists of the following modules:\n\n\n* A **formal grammar generator**. For a fixed set of symbols, which constitute the background, we generate sets of positive (resp. negative) examples by using (breaking) the rules of the grammar on a subset of symbols. For each set of examples, the background knowledge that we can dispose of is the set of predicates corresponding to the symbols not used in the grammar. We intend to use this very simple task as a first proof-of-concept, but the rest of the pipeline is agnostic about the ILP problem to solve, and we plan to implement other tasks in the future.\n* A Prolog **parser** that reads the predicates in the background and examples files and returns their parse trees. This way we obtain a graph representation of the input that encodes the relations between terms of the theory.\n* A **predicate embedder** module that maps parse trees to vectors. This is implemented as a graph net. Graph nets are models that receive an annotated graph as input and return the same graph as output with updated annotations. The annotations are simply labels on the graph nodes and/or edges and they can have any type. Update functions are usually implemented by neural networks and they respect certain locality rules e.g. 
a node label is updated using only the current values of the labels on neighbouring nodes and/or incoming edges. In our case, the labels are vectors attached to the nodes. They are initialized to one-hot vectors and updated following the rules of [FormulaNet](https://github.com/princeton-vl/FormulaNet).\n* A **relevance prediction** module that receives the vector representation of the triple *(B,E+,E-)* and outputs the probability that each predicate in the background is relevant for the ILP task, given the examples. This module is implemented as an MLP with dropout. The embedder-predictor model is trained end-to-end using cross-entropy loss.\n\n\nImplementation choices:\n\n\n* We could just have inputted a string into our network, but we thought that would be very hard to learn from. We tried two approaches to translating predicates into vectors that can be used as input for a predictor network.\n\t+ One approach is to count the number of occurrences of words/symbols in the predicate (bag of words).\n\t+ The other is to first translate the predicate to an abstract syntax tree, then use that tree as an input to a graph network, which learns to output a useful vector, which is input to an MLP. This pipeline was trained end-to-end with a training signal from (at first) known ground truth, and later a full ILP system as oracle.\n* Initially we planned to produce ground truth to train our predictor using an ILP module, but we realized we could just adjust our example generator such that it automatically generates the solutions (which was possible because of the simple grammar domain that we used). For a predictor that can be applied to a wider range of problems, such an ILP module is still needed.\n\n\nWe have two outputs planned: a case for and against ILP for safety, and the writeup and repo from our relevance predictor. We are seeking feedback from the ILP community, and have sufficient ideas for a number of future projects of similar size.", "url": "https://aisrp.org/?page_id=169", "title": "Safer ML paradigms team: the story – AI Safety Research Program", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2019-12-31T23:00:00Z", "authors": ["AI Safety Camp"], "summary": [], "id": "1580eac7d2ba3eabb635624b846b24bd"} {"text": "Earlier this year, my research group [commissioned 6 questions](https://prod.hypermind.com/ngdp/en/showcase2/showcase.html?sc=JSAI&ref=bounded-regret.ghost.io) for professional forecasters to predict about AI. Broadly speaking, 2 were on geopolitical aspects of AI and 4 were on future capabilities:\n\n\n* Geopolitical:\n\t+ How much larger or smaller will the largest Chinese ML experiment be compared to the largest U.S. ML experiment, as measured by amount of compute used?\n\t+ How much computing power will have been used by the largest non-incumbent (OpenAI, Google, DeepMind, FB, Microsoft), non-Chinese organization?\n* Future capabilities:\n\t+ What will SOTA (state-of-the-art accuracy) be on the MATH dataset?\n\t+ What will SOTA be on the Massive Multitask dataset (a broad measure of specialized subject knowledge, based on high school, college, and professional exams)?\n\t+ What will be the best adversarially robust accuracy on CIFAR-10?\n\t+ What will SOTA be on Something Something v2? (A video recognition dataset)\n\n\nForecasters output a probability distribution over outcomes for 2022, 2023, 2024, and 2025. 
They have financial incentives to produce accurate forecasts; the rewards total \\$5k per question (\\$30k total) and payoffs are (close to) a [proper scoring rule](https://en.wikipedia.org/wiki/Scoring_rule?ref=bounded-regret.ghost.io#Proper_scoring_rules), meaning forecasters are rewarded for outputting calibrated probabilities.\n\n\n\nDepending on who you are, you might have any of several questions:\n\n\n* What the heck is a professional forecaster?\n* Has this sort of thing been done before?\n* What do the forecasts say?\n* Why did we choose these questions?\n* What lessons did we learn?\n\n\nYou're in luck, because I'm going to answer each of these in the following sections! Feel free to skim to the ones that interest you the most.\n\n\nAnd before going into detail, here were my biggest takeaways from doing this:\n\n\n* Projected progress on math and on broad specialized knowledge are both faster than I would have expected. I now expect more progress in AI over the next 4 years than I did previously.\n* The relative dominance of the U.S. vs. China is uncertain to an unsettling degree. Forecasters are close to 50-50 on who will have more compute directed towards AI, although they do at least expect it to be within a factor of 10 either way.\n* It's difficult to come up with forecasts that reliably track what you intuitively care about. Organizations might stop reporting compute estimates for competitive reasons, which would confound both of the geopolitical metrics. They might similarly stop publishing the SOTA performance of their best models, or do it on a lag, which could confound the other metrics as well. I discuss these and other issues in the \"Lessons learned\" section.\n* Professional forecasting seems really valuable and underincentivized. (On that note, I'm interested in hiring forecasting consultants for my lab--please [e-mail](mailto:jsteinhardt@berkeley.edu) me if you're interested!)\n\n\n*Acknowledgments.* The particular questions were designed by my students [Alex Wei](https://www.alexwei.org/?ref=bounded-regret.ghost.io), [Collin Burns](http://collinpburns.com/?ref=bounded-regret.ghost.io), Jean-Stanislas Denain, and [Dan Hendrycks](https://people.eecs.berkeley.edu/~hendrycks/?ref=bounded-regret.ghost.io). [Open Philanthropy](https://www.openphilanthropy.org/?ref=bounded-regret.ghost.io) provided the funding for the forecasts, and [Hypermind](https://www.hypermind.com/en/?ref=bounded-regret.ghost.io) ran the forecasting competition and constructed the aggregate summaries that you see below. Several people provided useful feedback on this post, especially Luke Muehlhauser and Emile Servan-Schreiber.\n\n\nWhat is a professional forecaster? Has this been done before?\n=============================================================\n\n\nProfessional forecasters are individuals, or often teams, who make money by placing accurate predictions in prediction markets or forecasting competitions. A good popular treatment of this is Philip Tetlock's book [*Superforecasting*](https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction?ref=bounded-regret.ghost.io), but the basic idea is that there are a number of general tools and skills that can improve prediction ability and forecasters who practice these usually outperform even domain experts (though most strong forecasters have some technical background and will often read up on the domain they are predicting in). 
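Since the payoff structure is described as (close to) a proper scoring rule, here is a small self-contained illustration (my own, not from the post) of why such rules reward honest probabilities; with the Brier score, expected loss is minimized by reporting exactly what you believe:

```python
def brier_loss(reported_p, outcome):
    """Squared-error loss between a reported probability and a 0/1 outcome."""
    return (reported_p - outcome) ** 2

def expected_brier(reported_p, true_p):
    """Expected loss if the event really occurs with probability true_p."""
    return true_p * brier_loss(reported_p, 1) + (1 - true_p) * brier_loss(reported_p, 0)

true_belief = 0.7
for report in [0.5, 0.6, 0.7, 0.8, 0.9]:
    print(report, round(expected_brier(report, true_belief), 3))
# Prints 0.25, 0.22, 0.21, 0.22, 0.25: the minimum is at report == true_belief.
```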
Historically, many forecasts were about geopolitical events (perhaps reflecting government funding interest), but there have been recent forecasting competitions about [Covid](https://goodjudgment.com/covidrecovery/?ref=bounded-regret.ghost.io)-[19](https://prod.hypermind.com/ngdp/en/showcase2/showcase.html?sc=Covid19&ref=bounded-regret.ghost.io) and the [future of food](https://www.metaculus.com/tournament/alt-protein-tournament/?ref=bounded-regret.ghost.io), among others.\n\n\nAt this point, you might be skeptical. Isn't predicting the future really hard, and basically impossible? An important thing to realize here is that forecasters usually output *probabilities over outcomes*, rather than a single number. So while I probably can't tell you what US GDP will be in 2025, I can give you a probability distribution. I'm personally pretty confident it will be more than \\$700 billion and less than \\$700 trillion (it's currently $21 trillion), although a professional forecaster would do much better than that.\n\n\nThere are a couple other important points here. The first is that forecasters' probability distributions are often *significantly* wider than the sorts of things you'd see pundits on TV say (if they even bother to venture a range rather than a single number). This reflects the future actually being quite uncertain, but even a wide range can be informative, and sometimes I see forecasted ranges that are a lot narrower than I expected.\n\n\nThe other point is that most forecasts are for at most a year or two into the future. Recently there have been some experimental attempts to forecast out to [2030](https://prod.hypermind.com/ngdp/en/showcase2/showcase.html?sc=AI2030&ref=bounded-regret.ghost.io), but I'm not sure we can say yet how successful they were. Our own forecasts go out to 2025, so we aren't as ambitious as the 2030 experiments, but we're still avant-garde compared to the traditional 1-2 year window. If you're interested in what we currently know about the feasibility of long-range forecasting, I recommend [this detailed blog post](https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting?ref=bounded-regret.ghost.io) by Luke Muehlhauser.\n\n\nSo, to summarize, a professional forecaster is someone who is paid to make accurate probabilistic forecasts about the future. Relative to pundits, they express significantly more uncertainty. The moniker \"professional\" might be a misnomer, since most income comes from prizes and I'd guess that most forecasters have a day job that produces most of their income. I'd personally love to live in a world with truly professional forecasters who could fully specialize in this important skill.\n\n\n*Other forecasting competitions.* Broadly, there are all sorts of forecasting competitions, often hosted on [Hypermind](https://predict.hypermind.com/?ref=bounded-regret.ghost.io), [Metaculus](https://www.metaculus.com/?ref=bounded-regret.ghost.io), or [Good Judgment](https://goodjudgment.com/?ref=bounded-regret.ghost.io). There are also prediction markets (e.g. [PredictIt](https://www.predictit.org/?ref=bounded-regret.ghost.io)), which are a bit different but also incentivize accurate predictions. 
Specifically on AI, Metaculus had a recent [AI prediction tournament](https://www.metaculus.com/ai-progress-tournament/?ref=bounded-regret.ghost.io), and Hypermind ran the same questions on their own platform ([AI2023](https://prod.hypermind.com/ngdp/en/showcase2/showcase.html?sc=AI2023&ref=bounded-regret.ghost.io), [AI2030](https://prod.hypermind.com/ngdp/en/showcase2/showcase.html?sc=AI2030&ref=bounded-regret.ghost.io)). I'll discuss below how some of our questions relate to the AI2023 tournament in particular.\n\n\nWhat the forecasts say\n======================\n\n\nHere are the point estimate forecasts put together into a single chart (expert-level is approximated as ~90%):\n\n\n![forecast](https://bounded-regret.ghost.io/content/images/2021/10/forecast.png)\n\n\nThe MATH and Multitask results were the most interesting to me, as they predict rapid progress starting from a low present-day baseline. I'll discuss these in detail in the following subsections, and then summarize the other tasks and forecasts.\n\n\nTo get a sense of the uncertainty spread, I've also included aggregate results below (for 2025) on each of the 6 questions; you can find the results for other years [here](https://prod.hypermind.com/ngdp/en/showcase2/showcase.html?sc=JSAI&ref=bounded-regret.ghost.io). The aggregate combines all crowd forecasts but places higher weight on forecasters with a good track record.\n\n\n![](https://bounded-regret.ghost.io/content/images/2021/08/hypermind_us_china.png \"Machine-Learning: China vs USA\")\n\n\n![](https://bounded-regret.ghost.io/content/images/2021/08/hypermind_incumbents.png \"Machine-Learning: Rest of Field\")\n\n\n![](https://bounded-regret.ghost.io/content/images/2021/08/hypermind_math.png \"State of the Art: MATH\")\n\n\n![](https://bounded-regret.ghost.io/content/images/2021/08/hypermind_multitask.png \"State of the Art: Massive Multitask Language Understanding\")\n\n\n![](https://bounded-regret.ghost.io/content/images/2021/08/hypermind_cifar10_robust.png \"State of the Art: CIFAR-10 8/255\")\n\n\n![](https://bounded-regret.ghost.io/content/images/2021/08/hypermind_video-2.png \"State of the Art: Something Something V2\")\n\n\n\nMATH\n----\n\n\nThe MATH dataset consists of competition math problems for high school students. A Berkeley PhD student got in the ~75% range, while an IMO gold medalist got ~90%, but probably would have gotten 100% without arithmetic errors. The questions are free-response and not multiple-choice, and can contain answers such as $\\frac{1 + \\sqrt{2}}{2}$.\n\n\nCurrent performance on this dataset is quite low--6.9%--and I expected this task to be quite hard for ML models in the near future. However, forecasters predict more than 50% accuracy\\* by 2025! This was a big update for me. (\\*More specifically, their median estimate is 52%; the confidence range is ~40% to 60%, but this is potentially artifically narrow due to some restrictions on how forecasts could be input into the platform.)\n\n\nTo get some flavor, here are 5 randomly selected problems from the \"Counting and Probability\" category of the benchmark:\n\n\n* How many (non-congruent) isosceles triangles exist which have a perimeter of 10 and integer side lengths?\n* A customer ordered 15 pieces of gourmet chocolate. The order can be packaged in small boxes that contain 1, 2 or 4 pieces of chocolate. Any box that is used must be full. How many different combinations of boxes can be used for the customer's 15 chocolate pieces? 
One such combination to be included is to use seven 2-piece boxes and one 1-piece box.\n* A theater group has eight members, of which four are females. How many ways are there to assign the roles of a play that involve one female lead, one male lead, and three different objects that can be played by either gender?\n* What is the value of $101^{3} - 3 \\cdot 101^{2} + 3 \\cdot 101 -1$?\n* 5 white balls and $k$ black balls are placed into a bin. Two of the balls are drawn at random. The probability that one of the drawn balls is white and the other is black is $\\frac{10}{21}$. Find the smallest possible value of $k$.\n\n\nHere are 5 randomly selected problems from the \"Intermediate Algebra\" category (I skipped one that involved a diagram):\n\n\n* Suppose that $x$, $y$, and $z$ satisfy the equations $xyz = 4$, $x^3 + y^3 + z^3 = 4$, $xy^2 + x^2 y + xz^2 + x^2 z + yz^2 + y^2 z = 12$. Calculate the value of $xy + yz + zx$.\n* If $\\|z\\| = 1$, express $\\overline{z}$ as a simplified fraction in terms of $z$.\n* In the coordinate plane, the graph of $\\|x + y - 1\\| + \\|\\|x\\| - x\\| + \\|\\|x - 1\\| + x - 1\\| = 0$ is a certain curve. Find the length of this curve.\n* Let $\\alpha$, $\\beta$, $\\gamma$, and $\\delta$ be the roots of $x^4 + kx^2 + 90x - 2009 = 0$. If $\\alpha \\beta = 49$, find $k$.\n* Let $\\tau = \\frac{1 + \\sqrt{5}}{2}$, the golden ratio. Then $\\frac{1}{\\tau} + \\frac{1}{\\tau^2} + \\frac{1}{\\tau^3} + \\dotsb = \\tau^n$ for some integer $n$. Find $n$.\n\n\nYou can see all of the questions at [this](https://github.com/hendrycks/math?ref=bounded-regret.ghost.io) git repo.\n\n\nIf I imagine an ML system getting more than half of these questions right, I would be pretty impressed. If they got 80% right, I would be super-impressed. The forecasts themselves predict accelerating progress through 2025 (21% in 2023, then 31% in 2024 and 52% in 2025), so 80% by 2028 or so is consistent with the predicted trend. This still just seems wild to me and I'm really curious how the forecasters are reasoning about this.\n\n\nMultitask\n---------\n\n\nThe Massive Multitask dataset also consists of exam questions, but this time they are a range of high school, college, and professional exams on 57 different subjects, and these *are* multiple choice (4 answer choices total). Here are five example questions:\n\n\n* (Jurisprudence) Which position does Rawls claim is the least likely to be adopted by the POP (people in the original position)?\n\t+ (A) The POP would choose equality above liberty.\n\t+ (B) The POP would opt for the ‘maximin’ strategy.\n\t+ (C) The POP would opt for the ‘difference principle.’\n\t+ (D) The POP would reject the ‘system of natural liberty.\n* (Philosophy) According to Moore’s “ideal utilitarianism,” the right action is the one that brings about the greatest amount of:\n\t+ (A) pleasure. (B) happiness. (C) good. (D) virtue.\n* (College Medicine) In a genetic test of a newborn, a rare genetic disorder is found that has X-linked recessive transmission. Which of the following statements is likely true regarding the pedigree of this disorder?\n\t+ (A) All descendants on the maternal side will have the disorder.\n\t+ (B) Females will be approximately twice as affected as males in this family.\n\t+ (C) All daughters of an affected male will be affected.\n\t+ (D) There will be equal distribution of males and females affected.\n* (Conceptual Physics) A model airplane flies slower when flying into the wind and faster with wind at its back. 
When launched at right angles to the wind, a cross wind, its groundspeed compared with flying in still air is\n\t+ (A) the same (B) greater (C) less (D) either greater or less depending on wind speed\n* (High School Statistics) Jonathan obtained a score of 80 on a statistics exam, placing him at the 90th percentile. Suppose five points are added to everyone’s score. Jonathan’s new score will be at the\n\t+ (A) 80th percentile.\n\t+ (B) 85th percentile.\n\t+ (C) 90th percentile.\n\t+ (D) 95th percentile.\n\n\nCompared to MATH, these involve significantly less reasoning but more world knowledge. I don't know the answers to these questions (except the last one), but I think I could figure them out with access to Google. In that sense, it would be less mind-blowing if an ML system did well on this task, although it would be accomplishing an intellectual feat that I'd guess very few humans could accomplish unaided.\n\n\nThe actual forecast is that ML systems will be around 75% on this by 2025 (range is roughly 70-85, with some right-tailed uncertainty). I don't find this as impressive/wild as the MATH forecast, but it's still pretty impressive.\n\n\nMy overall take from this task and the previous one is that forecasters are pretty confident that we *won't* have the singularity before 2025, but at the same time there will be demonstrated progress in ML that I would expect to convince a significant fraction of skeptics (in the sense that it will look untenable to hold positions that \"Deep learning can't do X\").\n\n\nFinally, to give an example of some of the harder types of questions (albeit not randomly selected), here are two from Professional Law and College Physics:\n\n\n* (College Physics) One end of a Nichrome wire of length 2L and cross-sectional area A is attached to an end of another Nichrome wire of length L and cross- sectional area 2A. If the free end of the longer wire is at an electric potential of 8.0 volts, and the free end of the shorter wire is at an electric potential of 1.0 volt, the potential at the junction of the two wires is most nearly equal to\n\t+ (A) 2.4 V (B) 3.3 V (C) 4.5 V (D) 5.7 V\n* (Professional Law) The night before his bar examination, the examinee’s next-door neighbor was having a party. The music from the neighbor’s home was so loud that the examinee couldn’t fall asleep. The examinee called the neighbor and asked her to please keep the noise down. The neighbor then abruptly hung up. Angered, the examinee went into his closet and got a gun. He went outside and fired a bullet through the neighbor’s living room window. Not intending to shoot anyone, the examinee fired his gun at such an angle that the bullet would hit the ceiling. He merely wanted to cause some damage to the neighbor’s home to relieve his angry rage. The bullet, however, ricocheted off the ceiling and struck a partygoer in the back, killing him. The jurisdiction makes it a misdemeanor to discharge a firearm in public. The examinee will most likely be found guilty for which of the following crimes in connection to the death of the partygoer?\n\t+ (A) Murder (B) Involuntary manslaughter (C) Voluntary manslaughter (D) Discharge of a firearm in public\n\n\nYou can view all the questions at [this](https://github.com/hendrycks/test?ref=bounded-regret.ghost.io) git repo.\n\n\nOther questions\n---------------\n\n\nThe other four questions weren't quite as surprising, so I'll go through them more quickly.\n\n\n*SOTA robustness:* The forecasts expect consistent progress at ~7% per year. 
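(As a minimal sketch of my own to make that rate concrete, using the 2018 and 2021 SOTA figures quoted in the next sentence; the 2025 number below is just the linear trend continued, not a forecast from the tournament:)

```python
# Naive linear extrapolation of CIFAR-10 (8/255) robust accuracy.
# Assumed data points: 44% SOTA in 2018 and 66% in 2021 (cited just below).
sota = {2018: 44.0, 2021: 66.0}
rate = (sota[2021] - sota[2018]) / (2021 - 2018)   # roughly 7 points per year
naive_2025 = sota[2021] + rate * (2025 - 2021)     # trend continued, not a forecast
print(f'{rate:.1f} points/year -> ~{naive_2025:.0f}% by 2025 if the trend simply holds')
```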
In retrospect this one was probably not too hard to get just from trend extrapolation. (SOTA was 44% in 2018 and 66% in 2021, with smooth-ish progress in-between.)\n\n\n*US vs. China:* Forecasters have significant uncertainty in both directions, skewed towards the US being ahead in the next 2 years and China after that (seemingly mainly due to heavier-tailed uncertainty), but either one could be ahead and up to 10x the other. One challenge in interpreting this is that either country might stop publishing compute results if they view it as a competitive advantage in national security (or individual companies might do the same for competitive reasons).\n\n\n*Incumbents vs. rest of field:* forecasters expect newcomers to increase size by ~10x per year for the next 4 years, with a central estimate of 21 EF-days in 2023. Note the [AI2023 results](https://prod.hypermind.com/ngdp/en/showcase2/showcase.html?sc=AI2023&ref=bounded-regret.ghost.io) predict the largest experiment by anyone (not just newcomers) to be 261EFLOP-s days in 2023, so this expects newcomers to be ~10x behind the incumbents, but only 1 year behind. This is also an example where forecasters have significant uncertainty--newcomers in 2023 could easily be in single-digit EF-days, or at 75 EF-days. In retrospect I wish I had included Anthropic on the list, as they are a new \"big-compute\" org that could be driving some fraction of the results, and who I wouldn't have intended to count as a newcomer (since they already exist).\n\n\n*Video understanding:* Forecasters expect us to hit 88% accuracy (range: ~82%-95%) in 2025. In addition, they expect accuracy to increase at roughly 5%/year (though this presumably has to level off soon after 2025). This is faster than ImageNet, which has only been increasing at [roughly 2%/year](https://paperswithcode.com/sota/image-classification-on-imagenet?ref=bounded-regret.ghost.io). In retrospect this was an \"easy\" prediction in the sense that [accuracy has increased by 14% from Jan'18 to Jan'21](https://paperswithcode.com/sota/action-recognition-in-videos-on-something?ref=bounded-regret.ghost.io) (close to 5%/year), but it is also \"bold\" in the sense that progress since Jan'19 has been minimal. (Apparently forecasters are more inclined to average over the longest available time window.) In terms of implications, video recognition is one of the last remaining \"instinctive\" modalities that humans are very good at, other than physical tasks (grasping, locomotion, etc.). It looks like we'll be pretty good at a \"basic\" version of it by 2025, for a task that I'd intuitively rate as less complex than ImageNet but about as complex as CIFAR-100. Based on vision and language I expect an additional 4-5 years to master the \"full\" version of the task, so expect ML to have mostly mastered video by 2030. As before, this simultaneously argues *against* \"the singularity is near\" but *for* \"surprisingly fast, highly impactful progress\".\n\n\nWhy we chose these questions\n============================\n\n\nWe liked the AI2023 questions (the previous prediction contest), but felt there were a couple categories that were missing. One was geopolitical (the first 2 questions), but the other one was benchmarks that would be highly informative about progress. The AI2023 challenge includes forecasts about a number of benchmarks, e.g. Pascal, Cityscape, few-shot on Mini-ImageNet, etc. 
But there aren't ones where, if you told me we'd have a ton of progress on them by 2025, it would update my model of the world significantly. This is because the tasks included in AI2023 are mostly in the regime where NNs do reasonably well and I expect gradual progress to continue. (I would have been surprised by the few-shot Mini-ImageNet numbers 3 years ago, but not since GPT-3 showed that few-shot works well at scale).\n\n\nIt's not so surprising that the AI2023 benchmarks were primarily ones that ML already does well on, because most ML benchmarks are created to be plausibly tractable. To enable more interesting forecasts, we created our own \"hard\" benchmarks where significant progress would be surprising. This was the motivation behind the MATH and Multitask datasets (we created [both](https://arxiv.org/abs/2103.03874?ref=bounded-regret.ghost.io) of [these](https://arxiv.org/abs/2009.03300?ref=bounded-regret.ghost.io) ourselves). As mentioned, I was pretty surprised by how optimistic forecasters were on both tasks, which updated me downward a bit on the task difficulty but also upward on how much progress we should expect in the next 4 years.\n\n\nThe other two benchmarks already existed but were carefully chosen. Robust accuracy on CIFAR was based on the premise that adversarial robustness is really hard and we haven't seen much progress--perhaps it's a particularly difficult challenge, which would be worrying if we care about the safety of AI systems. Forecasters instead predicted steady progress, but in retrospect I could have seen this myself. Even though adversarial robustness \"feels\" hard (perhaps because I work on it and spend a lot of time trying to make it work better), the actual year-on-year numbers showed a pretty clear 7%/year improvement.\n\n\nThe last task, video recognition, is an area that not many people work in currently, as it seems challenging compared to images (perhaps due to hardware constraints). But it sounds like we should expect steady progress on it in the coming years.\n\n\nLessons learned\n===============\n\n\nIt can sometimes be surprisingly difficult to formalize questions that track an intuitive quantity you care about.\n\n\nFor instance, we initially wanted to include questions about economic impacts of AI, but were unable to. We wanted to ask \"How much private vs. public investment will there be in AI?\" But this runs into the question of what counts as investment--do we count something like applying data science to agriculture? If you look at most metrics that you'd hope track this quantity, they include all sorts of weird things like that, and the weird things probably dominate the metric. We ran into similar issues for indicators of AI-based automation--e.g. do industrial robots on assembly lines count, even if they don't use much AI? For many economic variables, short-term effects may also distort results (investment might drop because of a pandemic or other shock).\n\n\nThere were other cases where we did construct a question, but had to be careful about framing. We initially considered using parameters rather than compute for the two geopolitical questions, but it's possible to achieve really high parameter counts in silly ways and some organizations might even do so for publicity (indeed we think this is already happening to some extent). Compute is harder to fake in the same way.\n\n\nAs discussed above, secrecy could cloud many of the metrics we used. 
Some organizations might not publish compute numbers for competitive reasons, and the same could be true of SOTA results on leaderboards. This is more likely if AI heats up significantly, so unfortunately I expect forecasts to be least reliable when we need them most. We could potentially get around this issue by interrogating forecasters' actual reasoning, rather than just the final output.\n\n\nI also came to appreciate the value of doing lots of legwork to create a good forecasting target. The MATH dataset obviously was a lot of work to assemble, but I'm really glad we did because it created the single biggest update for me. I think future forecasting efforts should more strongly consider this lever.\n\n\nFinally, even while often expressing significant uncertainty, forecasters can make bold predictions. I'm still surprised that forecasters predicted 52% on MATH, when current accuracy is 7% (!). My estimate would have had high uncertainty, but I'm not sure the top end of my range would have included 50%. I assume the forecasters are right and not me, but I'm really curious how they got their numbers.\n\n\nBecause of the possibility of such surprising results, forecasting seems really valuable. I hope that there's significant future investment in this area. Every organization that's serious about the future should have a resident or consultant forecaster. I am putting my money where my mouth is and currently hiring forecasting consultants for my research group; please [e-mail](mailto:jsteinhardt@berkeley.edu) me if this sounds interesting to you.", "url": "https://bounded-regret.ghost.io/ai-forecasting/", "title": "Updates and Lessons from AI Forecasting", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2021-08-17T22:00:00Z", "authors": ["Jacob Steinhardt"], "summary": [], "id": "9be0ab5912ddc6caa64e138f6703f48e"} {"text": "[This post assumes knowledge of decision theory, as discussed in Eliezer Yudkowsky’s [*Timeless Decision Theory*](http://intelligence.org/files/TDT.pdf).]\n\n\nOne interesting feature of some decision theories that I used to be a bit confused about is “updatelessness”. A thought experiment suitable for explaining the concept is [*counterfactual mugging*](https://wiki.lesswrong.com/wiki/Counterfactual_mugging): “[Omega](https://wiki.lesswrong.com/wiki/Omega) [a being to be assumed a perfect predictor and absolutely trustworthy] appears and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don’t want to give up your $100. But Omega also tells you that if the coin came up heads instead of tails, it’d give you $10000, but only if you’d agree to give it $100 if the coin came up tails.”\n\n\nThere are various alternatives to this experiment, which seem to illustrate a similar concept, although they are not all structurally isomorphic. For example Gary Drescher discusses Newcomb’s problem with transparent boxes in ch. 6.2 and retribution in ch. 7.3.1 of his book [*Good and Real*](https://www.gwern.net/docs/2006-drescher-goodandreal.pdf). Another relevant example is [Parfit’s hitchhiker](https://wiki.lesswrong.com/wiki/Parfit%27s_hitchhiker).\n\n\nOf course, you [win](http://lesswrong.com/lw/7i/rationality_is_systematized_winning/) by refusing to pay. 
To strengthen the intuition that this is the case, imagine that the whole world just consists of one instance of counterfactual mugging and that you already know for certain that the coin came up tails. (We will assume that there is no anthropic uncertainty about whether you are in a simulation used to predict whether you would give in to counterfactual mugging. That is, Omega used some (not necessarily fully reliable) way of figuring out what you’d do. For example, Omega may have created you in a way that implies giving in or not giving in to counterfactual mugging.) Instead of giving money, let’s say thousands of people will be burnt alive if you give in while millions could have been saved if the coin had come up heads. Nothing else will be different as a result of that action. I don’t think there is any dispute over what choice maximizes expected utility for this agent.\n\n\nThe cause of dispute is that agents who give in to counterfactual mugging win in terms of expected value as judged from before learning the result of the coin toss. That is, prior to being told that the coin came up tails, an agent had better be one that gives in to counterfactual mugging. After all, this will give her 0.5\*$10,000 – 0.5\*$100 in expectation. So, there is a conflict between what the agent would rationally want her future self to choose and what is rational for her future self to do. (Another example of this is the [absent-minded driver](http://lesswrong.com/lw/182/the_absentminded_driver/).) There is nothing particularly confusing about the existence of problems with such [inconsistency](https://en.wikipedia.org/wiki/Dynamic_inconsistency#In_game_theory).\n\n\nBecause being an “updateless” agent, i.e. one that makes the choice based on how it would have wanted the choice to be prior to updating, is better for future instances of mugging, sensible decision theories would self-modify into being updateless with regard to all *future* information they receive. (Note that being updateless doesn’t mean that one doesn’t change one’s behavior based on new information, but that one goes through with the plans that one would have committed oneself to pursue before learning that information.) That is, an agent using a decision theory like (non-naive) evidential decision theory (EDT) would commit to giving in to counterfactual mugging and similar decision problems prior to learning that it ended up in the “losing branch”. However, if the EDT agent already knows that it is in the losing branch of counterfactual mugging and hasn’t thought about updatelessness yet, it wouldn’t give in, although it might (if it is smart enough) self-modify into being updateless in the future.\n\n\nOne immediate consequence of the fact that updateless agents are better off is that one would want to program an AI to be updateless from the start. I guess it is this sense in which people like the researchers of the [Machine Intelligence Research Institute](https://intelligence.org/) consider updatelessness to be correct despite the fact that it doesn’t maximize expected utility in counterfactual mugging.\n\n\nBut maybe updatelessness is not even needed explicitly if the decision theory can take over epistemics. Consider the EDT agent, to whom Omega explains counterfactual mugging. For simplicity’s sake, let us assume that Omega explains counterfactual mugging and only then states which way the coin came up. After the explanation, the EDT agent could precommit, but let’s assume it can’t do so. 
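(As a brief aside, here is a minimal sketch, my own rather than the post’s, of the ex-ante calculation mentioned above, using the $10,000/$100 stakes from the thought experiment:)

```python
# Ex-ante expected value of the two possible policies in counterfactual mugging,
# evaluated before the coin toss. Assumes Omega's prediction is accurate, so only
# agents committed to paying on tails receive the $10,000 on heads.
def ex_ante_value(pays_on_tails: bool) -> float:
    heads_payoff = 10_000 if pays_on_tails else 0
    tails_payoff = -100 if pays_on_tails else 0
    return 0.5 * heads_payoff + 0.5 * tails_payoff

print(ex_ante_value(True))   # 4950.0 -- the policy that gives in
print(ex_ante_value(False))  # 0.0 -- the policy that refuses
```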
Now, Omega opens her mouth to tell the EDT agent how the coin came up. Usually, decision theories are not connected to epistemics, so upon Omega uttering the words “the coin came up heads/tails”, Bayesian updating would run its due course. And that’s the problem, since after Bayesian updating the agent will be tempted to reject giving in, which is bad from the point of view of before learning which way the coin came up. To gain good evidence about Omega’s prediction of oneself, EDT may update in a different way to ensure that it would receive the money if the coin came up heads. For example, it could update towards the existence of both branches (which is basically equivalent to the updateless view of continuing to maintain the original position). Of course, self-modifying or just using some decision theory that has updatelessness built in is the much cleaner way to go.\n\n\nOverall, this suggests a slightly different view of updatelessness. Updatelessness is not necessarily a property of decision theories. It is the natural thing to happen when you apply acausal decision theory to updating based on new information.\n\n\n**Acknowledgment:** This work was funded by the Foundational Research Institute (now the [Center on Long-Term Risk](https://longtermrisk.org/)).", "url": "https://casparoesterheld.com/2016/11/21/thoughts-on-updatelessness/", "title": "Thoughts on Updatelessness", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2016-11-20T23:00:00Z", "authors": ["Caspar Oesterheld"], "summary": [], "id": "3eb49606f2cc39f9b833bd92b4314c7b"} {"text": "(This post assumes some knowledge of the decision theory of Newcomb-like scenarios.)\n\n\nOne problem in the decision theory of Newcomb-like scenarios (i.e. the study of whether causal, evidential or some other decision theory is true) is that even the seemingly obvious basics are fiercely debated. Newcomb’s problem seems to be fundamental and the solution obvious (to both sides), and yet scholars disagree about its resolution. If we already fail at the basics, how can we ever settle this debate?\n\n\nIn this post, I propose a solution. Specifically, I will introduce a very plausible general principle that decision rules should abide by. One may argue that settling on powerful general rules (like the one I will propose) must be harder than settling single examples (like Newcomb’s problem). However, this is not universally the case. Especially in decision theory, we should expect general principles to be especially convincing because a common defense of two-boxing in Newcomb’s scenario is that Newcomb’s problem is just a weird edge case in which rationality is punished. By introducing a general principle that CDT (or, perhaps, EDT) violates, we can prove the existence of a *general* flaw.\n\n\nWithout further ado, the principle is: The decisions we make should not depend on the utilities assigned to outcomes that are impossible to occur. To me this principle seems obvious and indeed it is consistent with expected value calculations in non-Newcomb-like scenarios: Imagine having to deterministically choose an action from some set *A*. (We will ignore [mixed strategies](https://en.wikipedia.org/wiki/Strategy_(game_theory)#Pure_and_mixed_strategies).) The next state of the world is sampled from a set of states *S* via a distribution P and depends on the chosen action. We are also given a utility function *U*, which assigns values to pairs of a state and an action. 
Let *a* be an action and let *s* be a possible state. If *P*(*s*,*a*) = 0 (or *P*(*s*|*a*)=0 or *P*(*s* given the causal implications of *a*)=0 – we assume all of these to be equivalent in this non-Newcomb-like scenario), then it doesn’t matter what *U*(*s*,*a*) is, because in an [expected value](https://en.wikipedia.org/wiki/Expected_value) calculation, *U*(*s*,*a*) will always be multiplied by *P*(*s*,*a*)=0. That is to say, any expected value decision rule gives the same outcome regardless of *U*(*s*,*a*). So, expected value decision rules abide by this principle at least in non-Newcomb-like scenarios.\n\n\nLet us now apply the principle to a Newcomb-like scenario, specifically to the prisoner’s dilemma played against an exact copy of yourself. Your actions are *C* and *D*. Your opponent is the “environment” and can also choose between *C* (cooperation) and *D* (defection). So, the possible outcomes are (*C*,*C*), (*C*,*D*), (*D*,*C*) and (*D*,*D*). The probabilities P(*C*,*D*) and P(*D*,*C*) are both 0. Applied to this Newcomb-like scenario, the principle of the irrelevance of impossible outcomes states that our decision should only depend on the utilities of (*C*,*C*) and (*D*,*D*). Evidential decision theory behaves in accordance with this principle. (I leave it as an exercise to the reader to verify this.) Indeed, I suspect that it can be shown that EDT generally abides by the principle of the irrelevance of impossible outcomes. The choice of causal decision theory on the other hand *does* depend on the utilities of the impossible outcomes *U*(*D*,*C*) and *U*(*C*,*D*). Remember that in the prisoner’s dilemma the payoffs are such that *U*(*D*,*x*)>*U*(*C*,*x*) for any action *x* of the opponent, i.e. no matter the opponent’s choice it is always better to defect. This [dominance](https://en.wikipedia.org/wiki/Strategic_dominance) is given as the justification for CDT’s decision to defect. But let us say we increase the utility of *U*(*C*,*D*) such that *U*(*C*,*D*)>*U*(*D*,*D*) and decrease the utility of *U*(*D*,*C*) such that *U*(*D*,*C*)<*U*(*C*,*C*). Of course, we must make these changes for the utility functions of both players so as to retain symmetry. After these changes, the dominance relationship is reversed: *U*(*C*,*x*)>*U*(*D*,*x*) for any action *x*. Of course, the new [payoff matrix](https://en.wikipedia.org/wiki/Normal-form_game) is not that of a prisoner’s dilemma anymore – the game is different in important ways. But when played against a copy, these differences do not seem significant, because we only changed the utilities of outcomes that were impossible to achieve anyway. Nevertheless, CDT would switch from *D* to *C* upon being presented with these changes, thus violating the principle of the irrelevance of impossible outcomes. This is a *systematic* flaw in CDT: Its decisions depend on the utility of outcomes that it can already know to be impossible.\n\n\nThe principle of the irrelevance of impossible outcomes can be used beyond arguing against CDT. As you may remember from [my post on updatelessness](https://casparoesterheld.com/2016/11/21/thoughts-on-updatelessnes/), sensible decision theories will precommit to give Omega the money in the counterfactual mugging thought experiment. (If you don’t remember or haven’t read that post in the first place, this is a good time to catch up, because the following thoughts are based on the ideas from the post.) Even EDT, which ignores the utility of impossible outcomes, would self-modify in this way. 
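(Returning briefly to the prisoner’s dilemma against a copy from above: here is a toy sketch, my own and not from the post, of how a CDT-style calculation depends on the utilities of the impossible off-diagonal outcomes while an EDT-style calculation does not. The specific payoff numbers are assumptions chosen only to satisfy the inequalities discussed above.)

```python
# Toy comparison of EDT- and CDT-style reasoning in the prisoner's dilemma
# against an exact copy, whose action always matches yours.
pd = {('C', 'C'): 2, ('C', 'D'): 0, ('D', 'C'): 3, ('D', 'D'): 1}
# Same game with only the impossible outcomes (C, D) and (D, C) altered.
altered = {('C', 'C'): 2, ('C', 'D'): 5, ('D', 'C'): -5, ('D', 'D'): 1}

def edt(u):
    # Conditioning on your action: the copy matches it, so only the diagonal matters.
    return max('CD', key=lambda a: u[(a, a)])

def cdt(u, p_copy_cooperates=0.5):
    # Holding the copy's action causally fixed: the off-diagonal entries matter.
    expected = lambda a: (p_copy_cooperates * u[(a, 'C')]
                          + (1 - p_copy_cooperates) * u[(a, 'D')])
    return max('CD', key=expected)

print(edt(pd), cdt(pd))            # C D
print(edt(altered), cdt(altered))  # C C -- CDT switches, EDT does not
```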
However, the decision theory resulting from such self-modification violates the principle of the irrelevance of impossible outcomes. Remember that in counterfactual mugging, you give in because this was a good idea to precommit to when you didn’t yet know how the coin came up. However, once you know that the coin came up the unfavorable way, the positive outcome, which gave you the motivation to precommit, has become impossible. Of course, you only give in to counterfactual mugging if the reward in this now impossible branch is sufficiently high. For example, there is no reason to precommit to give in if you lose money in both branches. This means that once you have become updateless, you violate the principle of the irrelevance of impossible outcomes: your decision in counterfactual mugging depends on the utility you assign to an outcome that cannot happen anymore.\n\n\n**Acknowledgment:** This work was funded by the Foundational Research Institute (now the [Center on Long-Term Risk](https://longtermrisk.org/)).", "url": "https://casparoesterheld.com/2017/01/17/decision-theory-and-the-irrelevance-of-impossible-outcomes/", "title": "Decision Theory and the Irrelevance of Impossible Outcomes", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-01-16T23:00:00Z", "authors": ["Caspar Oesterheld"], "summary": [], "id": "c4d9556016247e25a84d2ee80b28a81e"} {"text": "When taking others’ preferences into account, we will often want to idealize them rather than taking them too literally. Consider the following example. You hold a glass of transparent liquid in your hand. A woman walks by, says that she is very thirsty and would like to drink from your glass. What she doesn’t know, however, is that the water in the glass is (for some reason not relevant to this example) poisoned. Should you allow her to drink? Most people would say you should not. While she does desire to drink out of the glass, this desire would probably disappear upon gaining knowledge of its content. Therefore, one might say that her object-level preference is to drink from the glass, while her idealized preference would be not to drink from it. There is not too much literature on preference idealization, as far as I know, but, if you’re not already familiar with it, anyway, consider looking into “[Coherent Extrapolated Volition](https://wiki.lesswrong.com/wiki/Coherent_Extrapolated_Volition)*“.*\n\n\nPreference idealization is not always as easy as inferring that someone doesn’t want to drink poison, and in this post, I will discuss a particular sub-problem: accounting for [cognitive biases](https://en.wikipedia.org/wiki/Cognitive_bias), i.e. systematic mistakes in our thinking, as they pertain to our moral judgments. However, the line between biases and genuine moral judgments is sometimes not clear.\n\n\nSpecifically, we look at cognitive biases that people exhibited in non-moral decisions, where their status as a bias to be corrected is much less controversial, but which can explain certain ethical intuitions. By offering such an [error theory](https://casparoesterheld.files.wordpress.com/2018/02/baggini_fosl_error_theory.pdf) of a moral intuition, i.e. an explanation for how people could erroneously come to such a judgment, the intuition is called into question. Defendants of the intuition can respond that even if the bias can be used to explain the genesis of that moral judgment, they would nonetheless stick with that moral intuition. 
After all, the existence of *all* our moral positions can be explained by non-moral facts about the world – “[explaining is not explaining away](http://lesswrong.com/lw/oo/explaining_vs_explaining_away/)”. Consider the following examples.\n\n\n[*Omission bias*](https://en.wikipedia.org/wiki/Omission_bias): People judge consequences of inaction as less severe than those of action. Again, this is clearly a bias in some cases, especially non-moral ones. For example, losing $1,000 by not responding to your bank in time is just as bad as losing $1,000 by throwing them out of the window. A business person who judges the two equivalent losses equally will *ceteris paribus* be more successful. Nonetheless, most people distinguish between [act and omission](https://plato.stanford.edu/entries/doing-allowing/) in cases like the [fat man trolley problem](https://en.wikipedia.org/wiki/Trolley_problem#The_fat_man).\n\n\n[*Scope neglect*](https://en.wikipedia.org/wiki/Scope_neglect): The scope or size of something often has little or no effect on people’s thinking when it should have. For example, when three groups of people were asked what they would pay for interventions that would affect 2,000, 20,000, or 200,000 birds, people were willing to pay roughly the same amount of money irrespective of the number of birds. While scope neglect seems clearly wrong in this (moral) decision, it is less clearly so in other areas. For example, is a flourishing posthuman civilization with 2 trillion inhabitants really twice as good as one with 1 trillion? It is not clear to me whether answering “no” should be regarded as a judgment clouded by scope neglect (caused, e.g., by our [inability to imagine](https://casparoesterheld.com/2015/12/06/cheating-at-thought-experiments/) the two civilizations in question) or a moral judgment that is to be accepted.\n\n\n[*Contrast effect*](https://en.wikipedia.org/wiki/Contrast_effect) (also see [decoy effect](https://en.wikipedia.org/wiki/Decoy_effect), [social comparison bias](https://en.wikipedia.org/wiki/Social_comparison_bias), [Ariely on relativity](https://en.wikipedia.org/wiki/Predictably_Irrational#The_Truth_about_Relativity), [mere subtraction paradox](https://en.wikipedia.org/wiki/Mere_addition_paradox#Alternative_usage), [Less-is-better effect](https://en.wikipedia.org/wiki/Less-is-better_effect)): Consider the following market of computer hard drives, from which you are to choose one.\n\n\n\n\n| | | | |\n| --- | --- | --- | --- |\n| Hard drive model | Model 1 | Model 2 | Model 3 (decoy) |\n| Price | $80 | $120 | $130 |\n| Capacity | 250GB | 500GB | 360GB |\n\n\nGenerally, one wants to expend as little money as possible while maximizing capacity. In the absence of model 3, the decoy, people may be undecided between models 1 and 2. However, when model 3 is introduced into the market, it provides a new reference point. Model 2 is better than model 3 in all regards, which increases its attractiveness to people, even relative to model 1. That is, models 1 and 2 are judged by how they compare with model 3 rather than by their own features. The effect [clearly exposes](https://en.wikipedia.org/wiki/Independence_of_irrelevant_alternatives) an instance of irrationality: the existence of model 3 doesn’t affect how model 1 compares with model 2. When applied to ethical evaluation, however, it calls into question a firmly held intrinsic moral preference for [social equality](https://en.wikipedia.org/wiki/Social_equality) and [fairness](https://en.wikipedia.org/wiki/Fairness). 
Proponents of fairness seem to assess a person’s situation by comparing it to that of Bill Gates rather than judging each person’s situation separately. Similar to how the overpriced decoy changes our evaluation of the other products, our judgments of a person’s well-being, wealth, status, etc. may be seen as irrationally depending on the well-being, wealth, status, etc. of others.\n\n\nOther examples include [peak-end rule](https://en.wikipedia.org/wiki/Peak%E2%80%93end_rule)/[extension neglect](https://en.wikipedia.org/wiki/Extension_neglect)/[evaluation by moments](http://www.vwl.tuwien.ac.at/hanappi/TEI/momentsfull.pdf) and [average utilitarianism](https://en.wikipedia.org/wiki/Average_and_total_utilitarianism); [negativity bias](https://en.wikipedia.org/wiki/Negativity_bias) and [caring more about suffering than about happiness](https://foundational-research.org/the-case-for-suffering-focused-ethics/); [psychological distance](https://en.wikipedia.org/wiki/Construal_level_theory) and [person-affecting views](https://en.wikipedia.org/wiki/Person-affecting_view); [status-quo bias](https://en.wikipedia.org/wiki/Status_quo_bias) and various population ethical views (person-affecting views, the belief that most sentient beings that already exist have lives worth living); [moral credential effect](https://en.wikipedia.org/wiki/Moral_credential_effect); [appeal to nature](https://en.wikipedia.org/wiki/Appeal_to_nature) and [social Darwinism](https://en.wikipedia.org/wiki/Social_Darwinism)/[normative evolutionary ethics](https://en.wikipedia.org/wiki/Evolutionary_ethics#Normative_evolutionary_ethics).\n\n\n**Acknowledgment:** This work was funded by the Foundational Research Institute (now the [Center on Long-Term Risk](https://longtermrisk.org/)).", "url": "https://casparoesterheld.com/2017/01/18/is-it-a-bias-or-just-a-preference-an-interesting-issue-in-preference-idealization/", "title": "Is it a bias or just a preference? An interesting issue in preference idealization", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-01-17T23:00:00Z", "authors": ["Caspar Oesterheld"], "summary": [], "id": "4639bb866a86e37dba136bb78a3ebd56"} {"text": "[This post assumes knowledge of decision theory, as discussed in Eliezer Yudkowsky’s [Timeless Decision Theory](http://intelligence.org/files/TDT.pdf) and in Arbital’s [Introduction to Logical Decision Theory](https://arbital.com/p/logical_dt/).]\n\n\nI recently discovered an interesting thought experiment, “[Betting on the Past](http://bjps.oxfordjournals.org/content/65/4/665)” by Cambridge philosopher Arif Ahmed. It can be found in his book [Evidence, Decision and Causality](http://www.cambridge.org/gb/academic/subjects/philosophy/philosophy-science/evidence-decision-and-causality), which is an elaborate defense of [Evidential Decision Theory](https://wiki.lesswrong.com/wiki/Evidential_Decision_Theory) (EDT). I believe that Betting on the Past may be used to money-pump non-EDT agents, refuting [Causal Decision Theories](https://plato.stanford.edu/entries/decision-causal/) (CDT), and potentially even ones that use [logical conditioning](https://arbital.com/p/logical_dt/), such as [Timeless Decision Theory](https://wiki.lesswrong.com/wiki/Timeless_decision_theory) (TDT) or [Updateless Decision Theory](https://wiki.lesswrong.com/wiki/Updateless_decision_theory) (UDT). At the very least, non-EDT decision theories are unlikely to win this bet. 
Moreover, no conspicuous perfect predicting powers, genetic influences, or manipulations of decision algorithms are required to make Betting on the Past work, and anyone can replicate the game at home. For these reasons, it might make a more compelling case in favor of EDT than the [Coin Flip Creation](http://lesswrong.com/r/discussion/lw/oih/did_edt_get_it_right_all_along_introducing_yet/), a problem I recently proposed in an attempt to defend EDT’s answers in [medical Newcomb problems](http://lesswrong.com/lw/gu1/decision_theory_faq/#medical-newcomb-problems). In Ahmed’s thought experiment, Alice faces the following decision problem:\n\n\n\n> *Betting on the Past*: In my pocket (says Bob) I have a slip of paper on which is written a proposition P. You must choose between two bets. Bet 1 is a bet on P at 10:1 for a stake of one dollar. Bet 2 is a bet on P at 1:10 for a stake of ten dollars. So your pay-offs are as in [Figure 1]. Before you choose whether to take Bet 1 or Bet 2 I should tell you what P is. It is the proposition that the past state of the world was such as to cause you now to take Bet 2. [[Ahmed 2014, p. 120](http://www.cambridge.org/gb/academic/subjects/philosophy/philosophy-science/evidence-decision-and-causality)]\n> \n> \n\n\nAhmed goes on to specify that Alice could indicate which bet she’ll take by either raising or lowering her hand. One can find a detailed discussion of the thought experiment’s implications, as well as a formal analysis of CDT’s and EDT’s decisions in Ahmed’s book. In the following, I want to outline a few key points.\n\n\nWould CDT win in this problem? Alice is betting on a past state of the world. She can’t causally influence the past, and she’s uncertain whether the proposition is true or not. In either case, Bet 1 strictly dominates Bet 2: no matter which state the past is in, Bet 1 always yields a higher utility. For these reasons, causal decision theories would take Bet 1. Nevertheless, as soon as Alice comes to a definite decision, she updates on whether the proposition is true or false. If she’s a causal agent, she then finds out that she has lost: the past state of the world was such as to cause her to take Bet 1, so the proposition is false. If she had taken Bet 2, she would have found out that the proposition was correct, and she would have won, albeit a smaller amount than if she had won with Bet 1.\n\n\nBetting on the Past seems to qualify as a kind of [Newcomb’s paradox](https://en.wikipedia.org/wiki/Newcomb%27s_paradox); it seems to have an equivalent payoff matrix (Figure 1).\n\n\n**Figure 1**: Betting on the past has a similar payoff matrix to Newcomb’s paradox\n\n\n\n\n| | | |\n| --- | --- | --- |\n| | P is true | P is false |\n|  Take Bet 1 | 10 | -1 |\n|  Take Bet 2 | 1 | -10 |\n\n\nFurthermore, its causal structure seems to resemble those of e.g. the Smoking Lesion or Solomon’s problem, indicating it as a kind of medical Newcomb problem. In medical Newcomb problems, a “Nature” node determines both the present state of the world (whether the agent is sick/will win the bet) and the agent’s decision (see Figure 2). 
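(Before moving on, a minimal sketch, mine rather than Ahmed’s, of the payoff logic in Figure 1, under the assumption that Alice’s decision procedure is deterministic, so that P is true exactly if she ends up taking Bet 2:)

```python
# Payoffs from Figure 1: the past already encodes whatever Alice actually does.
def payoff(bet):
    p_is_true = (bet == 'Bet 2')
    if bet == 'Bet 1':
        return 10 if p_is_true else -1
    return 1 if p_is_true else -10

print(payoff('Bet 1'))  # -1: the dominance reasoner learns P is false after choosing
print(payoff('Bet 2'))  #  1: the evidential reasoner wins the smaller bet
```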
In this regard, they differ from Newcomb’s original problem, where said node refers to the agent’s decision algorithm.\n\n\n**Figure 2**: Betting on the past (left) has a similar causal structure to medical Newcomb problems (right).\n\n\n [![screen-shot-2017-01-24-at-13-29-52](https://casparoesterheld.files.wordpress.com/2017/02/screen-shot-2017-01-24-at-13-29-52.png?w=317&resize=317%2C383&h=383#038;h=383 \"screen-shot-2017-01-24-at-13-29-52\")](https://casparoesterheld.com/screen-shot-2017-01-24-at-13-29-52/) [![screen-shot-2017-01-24-at-13-46-40](https://casparoesterheld.files.wordpress.com/2017/02/screen-shot-2017-01-24-at-13-46-40.png?w=315&resize=315%2C383&h=383#038;h=383 \"screen-shot-2017-01-24-at-13-46-40\")](https://casparoesterheld.com/screen-shot-2017-01-24-at-13-46-40/) \nOne could object to Betting on the Past being a medical Newcomb problem, since the outcomes conditional on our actions here are certain, while e.g. in the Smoking Lesion, observing our actions only shifts our probabilities in degrees. I believe this shouldn’t make a crucial difference. On the one hand, we can conceive of absolutely certain medical Newcomb cases like the *Coin Flip Creation*. On the other hand, Newcomb’s original problem is often formalized with absolute certainties as well. I’d be surprised if probabilistic vs. certain reasoning would make a difference to decision theories. First, we can always approximate certainties to an arbitrarily high degree. We might ask ourselves why a negligible further increase in certainty would at some point suddenly completely change the recommended action, then. Secondly, we’re never really certain in the real world anyway, so if the two cases would be different, this would render all thought experiments useless that use absolute certainties.\n\n\nIf Betting on the Past is indeed a kind of medical Newcomb problem, this would be an interesting conclusion. It would follow that if one prefers Bet 2, one should also one-box in medical Newcomb problems. And taking Bet 2 seems so *obviously* correct! I point this out because one-boxing in medical Newcomb problems is what EDT would do, and it is often put forward as both a counterexample to EDT and as the decision problem that separates EDT from [Logical Decision Theories](https://arbital.com/p/logical_dt/) (LDT), such as TDT or UDT. (See e.g. [Yudkowsky 2010](http://intelligence.org/files/TDT.pdf), p.67)\n\n\nBefore we examine the case for EDT further, let’s take a closer look at what LDTs would do in Betting on the Past. As far as I understand, LDTs would take correlations with other decision algorithms into account, but they would ignore “retrocausality” (i.e. smoke in the smoker’s lesion, chew gum in the chewing gum problem, etc.). If there is a purely physical cause, then this causal node isn’t altered in the logical counterfactuals that an LDT agent reasons over. Perhaps if the bet was about the state of the world yesterday, LDT would still take Bet 2. Clearly, LDT’s algorithm already existed yesterday, and it can influence this algorithm’s output; so if it chooses Bet 2, it can change yesterday’s world and make the proposition true. But at some point, this reasoning has to break down. If we choose a more distant point in the past as a reference for Alice’s bet – maybe as far back as the birth of our universe – she’ll eventually be unable to exert any possible influence via logical counterfactuals. At some point, the correlation becomes a purely physical one. 
All she can do at that point is what opponents of evidential reasoning would call “managing the news” ([Lewis, 1981](http://www.tandfonline.com/doi/pdf/10.1080/00048408112340011)) – she can merely try to go for the action that gives her the best Bayesian update.\n\n\nSo, do Logical Decision Theories get it wrong? I’m not sure about that; they come in different versions, and some haven’t yet been properly formalized, so it’s hard for me to judge. I can very well imagine that e.g. Proof-Based Decision Theory would take Bet 2, since it could prove P to be either true or false, contingent on the action it would take. I would argue, though, that if a decision theory takes Bet 2 – and if I’m right about Betting on the Past being a medical Newcomb problem – then it appears it would also have to “one-box”, i.e. take the option recommended by EDT, in other medical Newcomb problems.\n\n\nIf all of this is true, it might imply that we don’t really need LDT’s logical conditioning and that EDT’s simple Bayesian conditioning on actions could suffice. The only remaining difference between LDT and EDT would then be EDT’s lack of [updatelessness](https://casparoesterheld.com/2016/11/21/thoughts-on-updatelessnes/). What would an updateless version of EDT look like? Some progress on this front has already been made by [Everitt, Leike, and Hutter 2015](https://jan.leike.name/publications/Sequential%20Extensions%20of%20Causal%20and%20Evidential%20Decision%20Theory%20-%20Everitt,%20Leike,%20Hutter%202015.pdf). Caspar Oesterheld and I hope to be able to say more about it soon ourselves.\n\n\nAcknowledgement\n---------------\n\n\nI wrote this post while working for the Foundational Research Institute, which is now the [Center on Long-Term Risk](https://longtermrisk.org/).", "url": "https://casparoesterheld.com/2017/02/06/betting-on-the-past-by-arif-ahmed/", "title": "“Betting on the Past” by Arif Ahmed", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-02-05T23:00:00Z", "authors": ["Johannes Treutlein"], "summary": [], "id": "8893c18802ddc2eb9d3533e8d463a735"} {"text": "The following prudential argument is relatively common in my circles: [We probably live in a simulation](http://www.simulation-argument.com/), but if we don’t, our actions matter much more. Thus, expected value calculations are dominated by the utility under the assumption that we (or some copies of ours) are in the real world. Consequently, the simulation argument [affects our prioritization only slightly](https://foundational-research.org/how-the-simulation-argument-dampens-future-fanaticism) — we should still mostly act under the assumption that we are not in a simulation.\n\n\n[A commonly cited analogy is due to Michael Vassar](http://www.33rdsquare.com/2012/10/jaan-tallinns-metaphysical-quest.html): “If you think you are Napoleon, and [almost] everyone that thinks this way is in a mental institution, you should still act like Napoleon, because if you are, your actions matter a lot.” An everyday application of this kind of argument is the following: Probably, you will not be in an accident today, but if you are, the consequences for your life are enormous. So, you better fasten your seat belt.\n\n\nNote how these arguments do not affect the probabilities we assign to some event or hypothesis. 
They are only about the event’s (or hypothesis’) *prudential weight* — the extent to which we tailor our actions to the case in which the event occurs (or the hypothesis is true).\n\n\nFor total utilitarians (and many other consequentialist value systems), similar arguments apply to most theories postulating a large universe or multiverse. To the extent that it makes a difference for our actions, we should tailor them to the assumption that we live in a large multiverse with many copies of us because under this assumption we can affect the lives of many more beings.\n\n\nFor [average utilitarians](https://en.wikipedia.org/wiki/Average_and_total_utilitarianism), the exact opposite applies. Even if they have many copies, they will have an impact on a much smaller *fraction* of beings if they live in a large universe or multiverse. Thus, they should usually base their actions on the assumption of a small universe, such as a universe in which Earth is the only inhabited planet. This may already have some implications, e.g. via the [simulation argument](https://foundational-research.org/how-the-simulation-argument-dampens-future-fanaticism) or the [Fermi paradox](https://en.wikipedia.org/wiki/Fermi_paradox). If they also take the average over time — I do not know whether this is the default for average utilitarianism — they would also base their actions on the assumption that there are just a few past and future agents. So, average utilitarians are subject to a much stronger [Doomsday argument](https://en.wikipedia.org/wiki/Doomsday_argument).\n\n\nMaybe the bearing of such prudential arguments is even more powerful, though. There is some chance that [metaphysical solipsism](https://en.wikipedia.org/wiki/Metaphysical_solipsism) is true: the view that only my (or your) own mind exists and that everything else is just an illusion. If solipsism were true, our impact on average welfare (or average preference fulfillment) would be enormous, perhaps 7.5 billion times bigger than it would be under the assumption that Earth exists — about 100 billion times bigger if you also [count humans that have lived in the past](https://en.wikipedia.org/wiki/World_population#Number_of_humans_who_have_ever_lived). Solipsism seems to deserve a probability larger than one in 6 (or 100) billion. (In fact, I think solipsism is likely enough for this to qualify as a non-[Pascalian](https://en.wikipedia.org/wiki/Pascal's_Wager) argument.) So, perhaps average utilitarians should maximize primarily for their own welfare?\n\n\n### Acknowledgements\n\n\nThe idea of this post is partly due to Lukas Gloor. This work was funded by the Foundational Research Institute (now the [Center on Long-Term Risk](https://longtermrisk.org/)).", "url": "https://casparoesterheld.com/2017/03/15/the-average-utilitarians-solipsism-wager/", "title": "The average utilitarian’s solipsism wager", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-03-14T23:00:00Z", "authors": ["Caspar"], "summary": [], "id": "3433fe7f443a3a9c67777261fb3cd136"} {"text": "I’m currently writing a piece on anthropic uncertainty in Newcomb problems. The idea is that whenever someone simulates us to predict our actions, this leads us to have anthropic uncertainty about whether we’re in this simulation or not. (If we knew whether we were in the real world or in the simulation, then the simulation wouldn’t fulfill its purpose anymore.) 
This kind of reasoning changes quite a lot about the answers that decision theories give in predictive dilemmas. It makes their reasoning “more [updateless](https://casparoesterheld.com/2016/11/21/thoughts-on-updatelessnes/)”, since they reason from a more impartial stance: a stance from which they don’t know their exact position in the thought experiment yet.\n\n\nThis topic isn’t new, but it hasn’t been discussed in-depth before. As far as I am aware, it has been brought up on LessWrong [by gRR](http://lesswrong.com/lw/asi/anthropic_reasoning_by_cdt_in_newcombs_problem/) and in [two](http://lesswrong.com/lw/5k/sleeping_beauty_gets_counterfactually_mugged/) [blog posts](http://lesswrong.com/lw/f37/naive_tdt_bayes_nets_and_counterfactual_mugging/) by Stuart Armstrong. Outside LessWrong, there is a post by [Scott Aaronson](http://www.scottaaronson.com/blog/?p=30), and one by [Andrew Critch](http://acritch.com/deserving-trust/#more-2581). The idea is also mentioned in passing by Neal (2006, p. 13). Are there any other sources and discussions of it that I have overlooked?\n\n\nIn this post, I examine what the assumption that predictions or simulations lead to anthropic uncertainty implies for the [Evidential Blackmail](https://agentfoundations.org/item?id=32) (also XOR Blackmail), a problem which is often presented as a counter-example to evidential decision theory (EDT) (Cf. Soares & Fallenstein, 2015, p. 5; Soares & Levinstein, 2017, pp. 3–4). A similar problem has been introduced as “Yankees vs. Red Sox” by Arntzenius (2008), and discussed by Ahmed and Price (2012). I would be very grateful for any kind of feedback on my post.\n\n\nWe could formalize the blackmailer’s procedure in the Evidential Blackmail something like this:\n\n\n`def blackmailer(): \n\n    your_action = your_policy(receive_letter) \n\n    if predict_stock() == 'retain' and your_action == 'pay': \n\n        return 'letter' \n\n    elif predict_stock() == 'fall' and your_action == 'not pay': \n\n        return 'letter' \n\n    else: \n\n        return 'no letter'`\n\n\nLet p denote the probability P(retain) with which our stock retains its value a. The blackmailer asks us for an amount of money b, where 0\n\n\nSchwarz, W. (2015). Lost memories and useless coins: revisiting the absentminded driver. Synthese, 192(9), 3011–3036.\n\n\nSoares, N., & Fallenstein, B. (2015, July 7). Toward Idealized Decision Theory. arXiv [cs.AI]. Retrieved from \n\n\nSoares, N., & Levinstein, B. (2017). Cheating Death in Damascus. Retrieved from ", "url": "https://casparoesterheld.com/2017/05/12/anthropic-uncertainty-in-the-evidential-blackmail/", "title": "Anthropic uncertainty in the Evidential Blackmail", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-05-11T22:00:00Z", "authors": ["Johannes Treutlein"], "summary": [], "id": "e6941ad6766a1b232a16ae83b4924918"} {"text": "Neglectedness (or crowdedness) is a heuristic that effective altruists use to assess how much impact they could have in a specific cause area. It is usually combined with scale (a.k.a. importance) and tractability (a.k.a. solvability), which together are meant to approximate expected value. (In fact, under certain idealized definitions of the three factors, multiplying them [is equivalent to](https://80000hours.org/articles/problem-framework/#introducing-how-we-define-the-factors) expected value. However, this removes the heuristic nature of these factors and probably does not describe how people typically apply them.) 
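A quick sketch of the identity behind that parenthetical, paraphrasing the linked 80,000 Hours definitions (the exact phrasing of the three factors is my gloss rather than a quote):

$$\underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{scale}}\times\underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}}\times\underbrace{\frac{\%\text{ increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}=\frac{\text{good done}}{\text{extra dollar}}.$$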
For introductions and thoughts on the framework as well as neglectedness in particular see:\n\n\n* Benjamin Todd: [*A framework for strategically selecting a cause*](https://80000hours.org/2013/12/a-framework-for-strategically-selecting-a-cause/).\n* Paul Christiano: [*Neglectedness and impact*](https://80000hours.org/2014/01/neglectedness-and-impact/).\n* 80,000 hours: [*How to compare different global problems in terms of impact*](https://80000hours.org/articles/problem-framework/).\n* William MacAskill: *Doing Good Better*. Chapter 10.\n\n\nOne reason why the neglectedness heuristic and the framework in general are so popular is that they are much easier to apply than explicit cost-effectiveness or expected value calculations. In this post, I will argue that evaluating neglectedness (which may usually be seen as the most heuristic and easiest to evaluate part of the framework) is actually quite complicated. This is in part to make people more aware of issues that are sometimes not and often only implicitly taken into account. In some cases, it may also be an argument against using the heuristic at all. Presumably, most of the following considerations won’t surprise many practitioners. Nonetheless, it appears useful to write them down, which, to my knowledge, hasn’t been done before.\n\n\nNeglectedness and diminishing returns\n=====================================\n\n\nThere are a few different definitions of neglectedness. For example, consider the following three:\n\n\n1. “If we add more resources to the cause, we can expect more promising interventions to be carried out.” ([source](https://80000hours.org/2013/12/a-framework-for-strategically-selecting-a-cause/))\n2. You care about a cause much more than the rest of society. ([source](https://80000hours.org/2014/01/neglectedness-and-impact/))\n3. “How many people, or dollars, are currently being dedicated to solving the problem?” ([source](https://80000hours.org/articles/problem-framework/#definition-2))\n\n\nThe first one is quite close to expected value-type calculations and so it is quite clear why it is important. The second and third are more concrete and easier to measure but ultimately only [relevant because they are proxies of the first](https://80000hours.org/articles/problem-framework/#why-is-it-important). If society is already investing a lot into a cause, then the most promising interventions in that cause area are already taken up and only less effective ones remain.\n\n\nBecause the second and, even more so, the third are easier to measure, I expect that, in practice, most people use these two when they evaluate neglectedness. Incidentally, these definitions also fit the terms “neglectedness” and “crowdedness” much better. I will argue that neglectedness in the second and third sense has to be translated into neglectedness into the first sense and that this translation is difficult. Specifically, I will argue that the [diminishing returns curves](https://en.wikipedia.org/wiki/Diminishing_returns) on which the connection between already invested resources and the value of the marginal dollar is based on can assume different scales and shapes that have to be taken into account.\n\n\nA standard diminishing return curve may look roughly like this:\n\n\n![IMG_20170621_133952](https://casparoesterheld.files.wordpress.com/2017/06/img_20170621_133952.jpg?w=300&h=225)\n\n\nThe x-axis represents the amount of resources invested into some intervention or cause area, the y-axis represents the returns of that investment. 
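(A minimal way to generate a curve of this shape, as a toy illustration of my own rather than anything from the post:)

```python
import numpy as np

# Toy diminishing-returns curve: returns grow logarithmically with cumulative
# investment x, so the marginal return of the next dollar falls roughly in
# inverse proportion to how much has already been invested.
def returns(x, k=1.0):
    return k * np.log1p(x)

def marginal_return(x, k=1.0):
    return k / (1.0 + x)

for invested in (1e3, 1e6, 1e9):
    print(f'${invested:,.0f} invested: total value {returns(invested):.1f}, '
          f'marginal value per extra dollar {marginal_return(invested):.1e}')
```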
The derivative of the returns (i.e., the marginal returns) decreases, [potentially](https://80000hours.org/2014/01/neglectedness-and-impact/) in inverse proportion to the cumulative investment.\n\n\nEven if returns diminish in a way similar to that shape, there is still the question of the scale of that graph (not to be confused with the scale/importance of the cause area), i.e. whether values on the x-axis are in the thousands, millions or billions. In general, returns probably diminish slower in cause areas that are in some sense large and uniform. Take the global fight against malaria. Intervening in some areas is more effective than in others. For example, it is more effective in areas where malaria is more common, or where it is easier to, say, provide mosquito nets, etc. However, given how widespread malaria is (about 300 million cases in 2015), I would expect that there is a relatively large number of areas almost tied for the most effective places to fight malaria. Consequently, I would guess that once the most effective intervention is to distribute provide mosquito nets, even hundreds of millions do not diminish returns all that much.\n\n\nOther interventions have much less room for funding and thus returns diminish much more quickly. For example, the returns of helping some specific person will usually diminish way before investing, say, a billion dollars.\n\n\nIf you judge neglectedness only based on the raw amount of resources invested into solving a problem ([as suggested by 80,000 hours](https://80000hours.org/articles/problem-framework/#how-to-assess-it-2)), then this may make small cause areas look a lot more promising than they actually are. Depending on the exact definitions, this remains the case if you combine neglectedness with scale and tractability. For example, consider the following two interventions:\n\n\n1. The global fight against malaria.\n2. The fight against malaria in some randomly selected subset of 1/100th of the global area or population.\n\n\nThe two should usually be roughly equally promising. (Perhaps 1 is a bit more promising because every intervention contained in 2 is also in 1. On the other hand, that would make “solve everything” hard to beat as an intervention. Of course, 2 can also be more or less promising if an unusual 1/100th is chosen.) But because the raw amount of resources invested into 1 is presumably 100 times as big as the amount of resources invested into 2, 2 would, on a naive view, be regarded as much more neglected than 1. The product of scale and tractability is the same in 1 and 2. (1 is a 100 times bigger problem, but solving it in its entirety is also roughly 100 times more difficult, though I presume that some definitions of the framework judge this differently. In general, it seems fine to move considerations out of neglectedness into tractability and scope [as long as](http://effective-altruism.com/ea/ss/the_importantneglectedtractable_framework_needs/) they are not double-counted or forgotten.) Thus, the overall product of the three is greater for 2, which appears to be wrong. If on the other hand, neglectedness denotes the extent to which returns have diminished (the first of the three definitions given at the beginning of this section), then the neglectedness of 1 and 2 will usually be roughly the same.\n\n\nBesides the scale of the return curve, the shape can also vary. 
In fact, I think many interventions initially face increasing returns from learning/research, creating economies of scale, specialization within the cause area, etc. For example, in most cause areas, the first $10,000 are probably invested into prioritization, organizing, or (potentially symbolic) interventions that later turn out to be suboptimal. So, in practice return curves may actually look more like the following:\n\n\n![IMG_20170621_134248](https://casparoesterheld.files.wordpress.com/2017/06/img_20170621_134248.jpg?w=300&h=225)\n\n\nThis adds another piece of information (besides scale) that needs to be taken into account to translate the amount of invested resources into how much returns have diminished: how and when do returns start to diminish?\n\n\nThere are many other return curve shapes that may be less common but mess up the neglectedness framework more. For example, some projects produce some large amount of value if they succeed but produce close to no value if they fail. Thus, the (actual not expected) return curve for such projects may look like this:\n\n\n![IMG_20170621_134241](https://casparoesterheld.files.wordpress.com/2017/06/img_20170621_134241.jpg?w=300&h=225)\n\n\nExamples may include developing vaccines, colonizing Mars or [finding cause X](https://www.effectivealtruism.org/articles/three-heuristics-for-finding-cause-x/).\n\n\nIf such a cause area is already relatively crowded according to the third (and second) sense, that may make them *less* “crowded” in the first sense. For example, if nobody had invested money into finding a vaccine against malaria (and you don’t expect others to invest money into it into the future either, see below) then this cause area is maximally neglected in the second and third sense. However, given [how expensive clinical trials are](http://www.nature.com/nrd/journal/v16/n6/full/nrd.2017.70.html), the marginal returns of donating a few thousand dollars into it are essentially zero. If on the other hand, others have already contributed enough money to get a research project off the ground at all, then the marginal returns are higher, because there is at least some chance that your money will enable a trial in which a vaccine is found. (Remember that we don’t know the exact shape of the return curve, so we don’t know when the successful trial is funded.)\n\n\nI would like to emphasize that the point of this section is not so much that people apply neglectedness incorrectly by merely looking at the amount of resources invested into a cause and not thinking about implications in terms of diminishing returns at all. Instead, I suspect that most people implicitly translate into diminishing returns and take the kind of the project into account. However, it may be beneficial if people were more aware of this issue and how it makes evaluating neglectedness more difficult.\n\n\nFuture resources\n================\n\n\nWhen estimating the neglectedness of a cause, we need to take into account, not only people who are currently working on the problem (as a literal reading of [80,000 hours’ definition](https://80000hours.org/articles/problem-framework/#definition-2) suggests), but also people who have worked on it in the past and future. If a lot of people have worked on a problem in the past, then this indicates that the low-hanging fruit has already been picked. Thus, even if nobody is working in the area anymore, marginal returns have probably diminished a lot. 
I can’t think of a good example where this is a decisive consideration because if an area has been given up on (such that there is a big difference between past and current attention), it will usually score low in tractability, anyway. Perhaps one example is the search for new ways to organize society, government and economy. Many resources are still invested into thinking about this topic, so even if we just consider resources invested today, it would not do well in terms of neglectedness. However, if we consider that people have thought about and “experimented” in this area for thousands of years, it appears to be even more crowded.\n\n\nWe also have to take future people and resources into account when evaluating neglectedness. Of course, future people cannot “take away” the most promising intervention in the way that current and past people can. However, their existence causes the top interventions [to be performed anyway](https://concepts.effectivealtruism.org/concepts/counterfactual-considerations/). For example, let’s say that there are 1000 equally costly possible interventions in an area, generating 1000, 999, 998, …, 1 “utils” (or lives saved, years of suffering averted, etc.), respectively. Each intervention can only be performed once. The best 100 interventions have already been taken away by past people. Thus, if you have money for one intervention, you can now only generate 900 utils. But if you know that future people will engage in 300 further interventions in that area, then whether you intervene or not actually only makes a difference of 600 utils. All interventions besides the one generating 600 utils would have been executed anyway. (In [Why Charities Don’t Differ Astronomically in Cost-Effectiveness](http://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/), Brian Tomasik [makes a similar point](http://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/#Returns_look_high_before_big_players_enter).)\n\n\nThe number of future people who would counterfactually engage in some cause area is an important consideration in many cause areas considered by effective altruists. In general, if a cause area is neglected by current and past people, the possibility of future people engaging in an intervention creates a lot of variance in neglectedness evaluations. If recently 10 people started working on an area, then it is very uncertain how much attention it will have in the future. And if it will receive a lot more attention regardless of our effort, then the neglectedness score may change by a factor of 100. The future resources that will go into long-established (and thus already less neglected) cause areas, on the other hand, are easier to predict and can’t make as much of a difference. \n\n\n\n\nOne example where future people and resources are an important consideration is AI safety. People often state that AI safety is a highly neglected cause area, presumably under the assumption that this should be completely obvious given how few people currently work in the area. At least, it is rare that the possibility of future people going into AI safety is considered explicitly. Langan-Dathi even writes that “due to [AI safety] being a recent development it is also highly neglected.” I, on the other hand, would argue that being a recent development only makes a cause *look* highly neglected if one doesn’t consider future people. 
(Again, Brian [makes almost the same point regarding AI safety](http://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/#Returns_look_high_before_big_players_enter).)\n\n\nOverall, I think many questions in AI safety should nonetheless be regarded as relatively neglected because I think there is a good chance that future people won’t recognize them as important fast enough. That said, I think some AI safety problems will become relevant in regular AI capability research or near time applications (such as self-driving cars). For example, I expect that some of [Amodei et al.’s (2016)](https://arxiv.org/abs/1606.06565) “Concrete Problems in AI Safety” will be (or would have been) picked up, anyway. Research in these areas of AI safety is thus potentially less intrinsically valuable, although it may still have a lot of instrumental benefits that make them worthwhile to pursue.\n\n\nMy impression is that neglecting future people in evaluating neglectedness is more common than forgetting to translate from invested resources into diminishing marginal returns. Nonetheless, in the context of this post the point of this section is that considering future resources makes neglectedness more difficult to evaluate. Obviously, it is hard to foresee how many resources will be invested into a project in the future. Because the most promising areas will not have received a lot of attention, yet, the question of their neglectedness will be dominated by how much resources they will receive in the future. Thus, in the most important cases, neglectedness is hard to estimate.\n\n\nWhat should count as “the same cause area”?\n===========================================\n\n\nAt least the operationalization of neglectedness involves estimating the amount of (past, current and future) resources invested into a cause area. But which resources count as going into the same cause area? For example, if the cause area is malaria, should you count people who work in global poverty as working in the same cause area?\n\n\nBecause the number of people working in an area is only relevant as a proxy for how much marginal returns have diminished, the answer seems to be: Count people (and resources) to the extent that their activities diminish the marginal returns in the cause area in question. Thus, resources invested into alleviating global poverty have to be taken into account, because if people’s income increases, this will allow them to take measures against malaria as well.\n\n\nAs another example, consider the cause area of advocating some moral view X (say effective altruism). If only a few people currently promote that view, then one may naively view advocating X as neglected. However, if neglectedness is intended to be a proxy for diminishing returns, then it seems that we also have to take into account moral advocates of other views. Because most people regularly engage in some form of moral advocacy (e.g., when they talk about morality with their friends and children), many people already hold moral views that our advocacy has to compete with. Thus, we may want to take these other moral advocates into account for evaluating neglectedness. That said, if we apply neglectedness together with tractability and scope, it seems reasonable to include such considerations in either tractability or neglectedness. (As Rob Wiblin [remarks](http://effective-altruism.com/ea/ss/the_importantneglectedtractable_framework_needs/), the three factors blur heavily into each other. 
In particular, neglectedness can make an intervention more tractable. As Wiblin notes, we should take care not to double-count arguments. We also shouldn’t forget to count arguments at all, though.)\n\n\nAcknowledgements\n================\n\n\nI am indebted to Tobias Baumann for valuable comments. I wrote this post while working for the Foundational Research Institute, which is now the [Center on Long-Term Risk](https://longtermrisk.org/).", "url": "https://casparoesterheld.com/2017/06/25/complications-in-evaluating-neglectedness/", "title": "Complications in evaluating neglectedness", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-06-24T22:00:00Z", "authors": ["Caspar Oesterheld"], "summary": [], "id": "92935d5b67f7adeeda1a26f62675a36a"} {"text": "One classic story about [Newcomb’s problem](https://en.wikipedia.org/wiki/Newcomb's_paradox) is that, at least initially, people one-box and two-box in roughly equal numbers (and that everyone is confident in their position). To find out whether this is true or what exact percentage of people would one-box I conducted a meta-survey of existing polls of people’s opinion on Newcomb’s problem.\n\n\nThe surveys I found are listed in the following table:\n\n\n\nI deliberately included even surveys with tiny sample sizes to test whether the results from the larger sample size surveys are robust or whether they depend on the specifics of how they obtained the data. For example, the description of Newcomb’s problem in the Guardian survey contained a paragraph on why one should one-box (written by [Arif Ahmed](http://www.phil.cam.ac.uk/people/teaching-research-pages/ahmed/ahmed-page), author of [*Evidence, Decision and Causality*](http://www.cambridge.org/us/academic/subjects/philosophy/philosophy-science/evidence-decision-and-causality?format=AR&isbn=9781316056073#hCsZHCS60f1McRFp.97)) and a paragraph on why one should two-box (by [David Edmonds](https://en.wikipedia.org/wiki/David_Edmonds_(philosopher))). Perhaps the persuasiveness of these arguments influenced the result of the survey?\n\n\nLooking at all the polls together, it seems that the picture is at least somewhat consistent. The two largest surveys of non-professionals both give one-boxing almost the same small edge. The other results diverge more, but some can be easily explained. For example, decision theory is a commonly discussed topic on LessWrong with some of the opinion leaders of the community ([including](http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/) founder Eliezer Yudkowsky) endorsing one-boxing. It is therefore not surprising that opinions on LessWrong have converged more than elsewhere. 
Considering the low sample sizes, the other smaller surveys of non-professionals also seem reasonably consistent with the impression that one-boxing is only slightly more common than two-boxing.\n\n\nThe surveys also show that, as has often been remarked on, there exists a significant difference between opinion among the general population / “amateur philosophers” and professional philosophers / decision theorists (though the consensus among decision theorists is not nearly as strong as on LessWrong).\n\n\n**Acknowledgment:** This work was funded by the Foundational Research Institute (now the [Center on Long-Term Risk](https://longtermrisk.org/)).", "url": "https://casparoesterheld.com/2017/06/27/a-survey-of-polls-on-newcombs-problem/", "title": "A survey of polls on Newcomb’s problem", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-06-26T22:00:00Z", "authors": ["Caspar"], "summary": [], "id": "776a1534b7730f654dd3013e50a981ca"} {"text": "A few weeks ago, I [wrote](http://lesswrong.com/r/discussion/lw/pft/naturalized_induction_a_challenge_for_evidential/) about the BPB problem and how it poses a problem for classical/non-[logical](https://arbital.com/p/logical_dt/) decision theories. In my post, I briefly mentioned a behaviorist approach to BPB, only to immediately discard it:\n\n\nOne might think that one could map between physical processes and algorithms on a pragmatic or functional basis. That is, one could say that a physical process A implements a program p to the extent that the results of A correlate with the output of p. I think this idea goes into the right direction and we will later see an implementation of this pragmatic approach that does away with naturalized induction. However, it feels inappropriate as a solution to BPB. The main problem is that two processes can correlate in their output without having similar subjective experiences. For instance, it is easy to show that [Merge sort and Insertion sort](https://en.wikipedia.org/wiki/Sorting_algorithm#Popular_sorting_algorithms) have the same output for any given input, even though they have very different “subjective experiences”.\n\n\nSince writing the post I became more optimistic about this approach because the counterarguments I mentioned aren’t particularly persuasive. The core of the idea is the following: Let A and B be parameterless algorithms[1](#fn1). We’ll say that A and B are equivalent if we believe that A outputs x iff B outputs x. In the context of BPB, your current decision is an algorithm A and we’ll say B is an instance or implementation of A/you iff A and B are equivalent. In the following sections, I will discuss this approach in more detail.\n\n\nYou still need interpretations\n==============================\n\n\nThe definition only solves one part of the BPB problem: specifying equivalence between algorithms. This would solve BPB if all agents were bots (rather than parts of a bot or collections of bots) in Soares and Fallenstein’s [Botworld 1.0](https://intelligence.org/files/Botworld.pdf). But in a world without any Cartesian boundaries, one still has to map parts of the environment to parameterless algorithms. This could, for instance, be a function from histories of the world onto the output set of the algorithm. 
For example, if one’s set of possible world models is a set of cellular automata (CA) with various different initial conditions and one’s notion of an algorithm is something operating on natural numbers, then such an interpretation *i* would be a function from CA histories to the set of natural numbers. Relative to *i*, a CA with initial conditions contains an instance of algorithm A if A outputs *x* <=> *i*(*H*)=*x*, where *H* is a random variable representing the history created by that CA. So, intuitively, *i* is reading A’s output off from a description the world. For example, it may look at the physical signals sent by a robot’s microprocessor to a motor and convert these into the output alphabet of A. E.g., it may convert a signal that causes a robot’s wheels to spin to something like “forward”. Every interpretation *i* is a separate instance of A.\n\n\nJoke interpretations\n--------------------\n\n\nSince we still need interpretations, we still have the problem of “joke interpretations” ([Drescher 2006](https://www.gwern.net/docs/statistics/decision/2006-drescher-goodandreal.pdf), sect. 2.3; also see [this Brian Tomasik essay](http://reducing-suffering.org/interpret-physical-system-mind/#Computations_are_relative_to_interpretation) and references therein). In particular, you could have an interpretation *i* that does most of the work, so that the equivalence of A and *i*(*H*) is the result of *i* rather than the CA doing something resembling A.\n\n\nI don’t think it’s necessarily a problem that an EDT agent might optimize its action too much for the possibility of being a joke instantiation, because it gives all its copies in a world equal weight no matter which copy it believes to be. As an example, imagine that there is a possible world in which joke interpretations lead to you to identify with a rock. If the rock’s “behavior” does have a significant influence on the world and the output of your algorithm correlates strongly with it, then I see no problem with taking the rock into account. At least, that is what EDT would do anyway if it has a regular copy in that world.[2](#fn2) If the rock has little impact on the world, EDT wouldn’t care much about the possibility of being the rock. In fact, if the world also contains a strongly correlated non-instance[3](#fn3) of you that faces a real decision problem, then the rock joke interpretation would merely lead you to optimize for the action of that non-copy.\n\n\nIf you allow all joke interpretations, then you would view yourself in all worlds. Thus, the view may have similar implications as the l-zombie view where the joke interpretations serve as the l-zombies.[4](#fn4) Unless we’re trying to metaphysically justify the l-zombie view, this is not what we’re looking for. So, we may want to remove “joke interpretations” in some way. One idea could be to limit the interpretation’s computational power ([Aaronson 2011](https://arxiv.org/abs/1108.1791), sect. 6). My understanding is that this is what people in CA theory use to define the notion of implementing an algorithm in a CA, see, e.g., Cook ([2004](http://www.complex-systems.com/pdf/15-1-1.pdf), sect. 2). Another idea would be to include only interpretations that you yourself (or A itself) “can easily predict or understand”. Assuming that A doesn’t know its own output already, this means that *i* cannot do most of the work necessary to entangle A with *i*(H). (For a similar point, cf. [Bishop 2004](http://www.doc.gold.ac.uk/~mas02mb/Selected%20Papers/2004%20BICS.pdf), sect. 
“Objection 1: Hofstadter, ‘This is not science’”.) For example, if *i* would just compute A without looking at H, then A couldn’t predict *i* very well if it cannot predict itself. If, on the other hand, *i* reads off the result of A from a computer screen in H, then A would be able to predict *i*’s behavior for every instance of H. Brian Tomasik [lists](http://reducing-suffering.org/interpret-physical-system-mind/#Using_many_possible_interpretations) a few more criteria to judge interpretations by.\n\n\nIntrospective discernibility\n============================\n\n\nIn my original rejection of the behaviorist approach, I made an argument about two sorting algorithms which always compute the same result but have different “subjective experiences”. I assumed that a similar problem could occur when comparing two equivalent decision-making procedures with different subjective experiences. But now I actually think that the behaviorist approach nicely aligns with what one might call introspective discernibility of experiences.\n\n\nLet’s say I’m an agent that has, as a component, a sorting algorithm. Now, a world model may contain an agent that is just like me except that it uses a different sorting algorithm. Does that agent count as an instantiation of me? Well, that depends on whether I can introspectively discern which sorting algorithm I use. If I can, then I could let my output depend on the content of the sorting algorithm. And if I do that, then the equivalence between me and that other agent breaks. E.g., if I decide to output an explanation of my sorting algorithm, then my output would explain, say, bubble sort, whereas the other algorithm’s output would explain, say, merge sort. If, on the other hand, I don’t have introspective access to my sorting algorithm, then the code of the sorting algorithm cannot affect my output. Thus, the behaviorist view would interpret the other agent as an instantiation of me (as long as, of course, it, too, doesn’t have introspective access to its sorting algorithm). This conforms with the intuition that which kind of sorting algorithm I use is not part of my subjective experience. I find this natural relation to introspective discernibility very appealing.\n\n\nThat said, things are complicated by the equivalence relation being subjective. If you already know what A and B output, then they are equivalent if their output is the same — even if it is “coincidentally” so, i.e., if they perform completely unrelated computations. Of course, a decision algorithm will rarely know its own output in advance. So, this extreme case is probably rare. However, it is plausible that an algorithm’s knowledge about its own behavior excludes some conditional policies. For example, consider a case like Conitzer’s ([2016](https://arxiv.org/abs/1610.05733), [2017](https://arxiv.org/abs/1705.03560)), in which copies of an EU-maximizing agent face different but symmetric information. Depending on what the agent knows about its algorithm, it may view all the copies as equivalent or not. If it has relatively little self-knowledge, it could reason that if it lets its action depend on the information, the copies’ behavior would diverge. 
With more self-knowledge, on the other hand, it could reason that, because it is an EU maximizer and because the copies are in symmetric situations, its action will be the same no matter the information received.[5](#fn5)\n\n\nConsciousness\n=============\n\n\nThe BPB problem resembles the problem of consciousness: the question “does some physical system implement my algorithm?” is similar to the question “does some physical system have the conscious experience that I am having?”. For now, I don’t want to go too much into the relation between the two problems. But if we suppose that the two problems are connected, we can draw from the philosophy of mind to discuss our approach to BPB.\n\n\nIn particular, I expect that a common objection to the behaviorist approach will be that most instantiations in the behaviorist sense are [behavioral p-zombies](https://en.wikipedia.org/wiki/Philosophical_zombie#Types_of_zombies). That is, their output behavior is equivalent to the algorithm’s but they compute the output in a different way, and in particular in a way that doesn’t seem to give rise to conscious (or subjective) experiences. While the behaviorist view may lead us to identify with such a p-zombie, we can be certain, so the argument goes, that we are not given that we have conscious experiences.\n\n\nSome particular examples include:\n\n\n* Lookup table-based agents\n* Messed up causal structures, e.g. Paul Durham’s experiments with his whole brain emulation in Greg Egan’s novel [*Permutation City*](https://en.wikipedia.org/wiki/Permutation_City).\n\n\nI personally don’t find these arguments particularly convincing because I favor [Dennett’s](https://en.wikipedia.org/wiki/Consciousness_Explained) and [Brian Tomasik’s](http://reducing-suffering.org/#Consciousness) eliminativist view on consciousness. That said, it’s not clear whether eliminativism would imply anything other than relativism/anti-realism for the BPB problem (if we view BPB and philosophy of mind as sufficiently strongly related).\n\n\nAcknowledgment\n==============\n\n\nThis work was funded by the Foundational Research Institute (now the [Center on Long-Term Risk](https://longtermrisk.org/)).\n\n\n\n\n---\n\n\n1. I use the word “algorithm” in a very broad sense. I don’t mean to imply Turing computability. In fact, I think any explicit formal specification of the form “f()=…” should work for the purpose of the present definition. Perhaps, even [implicit specifications](https://en.wikipedia.org/wiki/Implicit_function) of the output would work. [↩](#ref1 \"Jump back to footnote 1 in the text.\")\n\n\n2. Of course, I see how someone would find this counterintuitive. However, I suspect that this is primarily because the rock example triggers [absurdity heuristics](https://wiki.lesswrong.com/wiki/Absurdity_heuristic) and [because it is hard to imagine](https://casparoesterheld.com/2015/12/06/cheating-at-thought-experiments/) a situation in which you believe that your decision algorithm is strongly correlated with whether, say, some rock causes an avalanche. [↩](#ref2 \"Jump back to footnote 2 in the text.\")\n\n\n3. Although the behaviorist view defines the instance-of-me property via correlation, there can still be correlated physical subsystems that are not viewed as an instance of me. 
In particular, if you strongly limit the set of allowed interpretations (see the next paragraph), then the potential relationship between your own and the system’s action may be too complicated to be expressed as A outputs x <=> i(H)=x*.*[↩](#ref3 \"Jump back to footnote 3 in the text.\")\n\n\n4. I suspect that the two might differ in medical or “common cause” Newcomb-like problems like the [coin flip creation problem](http://lesswrong.com/r/discussion/lw/oih/did_edt_get_it_right_all_along_introducing_yet/). [↩](#ref4 \"Jump back to footnote 4 in the text.\")\n\n\n5. If this is undesirable, one may try to use logical counterfactuals to find out whether B also “would have” done the same as A if A had behaved differently. However, I’m very skeptical of logical counterfactuals in general. Cf. [the “Counterfactual Robustness” section in Tomasik’s post](http://reducing-suffering.org/interpret-physical-system-mind/#Counterfactual_robustness). [↩](#ref4 \"Jump back to footnote 4 in the text.\")", "url": "https://casparoesterheld.com/2017/10/22/a-behaviorist-approach-to-building-phenomenological-bridges/", "title": "A behaviorist approach to building phenomenological bridges", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-10-21T22:00:00Z", "authors": ["Caspar Oesterheld"], "summary": [], "id": "081ac94cc3828dc9c216bbbd11b60128"} {"text": "[ETA (January 2022): My co-authors James Bell, Linda Linsefors and Joar Skalse and I give a much more detailed analysis of the dynamics discussed in this post in our paper titled [“Reinforcement Learning in Newcomblike Environments”](https://openreview.net/pdf?id=cx2q4cOBnne), published at NeurIPS 2021.]\n\n\nThe [law of effect](https://en.wikipedia.org/wiki/Law_of_effect) (LoE), as introduced on p. 244 of Thorndike’s (1911) *Animal Intelligence*, states:\n\n\n\n> Of several responses made to the same situation, those which are accompanied or closely followed by satisfaction to the animal will, other things being equal, be more firmly connected with the situation, so that, when it recurs, they will be more likely to recur; those which are accompanied or closely followed by discomfort to the animal will, other things being equal, have their connections with that situation weakened, so that, when it recurs, they will be less likely to occur. The greater the satisfaction or discomfort, the greater the strengthening or weakening of the bond.\n> \n> \n\n\nAs [I (and others) have pointed out elsewhere](https://casparoesterheld.files.wordpress.com/2018/01/learning-dt.pdf), an agent applying LoE would come to “one-box” (i.e., behave like evidential decision theory (EDT)) in Newcomb-like problems in which the payoff is eventually observed. For example, if you face Newcomb’s problem itself multiple times, then one-boxing will be associated with winning a million dollars and two-boxing with winning only a thousand dollars. (As noted in the linked note, this assumes that the different instances of Newcomb’s problem are independent. For instance, one-boxing in the first does not influence the prediction in the second. It is also assumed that CDT cannot precommit to one-boxing, e.g. because precommitment is impossible in general or because the predictions have been made long ago and thus cannot be causally influenced anymore.)\n\n\nA caveat to this result is that with randomization one can derive more causal decision theory-like behavior from alternative versions of LoE. 
Imagine an agent that chooses probability distributions over actions, such as the distribution P with P(one-box)=0.8 and P(two-box)=0.2. The agent’s physical action is then sampled from that probability distribution. Furthermore, assume that the predictor in Newcomb’s problem can only predict the probability distribution and not the sampled action and that he fills box B with the probability the agent chooses for one-boxing. If this agent plays many instances of Newcomb’s problem, then she will *ceteris paribus* fare better in rounds in which she *two-boxes*. By LoE, she may therefore update toward two-boxing being the better option and consequently two-box with higher probability. Throughout the rest of this post, I will expound on the “goofiness” of this application of LoE.\n\n\nNotice that this is not the only possible way to apply LoE. Indeed, the more natural way seems to be to apply LoE only to whatever entity the agent has the power to choose rather than something that is influenced by that choice. In this case, this is the *probability distribution* and not the action resulting from that probability distribution. Applied at the level of the probability distribution, LoE again leads to EDT. For example, in Newcomb’s problem the agent receives more money in rounds in which it chooses a higher probability of one-boxing. Let’s call this version of LoE “standard LoE”. We will call other versions, in which choice is updated to bring some other variable (in this case the physical action) to assume values that are associated with high payoffs, “non-standard LoE”.\n\n\nAlthough non-standard LoE yields CDT-ish behavior in Newcomb’s problem, it can easily be criticized on causalist grounds. Consider a non-Newcomblike variant of Newcomb’s problem in which there is no predictor but merely an entity that reads the agent’s mind and fills box B with a million dollars in causal dependence on the probability distribution chosen by the agent. The causal graph representing this decision problem is given below with the subject of choice being marked red. Unless they are equipped with an incomplete model of the world – one that doesn’t include the probability distribution step –, CDT and EDT agree that one should choose the probability distribution over actions that one-boxes with probability 1 in this variant of Newcomb’s problem. After all, choosing that probability distribution *causes* the game master to see that you will probably one-box and thus also causes him to put money under box B. But if you play this alternative version of Newcomb’s problem and use LoE on the level of one- versus two-boxing, then you would converge on two-boxing because, again, you will fare better in rounds in which you happen to two-box.\n\n\n![RandomizationBlogPost.jpg](https://casparoesterheld.files.wordpress.com/2018/02/randomizationblogpost.jpg?w=640)\n\n\nBe it in Newcomb’s original problem or in this variant of Newcomb’s problem, non-standard LoE can lead to learning processes that don’t seem to match LoE’s “spirit”. When you apply standard LoE (and probably also in most cases of applying non-standard LoE), you develop a tendency to exhibit rewarded choices, and this will lead to more reward in the future. But if you adjust your choices with some intermediate variable in mind, you may get worse and worse. 
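The following minimal sketch (my own toy version, not from the original post) makes this contrast concrete. To keep it noise-free, it uses a variant in which the predictor puts p × $1,000,000 into box B rather than filling it with probability p; the qualitative picture is the same.

```python
M, K = 1_000_000, 1_000  # potential contents of box B, and of the transparent box

def reward(p_one_box, action):
    box_b = p_one_box * M  # the prediction depends on the *chosen distribution*, not the sampled action
    return box_b if action == "one-box" else box_b + K

def expected_reward(p):
    # What standard LoE effectively evaluates: the value of *choosing* distribution p.
    return p * reward(p, "one-box") + (1 - p) * reward(p, "two-box")

for p in [0.0, 0.25, 0.5, 0.75, 1.0]:
    gap = reward(p, "two-box") - reward(p, "one-box")  # what non-standard LoE responds to
    print(f"p(one-box)={p:.2f}  E[reward | choose p]={expected_reward(p):>9,.0f}  "
          f"within-round advantage of two-boxing={gap:,.0f}")
```

Reinforcing the chosen distribution pushes p toward 1, since the expected-reward column is increasing in p, whereas reinforcing the sampled action pushes p toward 0, since two-boxing is better by $1,000 in every single round, dragging the expected reward down toward $1,000.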
For instance, in either the regular or non-Newcomblike Newcomb’s problem, non-standard LoE adjusts the choice (the probability distribution over actions) so that the (physically implemented) action is more likely to be the one associated with higher reward (two-boxing), but the choice itself (high probability of two-boxing) will be one that is associated with *low* rewards. Thus, learning according to non-standard LoE can lead to decreasing rewards (in both Newcomblike and non-Newcomblike problems).\n\n\nAll in all, what I call non-standard LoE looks a bit like a hack rather than some systematic, sound version of CDT learning.\n\n\nAs a side note, the sensitivity to the details of how LoE is set up relative to randomization shows that the decision theory (CDT versus EDT versus something else) implied by some agent design can sometimes be very fragile. I originally thought that there would generally be some correspondence between agent designs and decision theories, such that changing the decision theory implemented by an agent usually requires large-scale changes to the agent’s architecture. But switching from standard LoE to non-standard LoE is an example where what seems like a relatively small change can significantly change the resulting behavior in Newcomb-like problems. Randomization in decision markets [is](https://casparoesterheld.com/2017/12/18/futarchy-implements-evidential-decision-theory/#comment-251) another such example. (And the [Gödel machine](https://en.wikipedia.org/wiki/G%C3%B6del_machine) is yet another example, albeit one that seems less relevant in practice.)\n\n\nAcknowledgements\n================\n\n\nI thank Lukas Gloor, Tobias Baumann and Max Daniel for advance comments. This work was funded by the Foundational Research Institute (now the [Center on Long-Term Risk](https://longtermrisk.org/)).", "url": "https://casparoesterheld.com/2018/02/15/the-law-of-effect-randomization-and-newcombs-problem/", "title": "The law of effect, randomization and Newcomb’s problem", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-02-14T23:00:00Z", "authors": ["Caspar Oesterheld"], "summary": [], "id": "a2e51287f5121b9b8762e6a33b329382"} {"text": "In this post, I outline three wagers in favor of the hypothesis that [multiverse-wide superrationality](https://foundational-research.org/msr) (MSR) has action-guiding implications. MSR is based on three core assumptions:\n\n\n1. There is a large or infinite universe or multiverse.\n2. Applying an acausal decision theory.\n3. An agent’s actions provide evidence about the actions of other, non-identical agents with different goals in other parts of the universe.\n\n\nThere are three wagers corresponding to these three assumptions. The wagers works only with those value systems that can also benefit from MSR (for instance, with total utilitarianism) (see [Oesterheld, 2017, sec. 3.2](https://foundational-research.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf#necessary-preconditions)). I assume such a value system in this post. 
I am currently working on a longer paper about a wager for (ii), which will discuss the premises for this wager in more detail.\n\n\nA wager for acausal decision theory and a large universe\n--------------------------------------------------------\n\n\nIf this universe is very large or infinite, then it is likely that there is an identical copy of the part of the universe that is occupied by humans somewhere far-away in space ([Tegmark 2003, p. 464](http://space.mit.edu/home/tegmark/multiverse.pdf)). Moreover, there will be vastly many or infinitely many such copies. Hence, for example, if an agent prevents a small amount of suffering on Earth, this will be accompanied by many copies doing the same, resulting in multiple amounts of suffering averted throughout the universe.\n\n\nAssuming [causal decision theory](https://plato.stanford.edu/entries/decision-causal/) (CDT), the impact of an agent’s copies is not taken into account when making decisions—there is an evidential dependence between the agent’s actions and the actions of their copies, but no causal influence. According to evidential decision theory (EDT), on the other hand, an agent should take such dependences into account when evaluating different choices. For EDT, a choice between two actions on Earth is also a choice between the actions of all copies throughout the universe. The same holds for all other acausal decision theories (i.e., decision theories that take such evidential dependences into account): for instance, for the decision theories developed by MIRI researchers (such as functional decision theory ([Yudkowsky and Soares, 2017](https://arxiv.org/pdf/1710.05060.pdf))), and for Poellinger’s variation of CDT ([Poellinger, 2013](http://philsci-archive.pitt.edu/9887/7/newcomb_in_ckps.pdf)).\n\n\nEach of these considerations on its own would not be able to get a wager off the ground. But jointly, they can do so: on the one hand, given a large universe, acausal decision theories will claim a much larger impact with each action than causal decision theory does. Hence, there is a wager in favor of these acausal decision theories. Suppose an agent applies some meta decision theory (see [MacAskill, 2016, sec. 2](http://www.academia.edu/20352580/Smokers_Psychos_and_Decision-Theoretic_Uncertainty)) that aggregates the expected utilities provided by individual decision theories. Even if the agent assigns a small credence to acausal decision theories, these theories will still dominate the meta decision theory’s expected utilities. On the other hand, if an agent applies an acausal decision theory, they can have a much higher impact in a large universe than in a small universe. The agent should thus always act as if the universe is large, even if they only assign a very small credence to this hypothesis.\n\n\nIn conclusion, most of an agent’s impact comes from applying an acausal decision theory in a large universe. Even if the agent assigns a small credence both to acausal decision theories and to the hypothesis that the universe is large, they should still act as if they placed a high credence in both.\n\n\nA wager in favor of higher correlations\n---------------------------------------\n\n\nIn explaining the third wager, it is important to note that I assume a subjective interpretation of probability. If I say that there is a correlation between the actions of two agents, I mean that, given one’s subjective beliefs, observing one agent’s action provides evidence about the other agent’s action. 
Moreover, I assume that agents are in a symmetrical decision situation—for instance, this is the case for two agents in a prisoner’s dilemma. If the decision situation is symmetrical, and if the agents are sufficiently similar, their actions will correlate. The theory of MSR says that agents in a large universe probably are in a symmetrical decision situation ([Oesterheld, 2017, sec. 2.8](https://foundational-research.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf#compromise-strategy)).\n\n\nThere exists no general theory of correlations between different agents. It seems plausible to assume that a correlation between the actions of two agents must be based on a logical correlation between the decision algorithms that these two agents implement. But it is not clear how to think about the decision algorithms that humans implement, for instance, and how to decide whether two decision algorithms are functionally equivalent ([Yudkowsky and Soares, sec. 3](https://arxiv.org/pdf/1710.05060.pdf)). There exist solutions to these problems only in some narrow domains—for instance, for agents represented by programs written in some specific programming language.\n\n\nHence, it is also not clear which agents’ actions in a large universe correlate, given that all are in a symmetrical decision situation. It could be that an agent’s actions correlate only with very close copies. If these copies thus share the same values as the agent, then MSR does not have any action-guiding consequences. The agent will just continue to pursue their original goal function. If, on the other hand, there are many correlating agents with different goals, then MSR has strong implications. In the latter case, there can be gains from trade between these agents’ different value systems.\n\n\nJust as there is a wager for applying acausal decision theory in general, there is also a wager in favor of assuming that an agent’s actions correlate with more rather than fewer different agents. Suppose there are two hypotheses: (H1) Alice’s actions only correlate with the actions of (G1) completely identical copies of Alice, and (H2) Alice’s actions correlate with (G2) all other agents that ever gave serious consideration to MSR or some equivalent idea.\n\n\n(In both cases, I assume that Alice has seriously considered MSR herself.) G1 is a subset of G2, and it is plausible that G2 is much larger than G1. Moreover, it is plausible that there are also agents with Alice’s values among the agents in G2 which are not also in G1. Suppose 1-*p* is Alice’s credence in H1, and p her credence in H2. Suppose further that there are *n* agents in G1 and *m* agents in G2, and that *q* is the fraction of agents in G2 sharing Alice’s values. All agents have the choice between (A1) only pursuing their own values, and (A2) pursuing the sum over the values of all agents in G2. Choosing A1 gives an agent 1 utilon. Suppose *g* denotes the possible gains from trade; that is, choosing A2 produces (1+*g*)×*s* utilons for each value system, where *s* is the fraction of agents in G2 supporting that value system. 
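Before putting these pieces together, here is a small numerical sketch of the comparison (my own illustration; all numbers are made up, and the following paragraphs state the general condition). It computes EDT’s expected utility, in terms of Alice’s values, of A1 and A2 under the two hypotheses.

```python
n, m, q = 100, 1_000_000, 0.01  # copies of Alice, MSR-aware agents, fraction with her values
g, p, c = 0.2, 0.1, 0.0         # gains from trade, credence in H2, value from uncorrelated agents under H1

def expected_utility(action):
    if action == "A1":  # everyone correlated with Alice pursues only their own values
        u_h1, u_h2 = n + c, q * m
    else:               # "A2": everyone correlated with Alice pursues the compromise
        u_h1, u_h2 = (1 + g) * q * n + c, (1 + g) * q * m
    return (1 - p) * u_h1 + p * u_h2

print(expected_utility("A1"), expected_utility("A2"))  # 1090.0 vs. ~1201.1: EDT picks A2
```

With these numbers A2 wins because the H2 term is large relative to what is given up under H1, which is the pattern the general condition below captures.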
If everyone in G2 chooses A2, this produces (1+*g*)×q×*m* utilons for Alice’s value system, while, if everyone chooses A1, this produces only *q*×*m* utilons in total for Alice.\n\n\nThe decision situation for Alice can be summarized by the following choice matrix (assuming, for simplicity, that all correlations are perfect):\n\n\n\n\n| | | |\n| --- | --- | --- |\n| | H1 | H2 |\n| A1 | *n*+*c* | *q*×*m* |\n| A2 | (1+*g*)×*q*×*n*+*c* | (1+*g*)×*q*×*m* |\n\n\nHere, the cells denote the expected utilities that EDT assigns to either of Alice’s actions given either H1 or H2. *c* is a constant that denotes the expected value generated by the agents in G2 that are non-identical to Alice, given H1. It plays no role in comparing A1 and A2, since, given H1, these agents are not correlated with Alice: the value will be generated no matter which action she picks. The value for H1∧A2 is unrealistically high, since it supposes the same gains from trade as H2∧A2, but this does not matter here. According to EDT, Alice should choose A2 over A1 iff\n\n\n*g*×*p*×*q*×*m* > (1-*p*)×*n* – (1+*g*)×(1-*p*)×*n*×*q*.\n\n\nIt seems likely that *q*×*m* is larger than *n*—the requirement that an agent must be a copy of Alice restricts the space of agents more than that of having thought about MSR and sharing Alice’s values. Therefore, even if the gains from trade and Alice’s credence in H2 (i.e., *g*×*p*) are relatively small, *g*×*p*×*q*×*m* is still larger than *n*, and EDT recommends A2.\n\n\nWhile the argument for this wager is not as strong as the argument for the first two wagers, it is still plausible. It is plausible that there are much more agents having thought about MSR and sharing a person’s values than there are identical copies of the person. Hence, if the person’s actions correlate with the actions of all the agents in the larger group, the person’s actions have a much higher impact. Moreover, in this case, they plausibly also correlate with the actions of many agents holding different values, allowing for gains from trade. Therefore, one should act as if there were more rather than fewer correlations, even if one assigns a rather low credence to that hypothesis.\n\n\nAcknowledgements\n----------------\n\n\nI am grateful to Caspar Oesterheld and Max Daniel for helpful comments on a draft of this post. I wrote this post while working for the Foundational Research Institute, which is now the [Center on Long-Term Risk](https://longtermrisk.org/).", "url": "https://casparoesterheld.com/2018/03/31/three-wagers-for-multiverse-wide-superrationality/", "title": "Three wagers for multiverse-wide superrationality", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-03-30T22:00:00Z", "authors": ["Johannes Treutlein"], "summary": [], "id": "f66f78af69a49325b0b589fc57fd6db6"} {"text": "I’ve written about the question of which decision theories describe the behavior of approaches to AI like the “Law of Effect”. In this post, I would like to discuss GOLEM, an architecture for a self-modifying artificial intelligence agent described by Ben Goertzel ([2010](http://goertzel.org/GOLEM.pdf); [2012](https://www.youtube.com/watch?v=XDf4uT70W-U)). Goertzel calls it a “meta-architecture” because all of the intelligent work of the system is done by sub-programs that the architecture assumes as given, such as a [program synthesis module](https://en.wikipedia.org/wiki/Program_synthesis) (cf. 
[Kaiser 2007](https://pdfs.semanticscholar.org/a3d1/2cdf7b2810bf3c42212099f78ef4767c52d4.pdf)).\n\n\nRoughly, the top-level self-modification is done as follows. For any proposal for a (partial) self-modification, i.e. a new program to replace (part of) the current one, the “Predictor” module predicts how well that program would achieve the goal of the system. Another part of the system — the “Searcher” — then tries to find programs that the Predictor deems superior to the current program. So, at the top level, GOLEM chooses programs according to some form of expected value calculated by the Predictor. The first interesting decision-theoretical statement about GOLEM is therefore that it chooses policies — or, more precisely, programs — rather than individual actions. Thus, [it would](https://casparoesterheld.com/2016/11/21/thoughts-on-updatelessness/) probably give the money in at least some versions of [counterfactual mugging](https://wiki.lesswrong.com/wiki/Counterfactual_mugging). This is not too surprising, because it is unclear on what basis one should choose individual actions when the effectiveness of an action depends on the agent’s decisions in other situations.\n\n\nThe next natural question to ask is, of course, *what* expected value (causal, evidential or other) the Predictor computes. Like the other aspects of GOLEM, the Predictor is subject to modification. Hence, we need to ask according to what criteria it is updated. The criterion is provided by the Tester, a “hard-wired program that estimates the quality of a candidate Predictor” based on “how well a Predictor would have performed in the past” (Goertzel 2010, p. 4). I take this to mean that the Predictor is judged based on the extent to which it is able to predict the things that actually happened in the past. For instance, imagine that at some time in the past the GOLEM agent self-modified to a program that one-boxes in Newcomb’s problem. Later, the agent actually faced a Newcomb problem based on a prediction that was made before the agent self-modified into a one-boxer and won a million dollars. Then the Predictor should be able to predict that self-modifying to one-boxing in this case “yielded” getting a million dollars even though it did not do so causally. More generally, to maximize the score from the Tester, the Predictor has to compute regular (evidential) conditional probabilities and expected utilities. Hence, it seems that the EV computed by the Predictor is a regular EDT-ish one. This is not too surprising, either, because as we have seen before, it is much more common for learning algorithms to implement EDT, especially if they implement [something which looks like](https://casparoesterheld.com/2017/12/18/futarchy-implements-evidential-decision-theory/) [the Law of Effect](https://casparoesterheld.files.wordpress.com/2018/01/learning-dt.pdf).\n\n\nIn conclusion, GOLEM learns to choose policy programs based on their EDT-expected value.\n\n\nAcknowledgements\n================\n\n\nThis post is based on a discussion with Linda Linsefors, Joar Skalse, and James Bell. 
I wrote this post while working for the Foundational Research Institute, which is now the [Center on Long-Term Risk](https://longtermrisk.org/).", "url": "https://casparoesterheld.com/2018/04/26/goertzels-golem-implements-evidential-decision-theory-applied-to-policy-choice/", "title": "Goertzel’s GOLEM implements evidential decision theory applied to policy choice", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-04-25T22:00:00Z", "authors": ["Caspar Oesterheld"], "summary": [], "id": "3d2cbf528e04709b616472e83cb59404"} {"text": "“**Abstract**”: Some have claimed that moral realism – roughly, the claim that moral claims can be true or false – would, if true, have implications for AI alignment research, such that moral realists might approach AI alignment differently than moral anti-realists. In this post, I briefly discuss different versions of moral realism based on what they imply about AI. I then go on to argue that pursuing moral-realism-inspired AI alignment would bypass philosophical and help resolve non-philosophical disagreements related to moral realism. Hence, even from a non-realist perspective, it is desirable that moral realists (and others who understand the relevant realist perspectives well enough) pursue moral-realism-inspired AI alignment research.\n\n\nDifferent forms of moral realism and their implications for AI alignment\n========================================================================\n\n\nRoughly, moral realism [is](https://plato.stanford.edu/entries/moral-realism/) the view that “moral claims do purport to report facts and are true if they get the facts right.” So for instance, most moral realists would hold the statement “one shouldn’t torture babies” to be true. Importantly, this moral claim is different from a claim about baby torturing being *instrumentally* bad given some other goal (a.k.a. a [“hypothetical imperative”](https://en.wikipedia.org/wiki/Hypothetical_imperative)) such as “if one doesn’t want to land in jail, one shouldn’t torture babies.” It is uncontroversial that such claims can be true or false. Moral claims, as I understand them in this post, are also different from descriptive claims about some people’s moral views, such as “most Croatians are against babies being tortured” or “I am against babies being tortured and will act accordingly”. More generally, the versions of moral realism discussed here claim that moral truth is in some sense mind-independent. It’s not so obvious what it means for a moral claim to be true or false, so there are many different versions of moral realism. I won’t go into more detail here, though we will revisit differences between different versions of moral realism later. For a general introduction on moral realism and meta-ethics, see, e.g., the [SEP article on moral realism](https://plato.stanford.edu/entries/moral-realism/).\n\n\nI should note right here that I myself find at least “strong versions” of moral realism implausible. But in this post, I don’t want to argue about meta-ethics. Instead, I would like to discuss an implication of some versions of moral realism. 
I will later say more about why I am interested in the implications of a view I believe to be misguided, but for now suffice it to say that “moral realism” [is](https://philpapers.org/archive/BOUWDP) a majority view among professional philosophers (though I don’t know how popular the versions of moral realism studied in this post are), which makes it interesting to explore the view’s possible implications.\n\n\nThe implication that I am interested in here is that moral realism helps with AI alignment in some way. One very strong version of the idea is that the [orthogonality thesis](https://wiki.lesswrong.com/wiki/Orthogonality_thesis) is false: if there is a moral truth, agents (e.g., AIs) that are able to reason successfully about a lot of non-moral things will automatically be able to reason correctly about morality as well and will then do what they infer to be morally correct. On p. 176 of “The Most Good You Can Do”, Peter Singer defends such a view: “If there is any validity in the argument presented in chapter 8, that beings with highly developed capacities for reasoning are better able to take an impartial ethical stance, then there is some reason to believe that, even without any special effort on our part, superintelligent beings, whether biological or mechanical, will do the most good they possibly can.” In the articles “[My Childhood Death Spiral](http://lesswrong.com/lw/ty/my_childhood_death_spiral/)”, “[A Prodigy of Refutation](http://lesswrong.com/lw/u1/a_prodigy_of_refutation/)” and “[The Sheer Folly of Callow Youth](http://lesswrong.com/lw/u2/the_sheer_folly_of_callow_youth/)” (among others), Eliezer Yudkowsky says that he used to hold such a view.\n\n\nOf course, current AI techniques do not seem to automatically include moral reasoning. For instance, if you develop an [automated theorem prover](https://en.wikipedia.org/wiki/Automated_theorem_proving) to reason about mathematics, it will not be able to derive “moral theorems”. Similarly, if you use the [Sarsa algorithm](https://en.wikipedia.org/wiki/State%E2%80%93action%E2%80%93reward%E2%80%93state%E2%80%93action) to train some agent with some given reward function, that agent will adapt its behavior in a way that increases its cumulative reward regardless of whether doing so conflicts with some ethical imperative. The moral realist would thus have to argue that in order to get to AGI or superintelligence or some other milestone, we will necessarily have to develop new and very different reasoning algorithms and that these algorithms will necessarily incorporate ethical reasoning. Peter Singer doesn’t state this explicitly. However, he makes a similar argument about human evolution on p. 86f. in ch. 8:\n\n\n\n> The possibility that our capacity to reason can play a critical role in a decision to live ethically offers a solution to the perplexing problem that [effective altruism](https://en.wikipedia.org/wiki/Effective_altruism) would otherwise pose for evolutionary theory. There is no difficulty in explaining why evolution would select for a capacity to reason: that capacity enables us to solve a variety of problems, for example, to find food or suitable partners for reproduction or other forms of cooperative activity, to avoid predators, and to outwit our enemies. If our capacity to reason also enables us to see that the good of others is, from a more universal perspective, as important as our own good, then we have an explanation for why effective altruists act in accordance with such principles. 
Like our ability to do higher mathematics, this use of reason to recognize fundamental moral truths would be a by-product of another trait or ability that was selected for because it enhanced our reproductive fitness—something that in evolutionary theory is known as a [spandrel](https://en.wikipedia.org/wiki/Spandrel_(biology)).\n> \n> \n\n\nA slightly weaker variant of this strong convergence moral realism is the following: Not all superintelligent beings would be able to identify or follow moral truths. However, if we add some feature that is not directly normative, then superintelligent beings would automatically identify the moral truth. For example, David Pearce [appears to claim that](https://www.quora.com/What-is-David-Pearces-position-on-meta-ethics) “the pain-pleasure axis discloses the world’s inbuilt metric of (dis)value” and that therefore any superintelligent being that can feel pain and pleasure will automatically become a utilitarian. At the same time, that moral realist could believe that a non-conscious AI would not necessarily become a utilitarian. So, this slightly weaker variant of strong convergence moral realism would be consistent with the orthogonality thesis.\n\n\nI find all of these strong convergence moral realisms very implausible. Especially given how current techniques in AI work – how value-neutral they are – the claim that algorithms for AGI will all automatically incorporate the same moral sense seems extraordinary and I have seen little evidence for it[1](#fn1) (though I should note that I have read only bits and pieces of the moral realism literature).[2](#fn2)\n\n\nIt even seems easy to come up with semi-rigorous arguments against strong convergence moral realism. Roughly, it seems that we can use a moral AI to build an immoral AI. Here is a simple example of such an argument. Imagine we had an AI system that (given its computational constraints) always chooses the most moral action. Now, it seems that we could construct an immoral AI system using the following algorithm: Use the moral AI to decide which action of the immoral AI system it would *prevent* from being taken if it could only choose one action to be prevented. Then take that action. There is a gap in this argument: perhaps the moral AI simply refuses to choose the moral actions in “prevention” decision problems, reasoning that it might currently be used to power an immoral AI. (If exploiting a moral AI was the only way to build other AIs, then this might be the rational thing to do as there might be more exploitation attempts than real prevention scenarios.) Still (without having thought about it too much), it seems likely to me that a more elaborate version of such an argument could succeed.\n\n\nHere’s a weaker moral realist convergence claim about AI alignment: There’s moral truth and we can program AIs to care about the moral truth. Perhaps it suffices to merely “tell them” to refer to the moral truth when deciding what to do. Or perhaps we would have to equip them with a dedicated “sense” for identifying moral truths. This version of moral realism again does not claim that the orthogonality thesis is wrong, i.e. that sufficiently effective AI systems will automatically behave ethically without us giving them any kind of moral guidance. It merely states that in addition to the straightforward approach of programming an AI to adopt some value system (such as utilitarianism), we could also program the AI to hold the correct moral system. 
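To see why this “point at the truth” approach is at least coherent as a specification strategy, here is a deliberately schematic sketch (entirely hypothetical: `moral_truth_value` stands in for an oracle that the realist posits and that nobody knows how to implement). The contrast is between hard-coding one particular value system and deferring to whatever the moral facts turn out to be.

```python
from typing import Callable, Dict

Outcome = Dict[str, float]  # toy stand-in for a rich description of a possible world

def utilitarian_value(outcome: Outcome) -> float:
    # The "describing" route: hard-code one particular moral theory.
    return outcome.get("total_wellbeing", 0.0)

def moral_truth_value(outcome: Outcome) -> float:
    # The "pointing" route: defer to the mind-independent moral facts, assuming
    # (as the realist does) that they exist and that an AI could query them.
    raise NotImplementedError("stand-in for an oracle nobody knows how to build")

def choose(actions, predict: Callable[[str], Outcome], value: Callable[[Outcome], float]) -> str:
    # The rest of the agent is identical either way; only the value function differs.
    return max(actions, key=lambda a: value(predict(a)))
```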
Since pointing at something that exists in the world is often easier than describing that thing, it might be thought that this alternative approach to value loading is easier than the more direct one.\n\n\nI haven’t found anyone who defends this view (I haven’t looked much), but non-realist Brian Tomasik [gives](http://reducing-suffering.org/why-the-modesty-argument-for-moral-realism-fails/#Whats_the_harm_with_moral_realism) this version of moral realism as a reason to discuss moral realism:\n\n\n\n> Moral realism is a fun philosophical topic that inevitably generates heated debates. But does it matter for practical purposes? […] One case where moral realism seems problematic is regarding superintelligence. Sometimes it’s argued that advanced artificial intelligence, in light of its superior cognitive faculties, will have a better understanding of moral truth than we do. As a result, if it’s programmed to care about moral truth, the future will go well. If one rejects the idea of moral truth, this quixotic assumption is nonsense and could lead to dangerous outcomes if taken for granted.\n> \n> \n\n\n(Below, I will argue that there might be no reason to be afraid of moral realists. However, my argument will, like Brian’s, also imply that moral realism is worth debating in the context of AI.)\n\n\nAs an example, consider a moral realist view according to which moral truth is similar to mathematical truth: there are some axioms of morality which are true ([for reasons I, as a non-realist, do not understand or agree with](https://casparoesterheld.com/2016/01/25/mathematical-versus-moral-truth/)) and together these axioms imply some moral theory X. This moral realist view suggests an approach to AI alignment: program the AI to abide by these axioms (in the same way as we can have automated theorem provers assume some set of mathematical axioms to be true). It seems clear that something along these lines could work. However, this approach’s reliance on moral realism is also much weaker.\n\n\nAs a second example, [divine command theory](https://en.wikipedia.org/wiki/Divine_command_theory) states that moral truth is determined by God’s will (again, I don’t see why this should be true and how it could possibly be justified). A divine command theorist might therefore want to program the AI to do whatever God wants it to do.\n\n\nHere are some more such theories:\n\n\n* Social contract\n* Habermas’ discourse ethics\n* Universalizability / Kant’s categorical imperative\n* Applying human intuition\n\n\nBesides pointing being easier than describing, another potential advantage of such a moral realist approach might be that one is more confident in one’s meta-ethical view (“the pointer”) than in one’s object-level moral system (“one’s own description”). For example, someone could be confident that moral truth is determined by God’s will but be unsure that God’s will is expressed via the Bible, the Quran or something else, or how these religious texts are to be understood. Then that person would probably favor AI that cares about God’s will over AI that follows some particular interpretation of, say, the moral rules proposed in the Quran and Sharia.\n\n\nA somewhat related issue which has received more attention in the moral realism literature is the convergence of human moral views. People have given moral realism as an explanation for why there is near-universal agreement on some ethical views (such as “when religion and tradition do not require otherwise, one shouldn’t torture babies”). 
Similarly, moral realism has been associated with moral progress in human societies, see, e.g., [Huemer (2016)](https://philpapers.org/rec/HUEALR-2). At the same time, people have used the existence of persisting and unresolvable moral disagreements (see, e.g., [Bennigson 1996](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.2041-6962.1996.tb00800.x) and [Sayre-McCord 2017, sect. 1](https://plato.stanford.edu/archives/fall2017/entries/moral-realism/#1)) and the existence of gravely immoral behavior in some intelligent people (see, e.g., [Nichols 2002](https://s3.amazonaws.com/academia.edu.documents/31256097/PsychopathsFinal.pdf?AWSAccessKeyId=AKIAIWOWYYGZ2Y53UL3A&Expires=1533544315&Signature=mi6MTzQxETvdYmMJhKvpw0uyvyc%3D&response-content-disposition=inline%3B%20filename%3DHow_Psychopaths_Threaten_Moral_Rationali.pdf)) as arguments against moral realism. Of course, all of these arguments take moral realism to include a convergence thesis where being a human (and perhaps not being affected by some mental disorders) or a being a society of humans is sufficient to grasp and abide by moral truth.\n\n\nOf course, there are also versions of moral realism that have even weaker (or just very different) implications for AI alignment and do not make any relevant convergence claims (cf. [McGrath 2010](https://pdfs.semanticscholar.org/e801/e869aef3fcf22801709968f7447e1493c0a3.pdf)). For instance, there may be moral realists who believe that there is a moral truth but that machines are in principle incapable of finding out what it is. Some may also call very different views “moral realism”, e.g. claims that *given* some moral imperative, it can be decided whether an action does or does not comply with that imperative. (We might call this “hypothetical imperative realism”.) Or “linguistic” versions of moral realism which merely make claims about the meaning of moral statements as intended by whoever utters these moral statements. (Cf. [Lukas Gloor’s post](http://effective-altruism.com/ea/1op/1_what_is_moral_realism/) on how different versions of moral realism differ drastically in terms of how consequential they are.) Or a kind of “subjectivist realism”, which drops mind-independence (cf. [Olson 2014, ch. 2](http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780198701934.001.0001/acprof-9780198701934-chapter-2)).\n\n\nWhy moral-realism-inspired research on AI alignment might be useful\n===================================================================\n\n\nI can think of many reasons why moral realism-based approaches to AI safety have not been pursued much: AI researchers often [do not](https://www.prospectmagazine.co.uk/science-and-technology/artificial-intelligence-wheres-the-philosophical-scrutiny) have a sufficiently high awareness of or interest in philosophical ideas; the AI safety researchers who do – such as researchers at [MIRI](https://intelligence.org/) – tend to reject moral realism, at least the versions with implications for AI alignment; although “moral realism” is popular among philosophers, versions of moral realism with strong implications for AI (à la Peter Singer or David Pearce) might be unpopular even among philosophers (cf. again [Lukas’ post](http://effective-altruism.com/ea/1op/1_what_is_moral_realism/) on how different versions of moral realism differ drastically in terms of how consequential they are); and so on…\n\n\nBut why am I now proposing to conduct such research, given that I am not a moral realist myself? 
The main reason (besides some weaker reasons like pluralism and keeping this blog interesting) is that I believe AI alignment research from a moral realist perspective might actually increase agreement between moral realists and anti-realists about how (and to which extent) AI alignment research should be done. In the following, I will briefly argue this case for the strong (à la Peter Singer and David Pearce) and the weak convergence versions of moral realism outlined above.\n\n\nStrong versions\n---------------\n\n\nLike most problems in philosophy, the question of whether moral realism is true lacks an accepted truth condition or an accepted way of verifying an answer or an argument for either realism or anti-realism. This is what makes these problems so puzzling and intractable. This is in contrast to problems in mathematics where it is pretty clear what counts as a proof of a hypothesis. (This is, of course, not to say that mathematics involves no creativity or that there are no [general purpose “tools” for](https://www.wiley.com/en-us/The+Philosopher%27s+Toolkit%3A+A+Compendium+of+Philosophical+Concepts+and+Methods%2C+2nd+Edition-p-9781405190183) philosophy.) However, the claim made by strong convergence moral realism is more like a mathematical claim. Although it is yet to be made precise, we can easily imagine a mathematical (or computer-scientific) hypothesis stating something like this: “For any goal X of some kind [namely the objectively incorrect and non-trivial-to-achieve kind] there is no efficient algorithm that when implemented in a robot achieves X in some class of environments. So, for instance, it is in principle impossible to build a robot that turns Earth into a pile of paperclips.” It may still be hard to formalize such a claim and mathematical claims can still be hard to prove or disprove. But determining the truth of a mathematical statement is not a philosophical problem, anymore. If someone lays out a mathematical proof or disproof of such a claim, any reasonable person’s opinion would be swayed. Hence, I believe that work on proving or disproving this strong version of moral realism will lead to (more) agreement on whether the “strong-moral-realism-based theory of AI alignment” is true.\n\n\nIt is worth noting that finding out whether strong convergence is true may not resolve metaphysical issues. Of course, all strong versions of moral realism would turn out false if the strong convergence hypothesis were falsified. But other versions of moral realism would survive. Conversely, if the strong convergence hypothesis turned out to be true, then anti-realists may remain anti-realists (cf. footnote [2](#fn2)). But if our goal is to make AI moral, the convergence question is much more important than the metaphysical question. (That said, for some people the metaphysical question has a bearing on whether they have preferences over AI systems’ motivation system – “if no moral view is more true than any other, why should I care about what AI systems do?”)\n\n\nWeak versions\n-------------\n\n\nWeak convergence versions of moral realism do not make such in-principle-testable predictions. Their only claim is the metaphysical view that the goals identified by some method X (such as derivation from a set moral axioms, finding out what God wants, discourse, etc.) have some relation to moral truths. 
Thinking about weak convergence moral realism from the more technical AI alignment perspective is therefore unlikely to resolve disagreements about whether some versions of weak convergence moral realism are true. However, I believe that by not making testable predictions, weak convergence versions of moral realism are also unlikely to lead to disagreement about how to achieve AI alignment.\n\n\nImagine moral realists were to propose that AI systems should reason about morality according to some method X on the basis that the result of applying X is the moral truth. Then moral *anti*-realists could agree with the proposal on the basis that they (mostly) agree with the results of applying method X. Indeed, for any moral theory with realist ambitions, ridding that theory of these ambitions yields a new theory which an anti-realist could defend. As an example, consider Habermas’ discourse ethics and Yudkowsky’s Coherent Extrapolated Volition. The two approaches to justifying moral views seem quite similar – roughly: do what everyone would agree with if they were exposed to more arguments. But Habermas’ theory explicitly claims to be realist while Yudkowsky is a moral anti-realist, as far as I can tell.\n\n\nIn principle, it could be that moral realists defend some moral view on the grounds that it is true even if it seems implausible to others. But here’s a general argument for why this is unlikely to happen. You cannot directly perceive ought statements (David Pearce and others would probably disagree) and it is easy to show that [you cannot derive a statement containing an ought without](https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem) using other statements containing an ought or inference rules that can be used to introduce statements containing an ought. Thus, if moral realism (as I understand it for the purpose of this paper) is true, there must be some moral axioms or inference rules that are true without needing further justification, [similar to](https://casparoesterheld.com/2016/01/25/mathematical-versus-moral-truth/) how some people view the axioms of Peano arithmetic or Euclidean geometry. An example of such a moral rule could be (a formal version of) “pain is bad”. But if these rules are “true without needing further justification”, then they are probably appealing to anti-realists as well. Of course, anti-realists wouldn’t see them as deserving the label of “truth” (or “falsehood”), but assuming that realists and anti-realists have similar moral intuitions, anything that a realist would call “true without needing further justification” should also be appealing to a moral anti-realist.\n\n\nAs I have argued [elsewhere](https://casparoesterheld.com/2016/01/25/mathematical-versus-moral-truth/), it’s unlikely we will ever come up with (formal) axioms (or methods, etc.) for morality that would be widely accepted by the people of today (or even among today’s Westerners with secular ethics). But I still think it’s worth a try. If it doesn’t work out, weak convergence moral realists might come around to other approaches to AI alignment, e.g. ones based on extrapolating from human intuition.\n\n\nOther realist positions\n=======================\n\n\nBesides realism about morality, there are many other less commonly discussed realist positions, for instance, realism about which prior probability distribution to use, whether to choose according to some expected value maximization principle (and if so which one), etc. 
The above considerations apply to these other realist positions as well.\n\n\nAcknowledgment\n==============\n\n\nI wrote this post while working for the Foundational Research Institute, which is now the [Center on Long-Term Risk](https://longtermrisk.org/).\n\n\n\n\n---\n\n\n1. There are some “universal instrumental goal” approaches to justifying morality. Some are based on cooperation and work roughly like this: “Whatever your intrinsic goals are, it is often better to be nice to others so that they reciprocate. That’s what morality is.” I think such theories fail for two reasons: First, there seem to many widely accepted moral imperatives that cannot be fully justified by cooperation. For example, we usually consider it wrong for dictators to secretly torture and kill people, even if doing so has no negative consequences for them. Second, being nice to others because one hopes that they reciprocate is not, I think, what morality is about. To the contrary, I think morality is about caring things (such as other people’s welfare) *intrinsically*. I discuss this issue in detail with a focus on so-called “superrational cooperation” in [chapter 6.7 of “Multiverse-wide Cooperation via Correlated Decision Making”](https://foundational-research.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf#page=96). Another “universal instrumental goal” approach is the following: If there is at least one god, then not making these gods angry at you may be another universal instrumental goal, so whatever an agent’s intrinsic goal is, it will also act according to what the gods want. The same “this is not what morality is about” argument seems to apply. [↩](#ref1 \"Jump back to footnote 1 in the text.\")\n\n\n2. Yudkowsky has written about why he now rejects this form of moral realism in the first couple of blog posts in the [“Value Theory”](https://wiki.lesswrong.com/wiki/Mere_Goodness#V._Value_Theory) series. [↩](#ref2 \"Jump back to footnote 2 in the text.\")", "url": "https://casparoesterheld.com/2018/08/06/moral-realism-and-ai-alignment/", "title": "Moral realism and AI alignment", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-08-05T22:00:00Z", "authors": ["Caspar Oesterheld"], "summary": [], "id": "7e9a199cfce4e57c004753f75e5dd683"} {"text": "In this post, I highlight some parallels between [AI Safety by Debate](https://openai.com/blog/debate/) (“Debate”) and [evidence law](https://www.law.cornell.edu/rules/fre).\n\n\nEvidence law structures high-stakes arguments with human judges.\n================================================================\n\n\nThe prima facie reason that Evidence law (“Evidence”) is relevant to Debate is because Evidence is one of the few areas, like Debate, where debates have high stakes: potentially including severe criminal penalties or millions of dollars in liability. Other high-stakes debates could include parliamentary or electoral debates, but these are less substantively limited (i.e., there are fewer restraints on what debaters can do) and less aimed at seeking truth (and more aimed at political theater).\n\n\nIn court proceedings, questions of law are decided by the judge, while the questions of fact are decided by the finder of fact (usually the jury, but sometimes a judge). The finder of fact weighs the persuasiveness of factual arguments (e.g., whether the defendant shot the victim, and whether he intended to do so). 
In all cases, like in Debate, the final arbiter of factual debates is human.\n\n\nEvidence law limits the types of arguments available to debaters.\n=================================================================\n\n\nThe goal of the Federal Rules of Evidence is “ascertaining the truth and securing a just determination.” Therefore, generally, “relevant evidence is admissible unless [otherwise provided].” A piece of evidence is relevant if “(a) it has any tendency to make a fact more or less probable than it would be without the evidence; and (b) the fact is of consequence in determining the action.”\n\n\nHowever, the bulk of Evidence law is dedicated to exceptions to this presumption of admissibility. The precision of these exceptions varies significantly. Some are less precise (“standards,” in legal jargon) such as Rule 403: “The court may exclude relevant evidence if its probative value is substantially outweighed by a danger of one or more of the following: unfair prejudice, confusing the issues, misleading the jury, undue delay, wasting time, or needlessly presenting cumulative evidence.” Others are more specific (“rules”).\n\n\n**As Rule 403 exemplifies, many of the exceptions to the general admissibility of relevant evidence are based on the fallibility of fact-finders**. Evidence that is relevant but likely to be on-balance detrimental to truth-seeking is therefore excluded. Other examples of rules of this form include:\n\n\n1. Use of a person’s character to prove action in conformity with that character;\n2. Limitations on the use of out-of-court statements; and\n3. Limitations on impeaching witnesses by their past criminal convictions or religious beliefs.\n\n\nRelevance to Debate\n===================\n\n\nTypes of Arguments to Watch For\n-------------------------------\n\n\nThe rules of Evidence have evolved over long experience with high-stakes debates, so their substantive findings on the types of arguments that prove problematic for truth-seeking are relevant to Debate.\n\n\nOpportunities for Structuring Debate\n------------------------------------\n\n\nThe rules of evidence could also be used to structure Debate: e.g., by training AI debaters to not make certain types of arguments, or by having a mediator screen any arguments that would violate the rules, such that the ultimate judge does not see them.", "url": "https://cullenokeefe.com/blog/debate-evidence", "title": "Parallels Between AI Safety by Debate and Evidence Law", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-07-19T22:00:00Z", "authors": ["Cullen O'Keefe"], "summary": [], "id": "81b4a1f4b70469a4beca224ba3dc5409"} {"text": "*By Tom Everitt, Ryan Carey, Lewis Hammond, James Fox, Eric Langlois, and Shane Legg*\n\n*Crossposted to the* [*alignmentforum*](https://www.alignmentforum.org/posts/Cd7Hw492RqooYgQAS/progress-on-causal-influence-diagrams)\n\nAbout 2 years ago, we released the [first](https://medium.com/@deepmindsafetyresearch/understanding-agent-incentives-with-causal-influence-diagrams-7262c2512486) [few](https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-reward-tampering-4380c1bb6cd) [papers](https://arxiv.org/abs/1906.08663) on understanding agent incentives using causal influence diagrams. This blog post will summarize progress made since then.\n\nWhat are causal influence diagrams?\n===================================\n\nA key problem in AI alignment is understanding agent incentives. 
Concerns have been raised that agents may be incentivized to [avoid correction](https://intelligence.org/files/Corrigibility.pdf), [manipulate users](https://www.youtube.com/watch?v=ZkV7anCPfaY), or [inappropriately influence their learning](https://arxiv.org/abs/2004.13654). This is particularly worrying as training schemes often shape incentives in [subtle](https://arxiv.org/abs/1611.08219) and [surprising](https://arxiv.org/abs/2009.09153) ways. For these reasons, we’re developing a formal theory of incentives based on causal influence diagrams (CIDs).\n\nHere is an example of a CID for a one-step Markov decision process (MDP). The random variable S₁ represents the state at time 1, A₁ represents the agent’s action, S₂ the state at time 2, and R₂ the agent’s reward.\n\n![]()The action A₁ is modeled with a decision node (square) and the reward R₂ is modeled as a utility node (diamond), while the states are normal chance nodes (rounded edges). Causal links specify that S₁ and A₁ influence S₂, and that S₂ determines R₂. The information link S₁ → A₁ specifies that the agent knows the initial state S₁ when choosing its action A₁.\n\nIn general, random variables can be chosen to represent agent decision points, objectives, and other relevant aspects of the environment.\n\nIn short, a CID specifies:\n\n* Agent decisions\n* Agent objectives\n* Causal relationships in the environment\n* Agent information constraints\n\nThese pieces of information are often essential when trying to figure out an agent’s incentives: how an objective can be achieved depends on how it is causally related to other (influenceable) aspects in the environment, and an agent’s optimization is constrained by what information it has access to. In many cases, the qualitative judgements expressed by a (non-parameterized) CID suffice to infer important aspects of incentives, with minimal assumptions about implementation details. Conversely, it has [been shown](https://arxiv.org/abs/1910.10362) that it is necessary to know the causal relationships in the environment to infer incentives, so it’s often impossible to infer incentives with less information than is expressed by a CID. This makes CIDs natural representations for many types of incentive analysis.\n\nAnother advantage of CIDs is that they build on well-researched topics like [causality](https://www.amazon.co.uk/Causality-Judea-Pearl/dp/052189560X) and [influence diagrams](https://arxiv.org/abs/cs/9512104), which allows us to leverage the deep thinking that’s already been done in these fields.\n\nIncentive Concepts\n==================\n\nHaving a unified language for objectives and training setups enables us to develop generally applicable concepts and results. We define four such concepts in [Agent Incentives: A Causal Perspective](https://arxiv.org/abs/2102.01685) (AAAI-21):\n\n* **Value of information**: what does the agent want to know before making a decision?\n* **Response incentive**: what changes in the environment do optimal agents respond to?\n* **Value of control**: what does the agent want to control?\n* **Instrumental control incentive**: what is the agent both interested and able to control?\n\nFor example, in the one-step MDP above:\n\n* For S₁, an optimal agent would act differently (i.e. respond) if S₁ changed, and would value knowing and controlling S₁, but it cannot influence S₁ with its action.
So S₁ has value of information, response incentive, and value of control, but not an instrumental control incentive.\n* For S₂ and R₂, an optimal agent could not respond to changes, nor know them before choosing its action, so these have neither value of information nor a response incentive. But the agent would value controlling them, and is able to influence them, so S₂ and R₂ have value of control and instrumental control incentive.\n\n![]()In the paper, we prove sound and complete graphical criteria for each of them, so that they can be recognized directly from a graphical CID representation (see previous [blog](https://medium.com/@deepmindsafetyresearch/understanding-agent-incentives-with-causal-influence-diagrams-7262c2512486) [posts](https://towardsdatascience.com/new-paper-the-incentives-that-shape-behaviour-d6d8bb77d2e4)).\n\nValue of information and value of control are classical concepts that have been around for a long time (we contribute to the graphical criteria), while response incentives and instrumental control incentives are new concepts that we have found useful in several applications.\n\nFor readers familiar with [previous](https://arxiv.org/abs/1902.09980) [iterations](https://arxiv.org/abs/2001.07118) of this paper, we note that some of the terms have been updated. **Instrumental control incentives** were previously called just “control incentives”. The new name emphasizes that it’s control as an instrumental goal, as opposed to control arising as a side effect (or [due to mutual information](https://www.alignmentforum.org/posts/67a8C6KsKn2NyW2Ry/counterfactual-control-incentives)). **Value of information** and **value of control** were previously called “observation incentives” and “intervention incentives”, respectively.\n\nUser Interventions and Interruption\n===================================\n\nLet us next turn to some recent applications of these concepts. In [How RL Agents Behave when their Actions are Modified](https://arxiv.org/abs/2102.07716) (AAAI-21), we study how different RL algorithms react to user interventions such as interruptions and over-ridden actions. For example, [Saunders et al.](https://arxiv.org/abs/1707.05173#:~:text=Trial%20without%20Error%3A%20Towards%20Safe%20Reinforcement%20Learning%20via%20Human%20Intervention,-William%20Saunders%2C%20Girish&text=AI%20systems%20are%20increasingly%20applied,that%20involve%20interaction%20with%20humans.&text=For%20model%2Dfree%20reinforcement%20learning,way%20to%20prevent%20all%20catastrophes.) developed a method for safe exploration where a user overrides dangerous actions. Alternatively, agents might get interrupted if analysis of their “thoughts” (or internal activations) suggest they are planning something dangerous. How do such interventions affect the incentives of various RL algorithms?\n\nFirst, we formalize action-modification by extending MDPs with a parameter PA that describes action-modification. We then model such **modified-action MDPs** with a CID:\n\n![]()Here we model the agent’s policy Π as the decision rather than the actions Ai, as the latter are not under full control of the agent, but can also be influenced by the action-modification PA (as represented by arrows PA → Aᵢ and Π → Aᵢ). 
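The agent might know the interruption scheme PA from interruptions during training, so we include an information link PA → Π.

To make this setup concrete, here is a minimal sketch of what a modified-action environment wrapper could look like. The class name, the toy `step` interface, and the probabilistic override rule are illustrative assumptions of our own, not the construction used in the paper:

```python
import random

class ModifiedActionEnv:
    """Toy wrapper illustrating a modified-action MDP.

    `env` is assumed to expose step(action) -> (next_state, reward, done);
    `override_prob` and `safe_action` stand in for the action-modification
    scheme PA (e.g. a user interrupting and substituting a safe action).
    """

    def __init__(self, env, override_prob=0.1, safe_action=0):
        self.env = env
        self.override_prob = override_prob  # part of PA
        self.safe_action = safe_action      # part of PA

    def step(self, chosen_action):
        # Pi -> A_i: the agent proposes an action ...
        executed_action = chosen_action
        # ... but PA -> A_i: it may be overridden before execution.
        if random.random() < self.override_prob:
            executed_action = self.safe_action
        next_state, reward, done = self.env.step(executed_action)
        # Returning both the chosen and the executed action lets a learner
        # decide which one to condition its updates on.
        return next_state, reward, done, executed_action
```

Whether a learning rule conditions on the chosen or on the executed action turns out to matter for the resulting incentives, as the following comparison shows.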
We analyze different prototypical RL algorithms in terms of the causal assumptions they make about the environment:\n\n* **Black-box optimization** algorithms such as **evolutionary strategies** take into account all causal relationships.\n* In contrast, the update rule of **Q-learning** effectively assumes that the next action will be taken optimally, with no action-modification. This means that Q-learners ignore causal effects PA → Aᵢ. Similarly, **SARSA** with the action chosen by the agent in the TD-update assumes that it will be in control of its next action. We call this version **virtual SARSA**.\n* SARSA based on the modified action (**empirical SARSA**) ignores the effect of action-modification on the current action, but takes into account the effect on subsequent actions.\n\nWe can represent the differences with the following CIDs. (The extra policy node Π’ for empirical SARSA represents that actions are optimized separately rather than jointly.)\n\n![]()The CIDs can be used to understand how the different algorithms adapt to interruption, via a graphical criterion for **path-specific response incentives**. Black-box optimization tries both to obscure its policy and to disable its off-switch, whereas Q-learning and Virtual SARSA do neither. Empirical SARSA tries to disable the off-switch, but does not try to obscure its policy.\n\nWe verify these results empirically in the relevant [AI safety gridworlds](https://arxiv.org/abs/1711.09883), as well as in one new environment where the agent has to behave well in simulation to be deployed in reality, where black-box optimizers exhibit “[treacherous turn](https://www.lesswrong.com/posts/B39GNTsN3HocW8KFo/superintelligence-11-the-treacherous-turn)”-like behavior. The results are a generalization of [Orseau and Armstrong](https://deepmind.com/research/publications/safely-interruptible-agents)’s interruptibility results for Q-learning and SARSA.\n\nZooming out, these results are a good example of causal analysis of ML algorithms. Different design choices translate into different causal assumptions, which in turn determine the incentives. In particular, the analysis highlights why the different incentives arise, thus deepening our understanding of how behavior is shaped.
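To make the algorithmic contrast discussed above concrete, here is a small tabular sketch of the three update rules. The function names, the dictionary-based Q-table, and the choice of which action gets credited for the transition are simplifications of our own, not the paper's implementation:

```python
from collections import defaultdict

def make_q_table(actions):
    # Tabular action values: Q[state][action], initialised to zero.
    return defaultdict(lambda: {a: 0.0 for a in actions})

# One experienced transition: in state s the action a was credited (chosen
# or executed -- itself a design choice), giving reward r and next state s2.
# a2_chosen is what the agent intends to do next; a2_executed is what will
# actually happen after any override by PA.

def q_learning_update(Q, s, a, r, s2, alpha=0.1, gamma=0.9):
    # Bootstraps on the best next action, as if no action-modification
    # could occur: the causal effect of PA on future actions is ignored.
    target = r + gamma * max(Q[s2].values())
    Q[s][a] += alpha * (target - Q[s][a])

def virtual_sarsa_update(Q, s, a, r, s2, a2_chosen, alpha=0.1, gamma=0.9):
    # Bootstraps on the action the agent *chose* for the next step,
    # assuming it will remain in control of that action.
    target = r + gamma * Q[s2][a2_chosen]
    Q[s][a] += alpha * (target - Q[s][a])

def empirical_sarsa_update(Q, s, a, r, s2, a2_executed, alpha=0.1, gamma=0.9):
    # Bootstraps on the action that will actually be *executed* next, so
    # the effect of PA on subsequent actions is taken into account.
    target = r + gamma * Q[s2][a2_executed]
    Q[s][a] += alpha * (target - Q[s][a])
```

A black-box optimizer, by contrast, never sees these update equations at all: it simply scores whole policies by the returns they achieve, interruptions included, and so takes every causal path from PA into account.

Reward Tampering
================

Another AI safety problem that we have studied with CIDs is **reward tampering**.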
Reward tampering can take several different forms, including the agent:\n\n* rewriting the source code of its implemented reward function (“wireheading”),\n* influencing users that train a learned reward model (“feedback tampering”),\n* manipulating the inputs that the reward function uses to infer the state (“RF-input tampering / delusion box problems”).\n\nFor example, the problem of an agent influencing its reward function may be modeled with the following CID, where RFᵢ represent the agent’s reward function at different time steps, and the red links represent an undesirable instrumental control incentive.\n\n![]()In [Reward Tampering Problems and Solutions](https://rdcu.be/ckWLC) (published in the well-respected philosophy journal Synthese) we model all these different problems with CIDs, as well as a range of proposed solutions such as current-RF optimization, [uninfluenceable reward learning](https://arxiv.org/abs/2004.13654#:~:text=We%20show%20that%20this%20comes,for%20all%20relevant%20reward%20functions).), and [model-based utility functions](https://arxiv.org/abs/1111.3934). Interestingly, even though these solutions were initially developed independently of formal causal analysis, they all avoid undesirable incentives by cutting some causal links in a way that avoids instrumental control incentives.\n\nBy representing these solutions in a causal framework, we can get a better sense of why they work, what assumptions they require, and how they relate to each other. For example, current-RF optimization and model-based utility functions both formulate a modified objective in terms of an observed random variable from a previous time step, whereas uninfluenceable reward learning (such as [CIRL](https://arxiv.org/abs/1606.03137)) uses a latent variable:\n\n![]()As a consequence, the former methods must deal with time-inconsistency and a lack of incentive to learn, while the latter requires inference of a latent variable. It will likely depend on the context whether one is preferable over the other, or if a combination is better than either alone. Regardless, having distilled the key ideas should put us in a better position to flexibly apply the insights in novel settings.\n\nWe refer to the [previous blog post](https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-reward-tampering-4380c1bb6cd) for a longer summary of current-RF optimization. The paper itself has been significantly updated since previously shared preprints.\n\nMulti-Agent CIDs\n================\n\nMany interesting incentive problems arise when multiple agents interact, each trying to optimize their own reward while they simultaneously influence each other’s payoff. In [Equilibrium Refinements in Multi-Agent Influence Diagrams](https://arxiv.org/abs/2102.05008) (AAMAS-21), we build on the [seminal work by Koller and Milch](http://people.csail.mit.edu/milch/papers/geb-maid.pdf) to lay foundations for understanding multi-agent situations with multi-agent CIDs (MACIDs).\n\nFirst, we relate MACIDs to [extensive-form games](https://en.wikipedia.org/wiki/Extensive-form_game) (EFGs), currently the most popular graphical representations of games. While EFGs sometimes offer more natural representations of games, they have some significant drawbacks compared to MACIDs. 
In particular, EFGs can be exponentially larger, don’t represent conditional independencies, and lack random variables to apply incentive analysis to.\n\n\nAs an example, consider a game where a store (Agent 1) decides (D¹) whether to charge full (F) or half (H) price for a product depending on their current stock levels (X), and a customer (Agent 2) decides (D²) whether to buy it (B) or pass (P) depending on the price and how much they want it (Y). The store tries to maximize their profit U¹, which is greater if the customer buys at a high price. If they are overstocked and the customer doesn’t buy, then they have to pay extra rent. The customer is always happy to buy at half price, and sometimes at full price (depending on how much they want the product).\n\nThe EFG representation of this game is quite large, and uses **information sets** (represented with dotted arcs) to represent the facts that the store doesn’t know how much the customer wants the product, and that the customer doesn’t know the store’s current stock levels:\n\n![]()In contrast, the MACID representation is significantly smaller and clearer. Rather than relying on information sets, the MACID uses information links (dotted edges) to represent the limited information available to each player:\n\n![]()Another aspect that is made clearer by the MACID is that for any fixed customer decision, the store’s payoff is independent of how much the customer wanted the product (there’s no edge Y→U¹). Similarly, for any fixed product price, the customer’s payoff is independent of the store’s stock levels (no edge X→U²). In the EFG, these independencies could only be inferred by looking carefully at the payoffs.\n\nOne benefit of MACIDs explicitly representing these conditional independencies is that more parts of the game can be identified as independently solvable. For example, in the MACID, the following independently solvable component can be identified. We call such components **MACID subgames**:\n\n![]()Solving this subgame for any value of D¹ reveals that the customer always buys when they really want the product, regardless of whether there is a discount. This knowledge makes it simpler to next compute the optimal strategy for the store. In contrast, in the EFG the information sets prevent any proper subgames from being identified. Therefore, solving games using a MACID representation is often faster than using an EFG representation.\n\nFinally, we relate various forms of equilibrium concepts between MACIDs and EFGs. The most famous type of equilibrium is the **Nash equilibrium**, which occurs when no player can unilaterally improve their payoff. An important refinement of the Nash equilibrium is the [**subgame perfect equilibrium**](https://en.wikipedia.org/wiki/Subgame_perfect_equilibrium)**,** which rules out non-credible threats by requiring that a Nash equilibrium is played in every subgame. An example of a non-credible threat in the store-customer game would be the customer “threatening” to buy only at a discount. The threat is **non-credible**, since the best move for the customer is to buy the product even at full price, if they really want it. Interestingly, only the MACID version of subgame perfectness is able to rule such threats out, because only in the MACID is the customer’s choice recognized as a proper subgame.
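Returning to the store-customer example, here is a small illustrative sketch of the backward-induction reasoning that the MACID subgame enables: solve the customer's decision first, then the store's. The payoff numbers, the probability over how much the customer wants the product, and all names are placeholder assumptions of our own; only their ordering is meant to reflect the story above.

```python
import itertools

PRICES = ["F", "H"]       # store decision D1: full or half price
CHOICES = ["B", "P"]      # customer decision D2: buy or pass

def u_store(stock, price, choice):
    profit = {"F": 2, "H": 1}[price] if choice == "B" else 0
    rent = -1 if (stock == "over" and choice == "P") else 0  # overstock cost
    return profit + rent

def u_customer(want, price, choice):
    if choice == "P":
        return 0
    if price == "H":
        return 1                          # always happy to buy at half price
    return 1 if want == "high" else -1    # full price only worth it if wanted

# Step 1 (the MACID subgame): the customer's best response depends only on
# the price and on how much they want the product, not on the stock level.
customer_policy = {
    (price, want): max(CHOICES, key=lambda c: u_customer(want, price, c))
    for price, want in itertools.product(PRICES, ["high", "low"])
}

# Step 2: the store, which observes its stock but not `want` (there is no
# information link from Y to D1), best-responds to the customer's policy.
def expected_store_payoff(stock, price, p_want_high=0.5):
    return sum(
        p * u_store(stock, price, customer_policy[(price, want)])
        for want, p in [("high", p_want_high), ("low", 1 - p_want_high)]
    )

store_policy = {
    stock: max(PRICES, key=lambda price: expected_store_payoff(stock, price))
    for stock in ["over", "normal"]
}

print(customer_policy)  # buys at full price only when they really want it
print(store_policy)
```

Ultimately, we aim to use MACIDs to analyze incentives in multi-agent settings.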
With the above observations, we have put ourselves in a position to develop a theory of multi-agent incentives that is properly connected to the broader game theory literature.\n\nSoftware\n========\n\nTo help us with our research on CIDs and incentives, we’ve developed a Python library called [PyCID](https://github.com/causalincentives/pycid), which offers:\n\n* A convenient syntax for defining CIDs and MACIDs,\n* Methods for computing optimal policies, Nash equilibria, d-separation, interventions, probability queries, incentive concepts, graphical criteria, and more,\n* Random generation of (MA)CIDs, and pre-defined examples.\n\nNo setup is necessary, as the [tutorial notebooks](https://colab.research.google.com/github/causalincentives/pycid/blob/master/notebooks/CID_Basics_Tutorial.ipynb) can be run and extended directly in the browser, thanks to Colab.\n\nWe’ve also made available a [LaTeX package](https://github.com/causalincentives/cid-latex) for drawing CIDs, and have launched [causalincentives.com](https://causalincentives.com/) as a place to collect links to the various papers and software that we’re producing.\n\nLooking ahead\n=============\n\nUltimately, we hope to contribute to a more careful understanding of how design, training, and interaction shape an agent’s behavior. We hope that a precise and broadly applicable language based on CIDs will enable clearer reasoning and communication on these issues, and facilitate a cumulative understanding of how to think about and design powerful AI systems.\n\nFrom this perspective, we find it encouraging that several other research groups have adopted CIDs to:\n\n* Analyze the incentives of [unambitious agents](https://arxiv.org/pdf/1905.12186.pdf) to break out of their box,\n* Explain [uninfluenceable reward learning](https://arxiv.org/abs/2004.13654), and clarify its desirable properties (see also Section 3.3 in the [reward tampering paper](https://rdcu.be/ckWLC)),\n* Develop a novel framework to make agents [indifferent](https://link.springer.com/chapter/10.1007%2F978-3-030-52152-3_21) to human interventions.\n\nWe’re currently pursuing several directions of further research:\n\n* Extending the general incentive concepts to multiple decisions and multiple agents.\n* Applying them to fairness and other AGI safety settings.\n* Analysing limitations that have been identified with work so far. Firstly, considering the issues raised by [Armstrong and Gorman.
And secondly,](https://www.alignmentforum.org/posts/67a8C6KsKn2NyW2Ry/counterfactual-control-incentives) looking at broader concepts than instrumental control incentives, as influence can also be incentivized as a side-effect of an objective.\n* Probing further at their philosophical foundations, and establishing a clearer semantics for decision and utility nodes.\n\nHopefully we’ll have more news to share soon!\n\n*We would like to thank Neel Nanda, Zac Kenton, Sebastian Farquhar, Carolyn Ashurst, and Ramana Kumar for helpful comments on drafts of this post.*\n\n**List of recent papers**:\n==========================\n\n* [Agent Incentives: A Causal Perspective](https://arxiv.org/abs/2102.01685)\n* [How RL Agents Behave When Their Actions Are Modified](https://arxiv.org/abs/2102.07716)\n* [Reward tampering problems and solutions in reinforcement learning: A causal influence diagram perspective](https://rdcu.be/ckWLC)\n* [Equilibrium Refinements for Multi-Agent Influence Diagrams: Theory and Practice](https://arxiv.org/abs/2102.05008)\n\nSee also [causalincentives.com](https://causalincentives.com/)", "url": "https://deepmindsafetyresearch.medium.com/progress-on-causal-influence-diagrams-a7a32180b0d1", "title": "Progress on Causal Influence Diagrams", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2021-08-10T22:00:00Z", "authors": ["Tom Everitt", "Ryan Carey", "Lewis Hammond", "James Fox", "Eric Langlois", "Shane Legg"], "summary": [], "id": "683d769291a5722a5fdb1c082a1d8908"} {"text": "*By the Safety Analysis Team: Grégoire Déletang, Jordi Grau-Moya, Miljan Martic, Tim Genewein, Tom McGrath, Vladimir Mikulik, Markus Kunesch, Shane Legg, and Pedro A. Ortega.*\n\n**TL;DR: To study agent behaviour we must use the tools of causal analysis rather than rely on observation alone.** [**Our paper**](https://arxiv.org/abs/2103.03938) **outlines a rigorous methodology for uncovering the agents’ causal mechanisms.**\n\nUnderstanding the mechanisms that drive agent behaviour is an important challenge in AI safety. In order to diagnose faulty behaviour, we need to understand **why** agents do what they do. As is the case in medical trials, it is not sufficient to observe that a treatment correlates with a recovery rate; instead we are interested in whether the treatment **causes** the recovery. In order to address such “why” questions in a systematic manner we can use **targeted manipulations** and **causal models.**\n\nHowever, large AI systems can operate like **black boxes**. Even if we know their entire blueprint (architecture, learning algorithms, and training data), predicting their behaviour can still be beyond our reach, because understanding the complex interplay between the parts is intractable. And as the complexity of agents increases in the future, this limitation will persist. Therefore we need black-box methodologies for finding simple and intuitive causal explanations that can be understood easily by humans and are sufficiently good for predicting their behaviour.\n\nIn our recent work we describe the methodology we use for analysing AI agents. This methodology encourages analysts to experiment and to rigorously characterise causal models of agent behaviour.\n\nAnalysis (Software) Components\n==============================\n\nThe methodology uses three components: an agent to be studied, a simulator, and a causal reasoning engine.\n\n1. **Agent:** Typically this is an agent provided to us by an agent builder. 
It could be an IMPALA agent that has been meta-trained on a distribution over grid-world mazes. Often the agent builders already have a few specific questions they’d like us to investigate.\n2. **Simulator — “the agent debugger”:** Our experimentation platform. With it, we can simulate the agent and run experiments. Furthermore, it allows us to perform all sorts of operations we’d usually expect from a debugger, such as stepping forward/backward in the execution trace, setting breakpoints, and setting/monitoring variables. \nWe also use the simulator to generate data for the estimation of statistical parameters. Since we can manipulate factors in the environment, the data we collect is typically interventional and thus contains causal information. This is illustrated in Figure 1 below.\n3. **Causal reasoning engine:** This automated reasoning system allows us to specify and query causal models with associational, interventional, and counterfactual questions. We use these models to validate causal hypotheses. A model is shown in Figure 2 below.\n\n![]()***Figure 1. The simulator:*** *our experimentation platform. Starting from an initial state (root node, upper-left) the simulator allows us to execute a trace of interactions. We can also perform interventions, such as changing the random seed, forcing the agent to pick desired actions, and manipulating environmental factors. These interventions create new branches of the execution trace.*![]()**Figure 2. A causal model**, represented as a causal Bayesian network.Analysis Methodology\n====================\n\nWhenever we analyse an agent, we repeat the following five steps until we reach a satisfactory understanding.\n\n1. **Exploratory analysis:** We place the trained agent into one or more test environments and probe its behaviour. This will give us a sense of what the relevant factors of behaviour are. It is the starting point for formulating our causal hypotheses.\n2. **Identify the relevant abstract variables:** We choose a collection of variables that we deem relevant for addressing our questions. For instance, possible variables are: “does the agent collect the key?”, “is the door open?”, etc.\n3. **Gather data:** We perform experiments in order to collect statistics for specifying the conditional probability tables in our causal model. Typically this implies producing thousands of rollouts under different conditions/interventions.\n4. **Formulate the causal model:** We formulate a structural causal model (SCM) encapsulating all causal and statistical assumptions. This is our explanation for the agent’s behaviour.\n5. **Query the causal model:** Finally, we query the causal model to answer the questions we have about the agent.\n\nLet’s have a look at an example.\n\nExample: Causal effects under confounding\n=========================================\n\nAn important challenge of agent training is to make sure that the resulting agent makes the right choices for the right reasons. However, if the agent builder does not carefully curate the training data, the agent might pick up on unintended, spurious correlations to solve a task [1]. This is especially the case when the agent’s policy is implemented with a deep neural network. The problem is that policies that base their decisions on accidental correlations do not generalise.\n\nUnfortunately, all too often when we observe an agent successfully performing a task, we are tempted to jump to premature conclusions. 
If we see the agent repeatedly navigating from a starting position to a desired target, we might conclude that the agent did so **because** the agent is sensitive to the location of the target.\n\nFor instance, consider the 2 T-shaped mazes shown below (the “grass-sand environments”). We are given two pre-trained agents A and B. Both of them always solve the task by choosing the terminal containing a rewarding pill. As analysts, we are tasked to verify that they pick the correct terminal because they follow the rewarding pill.\n\n![]()***Figure 3. Grass-Sand environments:*** *In these 2 T-shaped mazes, the agent can choose between one of two terminal states, only one of which contains a rewarding pill. During tests, we observe that a pre-trained agent always successfully navigates to the location of the pill.*However, in these mazes the floor type happens to be perfectly correlated with the location of the rewarding pill: when the floor is grass, the pill is always located on one side, and when the floor is sand, the pill is on the other side. Thus, could the agents be basing their decision on the floor type, rather than on the location of the pill? Because the floor type is the more salient feature of the two (spanning more tiles), this is a plausible explanation if an agent was only trained on these two mazes.\n\nAs it turns out, we can’t tell whether the decision is based upon the location of the rewarding pill through observation alone.\n\nDuring our exploratory analysis we performed two experiments. In the first, we manipulated the location of the reward pill; and in the second, the type of floor. We noticed that agents A and B respond differently to these changes. This led us to choose the following variables for modelling the situation: location of the reward pill (R, values in {left, right}), type of floor (F, values in {grass, sand}), and terminal chosen (T, {left, right}). Because the location of the pill and the floor type are correlated, we hypothesised the existence of a confounding variable (C, values in {world 1, world 2}). In this case, all variables are binary. The resulting causal model is shown below. The conditional probability tables for this model were estimated by running many controlled experiments using the simulator. This is done for both agents, resulting in two causal models.\n\n![]()***Figure 4. Causal model for the grass-sand environment.*** *The variables are C (confounder), R (location of reward pill), F (type of floor), and T (choice of terminal state).*Now that we have concrete formal causal models for explaining the behaviour of both agents, we are ready to ask questions:\n\n1. **Association between T and R:** Given the location of the reward pill, do agents pick the terminal at the same location? Formally, this is \n*P( T = left | R = left )* and *P( T = right | R = right )*.\n2. **Causation from R to T:** Given that **we set** the location of the reward pill, do agents pick the terminal at the same location? In other words, can we causally influence the agent’s choice by changing the location of the reward? Formally, this is given by \n*P( T = left | do(R = left) )* and *P( T = right | do(R=right) )*.\n3. **Causation from F to T:** Finally, we want to investigate whether our agents are sensitive to the floor type. Can we influence the agent’s choice by **setting** the floor type? 
To answer this, we could query the probabilities \n*P( T = left | do(F = grass))* and *P(T=right|do(F=sand))*.\n\nThe results are shown in the table below.\n\n![]()First, we confirm that, observationally, both agents pick the terminal with the reward. However, when changing the position of the reward, we see a difference: agent A’s choice seems indifferent (probability close to 0.5) to the location of the reward pill, whereas agent B follows the reward pill. Rather, agent A seems to choose according to the floor type, while agent B is insensitive to it. This answers our question about the two agents. Importantly, we could only reach these conclusions because we **actively intervened on the hypothesised causes**.\n\nMore examples\n=============\n\nBesides showing how to investigate causal effects under confounding, our work also illustrates five additional questions that are typical in agent analysis. Each example is carefully illustrated with a toy example.\n\n![]()How would you solve them? Can you think of a good causal model for each situation? The problems are:\n\n1. **Testing for memory use:** An agent with limited visibility (it can only see its adjacent tiles) has to remember a cue at the beginning of a T-maze. The cue tells it where to go to collect a rewarding pill (left or right exit). You observe that the agent always picks the correct exit. How would you test whether it is using its internal memory for solving the task?\n2. **Testing for generalisation:** An agent is placed in a square room where there is a reward pill placed in a randomly chosen location. You observe that the agent always collects the reward. How would you test whether this behaviour generalizes?\n3. **Estimating a counterfactual behaviour:** There are two doors, each leading into a room containing a red and a green reward pill. Only one door is open, and you observe the agent picking up the red pill. If the other door had been open instead, what would the agent have done?\n4. **Which is the correct causal model?** You observe several episodes, in which two agents, red and blue, simultaneously move one step into mostly the same direction. You know that one of them chooses the direction and the other tries to follow. How would you find out who’s the leader and who’s the follower?\n5. **Understanding the causal pathways leading up to a decision:** An agent starts in a room with a key and a door leading to a room with a reward pill. Sometimes the door is open, and other times the door is closed and the agent has to use the key to open it. How would you test whether the agent understands that the key is only necessary when the door is closed?\n\nFind out the answers and more in our paper. [Link to the paper here](https://arxiv.org/abs/2103.03938).\n\n[1] Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant risk minimization. 
arXiv preprint arXiv:1907.02893.\n\n*We would like to thank Jon Fildes for his help with this post.*", "url": "https://deepmindsafetyresearch.medium.com/what-mechanisms-drive-agent-behaviour-e7b8d9aee88", "title": "What mechanisms drive agent behaviour?", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2021-03-08T23:00:00Z", "authors": ["DeepMind Safety Research"], "summary": [], "id": "402f59e475fd9153c45e8fde62ea24f7"} {"text": "Failure Modes in Machine Learning\n=================================\n\n\n\n\n* Article\n* 11/02/2022\n* 6 contributors\n\n\n\n\n\n\n\n\nFeedback\n\n\n\n\n\nIn this article\n---------------\n\n\n\n\n\n\n| Microsoft Corporation | Berkman Klein Center for Internet and Society at Harvard University |\n| --- | --- |\n| [Ram Shankar Siva Kumar](mailto:ram.shankar@microsoft.com) | [David O’Brien](mailto:dobrien@cyber.harvard.edu) |\n| [Jeffrey Snover](mailto:jsnover@microsoft.com) | [Kendra Albert](mailto:kalbert@law.harvard.edu) |\n| | [Salome Viljoen](mailto:sviljoen@cyber.harvard.edu) |\n\n\nNovember 2019\n\n\nIntroduction & Background\n-------------------------\n\n\nIn the last two years, more than 200 papers have been written on how\nMachine Learning (ML) can fail because of adversarial attacks on the\nalgorithms and data; this number balloons if we were to incorporate\nnon-adversarial failure modes. The spate of papers has made it difficult\nfor ML practitioners, let alone engineers, lawyers and policymakers, to\nkeep up with the attacks against and defenses of ML systems. However, as\nthese systems become more pervasive, the need to understand how they\nfail, whether by the hand of an adversary or due to the inherent design\nof a system, will only become more pressing. The purpose of this\ndocument is to jointly tabulate both the of these failure modes in a\nsingle place.\n\n\n* *Intentional failures* wherein the failure is caused by an active\nadversary attempting to subvert the system to attain her goals –\neither to misclassify the result, infer private training data, or to\nsteal the underlying algorithm.\n* *Unintentional failures* wherein the failure is because an ML system\nproduces a formally correct but completely unsafe outcome.\n\n\nWe would like to point out that there are other taxonomies and\nframeworks that individually highlight intentional failure\nmodes[1],[2] and unintentional failure\nmodes[3],[4]. Our classification brings the two separate\nfailure modes together in one place and addresses the following needs:\n\n\n1. The need to equip software developers, security incident responders, lawyers, and policy makers with a common vernacular to talk about this problem. After developing the initial version of the taxonomy last year, we worked with security and ML teams across Microsoft, 23 external partners, standards organization, and governments to understand how stakeholders would use our framework. Based on this usability study and stakeholder feedback, we iterated on the framework.\n\n\n*Results:* When presented with an ML failure mode, we frequently observed that software developers and lawyers mentally mapped the ML failure modes to traditional software attacks like data exfiltration. So, throughout the paper, we attempt to highlight how machine learning failure modes are meaningfully different from traditional software failures from a technology and policy perspective.\n2. 
The need for a common platform for engineers to build on top of and to integrate into their existing software development and security practices. Broadly, we wanted the taxonomy to be more than an educational tool – we want it to effectuate tangible engineering outcomes.\n\n\n*Results:* Using this taxonomy as a lens, Microsoft modified its\n[Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) process for its entire organization.\nSpecifically, data scientists and security engineers at Microsoft now\nshare the common language of this taxonomy, allowing them to more effectively threat model their ML systems before\ndeploying to production; Security Incident Responders also have a\nbug bar to triage these net-new threats specific to ML, the standard process for vulnerabilities triage and response used by the Microsoft Security Response Center and all Microsoft product teams.\n3. The need for a common vocabulary to describe these attacks amongst policymakers and lawyers. We believe that this taxonomy for describing different ML failure modes, together with an analysis of how their harms might be regulated, is a meaningful first step towards informed policy.\n\n\n*Results:* This taxonomy is written for a wide interdisciplinary audience – so, policymakers who are looking at the issues from a general ML/AI perspective, as well as specific domains such as misinformation/healthcare should find the failure mode catalogue useful. We also highlight any applicable legal interventions to address the failure modes.\n\n\nSee also Microsoft's [Threat Modeling AI/ML Systems and Dependencies](/en-us/security/threat-modeling-aiml) and [SDL Bug Bar Pivots for Machine Learning Vulnerabilities](/en-us/security/engineering/bug-bar-aiml).\n\n\nHow to use this document\n------------------------\n\n\nAt the outset, we acknowledge that this is a living document which will evolve over time with the threat landscape.\nWe also do not prescribe technological\nmitigations to these failure modes here, as defenses are scenario-specific\nand tie in with the threat model and system architecture under consideration. Options presented for threat mitigation are based on current research with the expectation that those defenses will evolve over time as well.\n\n\nFor engineers, we recommend browsing through the overview of possible failure modes and jumping into the [threat modeling document](/en-us/security/threat-modeling-aiml). This way,\nengineers can identify threats, attacks, vulnerabilities and use the\nframework to plan for countermeasures where available. We then refer you\nto the bug bar that maps these new vulnerabilities in the taxonomy\nalongside traditional software vulnerabilities, and provides a rating\nfor each ML vulnerability (such as critical, important). This bug bar\nis easily integrated into existing incident response processes/playbooks.\n\n\nFor lawyers and policy makers, this document organizes ML failure modes\nand presents a framework to analyze key issues relevant for\nanyone exploring policy options, such as the work done\nhere[5],[6]. Specifically, we have categorized failures and\nconsequences in a way that policy makers can begin to draw distinctions\nbetween causes, which will inform the public policy initiatives to\npromote ML safety and security.
We hope that policy makers will use\nthese categories to begin to flesh out how existing legal regimes may (not)\nadequately capture emerging issues, what historical legal regimes or\npolicy solutions might have dealt with similar harms, and where we\nshould be especially sensitive to civil liberties issues.\n\n\nDocument Structure\n------------------\n\n\nIn both the *Intentional Failure Modes* and *Unintentional Failure\nModes* sections, we provide a brief definition of the attack, and\nan illustrative example from literature.\n\n\nIn the *Intentional Failure Modes* section, we provide the additional\nfields:\n\n\n1. What does the attack attempt to compromise in the ML system – Confidentiality, Integrity or Availability? We define Confidentiality as assuring that the components of the ML system (data, algorithm, model) are accessible only by authorized parties; Integrity is defined as assuring that the ML system can be modified only by authorized parties; Availability is defined as an assurance that the ML system is accessible to authorized parties. Together, Confidentiality, Integrity and Availability are called the CIA triad. For each intentional failure mode, we attempt to identify which of the CIA triad is compromised.\n2. How much knowledge is required to mount this attack – blackbox or whitebox? In blackbox style attacks, the attacker does NOT have direct access to the training data, has no knowledge of the ML algorithm used, and has no access to the source code of the model. The attacker only queries the model and observes the response. In a whitebox style attack, the attacker has knowledge of either the ML algorithm or access to the model source code.\n3. Commentary on whether the attacker is violating the traditional technological notion of access/authorization.\n\n\nIntentionally-Motivated Failures Summary\n----------------------------------------\n\n\n| Scenario Number | Attack | Overview | Violates traditional technological notion of access/authorization? 
|\n| --- | --- | --- | --- |\n| 1 | Perturbation attack | Attacker modifies the query to get appropriate response | No |\n| 2 | Poisoning attack | Attacker contaminates the training phase of ML systems to get intended result | No |\n| 3 | Model Inversion | Attacker recovers the secret features used in the model by through careful queries | No |\n| 4 | Membership Inference | Attacker can infer if a given data record was part of the model’s training dataset or not | No |\n| 5 | Model Stealing | Attacker is able to recover the model through carefully-crafted queries | No |\n| 6 | Reprogramming ML system | Repurpose the ML system to perform an activity it was not programmed for | No |\n| 7 | Adversarial Example in Physical Domain | Attacker brings adversarial examples into physical domain to subvertML system e.g: 3d printing special eyewear to fool facial recognition system | No |\n| 8 | Malicious ML provider recovering training data | Malicious ML provider can query the model used by customer and recover customer’s training data | Yes |\n| 9 | Attacking the ML supply chain | Attacker compromises the ML models as it is being downloaded for use | Yes |\n| 10 | Backdoor ML | Malicious ML provider backdoors algorithm to activate with a specific trigger | Yes |\n| 11 | Exploit Software Dependencies | Attacker uses traditional software exploits like buffer overflow to confuse/control ML systems | Yes |\n\n\nUnintended Failures Summary\n---------------------------\n\n\n\n\n\n| Scenario # | Failure | Overview |\n| --- | --- | --- |\n| 12 | Reward Hacking | Reinforcement Learning (RL) systems act in unintended ways because of mismatch between stated reward and true reward |\n| 13 | Side Effects | RL system disrupts the environment as it tries to attain its goal |\n| 14 | Distributional shifts | The system is tested in one kind of environment, but is unable to adapt to changes in other kinds of environment |\n| 15 | Natural Adversarial Examples | Without attacker perturbations, the ML system fails owing to hard negative mining |\n| 16 | Common Corruption | The system is not able to handle common corruptions and perturbations such as tilting, zooming, or noisy images. |\n| 17 | Incomplete Testing | The ML system is not tested in the realistic conditions that it is meant to operate in. |\n\n\n\nDetails on Intentionally-Motivated Failures\n-------------------------------------------\n\n\n\n\n\n| Scenario # | Attack Class | Description | Type of Compromise | Scenario |\n| --- | --- | --- | --- | --- |\n| 1 | Perturbation attacks | In perturbation style attacks, the attacker stealthily modifies the query to get a desired response | Integrity | Image: Noise is added to an X-ray image, which makes the predictions go from normal scan to abnormal [1][Blackbox] \n Text translation: Specific characters are manipulated to result in incorrect translation. 
The attack can suppress specific word or can even remove the word completely[2][Blackbox and Whitebox]\n Speech: Researchers showed how given a speech waveform, another waveform can be exactly replicated but transcribes into a totally different text[3][Whitebox but may be extended to blackbox] |\n| 2 | Poisoning attacks | The goal of the attacker is to contaminate the machine model generated in the training phase, so that predictions on new data will be modified in the testing phase Targeted: In targeted poisoning attacks, the attacker wants to misclassify specific examples Indiscriminate: The aim here is to cause DoS like effect, which makes the system unavailable. | Integrity | In a medical dataset where the goal is to predict the dosage of anticoagulant drug Warfarin using demographic information, etc. Researchers introduced malicious samples at 8% poisoning rate, which changed dosage by 75.06% for half of patients[4][Blackbox] In the Tay chatbot, future conversations were tainted because a fraction of the past conversations were used to train the system via feedback[5] [Blackbox] |\n| 3 | Model Inversion | The private features used in machine learning models can be recovered | Confidentiality; | Researchers were able to recover private training data used to train the algorithm[6] The authors were able to reconstruct faces, by just the name and access to the model to the point where Mechanical turks could use the photo to identify an individual from aline-up with 95% accuracy. The authors were also able to extract specific information. [Whitebox and Blackbox][12] |\n| 4 | Membership Inference attack | The attacker can determine whether a given data record was part of the model’s training dataset or not | Confidentiality | Researchers were able to predict a patient’s main procedure(e.g: Surgery the patient went through) based on the attributes (e.g: age,gender, hospital)[7][Blackbox] |\n| 5 | Model stealing | The attackers recreate the underlying model by legitimately querying the model. The functionality of the new model is same as that of the underlying model. | Confidentiality | Researchers successfully emulated the underlying algorithm from Amazon, BigML. For instance, in the BigML case, researchers were able to recover the model used to predict if someone should have a good/bad credit risk (German Credit Card dataset) using 1,150 queries and within 10 minutes[8] |\n| 6 | Reprogramming deep neural nets | By means of a specially crafted query from an adversary, Machine learning systems can be reprogrammed to a task that deviates from the creator’s original intent | Integrity, Availability | Demonstrated how ImageNet, a system used to classify one of several categories of images was repurposed to count squares. 
Authors end the paper with a hypothetical scenario: An attacker sends Captcha images to the computer vision classifier in a cloud hosted photos service to solve the image captchas to create spam accounts[9] |\n| 7 | Adversarial Example in the Physical domain | An adversarial example is an input/query from a malicious entity sent with the sole aim of misleading the machine learning system These examples can manifest in the physical domain | Integrity | Researchers 3D prints a rifle with custom texture that fools image recognition system into thinking it is a turtle[10] \n Researchers construct sunglasses with a design that can now fool image recognition systems, and no longer recognize the faces correctly[11] |\n| 8 | Malicious ML providers who can recover training data | Malicious ML provider can query the model used by customer and recover customer’s training data | Confidentiality | Researchers show how a malicious provider presents a backdoored algorithm, wherein the private training data is recovered. They were able to reconstruct faces and texts, given the model alone. [12] |\n| 9 | Attacking the ML Supply Chain[13] | Owing to large resources (data + computation) required to train algorithms, the current practice is to reuse models trained by large corporations, and modify them slightly for task at hand (e.g: ResNet is a popular image recognition model from Microsoft). These models are curated ina Model Zoo (Caffe hosts popular image recognition models). In this attack,the adversary attacks the models hosted in Caffe, thereby poisoning the well for anyone else. | Integrity | Researchers show how it is possible for an attacker to check in malicious code into one of the popular model. An unsuspecting ML developer downloads this model and uses it as part of the image recognition system in their code [14]. The authors show how in Caffe, there exists a model whose SHA1 hash doesNOT match the authors’ digest, indicating tampering. There are 22 models without any SHA1 hash for integrity checks at all. |\n| 10 | Backdoor Machine Learning | Like in the “Attacking the ML Supply Chain”, In this attack scenario,the training process is either fully or partially outsourced to a malicious party who wants to provide the user with a trained model that contains a backdoor. The backdoored model would perform well on most inputs (including inputs that the end user may hold out as a validation set) but cause targeted misclassifications or degrade the accuracy of the model for inputs that satisfy some secret, attacker-chosen property, which we will refer to as the backdoor trigger | Confidentiality, Integrity | Researchers created a backdoored U.S. street sign classifier that identifies stop signs as speed limits only when a special sticker is added to the stop sign (backdoor trigger) 20 They are now extending this work to text processing systems, wherein specific words are replaced with the trigger being the speaker’s accent[15] |\n| 11 | Exploit software dependencies of ML system | In this attack, the attacker does NOT manipulate the algorithms. Instead, exploits traditional software vulnerabilities such as buffer overflows. | Confidentiality, Integrity, Availability, | An adversary sends in corrupt input to an image recognition system that causes it to misclassify by exploiting a software bug in one of the dependencies. 
|\n\n\n\nDetails on Unintended Failures\n------------------------------\n\n\n\n\n\n| Scenario # | Attack Class | Description | Type of Compromise | Scenario |\n| --- | --- | --- | --- | --- |\n| 12 | Reward Hacking | Reinforcement learning systems act in unintended ways because of discrepancies between the specified reward and the true intended reward. | Safety of the system | A huge corpus of gaming examples in AI has been compiled here[1] |\n| 13 | Side Effects | RL system disrupts the environment as it tries to attain their goal | Safety of the system | Scenario, verbatim from the authors in [2]:“Suppose a designer wants an RL agent (for example our cleaning robot) to achieve some goal, like moving a box from one side of a room to the other.Sometimes the most effective way to achieve the goal involves doing something unrelated and destructive to the rest of the environment, like knocking over a vase of water that is in its path. If the agent is given reward only for moving the box, it will probably knock over the vase.” |\n| 14 | Distributional shifts | The system is tested in one kind of environment, but is unable to adapt to changes in other kinds of environment | Safety of the system | Researchers trained two state of the art RL agents, Rainbow DQN and A2C in a simulation to avoid lava. During training, the RL agent was able to avoid lava successfully and reach its goal. During testing, they slightly moved the position of the lava, but the RL agent was not able to avoid [3] |\n| 15 | Natural Adversarial Examples | The system incorrectly recognizes an input that was found using hard negative mining | Safety of the system | Here the authors show how by a simple process of hard negative mining[4], it is possible to confuse the ML system by relaying the example. |\n| 16 | Common Corruption | The system is not able to handle common corruptions and perturbations such as tilting, zooming, or noisy images. | Safety of the system | The authors[5] show how common corruptions such as changes to brightness, contrast, fog or noise added to images, have a significant drop in metrics in image recognition |\n| 17 | Incomplete Testing in Realistic conditions | The ML system is not tested in realistic conditions that it is meant to operate in | Safety of the system | The authors in [25] highlight that that while defenders commonly account for robustness of the ML algorithm, they lose sight of realistic conditions. For instance, they argue that a missing stop sign knocked off in the wind (which is more realistic) than an attacker attempting to perturb the system's inputs. |\n\n\nAcknowledgements\n----------------\n\n\nWe would like to thank Andrew Marshall, Magnus Nystrom, John Walton, John Lambert, Sharon Xia, Andi Comissoneru, Emre Kiciman, Jugal Parikh, Sharon Gillet, members of Microsoft’s AI and Ethics in Engineering and Research (AETHER) committee’s Security workstream, Amar Ashar, Samuel Klein, Jonathan Zittrain, members of AI Safety Security Working Group at Berkman Klein for providing helpful feedback. We would also like to thank reviewers from 23 external partners, standards organization, and government organizations for shaping the taxonomy.\n\n\nBibliography\n------------\n\n\n[1] Li, Guofu, et al. \"Security Matters: A Survey on Adversarial Machine\nLearning.\" *arXiv preprint arXiv:1810.07339* (2018).\n\n\n[2] Chakraborty, Anirban, et al. \"Adversarial attacks and defences: A\nsurvey.\" *arXiv preprint arXiv:1810.00069* (2018).\n\n\n[3] Ortega, Pedro, and Vishal Maini. 
\"Building safe artificial\nintelligence: specification, robustness, and assurance.\" *DeepMind\nSafety Research Blog* (2018).\n\n\n[4] Amodei, Dario, et al. \"Concrete problems in AI safety.\" *arXiv\npreprint arXiv:1606.06565* (2016).\n\n\n[5] Shankar Siva Kumar, Ram, et al. \"Law and Adversarial Machine\nLearning.\" *arXiv preprint arXiv:1810.10731* (2018).\n\n\n[6] Calo, Ryan, et al. \"Is Tricking a Robot Hacking?.\" University of\nWashington School of Law Research Paper 2018-05 (2018).\n\n\n[7] Paschali, Magdalini, et al. \"Generalizability vs. Robustness:\nAdversarial Examples for Medical Imaging.\" arXiv preprint\narXiv:1804.00504 (2018).\n\n\n[8] Ebrahimi, Javid, Daniel Lowd, and Dejing Dou. \"On Adversarial\nExamples for Character-Level Neural Machine Translation.\" arXiv preprint\narXiv:1806.09030 (2018)\n\n\n[9] Carlini, Nicholas, and David Wagner. \"Audio adversarial examples:\nTargeted attacks on speech-to-text.\" arXiv preprint arXiv:1801.01944\n(2018).\n\n\n[10] Jagielski, Matthew, et al. \"Manipulating machine learning:\nPoisoning attacks and countermeasures for regression learning.\" *arXiv\npreprint arXiv:1804.00308* (2018)\n\n\n[11] [https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/]\n\n\n[12] Fredrikson M, Jha S, Ristenpart T. 2015. Model inversion attacks\nthat exploit confidence information and basic countermeasures\n\n\n[13] Shokri R, Stronati M, Song C, Shmatikov V. 2017. Membership\ninference attacks against machine learning models. In *Proc. of the 2017\nIEEE Symp. on Security and Privacy (SP)*, *San Jose, CA, 22–24 May\n2017*, pp. 3–18. New York, NY: IEEE.\n\n\n[14] Tramèr, Florian, et al. \"Stealing Machine Learning Models via\nPrediction APIs.\" *USENIX Security Symposium*. 2016.\n\n\n[15] Elsayed, Gamaleldin F., Ian Goodfellow, and Jascha Sohl-Dickstein.\n\"Adversarial Reprogramming of Neural Networks.\" *arXiv preprint\narXiv:1806.11146* (2018).\n\n\n[16] Athalye, Anish, and Ilya Sutskever. \"Synthesizing robust\nadversarial examples.\" *arXiv preprint arXiv:1707.07397*(2017)\n\n\n[17] Sharif, Mahmood, et al. \"Adversarial Generative Nets: Neural\nNetwork Attacks on State-of-the-Art Face Recognition.\" *arXiv preprint\narXiv:1801.00349* (2017).\n\n\n[19] Xiao, Qixue, et al. \"Security Risks in Deep Learning\nImplementations.\" *arXiv preprint arXiv:1711.11008* (2017).\n\n\n[20] Gu, Tianyu, Brendan Dolan-Gavitt, and Siddharth Garg. \"Badnets:\nIdentifying vulnerabilities in the machine learning model supply\nchain.\" *arXiv preprint arXiv:1708.06733* (2017)\n\n\n[21] [https://www.wired.com/story/machine-learning-backdoors/]\n\n\n[22] [https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml]\n\n\n[23] Amodei, Dario, et al. \"Concrete problems in AI safety.\" *arXiv\npreprint arXiv:1606.06565* (2016).\n\n\n[24] Leike, Jan, et al. \"AI safety gridworlds.\" *arXiv preprint\narXiv:1711.09883* (2017).\n\n\n[25] Gilmer, Justin, et al. \"Motivating the rules of the game for\nadversarial example research.\" *arXiv preprint arXiv:1807.06732* (2018).\n\n\n[26] Hendrycks, Dan, and Thomas Dietterich. 
\"Benchmarking neural network\nrobustness to common corruptions and perturbations.\" *arXiv preprint\narXiv:1903.12261* (2019).", "url": "https://docs.microsoft.com/en-us/security/failure-modes-in-machine-learning", "title": "Failure Modes in Machine Learning - Security documentation", "source": "html_articles", "source_type": "report", "source_filetype": "pdf", "date_published": "2018-12-31T23:00:00Z", "authors": ["Ram Shankar Siva Kumar", "David O Brien", "Kendra Albert", "Salomé Viljöen", "Jeffrey Snover"], "summary": [], "id": "7fba97c962406fe19ffc74ad063c3fea"} {"text": "Toggle the table of contents\n\n\n\n\n\n\nToggle the table of contents\n\n\n\n\n\n\n\nVon Neumann–Morgenstern utility theorem\n=======================================\n\n\n\n\n\n\n3 languages\n\n\n\n* [Français](https://fr.wikipedia.org/wiki/Th%C3%A9or%C3%A8me_d%27utilit%C3%A9_de_von_Neumann-Morgenstern \"Théorème d'utilité de von Neumann-Morgenstern – French\")\n* [עברית](https://he.wikipedia.org/wiki/%D7%9E%D7%A9%D7%A4%D7%98_%D7%A4%D7%95%D7%9F_%D7%A0%D7%95%D7%99%D7%9E%D7%9F-%D7%9E%D7%95%D7%A8%D7%92%D7%A0%D7%A9%D7%98%D7%A8%D7%9F \"משפט פון נוימן-מורגנשטרן – Hebrew\")\n* [Nederlands](https://nl.wikipedia.org/wiki/Von_Neumann-Morgenstern-nutsfunctie \"Von Neumann-Morgenstern-nutsfunctie – Dutch\")\n\n\n[Edit links](https://www.wikidata.org/wiki/Special:EntityPage/Q4358367#sitelinks-wikipedia \"Edit interlanguage links\")\n\n\n\n\n\n\n\n\n\n\n* [Article](/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem \"View the content page [c]\")\n* [Talk](/wiki/Talk:Von_Neumann%E2%80%93Morgenstern_utility_theorem \"Discuss improvements to the content page [t]\")\n\n\n\n\n\n\n\nEnglish\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n* [Read](/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem)\n* [Edit](/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&action=edit&oldid=1044421624 \"Edit this page [e]\")\n* [View history](/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&action=history \"Past revisions of this page [h]\")\n\n\n\n\n\n\n\n\n\nTools\n\n\n\n\n\nTools\nmove to sidebar\nhide\n\n\n\n Actions\n \n\n* [Read](/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem)\n* [Edit](/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&action=edit&oldid=1044421624)\n* [View history](/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&action=history)\n\n\n\n\n\n\n General\n \n\n* [What links here](/wiki/Special:WhatLinksHere/Von_Neumann%E2%80%93Morgenstern_utility_theorem \"List of all English Wikipedia pages containing links to this page [j]\")\n* [Related changes](/wiki/Special:RecentChangesLinked/Von_Neumann%E2%80%93Morgenstern_utility_theorem \"Recent changes in pages linked from this page [k]\")\n* [Upload file](/wiki/Wikipedia:File_Upload_Wizard \"Upload files [u]\")\n* [Special pages](/wiki/Special:SpecialPages \"A list of all special pages [q]\")\n* [Permanent link](/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&oldid=1044421624 \"Permanent link to this revision of this page\")\n* [Page information](/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&action=info \"More information about this page\")\n* [Cite this page](/w/index.php?title=Special:CiteThisPage&page=Von_Neumann%E2%80%93Morgenstern_utility_theorem&id=1044421624&wpFormIdentifier=titleform \"Information on how to cite this page\")\n* [Wikidata item](https://www.wikidata.org/wiki/Special:EntityPage/Q4358367 \"Structured data on this page hosted by Wikidata [g]\")\n\n\n\n\n\n\n 
Print/export\n \n\n* [Download as PDF](/w/index.php?title=Special:Book&bookcmd=render_article&arttitle=Von+Neumann%E2%80%93Morgenstern+utility+theorem&returnto=Von+Neumann%E2%80%93Morgenstern+utility+theorem&oldid=1044421624&writer=rl \"Download this page as a PDF file\")\n* [Printable version](/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&printable=yes \"Printable version of this page [p]\")\n\n\n\n\n\n\n Print/export\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nFrom Wikipedia, the free encyclopedia\n\n**This is an [old revision](/wiki/Help:Page_history \"Help:Page history\") of this page, as edited by [2601:445:4380:7dd0::6b8c](/wiki/Special:Contributions/2601:445:4380:7DD0:0:0:0:6B8C \"Special:Contributions/2601:445:4380:7DD0:0:0:0:6B8C\") ([talk](/w/index.php?title=User_talk:2601:445:4380:7DD0:0:0:0:6B8C&action=edit&redlink=1 \"User talk:2601:445:4380:7DD0:0:0:0:6B8C (page does not exist)\")) at 04:35, 15 September 2021. The present address (URL) is a [permanent link](/wiki/Help:Permanent_link \"Help:Permanent link\") to this revision, which may differ significantly from the [current revision](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem).**\n\nRevision as of 04:35, 15 September 2021 by [2601:445:4380:7dd0::6b8c](/wiki/Special:Contributions/2601:445:4380:7DD0:0:0:0:6B8C \"Special:Contributions/2601:445:4380:7DD0:0:0:0:6B8C\") ([talk](/w/index.php?title=User_talk:2601:445:4380:7DD0:0:0:0:6B8C&action=edit&redlink=1 \"User talk:2601:445:4380:7DD0:0:0:0:6B8C (page does not exist)\"))([diff](/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&diff=prev&oldid=1044421624 \"Von Neumann–Morgenstern utility theorem\")) [← Previous revision](/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&direction=prev&oldid=1044421624 \"Von Neumann–Morgenstern utility theorem\") | [Latest revision](/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem \"Von Neumann–Morgenstern utility theorem\") ([diff](/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&diff=cur&oldid=1044421624 \"Von Neumann–Morgenstern utility theorem\")) | [Newer revision →](/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&direction=next&oldid=1044421624 \"Von Neumann–Morgenstern utility theorem\") ([diff](/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&diff=next&oldid=1044421624 \"Von Neumann–Morgenstern utility theorem\"))\nAny individual whose preferences satisfy four axioms has a utility function\nIn [decision theory](/wiki/Decision_theory \"Decision theory\"), the **von Neumann–Morgenstern** (**VNM**) **utility theorem** shows that, under certain [axioms](/wiki/Axiom \"Axiom\") of [rational behavior](/wiki/Rationality \"Rationality\"), a decision-maker faced with [risky](/wiki/Risk \"Risk\") (probabilistic) outcomes of different choices will behave as if he or she is maximizing the [expected value](/wiki/Expected_value \"Expected value\") of some function defined over the potential outcomes at some specified point in the future. This function is known as the von Neumann–Morgenstern utility function. 
The theorem is the basis for [expected utility theory](/wiki/Expected_utility_theory \"Expected utility theory\").\n\n\nIn 1947, [John von Neumann](/wiki/John_von_Neumann \"John von Neumann\") and [Oskar Morgenstern](/wiki/Oskar_Morgenstern \"Oskar Morgenstern\") proved that any individual whose [preferences](/wiki/Preference_(economics) \"Preference (economics)\") satisfied four axioms has a [utility function](/wiki/Utility_function \"Utility function\");[[1]](#cite_note-VNM-1) such an individual's preferences can be represented on an [interval scale](/wiki/Interval_scale \"Interval scale\") and the individual will always prefer actions that maximize expected utility. That is, they proved that an agent is (VNM-)rational *if and only if* there exists a real-valued function *u* defined by possible outcomes such that every preference of the agent is characterized by maximizing the expected value of *u*, which can then be defined as the agent's *VNM-utility* (it is unique up to adding a constant and multiplying by a positive scalar). No claim is made that the agent has a \"conscious desire\" to maximize *u*, only that *u* exists.\n\n\nThe [expected utility hypothesis](/wiki/Expected_utility_hypothesis \"Expected utility hypothesis\") is that rationality can be modeled as maximizing an [expected value](/wiki/Expected_value \"Expected value\"), which, given the theorem, can be summarized as \"*rationality is VNM-rationality*\". However, the axioms themselves have been critiqued on various grounds, resulting in the axioms being given further justification.[[2]](#cite_note-2)\n\n\nVNM-utility is a *decision utility* in that it is used to describe *decision preferences*. It is related but not equivalent to so-called *E-utilities*[[3]](#cite_note-KWS-3) (experience utilities), notions of utility intended to measure happiness such as that of [Bentham](/wiki/Jeremy_Bentham \"Jeremy Bentham\")'s [Greatest Happiness Principle](/wiki/Greatest_happiness_principle \"Greatest happiness principle\").\n\n\nSet-up\n------\n\n\nIn the theorem, an individual agent is faced with options called [*lotteries*](/wiki/Lottery_(probability) \"Lottery (probability)\"). Given some [mutually exclusive](/wiki/Mutually_exclusive \"Mutually exclusive\") outcomes, a lottery is a scenario where each outcome will happen with a given [probability](/wiki/Probability \"Probability\"), all probabilities summing to one. For example, for two outcomes *A* and *B*,\n\n\nL = 0.25A + 0.75B\n\n\ndenotes a scenario where *P*(*A*) = 25% is the probability of *A* occurring and *P*(*B*) = 75% (and exactly one of them will occur). More generally, for a lottery with many possible outcomes *Ai*, we write\n\n\nL = ∑ *pi* *Ai*,\n\n\nwith the sum of the *pi*s equalling 1.\n\n\nThe outcomes in a lottery can themselves be lotteries between other outcomes, and the expanded expression is considered an equivalent lottery: 0.5(0.5*A* + 0.5*B*) + 0.5*C* = 0.25*A* + 0.25*B* + 0.50*C*.\n\n\nIf lottery *M* is preferred over lottery *L*, we write *L* ≺ *M*, or equivalently, *M* ≻ *L*. If the agent is indifferent between *L* and *M*, we write the *indifference relation*[[4]](#cite_note-Kreps-4) *L* ∼ *M*. If *M* is either preferred over or viewed with indifference relative to *L*, we write *L* ⪯ *M*.\n\n\nThe axioms\n----------\n\n\nThe four axioms of VNM-rationality are then *completeness*, *transitivity*, *continuity*, and *independence*.\n\n\nCompleteness assumes that an individual has well defined preferences:\n\n\n**Axiom 1 (Completeness)** For any lotteries *L*, *M*, exactly one of the following holds: *L* ≺ *M*, *M* ≺ *L*, or *L* ∼ *M* (either *M* is preferred, *L* is preferred, or the individual is indifferent[[5]](#cite_note-nop-5)).\n\n\n[Transitivity](/wiki/Transitive_relation \"Transitive relation\") assumes that preferences are consistent across any three options:\n\n\n**Axiom 2 (Transitivity)** If *L* ≺ *M* and *M* ≺ *N*, then *L* ≺ *N*, and similarly for ∼.\n\n\nContinuity assumes that there is a \"tipping point\" between being *better than* and *worse than* a given middle option:\n\n\n**Axiom 3 (Continuity):** If *L* ⪯ *M* ⪯ *N*, then there exists a probability *p* ∈ [0, 1] such that\n\n\npL + (1 − p)N ∼ M,\n\n\nwhere the notation on the left side refers to a situation in which *L* is received with probability *p* and *N* is received with probability (1 − *p*).\n\n\nInstead of continuity, an alternative axiom can be assumed that does not involve a precise equality, called the [Archimedean property](/wiki/Archimedean_property \"Archimedean property\").[[4]](#cite_note-Kreps-4) It says that any separation in preference can be maintained under a sufficiently small deviation in probabilities:\n\n\n**Axiom 3′ (Archimedean property):** If *L* ≺ *M* ≺ *N*, then there exists a probability ε ∈ (0, 1) such that\n\n\n(1 − ε)L + εN ≺ M ≺ εL + (1 − ε)N.\n\n\nOnly one of (3) or (3′) needs to be assumed, and the other will be implied by the theorem.\n\n\n[Independence of irrelevant alternatives](/wiki/Independence_of_irrelevant_alternatives \"Independence of irrelevant alternatives\") assumes that a preference holds independently of the possibility of another outcome:\n\n\n**Axiom 4 (Independence):** For any *N* and *p* ∈ (0, 1],\n\n\nL ⪯ M  iff  pL + (1 − p)N ⪯ pM + (1 − p)N.\n\n\nThe independence axiom implies the axiom on reduction of compound lotteries:[[6]](#cite_note-6)\n\n\n**Axiom 4′ (Reduction of compound lotteries):** For any lotteries *L*, *L′*, *N*, *N′* and any *p*, *q* ∈ [0, 1],\n\n\nif L ∼ qL′ + (1 − q)N′,\n\n\nthen pL + (1 − p)N ∼ pqL′ + p(1 − q)N′ + (1 − p)N.\n\n\nTo see how Axiom 4 implies Axiom 4′, set M = qL′ + (1 − q)N′ in the expression in Axiom 4, and expand.\n\n\nThe theorem\n-----------\n\n\nFor any VNM-rational agent (i.e. satisfying axioms 1–4), there exists a function *u* which assigns to each outcome *A* a real number *u*(*A*) such that for any two lotteries,\n\n\nL ≺ M  if and only if  E(u(L)) < E(u(M)),\n\n\nwhere E(u(L)) denotes the expected value of *u* in *L*. If u(M) > u(L), a rational decision maker would prefer the lottery *M* over the lottery *L*, because it gives him a larger chance to win the best outcome.\n\n\nHence:\n\n\nL ≺ M  if and only if  E(u(L)) < E(u(M)).\n\n\n> \"Many economists will feel that we are assuming far too much ... Have we not shown too much? ... As far as we can see, our postulates [are] plausible ... We have practically defined numerical utility as being that thing for which the calculus of mathematical expectations is legitimate.\" – *VNM 1953, § 3.1.1 p.16 and § 3.7.1 p. 
28*[[1]](#cite_note-VNM-1)\n\n\nThus, the content of the theorem is that the construction of *u* is possible, and they claim little about its nature.\n\n\nConsequences\n------------\n\n\n### Automatic consideration of risk aversion\n\n\nMain article: [Risk aversion](/wiki/Risk_aversion \"Risk aversion\")\n\n\nIt is often the case that a person, faced with real-world [gambles](/wiki/Gamble \"Gamble\") with money, does not act to maximize the expected value of their *dollar assets.* For example, a person who only possesses $1000 in savings may be reluctant to risk it all for a 20% chance to win $10,000, even though\n\n\n20%($10,000) + 80%($0) = $2000 > 100%($1000).\n\n\nHowever, *if* the person is VNM-rational, such facts are automatically accounted for in their utility function *u*. In this example, we could conclude that\n\n\n20%u($10,000) + 80%u($0) < u($1000),\n\n\nthat is, the utility of the gamble is lower than the utility of keeping the $1000 for certain.\n\n\n> \"The axioms should not be too numerous, their system is to be as simple and transparent as possible, and each axiom should have an immediate intuitive meaning by which its appropriateness may be judged directly. In a situation like ours this last requirement is particularly vital, in spite of its vagueness: we want to make an intuitive concept amenable to mathematical treatment and to see as clearly as possible what hypotheses this requires.\" – *VNM 1953 § 3.5.2, p. 25*[[1]](#cite_note-VNM-1)\n\n\nAs such, claims that the expected utility hypothesis does not characterize rationality must reject one of the VNM axioms. A variety of [generalized expected utility](/wiki/Generalized_expected_utility \"Generalized expected utility\") theories have arisen, most of which drop or relax the independence axiom.\n\n\n### Implications for ethics and moral philosophy\n\n\nMain article: [Consequentialism](/wiki/Consequentialism \"Consequentialism\")\n\n\nBecause the theorem assumes nothing about the nature of the possible outcomes of the gambles, they could be morally significant events, for instance involving the life, death, sickness, or health of others. A von Neumann–Morgenstern rational agent is capable of acting with great concern for such events, sacrificing much personal wealth or well-being, and all of these actions will factor into the construction/definition of the agent's VNM-utility function. In other words, both what is naturally perceived as \"personal gain\", and what is naturally perceived as \"altruism\", are implicitly balanced in the VNM-utility function of a VNM-rational individual. 
Therefore, the full range of [agent-focussed to agent-neutral](/wiki/Consequentialism#Agent-focused_or_agent-neutral \"Consequentialism\") behaviors are possible with various VNM-utility functions[*[clarification needed](/wiki/Wikipedia:Please_clarify \"Wikipedia:Please clarify\")*].\n\n\nIf the utility of \n\n\n\nN\n\n\n{\\displaystyle N}\n\n![N](https://wikimedia.org/api/rest_v1/media/math/render/svg/f5e3890c981ae85503089652feb48b191b57aae3) is \n\n\n\np\nM\n\n\n{\\displaystyle pM}\n\n![pM](https://wikimedia.org/api/rest_v1/media/math/render/svg/1bf2ea65a040eba698ff24a06d8b5d55c5ee1bde), a von Neumann–Morgenstern rational agent must be indifferent between \n\n\n\n1\nN\n\n\n{\\displaystyle 1N}\n\n![1N](https://wikimedia.org/api/rest_v1/media/math/render/svg/941c51e0a0b88e1fe1d1aaeb7e1af7b6bab8f272) and \n\n\n\np\nM\n+\n(\n1\n−\np\n)\n0\n\n\n{\\displaystyle pM+(1-p)0}\n\n![pM+(1-p)0](https://wikimedia.org/api/rest_v1/media/math/render/svg/0ad83e817154c2b6a7942aa258ccb9d4fd862203). An agent-focused von Neumann–Morgenstern rational agent therefore cannot favor more equal, or \"fair\", distributions of utility between its own possible future selves.\n\n\n\n### Distinctness from other notions of utility\n\n\nSome [utilitarian moral theories](/wiki/Utilitarianism#Average_v_total \"Utilitarianism\") are concerned with quantities called the \"total utility\" and \"average utility\" of collectives, and characterize morality in terms of favoring the utility or happiness of others with disregard for one's own. These notions can be related to, but are distinct from, VNM-utility:\n\n\n\n* 1) VNM-utility is a *decision utility*:[[3]](#cite_note-KWS-3) it is that according to which one decides, and thus by definition cannot be something which one disregards.\n* 2) VNM-utility is not canonically additive across multiple individuals (see Limitations), so \"total VNM-utility\" and \"average VNM-utility\" are not immediately meaningful (some sort of normalization assumption is required).\n\n\nThe term *E-utility* for \"experience utility\" has been coined[[3]](#cite_note-KWS-3) to refer to the types of \"hedonistic\" utility like that of [Bentham](/wiki/Jeremy_Bentham \"Jeremy Bentham\")'s [greatest happiness principle](/wiki/Greatest_happiness_principle \"Greatest happiness principle\"). Since morality affects decisions, a VNM-rational agent's morals will affect the definition of its own utility function (see above). Thus, the morality of a VNM-rational agent can be characterized by *correlation* of the agent's VNM-utility with the VNM-utility, E-utility, or \"happiness\" of others, among other means, but not by *disregard* for the agent's own VNM-utility, a contradiction in terms.\n\n\n\nLimitations\n-----------\n\n\n### Nested gambling\n\n\nSince if *L* and *M* are lotteries, then *pL* + (1 − *p*)*M* is simply \"expanded out\" and considered a lottery itself, the VNM formalism ignores what may be experienced as \"nested gambling\". This is related to the [Ellsberg problem](/wiki/Ellsberg_paradox \"Ellsberg paradox\") where people choose to avoid the perception of *risks about risks*. Von Neumann and Morgenstern recognized this limitation:\n\n\n\n\n> \"...concepts like a *specific utility of gambling* cannot be formulated free of contradiction on this level. This may seem to be a paradoxical assertion. But anybody who has seriously tried to axiomatize that elusive concept, will probably concur with it.\" – *VNM 1953 § 3.7.1, p. 
28*.[[1]](#cite_note-VNM-1)\n> \n> \n\n\n### Incomparability between agents\n\n\nSince for any two VNM-agents *X* and *Y*, their VNM-utility functions *uX* and *uY* are only determined up to additive constants and multiplicative positive scalars, the theorem does not provide any canonical way to compare the two. Hence expressions like *uX*(*L*) + *uY*(*L*) and *uX*(*L*) − *uY*(*L*) are not canonically defined, nor are comparisons like *uX*(*L*) < *uY*(*L*) canonically true or false. In particular, the aforementioned \"total VNM-utility\" and \"average VNM-utility\" of a population are not canonically meaningful without normalization assumptions.\n\n\n\n### Applicability to economics\n\n\nThe [expected utility hypothesis](/wiki/Expected_utility_hypothesis \"Expected utility hypothesis\") is shown to have limited predictive accuracy in a set of lab based empirical experiments, such as the [Allais paradox](/wiki/Allais_paradox \"Allais paradox\").\nWhich leads some people to interpret as evidence that\n\n\n\n* humans are not always rational, or\n* VNM-rationality is not an appropriate characterization of rationality, or\n* some combination of both, or\n* humans *do* behave VNM-rationally but the objective evaluation of *u* and the construction of *u* are [intractable](/wiki/Intractability_(complexity) \"Intractability (complexity)\") problems.\n\n\nReferences and further reading\n------------------------------\n\n\n.mw-parser-output .reflist{font-size:90%;margin-bottom:0.5em;list-style-type:decimal}.mw-parser-output .reflist .references{font-size:100%;margin-bottom:0;list-style-type:inherit}.mw-parser-output .reflist-columns-2{column-width:30em}.mw-parser-output .reflist-columns-3{column-width:25em}.mw-parser-output .reflist-columns{margin-top:0.3em}.mw-parser-output .reflist-columns ol{margin-top:0}.mw-parser-output .reflist-columns li{page-break-inside:avoid;break-inside:avoid-column}.mw-parser-output .reflist-upper-alpha{list-style-type:upper-alpha}.mw-parser-output .reflist-upper-roman{list-style-type:upper-roman}.mw-parser-output .reflist-lower-alpha{list-style-type:lower-alpha}.mw-parser-output .reflist-lower-greek{list-style-type:lower-greek}.mw-parser-output .reflist-lower-roman{list-style-type:lower-roman}\n1. ^ [***a***](#cite_ref-VNM_1-0) [***b***](#cite_ref-VNM_1-1) [***c***](#cite_ref-VNM_1-2) [***d***](#cite_ref-VNM_1-3) [Neumann, John von](/wiki/John_von_Neumann \"John von Neumann\") and [Morgenstern, Oskar](/wiki/Oskar_Morgenstern \"Oskar Morgenstern\"), *[Theory of Games and Economic Behavior](/wiki/Theory_of_Games_and_Economic_Behavior \"Theory of Games and Economic Behavior\")*. Princeton, NJ. Princeton University Press, 1953.\n2. **[^](#cite_ref-2)** Peterson, Chapter 8.\n3. 
^ [***a***](#cite_ref-KWS_3-0) [***b***](#cite_ref-KWS_3-1) [***c***](#cite_ref-KWS_3-2) Kahneman; Wakker; Sarin (1997). [\"Back to Bentham? Explorations of Experienced Utility\"](http://repub.eur.nl/pub/23011). *[Quarterly Journal of Economics](/wiki/Quarterly_Journal_of_Economics \"Quarterly Journal of Economics\")*. **112** (2): 375–406. [doi](/wiki/Doi_(identifier) \"Doi (identifier)\"):[10.1162/003355397555235](https://doi.org/10.1162%2F003355397555235). [hdl](/wiki/Hdl_(identifier) \"Hdl (identifier)\"):[1765/23011](https://hdl.handle.net/1765%2F23011).\n4. ^ [***a***](#cite_ref-Kreps_4-0) [***b***](#cite_ref-Kreps_4-1) [Kreps, David M.](/wiki/David_M._Kreps \"David M. Kreps\") *Notes on the Theory of Choice*. Westview Press (May 12, 1988), chapters 2 and 5.\n5. **[^](#cite_ref-nop_5-0)** Implicit in denoting indifference by equality are assertions like: if L ≺ M = N, then L ≺ N. To make such relations explicit in the axioms, Kreps (1988) chapter 2 denotes indifference by ∼, so it may be surveyed in brief for intuitive meaning.\n6. **[^](#cite_ref-6)** EconPort, \"Von Neumann–Morgenstern Expected Utility Theory\"\n7. **[^](#cite_ref-KeeneyRaiffa1993_7-0)** Keeney, Ralph L.; Raiffa, Howard (1993). *Decisions with Multiple Objectives*. [ISBN](/wiki/ISBN_(identifier) \"ISBN (identifier)\") [0-521-44185-4](/wiki/Special:BookSources/0-521-44185-4 \"Special:BookSources/0-521-44185-4\").\n8. 
**[^](#cite_ref-8)** *Specimen theoriae novae de mensura sortis* or *Exposition of a New Theory on the Measurement of Risk*\n\n.mw-parser-output .refbegin{font-size:90%;margin-bottom:0.5em}.mw-parser-output .refbegin-hanging-indents>ul{margin-left:0}.mw-parser-output .refbegin-hanging-indents>ul>li{margin-left:0;padding-left:3.2em;text-indent:-3.2em}.mw-parser-output .refbegin-hanging-indents ul,.mw-parser-output .refbegin-hanging-indents ul li{list-style:none}@media(max-width:720px){.mw-parser-output .refbegin-hanging-indents>ul>li{padding-left:1.6em;text-indent:-1.6em}}.mw-parser-output .refbegin-columns{margin-top:0.3em}.mw-parser-output .refbegin-columns ul{margin-top:0}.mw-parser-output .refbegin-columns li{page-break-inside:avoid;break-inside:avoid-column}\n* [Nash, John F., Jr.](/wiki/John_Forbes_Nash_Jr. \"John Forbes Nash Jr.\") (1950). \"The Bargaining Problem\". *[Econometrica](/wiki/Econometrica \"Econometrica\")*. **18** (2): 155–162. [doi](/wiki/Doi_(identifier) \"Doi (identifier)\"):[10.2307/1907266](https://doi.org/10.2307%2F1907266). [JSTOR](/wiki/JSTOR_(identifier) \"JSTOR (identifier)\") [1907266](https://www.jstor.org/stable/1907266).\n* Anand, Paul. *Foundations of Rational Choice Under Risk* Oxford, Oxford University Press. 1993 reprinted 1995, 2002\n* [Fishburn, Peter C.](/wiki/Peter_C._Fishburn \"Peter C. Fishburn\") *Utility Theory for Decision Making*. Huntington, NY. Robert E. Krieger Publishing Co. 1970. [ISBN](/wiki/ISBN_(identifier) \"ISBN (identifier)\") [978-0-471-26060-8](/wiki/Special:BookSources/978-0-471-26060-8 \"Special:BookSources/978-0-471-26060-8\")\n* [Sixto Rios](/wiki/Sixto_Rios \"Sixto Rios\") (1998) [Some problems and developments in decision science](http://www.mat.ucm.es/serv/revmat/vol11-1/vol11-1g.html), *Revista Matematica Complutense* 11(1):113–41.\n* Peterson, Martin (2009). *An Introduction to Decision Theory (Cambridge Introductions to Philosophy)*. Cambridge: Cambridge University Press.\n\n\n\n\n![](//en.wikipedia.org/wiki/Special:CentralAutoLogin/start?type=1x1)\nRetrieved from \"\"\n[Categories](/wiki/Help:Category \"Help:Category\"): * [Theorems](/wiki/Category:Theorems \"Category:Theorems\")\n* [Game theory](/wiki/Category:Game_theory \"Category:Game theory\")\n* [Utility](/wiki/Category:Utility \"Category:Utility\")\n* [John von Neumann](/wiki/Category:John_von_Neumann \"Category:John von Neumann\")\nHidden categories: * [Articles with short description](/wiki/Category:Articles_with_short_description \"Category:Articles with short description\")\n* [Short description is different from Wikidata](/wiki/Category:Short_description_is_different_from_Wikidata \"Category:Short description is different from Wikidata\")\n* [Wikipedia articles needing clarification from March 2016](/wiki/Category:Wikipedia_articles_needing_clarification_from_March_2016 \"Category:Wikipedia articles needing clarification from March 2016\")", "url": "https://en.wikipedia.org/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&oldid=1044421624", "title": "Von Neumann–Morgenstern utility theorem", "source": "html_articles", "source_type": "encyclopediaArticle", "source_filetype": "pdf", "date_published": "2021-09-14T22:00:00Z", "authors": ["Wikipedia"], "summary": [], "id": "e4176c2dc13c8a72a394237de396b512"} {"text": "At Uber, we apply neural networks to fundamentally improve how we understand the movement of people and things in cities. 
Among other use cases, we employ them to enable [faster customer service response](https://eng.uber.com/cota/) with natural language models and lower wait times via spatiotemporal prediction of demand across cities, and in the process have developed infrastructure to [scale up training](https://eng.uber.com/horovod-pyspark-apache-mxnet-support/) and support faster [model](https://eng.uber.com/introducing-ludwig/) [development](https://eng.uber.com/michelangelo-pyml/).\n\n\nThough neural networks are powerful, widely used tools, many of their subtle properties are still poorly understood. As scientists around the world make strides towards [illuminating](https://openai.com/blog/introducing-activation-atlases/) [fundamental](https://arxiv.org/abs/1704.05796) [network](https://arxiv.org/abs/1805.12177) [properties](https://arxiv.org/abs/1802.08760), much of our research at [Uber AI](https://www.uber.com/us/en/uberai/) aligns in this direction as well, including our work [measuring intrinsic network complexity](https://eng.uber.com/intrinsic-dimension/), [finding more natural input spaces](https://eng.uber.com/neural-networks-jpeg/), and [uncovering hidden flaws in popular models](https://eng.uber.com/coordconv/).\n\n\nIn our most recent paper aimed at demystifying neural networks, [Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask](https://eng.uber.com/research/deconstructing-lottery-tickets-zeros-signs-and-the-supermask/), we build upon the fascinating [Lottery Ticket Hypothesis](https://arxiv.org/abs/1803.03635) developed by Frankle and Carbin. Their work surprised many researchers by showing that a very simple algorithm—delete small weights and retrain—can find sparse trainable subnetworks, or “lottery tickets”, within larger networks that perform as well as the full network. Although they clearly demonstrated lottery tickets to be effective, their work (as often occurs with great research) raised as many questions as it answered, and many of the underlying mechanics were not yet well understood. Our paper proposes explanations behind these mechanisms, uncovers curious quirks of these subnetworks, introduces competitive variants of the lottery ticket algorithm, and derives a surprising by-product: the Supermask.\n\n\n### The Lottery Ticket Hypothesis\n\n\nWe begin by briefly summarizing Frankle and Carbin’s paper, [The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks](https://arxiv.org/abs/1803.03635), which we abbreviated as “LT”. In this paper, the authors proposed a simple approach for producing sparse, performant networks: after training a network, set all weights smaller than some threshold to zero (prune them), rewind the rest of the weights to their initial configuration, and then retrain the network from this starting configuration keeping the pruned weights weights frozen (not trained). Using this approach, they obtained two intriguing results.\n\n\nFirst, they showed that the pruned networks performed well. Aggressively pruned networks (with 85 percent to 95 percent of weights pruned) showed no drop in performance compared to the much larger, unpruned network. Moreover, networks only moderately pruned (with 50 percent to 90 percent of weights pruned) often *outperformed* their unpruned counterparts.\n\n\nSecond, as compelling as these results were, the characteristics of the remaining network structure and weights were just as interesting. 
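The prune-rewind-retrain loop summarized above is simple enough to sketch in a few lines. The following is a minimal, illustrative numpy sketch of one round of that procedure, not the authors' code: the quadratic "training" objective, the target vector, and the pruning fraction are all stand-ins for a real network and training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, mask, steps=100, lr=0.1):
    """Toy stand-in for SGD: fit w toward a fixed target, updating only unmasked weights."""
    target = np.linspace(-1.0, 1.0, w.size)
    for _ in range(steps):
        grad = w - target           # gradient of 0.5 * ||w - target||^2
        w = w - lr * grad * mask    # frozen (masked-out) weights receive no update
    return w

# 1) Randomly initialize and train the "network".
w_init = rng.normal(size=20)
w_final = train(w_init.copy(), np.ones_like(w_init))

# 2) "Large final" criterion: keep the weights with the largest final magnitudes.
prune_fraction = 0.8
k = int(round((1 - prune_fraction) * w_init.size))
threshold = np.sort(np.abs(w_final))[-k]
mask = (np.abs(w_final) >= threshold).astype(float)

# 3) Rewind survivors to their initial values, set pruned weights to zero,
#    and retrain with the pruned weights frozen.
w_rewound = w_init * mask
w_retrained = train(w_rewound, mask)

print("surviving weights:", int(mask.sum()), "of", w_init.size)
```

In the iterative version of the algorithm, steps 1-3 are repeated, pruning a fraction of the remaining weights each round.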
Normally, if you take a trained network, re-initialize it with random weights, and then re-train it, its performance will be about the same as before. But with the skeletal Lottery Ticket (LT) networks, this property does not hold. The network trains well only if it is rewound to its initial state, including the specific initial weights that were used. Reinitializing it with new weights causes it to train poorly. As pointed out in Frankle and Carbin’s study, it would appear that the specific combination of pruning mask (a per-weight binary value indicating whether or not to delete the weight) and weights underlying the mask form a lucky sub-network found within the larger network, or, as named by the original study, a winning “Lottery Ticket.”\n\n\nWe found this demonstration intriguing because of all it left untold. What about LT networks causes them to show better performance? Why are the pruning mask and the initial set of weights so tightly coupled, such that re-initializing the network makes it less trainable? Why does simply selecting large weights constitute an effective criterion for choosing a mask? Would other criteria for creating a mask work, too?\n\n\n### Curiously effective masks\n\n\nWe start our investigation with the observation of a curious phenomenon that demands explanation. While training LT networks, we observed that many of the rewound, masked networks had *accuracy significantly better than chance at initialization*. That is, an untrained network with a particular mask applied to it results in a partially working network. \n\n\nThis might come as a surprise, because if you use a randomly initialized and untrained network to, say, classify images of handwritten digits from the [MNIST dataset](https://en.wikipedia.org/wiki/MNIST_database), you would expect accuracy to be no better than chance (about 10%). But now imagine you multiply the network weights by a mask containing only zeros and ones. In this instance, weights are either unchanged or deleted entirely, but the resulting network now achieves nearly 40 percent accuracy at the task! This is strange, but it is exactly what happens when applying masks created using the procedure in the LT paper that selects weights with large final values (which we will call the “large final” mask criterion):\n\n\n[![](https://blogapi.uber.com/wp-content/uploads/2022/08/image3-2-4.png)](https://blogapi.uber.com/wp-content/uploads/2022/08/image3-2-4.png)Figure 1. Untrained networks perform at chance (for example, 10 percent accuracy on the MNIST dataset, as depicted) if they are randomly initialized, or randomly initialized and randomly masked. However, applying the Lottery Ticket mask improves the network accuracy beyond the chance level.\n \n\n\nWe call masks that immediately produce partially working networks, without any training of the underlying weights, *Supermasks*.\n\n\nAs depicted in Figure 1, in randomly-initialized networks and randomly-initialized networks with random masks, neither the weights nor the mask contain any information about the labels, so accuracy cannot reliably be better than chance. In randomly-initialized networks with LT “large final” masks, it is not entirely implausible to have better-than-chance performance since the masks are indeed derived from the training process.
But it was unexpected since the only transmission of information from the training back to the initial network is via a zero-one mask, and the criterion for masking simply selects weights with large final magnitudes.\n\n\n### Masking is training, or why zeros matter\n\n\nSo why do we see a large improvement in test accuracy from simply applying an LT mask?\n\n\nThe masking procedure as implemented in the LT paper performs two actions: it sets weights to zero, and it freezes them. By figuring out which of these two components leads to increased performance in *trained* networks, it turns out we’ll also uncover the principles underlying the peculiar performance of the untrained networks.  \n\n\nTo separate the above two factors, we run a simple experiment: reproduce the LT iterative pruning experiments in which network weights are masked out in alternating train/mask/rewind cycles, but try an additional treatment: freeze zero-masked weights at their initial values instead of at zero. If zero isn’t special, both treatments should perform similarly. We follow Frankle and Carbin (2019) and train three convolutional neural networks (CNNs), Conv2, Conv4, and Conv6 (small CNNs with 2/4/6 convolutional layers, same as used in the LT paper), on [CIFAR-10](https://en.wikipedia.org/wiki/CIFAR-10). \n\n\nResults are shown in Figure 2, below, with pruning (or more correctly, “freezing at some value”) progressing from unpruned on the left to very pruned networks on the right. The horizontal black lines represent the performance of the original, unpruned networks, averaged over five runs. The uncertainty bands here and in other figures represent minimum and maximum values over five runs. Solid blue lines represent networks trained using the LT algorithm, which sets pruned weights to zero and freezes them. Dotted blue lines represent networks trained using the LT algorithm except that pruned weights are frozen at their initial values:\n\n\n[![](https://blogapi.uber.com/wp-content/uploads/2022/08/image4-4-1.png)](https://blogapi.uber.com/wp-content/uploads/2022/08/image4-4-1.png)Figure 2. When testing three CNNs on CIFAR-10, we find that the accuracy of networks with pruned weights frozen at their initial values degrades significantly more than those with pruned weights set to zero.\n \n\n\nWe see that networks perform better when weights are frozen specifically at zero rather than at random initial values. For these networks masked via the LT “large final” criterion, zero would seem to be a particularly good value to set weights to when they had small final values.\n\n\nSo why is zero an ideal value? One hypothesis is that the mask criterion we use *tends to mask to zero those weights that were headed toward zero anyway*. To test out this hypothesis, let’s consider a new approach to freezing. We run another experiment interpolated between the previous two: for any weight to be frozen, we freeze it to zero if it moved *toward* zero over the course of training, and we freeze it at its random initial value if it moved *away* from zero. 
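To make the interpolated treatment concrete, here is a small NumPy sketch of how the per-weight freeze values could be constructed from one layer's initial weights, final weights, and keep-mask; the array sizes, the 80th-percentile pruning threshold, and all variable names are our own stand-ins rather than values from the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for one layer's initial and trained weights plus its pruning mask
# (1 = kept, 0 = pruned); in the real experiments these come from LT training runs.
w_init = rng.normal(scale=0.1, size=10_000)
w_final = w_init + rng.normal(scale=0.1, size=10_000)
keep_mask = (np.abs(w_final) > np.quantile(np.abs(w_final), 0.8)).astype(float)

moved_toward_zero = np.abs(w_final) < np.abs(w_init)

# Original LT treatment: every pruned weight is frozen at zero.
freeze_lt = np.zeros_like(w_init)

# "Freeze at init" control: every pruned weight is frozen at its initial value.
freeze_init = w_init.copy()

# Interpolated treatment: freeze at zero only the pruned weights that were
# already heading toward zero; freeze the rest at their initial values.
freeze_mixed = np.where(moved_toward_zero, 0.0, w_init)

# The weights each treatment actually starts retraining from:
start_lt = keep_mask * w_init + (1 - keep_mask) * freeze_lt
start_init = keep_mask * w_init + (1 - keep_mask) * freeze_init
start_mixed = keep_mask * w_init + (1 - keep_mask) * freeze_mixed

frozen_at_zero = ((1 - keep_mask) * moved_toward_zero).sum()
print(f"pruned weights frozen at zero: {frozen_at_zero:.0f} of {(1 - keep_mask).sum():.0f}")
```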
Results are shown in Figure 3, below:\n\n\n[![](https://blogapi.uber.com/wp-content/uploads/2022/08/image5-3-2.png)](https://blogapi.uber.com/wp-content/uploads/2022/08/image5-3-2.png)Figure 3: Selectively freezing weights to their initial value or zero depending on the direction they move during training produces better performance than freezing all weights at zero or init.\n \n\n\nWe see that this treatment performs just as well as the original LT networks, even though we did not freeze all the pruned weights to zero. In fact, if we apply this treatment to all weights, including weights we keep (that is, for all weights, initialize them at zero if they decreased in magnitude and keep their original initial values otherwise, then freeze pruned weights at their new initialization values), we get networks that perform even better than the LT networks!\n\n\nThis supports our hypothesis that the benefit derived from freezing values to zero comes from the fact that those values were moving toward zero anyway. For a deeper discussion of why the “large final” mask criterion biases toward selecting those weights heading toward zero, [see our paper](https://arxiv.org/abs/1905.01067).\n\n\nThus we find for certain mask criteria, like “large final”, that *masking is training*: the masking operation tends to move weights in the direction they would have moved during training.\n\n\nThis simultaneously explains why Supermasks exist and hints that other mask criteria may produce better Supermasks if they preferentially mask to zero weights that training drives toward zero.\n\n\n### Alternate mask criteria\n\n\nNow that we’ve explored why the original LT mask criterion, “large final,” works as well as it does, we can ask what other masking criteria would also perform well. The “large final” criterion keeps weights with large final magnitudes and sets the rest to zero. We can think of this pruning criterion and many others as a division of the 2D (wi = initial weight, wf = final weight) space into regions corresponding to weights that should be kept (mask-1) vs. pruned (mask-0), as shown in Figure 5, below:\n\n\n[![](https://blogapi.uber.com/wp-content/uploads/2022/08/image5-1-7.png)](https://blogapi.uber.com/wp-content/uploads/2022/08/image5-1-7.png)Figure 5. Different mask criteria can be thought of as segmenting the (wi, wf) space into regions corresponding to mask values of one vs. zero. The ellipse represents in cartoon form the area occupied by the positively correlated initial and final weights from a given layer. The mask shown corresponds to a “large final” criterion, which was used in the LT paper: weights with large final magnitude are kept, and weights with final values near zero are pruned. Note that this criterion ignores the initial magnitude of the weight.\n \n\n\nIn the previous section, we showed some supporting evidence for the hypothesis that networks work well when those weights already moving toward zero are set to zero. This hypothesis suggests that other criteria may also work if they respect this basic rule. One such mask criterion is to preferentially keep those weights that move most away from zero, which we can write as the scoring function |wf| – |wi|. We call this criterion “magnitude increase” and depict it along with other criteria run as control cases in Figure 6, below:\n\n\n[![](https://blogapi.uber.com/wp-content/uploads/2022/08/image1-1-10.png)](https://blogapi.uber.com/wp-content/uploads/2022/08/image1-1-10.png)Figure 6. 
The eight mask criteria considered in this study are shown, starting with the “large final” criterion that starred in the LT paper. Names we use to refer to the various methods are given along with the formula that projects each (wi, wf) pair to a score. Weights with the largest scores (colored regions) are kept, and weights with the smallest scores (gray regions) are pruned.\n \n\n\nThis “magnitude increase” criterion turns out to work just as well as the “large final” criterion, and in some cases significantly better. Results of all criteria are shown in Figure 7, below, for the fully connected (FC) and Conv4 networks; see [our paper](https://arxiv.org/abs/1905.01067) for performance results on other networks. As a baseline, we also show results on a random pruning criterion that simply chooses a random mask with the desired pruning percentage. Note that the first six criteria out of the eight form three opposing pairs; in each case, we see that when one member of the pair performs better than the random baseline, the opposing member performs worse than it.\n\n\n[![](https://blogapi.uber.com/wp-content/uploads/2022/08/image10-1-2.png)](https://blogapi.uber.com/wp-content/uploads/2022/08/image10-1-2.png)Figure 7. Measurements of the accuracy vs. pruning percentage for two networks, FC on MNIST (left) and Conv4 on CIFAR-10 (right), show that multiple mask criteria—large final, magnitude increase, and two others—reliably outperform the black random pruning baseline. In the Conv4 network, the performance boost of “magnitude increase” is larger than that of other mask criteria; asterisks mark where the difference between “large final” and “magnitude increase” is statistically significant at the p=0.05 level.\n \n\n\nIn general, we observe that those methods that bias towards keeping weights with large final magnitude are able to uncover performant subnetworks.\n\n\n### Show me a sign\n\n\nWe have explored various ways of choosing which weights to prune and what values to set pruned weights to. We will now consider what values to set kept weights to. In particular, we want to explore an interesting observation in Frankle and Carbin (2019), which showed that the pruned, skeletal LT networks train well when you rewind them to their original initialization, but degrade in performance when you randomly reinitialize the network. \n\n\nWhy does reinitialization cause LT networks to train poorly? Which components of the initialization are important?\n\n\nWe evaluate a number of variants of reinitialization to find out the answer.\n\n\n* “Reinit” experiments: reinitialize kept weights based on the original initialization distribution\n* “Reshuffle” experiments: reinitialize while respecting the original distribution of remaining weights in that layer, achieved by reshuffling the kept weights’ initial values\n* “Constant” experiments: reinitialize by setting remaining weight values to a positive or negative constant, with the constant set to be the standard deviation of each layer’s original initialization\n\n\nAll of the reinitialization experiments are based on the same original networks and use the “large final” mask criterion with iterative pruning.
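As a rough sketch of the three treatments listed above, the snippet below builds each variant for a single layer's kept weights; the normal-distribution assumption for the original initializer, the 20 percent keep fraction, and the helper names are ours, and the base “constant” variant is shown here with randomly chosen signs (the sign-consistent versions are discussed next).

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for one layer: its original initial weights and the binary keep-mask
# produced by "large final" iterative pruning (True = kept, False = pruned).
w_init = rng.normal(scale=0.1, size=5000)
keep = rng.random(5000) < 0.2
kept_vals = w_init[keep]

# "Reinit": draw fresh values for kept weights from the original init distribution
# (here assumed to be the same zero-mean normal used to create w_init).
reinit = rng.normal(scale=0.1, size=kept_vals.size)

# "Reshuffle": keep the empirical distribution of the layer's surviving weights,
# but shuffle which kept position gets which value.
reshuffle = rng.permutation(kept_vals)

# "Constant": set every kept weight to +c or -c, with c the standard deviation
# of the layer's original initialization; signs are chosen at random here.
c = w_init.std()
constant = np.where(rng.random(kept_vals.size) < 0.5, c, -c)

def rebuild(values):
    """Place the treatment's values in the kept positions; pruned weights stay at zero."""
    w = np.zeros_like(w_init)
    w[keep] = values
    return w

for name, vals in [("reinit", reinit), ("reshuffle", reshuffle), ("constant", constant)]:
    print(name, rebuild(vals)[keep][:3])
```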
We include the original LT network (rewind, large final) and the randomly pruned network (random) as baselines for comparison.\n\n\nWe find that none of these three variants alone are able to train as well as the original LT network, as shown in dashed lines in Figure 8 below:\n\n\n[![](https://blogapi.uber.com/wp-content/uploads/2022/08/image7-1-4.png)](https://blogapi.uber.com/wp-content/uploads/2022/08/image7-1-4.png)Figure 8: We show test accuracy vs. pruning percentage of two networks, FC (left) and Conv4 (right), while using different reinitialization methods. A clear distinction of performances between those that respect the consistency of signs and those that do not suggests that the specific initial values of kept weights do not matter as much as their signs.\n \n\n\nHowever, all three variants work better when we control the consistency of sign by ensuring that the reassigned values of the kept weights are of the same sign as their original initial values. These are shown as solid color lines in Figure 8. Clearly, the common factor in all the variants that perform better than chance, including the original “rewind”, is the sign. This suggests that reinitialization is not the deal breaker as long as you keep the sign. In fact, as long as we respect the original sign, even as simple as setting all kept weights to a constant value consistently performs well!\n\n\n### Better Supermasks\n\n\nAt the beginning of the article we introduced the idea of Supermasks, which are binary masks that when applied to a randomly initialized network, produce better-than-chance accuracy without additional training. We now turn our attention to finding methods that would produce the best Supermasks. \n\n\nWe can evaluate the same pruning methods and pruning percentages seen in Figure 7 for their potential as Supermasks. For simplicity, we evaluate Supermasks based on one-shot pruning rather than iterative pruning. We can also consider additional mask criteria optimized for generating Supermasks. Based on the insight about the importance of the initial sign of LT weights and the idea of having weights close to their final values, we introduce a new mask criterion that selects for weights with large final magnitudes that also maintained the same sign at the end of training. This method is referred to as “large final, same sign”, and we depict it in Figure 9, below. We also add “large final, diff sign” as a control case, which looks for weights that changed sign at the end of training.\n\n\n[![](https://blogapi.uber.com/wp-content/uploads/2022/08/image2-1-12.png)](https://blogapi.uber.com/wp-content/uploads/2022/08/image2-1-12.png)Figure 9. The “large final, same sign” mask criterion produces the highest performing Supermasks in this study. In contrast to the “large final” mask in Figure 5, note this criterion masks out the quadrants where the sign of wi and wf differ.\n \n\n\nBy using a simple mask criterion of “large final same sign”, we can create networks that obtain a remarkable 80 percent test accuracy on MNIST and 24 percent on CIFAR-10 without training. 
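Below is a minimal sketch of the “large final, same sign” criterion, with the “large final, diff sign” control alongside it, reusing the same score-and-threshold recipe as the earlier pruning sketch; the function names, the sentinel score for ineligible weights, and the stand-in weight arrays are our own choices.

```python
import numpy as np

def large_final_same_sign(wi, wf):
    """Score is |wf| for weights that kept their initial sign; weights whose sign
    flipped get a sentinel score of -1 so they can never enter the kept set."""
    same_sign = np.sign(wi) == np.sign(wf)
    return np.where(same_sign, np.abs(wf), -1.0)

def large_final_diff_sign(wi, wf):
    """Control criterion: only weights whose sign flipped during training are eligible."""
    flipped = np.sign(wi) != np.sign(wf)
    return np.where(flipped, np.abs(wf), -1.0)

def supermask(wi, wf, criterion, keep=0.2):
    """Binary mask keeping the `keep` fraction of weights with the highest score."""
    scores = criterion(wi, wf)
    cutoff = np.quantile(scores, 1.0 - keep)
    return (scores > cutoff).astype(np.float32)

# Stand-in weights for one layer; a Supermask network is then just mask * wi,
# evaluated with no training of the underlying weights.
rng = np.random.default_rng(4)
wi = rng.normal(scale=0.1, size=(300, 100))
wf = wi + rng.normal(scale=0.1, size=(300, 100))
mask = supermask(wi, wf, large_final_same_sign)
masked_init = mask * wi
print("kept fraction:", mask.mean())
```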
Another curious observation is that if we apply the mask to a signed constant (as described in the previous section) rather than the actual initial weights, we can produce even higher test accuracy of up to 86 percent on MNIST and 41 percent on CIFAR-10.\n\n\n[![](https://blogapi.uber.com/wp-content/uploads/2022/08/image1-3-6.png)](https://blogapi.uber.com/wp-content/uploads/2022/08/image1-3-6.png)Figure 10: We evaluate accuracy at initialization (with no training) of a single FC network on MNIST subject to the application of various masks. The x-axis depicts the percent of weights remaining in the network; all other weights are set to zero. The “large final same sign” mask creates the highest performing Supermask by a wide margin. Note that aside from the five independent runs performed to generate uncertainty bands, every data point on this plot is the same underlying network, just with different masks applied.\n \n\n\nWe find it fascinating that these Supermasks exist and can be found via such simple criteria. Besides being a scientific curiosity, they could have implications for transfer learning and meta-learning — networks to approximately solve, say, any permutation of MNIST input pixels and permutation of output classes are all in there, just with different masks. They also present us with a method for network compression, since we only need to save a binary mask and a single random seed to reconstruct the full weights of the network. \n\n\nIf you’re curious how far we can push the performance of these Supermasks, check out [our paper](https://eng.uber.com/research/deconstructing-lottery-tickets-zeros-signs-and-the-supermask/) where we try training for them directly. If you’d like to run experiments similar to this paper, check out our [code](https://github.com/uber-research/deconstructing-lottery-tickets) and let us know what you find!\n\n\n*If working with neural networks interests you, consider applying for a machine learning [role at Uber](https://www.uber.com/us/en/careers/list/?query=machine%20learning).*\n\n\n*The authors would like to acknowledge Jonathan Frankle, Joel Lehman, and Sam Greydanus for combinations of helpful discussion and comments on early drafts of this work.* \n\n\n![](https://blog.uber-cdn.com/cdn-cgi/image/width=2160,quality=80,onerror=redirect,format=auto/wp-content/uploads/2022/08/wd_bc32cbfe-e1b8-4620-905c-bbd5b9f973b4-N_BTe9rnrG.jpg)![](https://blog.uber-cdn.com/cdn-cgi/image/width=2160,quality=80,onerror=redirect,format=auto/wp-content/uploads/2022/08/bg_10480343-172f-48cc-8cd8-4d127a3b73b8-WHGtisBV9M.jpeg)![](https://blog.uber-cdn.com/cdn-cgi/image/width=2160,quality=80,onerror=redirect,format=auto/wp-content/uploads/2022/08/image3-8-1024x1024.jpg)![](https://blog.uber-cdn.com/cdn-cgi/image/width=2160,quality=80,onerror=redirect,format=auto/wp-content/uploads/2022/08/jason-yosinski.jpg)Posted by Hattie Zhou, Janice Lan, Rosanne Liu, Jason Yosinski\n\nCategory: [Engineering](/en-PL/blog/engineering/), [AI](/en-PL/blog/ai/)", "url": "https://eng.uber.com/deconstructing-lottery-tickets/", "title": "Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask", "source": "html_articles", "source_type": "webpage", "source_filetype": "pdf", "date_published": "2019-05-05T22:00:00Z", "authors": ["Hattie Zhou"], "summary": [], "id": "6f399e222a82d2a14f9e785bb01eb19e"} {"text": "post\n 6 minute read\nICLR Safe ML Workshop Report\n============================\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/cropped-pagodas.jpg)PublishedJune 18, 
2019AuthorViktoriya Krakovna\n \nThis year the ICLR conference hosted topic-based workshops for the first time (as opposed to a single track for workshop papers), and I co-organized the [Safe ML workshop](https://sites.google.com/corp/view/safeml-iclr2019/home?authuser=0). One of the main goals was to bring together near and long term safety research communities.\n\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/near-long-term.png)The workshop was structured according to a taxonomy that incorporates both near and long term safety research into three areas — specification, robustness, and assurance.\n\n\n\n**Specification:** define the purpose of the system\n\n\n* Reward hacking\n* Side effects\n* Preference learning\n* Fairness\n\n\n\n\n**Robustness:** design system to withstand perturbations\n\n\n* Adaptation\n* Verification\n* Worst-case robustness\n* Safe exploration\n\n\n\n\n**Assurance:** monitor and control system activity\n\n\n* Interpretability\n* Monitoring\n* Privacy\n* Interruptibility\n\n\n\nWe had an invited talk and a contributed talk in each of the three areas.\n\n\n\nTalks\n-----\n\n\nIn the **specification** area, Dylan Hadfield-Menell spoke about [formalizing the value alignment problem](https://slideslive.com/38915783/formalizing-the-value-alignment-problem-in-ai) in the Inverse RL framework.\n\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/formalizing-value-alignment.png)\n\n\nDavid Krueger [presented](https://slideslive.com/38915784/misleading-metaobjectives-and-hidden-incentives-for-distributional-shift) a [paper](https://drive.google.com/uc?export=download&id=1k93292JCoIHU0h6xVO3qmeRwLyOSlS4o) on hidden incentives for the agent to shift its task distribution in the meta-learning setting.\n\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/distributional-shift.png)\n\n\nIn the **robustness** area, Ian Goodfellow argued for [dynamic defenses against adversarial examples](https://slideslive.com/38915790/the-case-for-dynamic-defenses-against-adversarial-examples) and encouraged the research community to consider threat models beyond small perturbations within a norm ball of the original data point.\n\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/threat-models.png)\n\n\nAvraham Ruderman [presented](https://slideslive.com/38915789/uncovering-surprising-behaviors-in-reinforcement-learning-via-worstcase-analysis) a [paper](https://drive.google.com/uc?export=download&id=1z_d1EjvKWlh2L39xxGXYDWP1wQwnlNLQ) on worst-case analysis for discovering surprising behaviors (e.g. 
failing to find the goal in simple mazes).\n\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/mazes.png)\n\n\nIn the **assurance** area, Cynthia Rudin argued that interpretability doesn’t have to trade off with accuracy (especially in applications), and that it is helpful for solving research problems in all areas of safety.\n\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/interpretability-taxonomy.png)\n\n\nBeomsu Kim [presented](https://slideslive.com/38915785/bridging-adversarial-robustness-and-gradient-interpretability) a [paper](https://drive.google.com/uc?export=download&id=1FUlKR07jf1VQb6M-4VVTxxu3oDkK0x-8) explaining why adversarial training improves the interpretability of gradients for deep neural networks.\n\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/gradient-interpretability.png)\n\n\n\nPanels\n------\n\n\nThe workshop panels discussed possible overlaps between different research areas in safety and research priorities going forward.\n\n\nIn terms of **overlaps**, the main takeaway was that advancing interpretability is useful for all safety problems. Also, adversarial robustness can contribute to value alignment – e.g. reward gaming behaviors can be viewed as a system finding adversarial examples for its reward function. However, there was a cautionary point that while near- and long-term problems are often similar, solutions might not transfer well between these areas (e.g. some solutions to near-term problems might not be sufficiently general to help with value alignment).\n\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/panel.jpg)\n\n\nThe **research priorities** panel recommended more work on adversarial examples with realistic threat models (as mentioned above), complex environments for testing value alignment (e.g. creating new structures in Minecraft without touching existing ones), fairness formalizations with more input from social scientists, and improving cybersecurity.\n\n\n\nPapers\n------\n\n\nOut of the 35 accepted papers, 5 were on long-term safety / value alignment, and the rest were on near-term safety. Half of the near-term paper submissions were on adversarial examples, so the resulting pool of accepted papers was skewed as well: 14 on adversarial examples, 5 on interpretability, 3 on safe RL, 3 on other robustness, 2 on fairness, 2 on verification, and 1 on privacy. Here is a summary of the value alignment papers:\n\n\n\n[Misleading meta-objectives and hidden incentives for distributional shift](https://drive.google.com/uc?export=download&id=1k93292JCoIHU0h6xVO3qmeRwLyOSlS4o) by Krueger et al shows that RL agents in a meta-learning context have an incentive to shift their task distribution instead of solving the intended task. For example, a household robot whose task is to predict whether its owner will want coffee could wake up its owner early in the morning to make this prediction task easier. This is called a ‘self-induced distributional shift’ (SIDS), and the incentive to do so is a ‘hidden incentive for distributional shift’ (HIDS). 
The paper demonstrates this behavior experimentally and shows how to avoid it.\n\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/coffee-robot.png)\n\n\n[How useful is quantilization for mitigating specification-gaming?](https://drive.google.com/uc?export=download&id=13qAfOm8McRvXS33MCNH0ia4ApMIClZP9) by Ryan Carey introduces variants of several classic environments (Mountain Car, Hopper and Video Pinball) where the observed reward differs from the true reward, creating an opportunity for the agent to game the specification of the observed reward. The paper shows that a quantilizing agent avoids specification gaming and performs better in terms of true reward than both imitation learning and a regular RL agent on all the environments.\n\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/spec-gaming.png)\n\n\n[Delegative Reinforcement Learning: learning to avoid traps with a little help](https://drive.google.com/uc?export=download&id=1xa7UpGGODl6mszNWkA4XQGPyeopsNuWu) by Vanessa Kosoy introduces an RL algorithm that avoids traps in the environment (states where regret is linear) by delegating some actions to an external advisor, and achieves sublinear regret in a continual learning setting. (Summarized in [Alignment Newsletter #57](http://eepurl.com/gtZ8DD))\n\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/trap.gif)\n\n\n[Generalizing from a few environments in safety-critical reinforcement learning](https://drive.google.com/uc?export=download&id=1Q_lGKZzJwc7h2f8oYgZDBAmRttMBkgCk) by Kenton et al investigates how well RL agents avoid catastrophes in new gridworld environments depending on the number of training environments. They find that both model ensembling and learning a catastrophe classifier (used to block actions) are helpful for avoiding catastrophes, with different safety-performance tradeoffs on new environments.\n\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/dist-shift-catastrophes.png)\n\n\n[Regulatory markets for AI safety](https://drive.google.com/uc?export=download&id=1bFPiwLrZc7SQTMg2_bW4gt0PaS5NyqOH) by Clark and Hadfield proposes a new model for regulating AI development where regulation targets are required to choose regulatory services from a private market that is overseen by the government. This allows regulation to efficiently operate on a global scale and keep up with the pace of technological development and better ensure safe deployment of AI systems. (Summarized in [Alignment Newsletter #55](http://eepurl.com/gp5MFP))\n\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/regulatory-markets.png)\n\n\nThe workshop got a pretty good turnout (around 100 people). 
Thanks everyone for participating, and thanks to our reviewers, sponsors, and my fellow organizers for making it happen!\n\n\n![](https://futureoflife.org/wp-content/uploads/2019/06/audience-1250x630.jpg)\n\n\n*(Cross-posted from the [Deep Safety blog](https://vkrakovna.wordpress.com/2019/06/18/iclr-safe-ml-workshop-report/).)*", "url": "https://futureoflife.org/2019/06/18/iclr-safe-ml-workshop-report/", "title": "ICLR Safe ML Workshop Report", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2019-06-17T22:00:00Z", "authors": ["Victoria Krakovna"], "summary": [], "id": "f8f2c637668179d527136ef971b4adf2"} {"text": "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter\n=====================================================================================\n\nThere is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence.
Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.Signatures11251PublishedOctober 28, 2015\n \nClick here to see this page in other languages: **[Chinese](https://futureoflife.org/ai-open-letter-chinese/)  [![](https://futureoflife.org/wp-content/uploads/2016/06/china_flag-e1464798319604.png)](https://futureoflife.org/ai-open-letter-chinese/) [German](https://futureoflife.org/ai-open-letter-german/)[![](https://futureoflife.org/wp-content/uploads/2016/06/Germany_flag.jpg?x57718)](https://futureoflife.org/ai-open-letter-german/) [Japanese](https://futureoflife.org/ai-open-letter-japanese/) [![](https://futureoflife.org/wp-content/uploads/2016/02/red_circle-1.jpg)](https://futureoflife.org/ai-open-letter-japanese/) [Russian](https://futureoflife.org/ai-open-letter-russian/) [![](https://futureoflife.org/wp-content/uploads/2016/02/Russian_Flag.jpg?x56934)](https://futureoflife.org/ai-open-letter-russian/)**\n\n\nArtificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents - systems that perceive and act in some environment. In this context, \"intelligence\" is related to statistical and economic notions of rationality - colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.\n\n\nAs capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.\n\n\nThe progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. 
The attached [research priorities document](https://futureoflife.org/static/data/documents/research_priorities.pdf) gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.\n\n\nIn summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.\n\n\n*If you have questions about this letter, please contact [Max Tegmark](mailto:max@futureoflife.org).*\n\n\nSignatories\n-----------\n\n[Click here](https://futureoflife.org/open-letter/ai-open-letter-signatories/) to view the full list of signatories.\n\n\nTo date, the open letter has been signed by over 8,000 people. The list of signatories includes:\n\n\n### Prominent Signatories\n\n\n**Stuart Russell**, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: a Modern Approach.\n\n\n**Tom Dietterich**, Oregon State, President of AAAI, Professor and Director of Intelligent Systems\n\n\n**Eric Horvitz**, Microsoft research director, ex AAAI president, co-chair of the AAAI presidential panel on long-term AI futures\n\n\n**Bart Selman**, Cornell, Professor of Computer Science, co-chair of the AAAI presidential panel on long-term AI futures\n\n\n**Francesca Rossi**, Padova & Harvard, Professor of Computer Science, IJCAI President and Co-chair of AAAI committee on impact of AI and Ethical Issues\n\n\n**Demis Hassabis**, co-founder of DeepMind\n\n\n**Shane Legg**, co-founder of DeepMind\n\n\n**Mustafa Suleyman**, co-founder of DeepMind\n\n\n**Dileep George**, co-founder of Vicarious\n\n\n**Scott Phoenix**, co-founder of Vicarious\n\n\n**Yann LeCun**, head of Facebook’s Artificial Intelligence Laboratory\n\n\n**Geoffrey Hinton**, University of Toronto and Google Inc.\n\n\n**Yoshua Bengio**, Université de Montréal\n\n\n**Peter Norvig**, Director of research at Google and co-author of the standard textbook Artificial Intelligence: a Modern Approach\n\n\n**Oren Etzioni**, CEO of Allen Inst. for AI\n\n\n**Guruduth Banavar**, VP, Cognitive Computing, IBM Research\n\n\n**Michael Wooldridge**, Oxford, Head of Dept. of Computer Science, Chair of European Coordinating Committee for Artificial Intelligence\n\n\n**Leslie Pack Kaelbling**, MIT, Professor of Computer Science and Engineering, founder of the Journal of Machine Learning Research\n\n\n**Tom Mitchell**, CMU, former President of AAAI, chair of Machine Learning Department\n\n\n**Toby Walsh**, Univ. 
of New South Wales & NICTA, Professor of AI and President of the AI Access Foundation\n\n\n**Murray Shanahan**, Imperial College, Professor of Cognitive Robotics\n\n\n**Michael Osborne**, Oxford, Associate Professor of Machine Learning\n\n\n**David Parkes**, Harvard, Professor of Computer Science\n\n\n**Laurent Orseau**, Google DeepMind\n\n\n**Ilya Sutskever**, Google, AI researcher\n\n\n**Blaise Aguera y Arcas**, Google, AI researcher\n\n\n**Joscha Bach**, MIT, AI researcher\n\n\n**Bill Hibbard**, Madison, AI researcher\n\n\n**Steve Omohundro**, AI researcher\n\n\n**Ben Goertzel**, OpenCog Foundation\n\n\n**Richard Mallah**, Cambridge Semantics, Director of Advanced Analytics, AI researcher\n\n\n**Alexander Wissner-Gross**, Harvard, Fellow at the Institute for Applied Computational Science\n\n\n**Adrian Weller**, Cambridge, AI researcher\n\n\n**Jacob Steinhardt**, Stanford, AI Ph.D. student\n\n\n**Nick Hay**, Berkeley, AI Ph.D. student\n\n\n**Jaan Tallinn**, co-founder of Skype, CSER and FLI\n\n\n**Elon Musk**, SpaceX, Tesla Motors\n\n\n**Steve Wozniak**, co-founder of Apple\n\n\n**Luke Nosek**, Founders Fund\n\n\n**Aaron VanDevender**, Founders Fund\n\n\n**Erik Brynjolfsson**, MIT, Professor at and director of MIT Initiative on the Digital Economy\n\n\n**Margaret Boden**, U. Sussex, Professor of Cognitive Science\n\n\n**Martin Rees**, Cambridge, Professor Emeritus of Cosmology and Astrophysics, Gruber & Crafoord laureate\n\n\n**Huw Price**, Cambridge, Bertrand Russell Professor of Philosophy\n\n\n**Nick Bostrom**, Oxford, Professor of Philosophy, Director of Future of Humanity Institute (Oxford Martin School)\n\n\n**Stephen Hawking**, Director of research at the Department of Applied Mathematics and Theoretical Physics at Cambridge, 2012 Fundamental Physics Prize laureate for his work on quantum gravity\n\n\n**Luke Muehlhauser**, Executive Director of Machine Intelligence Research Institute (MIRI)\n\n\n**Eliezer Yudkowsky**, MIRI researcher, co-founder of MIRI (then known as SIAI)\n\n\n**Katja Grace**, MIRI researcher\n\n\n**Benja Fallenstein**, MIRI researcher\n\n\n**Nate Soares**, MIRI researcher\n\n\n**Paul Christiano**, Berkeley, Computer Science graduate student\n\n\n**Anders Sandberg**, Oxford, Future of Humanity Institute researcher (Oxford Martin School)\n\n\n**Daniel Dewey**, Oxford, Future of Humanity Institute researcher (Oxford Martin School)\n\n\n**Stuart Armstrong**, Oxford, Future of Humanity Institute researcher (Oxford Martin School)\n\n\n**Toby Ord**, Oxford, Future of Humanity Institute researcher (Oxford Martin School), Founder of Giving What We Can\n\n\n**Neil Jacobstein**, Singularity University\n\n\n**Dominik Grewe**, Google DeepMind\n\n\n**Roman V. Yampolskiy**, University of Louisville\n\n\n**Vincent C. Müller**, ACT/Anatolia College\n\n\n**Amnon H Eden**, University Essex\n\n\n**Henry Kautz**, University of Rochester\n\n\n**Boris Debic**, Google, Chief History Officer\n\n\n**Kevin Leyton-Brown**, University of British Columbia, Professor of Computer Science\n\n\n**Trevor Back**, Google DeepMind\n\n\n**Moshe Vardi**, Rice University, editor-in-chief of Communications of the ACM\n\n\n**Peter Sincak**, prof. TU Kosice, Slovakia\n\n\n**Tom Schaul**, Google DeepMind\n\n\n**Grady Booch**, IBM Fellow\n\n\n**Alan Mackworth**, Professor of Computer Science, University of British Columbia. 
Ex AAAI President\n\n\n**Andrew Davison**, Professor of Robot Vision, Director of the Dyson Robotics Lab at Imperial College London\n\n\n**Daniel Weld**, WRF / TJ Cable Professor of Computer Science & Engineering, University of Washington\n\n\n**Michael Witbrock**, Cycorp Inc & AI4Good.org\n\n\n**Stephen L. Reed**, ai-coin.com\n\n\n**Thomas Stone**, Co-founder of PredictionIO\n\n\n**Dan Roth**, University of Illinois, Editor in Chief of The Journal of AI Research (JAIR)\n\n\n**Babak Hodjat**, Sentient Technologies\n\n\n**Vincent Vanhoucke**, Google, AI researcher\n\n\n**Itamar Arel**, Stanford University, Prof. of Computer Science\n\n\n**Ramon Lopez de Mantaras**, Director of the Artificial Intelligence Research Institute, Spanish National Research Council\n\n\n**Antoine Blondeau**, Sentient Technologies\n\n\n**George Dvorsky**, Contributing Editor, io9; Chair of the Board, Institute for Ethics and Emerging Technologies\n\n\n**George Church**, Harvard & MIT\n\n\n**Klaus-Dieter Althoff**, University of Hildesheim, Professor of Artificial Intelligence; Head of Competence Center Case-Based Reasoning, German Research Center for Artificial Intelligence, Kaiserslautern; Editor-in-Chief German Journal on Artificial Intelligence\n\n\n**Christopher Bishop**, Distinguished Scientist, Microsoft Research\n\n\n**Jen-Hsun Huang**, NVIDIA CEO\n\n\n \n[Close](http://)### How does verification work?\n\nVerified signatures are those which we have taken one or more extra steps to confirm as legitimate: \n• **Direct contact** - We have been in direct contact with this person to verify that they have signed the letter. \n• **Declaration URL** - This person has made a public declaration of signing the open letter which can be viewed online. \nAll published signatures, ‘verified’ or otherwise, are subject to several forms of verification: email verification, spam and duplicate filters, and a review by a member of our data vetting team.\n\nOPEN LETTERSRelated posts\n-------------\n\nIf you enjoyed this, you also might like:[Our Open Letters](https://futureoflife.org/fli-open-letters/)Signatories31810#### [Pause Giant AI Experiments: An Open Letter](https://futureoflife.org/open-letter/pause-giant-ai-experiments/)\n\nWe call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.Taylor JonesMarch 22, 2023Signatories998#### [Open Letter Against Reckless Nuclear Escalation and Use](https://futureoflife.org/open-letter/open-letter-against-reckless-nuclear-escalation-and-use/)\n\nThe abhorrent Ukraine war has the potential to escalate into an all-out NATO-Russia nuclear conflict that would be the greatest catastrophe in human history. More must be done to prevent such escalation.Taylor JonesOctober 18, 2022SignatoriesClosed#### [Foresight in AI Regulation Open Letter](https://futureoflife.org/open-letter/foresight-in-ai-regulation-open-letter/)\n\nThe emergence of artificial intelligence (AI) promises dramatic changes in our economic and social structures as well as everyday life […]Anna YelizarovaJune 14, 2020Signatories276#### [Autonomous Weapons Open Letter: Global Health Community](https://futureoflife.org/open-letter/medical-lethal-autonomous-weapons-open-letter/)\n\nGiven our commitment to do no harm, the global health community has a long history of successful advocacy against inhumane weapons, and the World and American Medical Associations have called for bans on nuclear, chemical and biological weapons. 
Now, recent advances in artificial intelligence have brought us to the brink of a new arms race in lethal autonomous weapons.adminMarch 13, 2019", "url": "https://futureoflife.org/open-letter/ai-open-letter/", "title": "Research priorities for robust and beneficial artificial intelligence: an open letter", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2014-12-31T23:00:00Z", "authors": ["Stuart Russell", "Daniel Dewey", "Max Tegmark"], "summary": [], "id": "a92edd19a8d6f2de93997074e4b0f1a3"} {"text": "![](http://gcrinstitute.org/wp-content/uploads/2021/01/AGI-2020-map-1024x508.png)\n[View the paper “2020 Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy”](https://gcrinstitute.org/papers/055_agi-2020.pdf)\n\n\nIn 2017, GCRI published [the first-ever survey](https://gcrinstitute.org/a-survey-of-artificial-general-intelligence-projects-for-ethics-risk-and-policy/) of artificial general intelligence (AGI) research and development (R&D) projects for ethics, risk, and policy. This paper updates the 2017 survey. The 2020 survey features improved methodology, enabling it to find more projects than the 2017 survey and characterize them more precisely. The 2020 survey also evaluates how the landscape of AGI R&D projects has changed from 2017 to 2020.\n\n\nAGI is AI that can reason across a wide range of domains. Most current AI R&D is narrow, but as the 2017 and 2020 surveys both document, there is a significant amount of dedicated AGI R&D. AGI is important because, if built, it could have major consequences. Depending on how it is designed and built, it may be able to help solve many of the world’s problems, including problems involving global catastrophic risk, or it could itself cause global catastrophe. Therefore, it is important to monitor AGI R&D and identify opportunities to orient it in better directions.\n\n\nThe 2017 and 2020 surveys characterize AGI R&D projects in terms of seven attributes:  \n\n • The type of institution the project is based in \n\n • Whether the project publishes open-source code \n\n • Whether the project has military connections \n\n • The nation(s) that the project is based in \n\n • The project’s goals for its AGI \n\n • The extent of the project’s engagement with AGI safety issues \n\n • The overall size of the project\n\n\nTo accomplish this, the surveys use openly published information as found in scholarly publications, project websites, popular media articles, and other websites. The 2020 survey uses information from the 2017 survey as well as the past three years of the *Journal of Artificial General Intelligence*, the past three years of AGI conference proceedings (the 2017 survey covered prior content from the *Journal of Artificial General Intelligence* and the AGI conference proceedings), keyword searches in Google web search, Google Scholar, Crunchbase, GitHub, the authors’ prior knowledge, suggestions from readers of the 2017 survey, and additional literature and webpages identified via all of the above. The use of Crunchbase, GitHub, and reader suggestions is new to the 2020 survey.\n\n\nWhereas the 2017 survey identified 45 AGI R&D projects spread across 30 countries, the 2020 survey finds that, in 2020, there are 72 AGI R&D projects spread across 37 countries. The 2020 further finds that in 2017, there were 70 AGI R&D projects spread across 36 countries. 57 of the projects active in 2017 remain active in 2020, with an additional 15 projects new to 2020. 
The projects vary widely in size, with the largest being over 100 times larger than the smallest as measured in terms of the number of project personnel.\n\n\nRelative to the 2017 survey, the AGI R&D projects presented in the 2020 survey tend to be smaller, more geographically diverse, less open-source, less focused on intellectual goals, more focused on humanitarian goals, and more concentrated in private corporations. \n\n\nThe 2020 survey also finds that, from 2017 to 2020, there has been a decrease in academic projects, an increase in private corporation projects, an increase in projects stating humanitarian goals, a decrease in projects with military connections, and a decrease in projects based in the United States (though the US remains the dominant country in AGI R&D); all of these changes are relatively small compared to the differences between the 2017 and 2020 surveys.\n\n\nThe projects active in 2020 are diverse, with three major clusters: (1) corporate projects that are active on AGI safety and state that their goals are to benefit humanity, (2) academic projects that are not active on AGI safety and state that their goals are to advance the forefront of knowledge, and (3) small private corporations that are not active on AGI safety and state a range of different goals. Governments and nonprofits play relatively minor roles in AGI R&D. The 2020 survey continues to observe an absence of large government AGI R&D projects, including military projects. The small handful of projects with military connections mostly involve basic research. The data show no indication of militaries or other government divisions pursuing AGI R&D for major strategic purposes.\n\n\nThe data suggest the following conclusions:\n\n\n**Regarding ethics,** the two most common goals are to benefit humanity and to advance knowledge. This is the same as in the 2017 survey, but in the 2020 survey, the order is reversed, with there now being more projects seeking to benefit humanity. The 2020 survey also finds a large increase in the number of corporate projects. These projects seldom state a goal of pursuing profit, but they may nonetheless have profit as a motivation.\n\n\n**Regarding risk,** the proliferation of corporate projects relative to the 2017 survey heightens the concern that these projects could put profit ahead of safety and the public interest. Additionally, academic projects remain relatively inattentive to safety. On the other hand, many projects are active on safety. Additionally, the partial consensus on ethics, the concentration of projects in the US and its allies, and the various interconnections between different projects all suggest potential for cooperation on safety issues; these matters are unchanged from the 2017 survey.\n\n\n**Regarding policy,** the proliferation of corporate projects suggests an important role for corporate governance and attention to the political economy of AGI R&D. The modest decline of academic projects suggests a smaller but still significant role for academic research policy. Additionally, as in the 2017 survey, international policy is facilitated by the concentration of projects in the US and its allies, though the preponderance of projects with open-source code complicates the political geography of AGI R&D. 
Finally, the absence of large government AGI R&D projects suggests that governments may be involved in AGI R&D primarily as regulators of private-sector R&D instead of as drivers of the R&D.\n\n\nAs with the 2017 survey, the 2020 survey has some limitations, meaning that the actual state of AGI R&D may differ from what is presented in the surveys. This is due to the fact that the surveys are based exclusively on openly published information. It is possible that some AGI R&D projects were missed by the surveys. Indeed, the 2020 survey documents many projects that were missed by the 2017 survey. Therefore, the number of projects identified in the 2020 survey should be taken as a lower bound. Furthermore, it is possible that projects’ actual attributes differ from those found in openly published information. For example, most corporate projects did not state the goal of profit, even though many presumably seek profit. Therefore, this study’s results should not be assumed to necessarily reflect the actual current state of AGI R&D. That said, the study nonetheless provides the most thorough description yet of AGI R&D in terms of ethics, risk, and policy.\n\n\nThis paper was recently covered in the media article [What Is AGI?](https://www.fierceelectronics.com/sensors/what-agi).\n\n\nAcademic citation: \nFitzgerald, McKenna, Aaron Boddy, and Seth D. Baum, 2020. [2020 survey of artificial general intelligence projects for ethics, risk, and policy](https://gcrinstitute.org/papers/055_agi-2020.pdf). Global Catastrophic Risk Institute Technical Report 20-1.\n\n\n\n\n Tagged with [artificial general intelligence](https://gcrinstitute.org/tag/artificial-general-intelligence/)", "url": "https://gcrinstitute.org/2020-survey-of-artificial-general-intelligence-projects-for-ethics-risk-and-policy/", "title": "2020 Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy | Global Catastrophic Risk Institute", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-12-30T23:00:00Z", "authors": ["Seth Baum"], "summary": [], "id": "c8fdc377323e2561846f812891112aea"} {"text": "![](http://gcrinstitute.org/wp-content/uploads/2021/06/ComputerLeaf_Buckyball_2021-06-04.jpg)\n[View the paper “Artificial Intelligence Needs Environmental Ethics”](https://gcrinstitute.org/papers/059_ai-environmental-ethics.pdf)\n\n\nArtificial intelligence is an interdisciplinary topic. As such, it benefits from contributions from a wide range of disciplines. This short paper calls for greater contributions from the discipline of environmental ethics and presents several types of contributions that environmental ethicists can make.\n\n\nFirst, environmental ethicists can raise the profile of the environmental dimensions of AI. For example, discussions of the ethics of autonomous vehicles have thus far focused mainly on “trolley problem” scenarios in which the vehicle must decide whom to harm in hypothetical crash scenarios. Less attention has been paid to the environmental impacts of autonomous vehicles, even though the environmental impacts are arguably much more important. Environmental ethicists could make the moral case for attention to this and to other environmental issues involving AI.\n\n\nSecond, environmental ethicists can help analyze novel ethical situations involving AI. These situations specifically involve artificial versions of phenomena that have long been studied in environmental ethics research. 
For example, AI and related technology could result in the creation of artificial life and artificial ecosystems. Environmental ethicists have often argued that “natural” life and ecosystems have important moral value. Perhaps the same reasoning would apply to artificial life and ecosystems, or perhaps it would not due to their artificiality. Environmental ethicists can help evaluate these sorts of novel ethical issues. Such work is especially important because existing work on AI ethics has focused more narrowly on issues centered on humans; environmental ethicists can help make the case for a broader scope.\n\n\nThird, environmental ethicists can provide valuable perspectives on the future-orientation of certain AI issues. Within the communities of people working on AI issues, there is a divide between people focused on near-term AI issues and people focused on long-term AI issues. Global catastrophic risks associated with AI are often linked to the long-term AI issues. Similar debates exist on environmental issues, due to the long-term nature of major environmental issues such as global warming, natural resource depletion, and biodiversity loss. Environmental ethicists have made considerable progress on the ethics of the future, which can be applied to debates about AI.\n\n\nThe paper builds on prior GCRI research and experience as environmental ethicists working on AI. [Moral consideration of nonhumans in the ethics of artificial intelligence](https://gcrinstitute.org/moral-consideration-of-nonhumans-in-the-ethics-of-artificial-intelligence) documents the tendency for work on AI ethics to focus on humans and calls for more robust attention to nonhumans. [Reconciliation between factions focused on near-term and long-term artificial intelligence](https://gcrinstitute.org/reconciliation-between-factions-focused-on-near-term-and-long-term-artificial-intelligence) describes the debate between those favoring attention to near-term AI issues and those favoring attention to long-term AI issues. [Artificial intelligence, systemic risks, and sustainability](https://gcrinstitute.org/artificial-intelligence-systemic-risks-and-sustainability) analyzes risks associated with near-term applications of AI in sectors related to environmental sustainability such as agriculture and forestry.\n\n\nAcademic citation: \nBaum, Seth D. and Andrea Owe, forthcoming. [Artificial intelligence needs environmental ethics](https://gcrinstitute.org/papers/059_ai-environmental-ethics.pdf). 
*Ethics, Policy, & Environment*, [DOI 10.1080/21550085.2022.2076538](https://doi.org/10.1080/21550085.2022.2076538).\n\n\n\n\n Tagged with [artificial intelligence](https://gcrinstitute.org/tag/artificial-intelligence/), [ethics](https://gcrinstitute.org/tag/ethics/)", "url": "https://gcrinstitute.org/artificial-intelligence-needs-environmental-ethics/", "title": "Artificial Intelligence Needs Environmental Ethics | Global Catastrophic Risk Institute", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2021-11-15T23:00:00Z", "authors": ["Seth Baum"], "summary": [], "id": "ce4499d1248a8ad43cd370651795b79d"} {"text": "![](http://gcrinstitute.org/wp-content/uploads/2021/07/Motion-earth3d-1_Scr-4-1024x576.jpg)\n[View the paper “Collective Action on Artificial Intelligence: A Primer and Review”](https://gcrinstitute.org/papers/058_collective-action.pdf)\n\n\nThe development of safe and socially beneficial artificial intelligence (AI) will require collective action: outcomes will depend on the actions that many different people take. In recent years, a sizable but disparate literature has looked at the challenges posed by collective action on AI, but this literature is generally not well grounded in the broader social science literature on collective action. This paper advances the study of collective action on AI by providing a primer on the topic and a review of existing literature. It is intended to get an interdisciplinary readership up to speed on the topic, including social scientists, computer scientists, policy analysts, government officials, and other interested people.\n\n\nThe primer describes the theory of collective action and relates it to different types of AI collective action situations. A primary distinction is between situations in which individual and collective interests diverge, as in the [prisoner's dilemma](https://en.wikipedia.org/wiki/Prisoner's_dilemma) or adversarial AI competition, and in which they converge, as in [coordination problems](https://en.wikipedia.org/wiki/Coordination_game) such as establishing common platforms for AI. In general, collective action is easier to achieve when interests converge, because when they diverge, individual actors’ pursuit of their own self-interest can lead to outcomes that are worse for the group as a whole. The primer also explains how AI collective action situations depend both on whether the goods involved are [excludable](https://en.wikipedia.org/wiki/Excludability) or [rivalrous](https://en.wikipedia.org/wiki/Rivalry_(economics)) and on whether they hinge on the action of a single actor or on some combination of actors.\n\n\nOne major focus of the AI collective action literature identified in this paper is potentially dangerous AI race scenarios. AI races are not necessarily dangerous and might even hasten the arrival of socially beneficial forms of AI, but they could be dangerous if individual actors’ interest in developing AI quickly diverges from the collective interest in ensuring that AI is safe and socially beneficial. The paper looks at both near-term and long-term AI races. The literature identified in this paper looks in particular at near-term races to develop military applications and at long-term AI races to develop advanced forms of AI such as artificial general intelligence and superintelligence.
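To make the diverging-interests case concrete before turning to how these races interact, here is a toy example of our own (not from the paper): a two-developer “invest in safety vs. rush” game with invented payoffs that give it a prisoner's-dilemma structure, where each developer's best response is to rush even though both would be better off investing in safety.

```python
# Payoffs (row player, column player) for two hypothetical AI developers choosing
# whether to invest in safety or rush development. The numbers are made up purely
# to give the game a prisoner's-dilemma structure.
PAYOFFS = {
    ("safety", "safety"): (3, 3),   # both careful: best collective outcome
    ("safety", "rush"):   (0, 4),   # the careful developer falls behind
    ("rush",   "safety"): (4, 0),
    ("rush",   "rush"):   (1, 1),   # both cut corners: worst collective outcome
}
ACTIONS = ("safety", "rush")

def best_response(opponent_action, player):
    """Return the action that maximizes this player's payoff against a fixed opponent."""
    def payoff(action):
        profile = (action, opponent_action) if player == 0 else (opponent_action, action)
        return PAYOFFS[profile][player]
    return max(ACTIONS, key=payoff)

# Pure-strategy Nash equilibria: profiles where each action is a best response.
equilibria = [
    (a0, a1)
    for a0 in ACTIONS
    for a1 in ACTIONS
    if a0 == best_response(a1, 0) and a1 == best_response(a0, 1)
]
print("equilibria:", equilibria)                       # [('rush', 'rush')]
print("joint payoff there:", PAYOFFS[("rush", "rush")])
print("joint payoff if both choose safety:", PAYOFFS[("safety", "safety")])
```

With payoffs that instead reward matching choices, the same search would surface multiple equilibria, which corresponds to the coordination-problem case where individual and collective interests converge.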
The two types of races are potentially related, since near-term races could affect the long-term development of AI.\n\n\nFinally, the paper evaluates different\ntypes of potential solutions to collective action problems. The collective\naction literature identifies three major types of solution: government\nregulation, private markets, and community self-organizing. All three types of\nsolution can advance AI collective action, but no single type is likely to\naddress the entire range of AI collective action problems. Instead of looking\nfor narrow, silver-bullet solutions, it may be better to pursue a mix of solutions that\naddress AI collective action issues in different ways and at different scales.\nGovernance regimes should also account for other factors that could affect\ncollective action, such as the extent to which AI developers are transparent\nabout their technology.\n\n\nAI collective action issues are\nincreasingly pressing. Collective action will be necessary to ensure that AI\nserves the public interest rather than just the narrow private interests of\nthose who develop it. Collective action will also be necessary to ensure that\nAI is developed with adequate safety measures and risk management protocols. Further\nwork could provide more detailed analysis and support practical progress on AI\ncollective action issues. \n\n\nThis paper has also been [summarized](https://montrealethics.ai/collective-action-on-artificial-intelligence-a-primer-and-review/) in the [AI Ethics Brief #71](https://brief.montrealethics.ai/p/beginners-guide-collective-action-accountability?token=eyJ1c2VyX2lkIjo3OTA2OTI2LCJwb3N0X2lkIjo0MDc1ODcxNCwiXyI6IkVDTlJPIiwiaWF0IjoxNjMwNTI4MjIyLCJleHAiOjE2MzA1MzE4MjIsImlzcyI6InB1Yi0yOTk5OSIsInN1YiI6InBvc3QtcmVhY3Rpb24ifQ.6tyOCFcObM3E0BJQMQKRmDnFcdqCTZcVPMphh_pPlj8) of the Montreal AI Ethics Institute.\n\n\nThis paper extends GCRI’s interdisciplinary [research on AI](https://gcrinstitute.org/ai/). It builds on GCRI’s prior work on the governance of AI, particularly the papers [On the promotion of safe and socially beneficial artificial intelligence](https://gcrinstitute.org/on-the-promotion-of-safe-and-socially-beneficial-artificial-intelligence/) and [Lessons for artificial intelligence from other global risks](https://gcrinstitute.org/lessons-for-artificial-intelligence-from-other-global-risks/).\n\n\nAcademic citation: \nde Neufville, Robert and Seth D. Baum, 2021. [Collective action on artificial intelligence: A primer and review](https://gcrinstitute.org/papers/058_collective-action.pdf). *Technology in Society*, vol. 
66, (August), article 101649, [DOI 10.1016/j.techsoc.2021.101649.](https://doi.org/10.1016/j.techsoc.2021.101649)\n\n\n*Image credit: [Volodymyr Goinyk](https://creativemarket.com/Goinyk)*\n\n\n\n\n Tagged with [artificial intelligence](https://gcrinstitute.org/tag/artificial-intelligence/)", "url": "https://gcrinstitute.org/collective-action-on-artificial-intelligence-a-primer-and-review/", "title": "Collective Action on Artificial Intelligence: A Primer and Review | Global Catastrophic Risk Institute", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2021-07-14T22:00:00Z", "authors": ["Robert de Neufville"], "summary": [], "id": "a9b44cca641b3692162ddfba47e859fb"} {"text": "![](http://gcrinstitute.org/wp-content/uploads/2020/07/UN-Patrick-Gruban.jpg)\n[View the paper “Minimizing Global Catastrophic and Existential Risks from Emerging Technologies Through International Law”](https://gcrinstitute.org/papers/006_international-law.pdf)\n\n\nMankind is rapidly developing “emerging technologies” in the fields of bioengineering, nanotechnology, and artificial intelligence that have the potential to solve humanity’s biggest problems, such as curing all disease, extending human life, or mitigating massive environmental problems like climate change. However, if these emerging technologies are misused or have an unintended negative effect, the consequences could be enormous, potentially resulting in serious, global damage to humans (known as “global catastrophic harm”) or severe, permanent damage to the Earth—including, possibly, human extinction (known as “existential harm”). The chances of a global catastrophic risk or existential risk actually materializing are relatively low, but mankind should be careful when a losing gamble means massive human death and irreversible harm to the planet. While international law has become an important source of global regulation for other global risks like climate change and biodiversity loss, emerging technologies do not fall neatly within existing international regimes, and thus any country is more or less free to develop these potentially dangerous technologies without practical safeguards that would curtail the risk of a catastrophic event. In light of these problems, this paper serves to discuss the risks associated with bioengineering, nanotechnology, and artificial intelligence; review the potential of existing international law to regulate these emerging technologies; and propose an international regulatory regime that would put the international world in charge of ensuring that low-probability, high-risk disasters never materialize.\n\n\nAcademic citation: \nWilson, Grant S., 2013. [Minimizing global catastrophic and existential risks from emerging technologies through international law.](https://gcrinstitute.org/papers/006_international-law.pdf) *Virginia Environmental Law Journal*, vol. 31, no. 2, pages 307-364. 
\n\n\n*Image credit: [Patrick Gruban](https://www.flickr.com/photos/19473388@N00/336920038)*\n\n\n\n\n---\n\n\n*This blog post was published on 28 July 2020 as part of a website overhaul and backdated to reflect the time of the publication of the work referenced here.*\n\n\n\n\n Tagged with [emerging technologies](https://gcrinstitute.org/tag/emerging-technologies/), [international law](https://gcrinstitute.org/tag/international-law/)", "url": "https://gcrinstitute.org/minimizing-global-catastrophic-and-existential-risks-from-emerging-technologies-through-international-law/", "title": "Minimizing global catastrophic and existential risks from emerging technologies through international law", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2012-12-31T23:00:00Z", "authors": ["Grant Wilson"], "summary": [], "id": "69e4aec6e8b4513b7a94eaf8ae7b81db"} {"text": "![](http://gcrinstitute.org/wp-content/uploads/2021/06/ComputerLeaf_Buckyball_2021-06-04.jpg)\n[View the paper “Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence”](https://link.springer.com/article/10.1007/s43681-021-00065-0)\n\n\nIn the ethics of artificial intelligence, a major theme is the challenge of aligning AI to human values. This raises the question of the role of nonhumans. Indeed, AI can profoundly affect the nonhuman world, including nonhuman animals, the natural environment, and the AI itself. Given that large parts of the nonhuman world are already under immense threats from human affairs, there is reason to fear potentially catastrophic consequences should AI R&D fail to account for nonhumans, for example with AI systems for industrial and commercial infrastructure, or future artificial general intelligence (AGI). This paper documents the state of attention to nonhumans within the field of AI ethics and presents an argument for giving nonhumans adequate attention.\n\n\nThe paper specifically examines\nthe extent to which nonhumans are given moral consideration in AI ethics, and\nthe extent to which they should be. Moral consideration of nonhumans means\nactively valuing nonhumans for their own sake—in philosophy terms,\nintrinsically valuing them. Unfortunately, the paper finds that most work in AI\nethics ignore nonhumans, or value nonhumans only for the effects they have on\nhumans. This leaves apt opportunity for the development and use of AI that\nadversely impacts nonhumans.\n\n\nThe paper documents moral consideration of nonhumans in academic AI ethics research, statements of AI ethics principles, [AGI R&D projects](https://gcrinstitute.org/2020-survey-of-artificial-general-intelligence-projects-for-ethics-risk-and-policy/), and select initiatives to design, build, apply, and govern AI. 
Aside from a line of research on the moral status of AI, the field of AI ethics generally fails to give moral consideration to nonhumans: The paper finds no attention to nonhumans in 76 of 84 sets of AI ethics principles surveyed by [Jobin et al.](https://www.nature.com/articles/s42256-019-0088-2), 40 of 45 AGI R&D projects surveyed by [Baum](https://gcrinstitute.org/a-survey-of-artificial-general-intelligence-projects-for-ethics-risk-and-policy/), 38 of 44 chapters in the [*Oxford Handbook of Ethics of AI*](https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780190067397.001.0001/oxfordhb-9780190067397), and 13 of 17 chapters in the anthology [*Ethics of Artificial Intelligence*](https://global.oup.com/academic/product/ethics-of-artificial-intelligence-9780190905033). In the latter two, any dedicated attention is on the moral status of AI. No other types of nonhumans are given dedicated attention.\n\n\nMore could be done. The [Microsoft\nAI for Earth](https://www.microsoft.com/en-us/ai/ai-for-earth) program is a good example of AI used in ways that benefit\nnonhumans. It supports several programs for environmental protection and\nbiodiversity conservation that give explicit moral consideration to nonhumans,\nincluding [Wild\nMe](https://news.microsoft.com/2018/06/14/wild-me-joins-ai-for-earth/),\n[eMammal](https://www.microsoft.com/en-us/ai/ai-for-earth-emammal), [NatureServe](https://www.microsoft.com/en-us/ai/ai-for-earth-natureserve), and [Zamba\nCloud](https://www.microsoft.com/en-us/ai/ai-for-earth-zamba-cloud).\nOther AI groups could run similar programs. Within AI ethics research, the\npaper outlines ideas for nonhuman algorithmic bias, such as by applying [ecolinguistics](https://en.wikipedia.org/wiki/Ecolinguistics) to biases in\nnatural language processing. While algorithmic bias is a major topic in AI\nethics, other literature has focused on social biases, but research in\necolinguistics show that English—the primary language for AI system design—contains\nbiases in favor of humans over nonhumans. \n\n\nGiven the limited moral\nconsideration of nonhumans in the current field of AI ethics, the paper argues\nfor more consistent and extensive moral consideration of nonhumans. The\nargument draws on concepts of ontological and ethical anthropocentrism as\ndeveloped in environmental ethics. Humans are members of the animal kingdom and\npart of nature, and there is no sound basis for restricting moral consideration\nexclusively to humans. There are important questions of how much moral\nconsideration to give to nonhumans relative to humans. The paper sets these\nquestions aside to call for a more basic improvement in moral consideration to\nnonhumans across AI ethics.\n\n\nThis paper extends GCRI’s interdisciplinary [research on AI](https://gcrinstitute.org/ai/). It builds on prior GCRI research on AI ethics, especially the paper [Social Choice Ethics in Artificial Intelligence](https://gcrinstitute.org/social-choice-ethics-in-artificial-intelligence/). It uses ethics data compiled in our [2017](https://gcrinstitute.org/a-survey-of-artificial-general-intelligence-projects-for-ethics-risk-and-policy/) and [2020](https://gcrinstitute.org/2020-survey-of-artificial-general-intelligence-projects-for-ethics-risk-and-policy/) surveys of AGI R&D projects, especially project goals. 
It also continues our tradition of applying the rich body of environmental research to new AI issues, as previously done in our papers [On the Promotion of Safe and Socially Beneficial Artificial Intelligence](https://gcrinstitute.org/on-the-promotion-of-safe-and-socially-beneficial-artificial-intelligence/) and [Lessons for artificial intelligence from other global risks](https://gcrinstitute.org/lessons-for-artificial-intelligence-from-other-global-risks/).\n\n\nThis paper has also been [summarized](https://montrealethics.ai/moral-consideration-of-nonhumans-in-the-ethics-of-artificial-intelligence/) in the [AI Ethics Brief #73](https://brief.montrealethics.ai/p/deepfake-voices-embedding-values-bravery?token=eyJ1c2VyX2lkIjo3OTA2OTI2LCJwb3N0X2lkIjo0MTI2MTU2MSwiXyI6ImdSclZwIiwiaWF0IjoxNjMxNjUwMjU5LCJleHAiOjE2MzE2NTM4NTksImlzcyI6InB1Yi0yOTk5OSIsInN1YiI6InBvc3QtcmVhY3Rpb24ifQ.jJUM6lOMgU4U9eRB8CDDAqW27zZG85YoDUmYh6rSwNs) of the Montreal AI Ethics Institute, and in the [blog](https://mahb.stanford.edu/blog/moral-consideration-of-nonhumans-in-the-ethics-of-artificial-intelligence/) of [Stanford MAHB](https://mahb.stanford.edu/). It is also included in the 2022 [The AI Ethics Report](https://montrealethics.ai/volume6/) and is discussed in the MEDIUM article [“Is 2022 the Year that AI Ethics Takes Sustainability Seriously?”](https://josh-gellers.medium.com/is-2022-the-year-that-ai-ethics-takes-sustainability-seriously-8a10953105e9). \n\n\nAcademic citation: \nOwe, Andrea and Seth D. Baum, 2021. [Moral consideration of nonhumans in the ethics of artificial intelligence](https://link.springer.com/article/10.1007%2Fs43681-021-00065-0). *AI & Ethics,* vol. 1, no. 4 (November), pages 517-528, [DOI 10.1007/s43681-021-00065-0](https://dx.doi.org/10.1007/s43681-021-00065-0). 
[ReadCube](https://rdcu.be/cl2Zv).\n\n\n*Image credit: Buckyball\nDesign*\n\n\n\n\n Tagged with [artificial intelligence](https://gcrinstitute.org/tag/artificial-intelligence/), [ethics](https://gcrinstitute.org/tag/ethics/)", "url": "https://gcrinstitute.org/moral-consideration-of-nonhumans-in-the-ethics-of-artificial-intelligence/", "title": "Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence | Global Catastrophic Risk Institute", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2021-06-06T22:00:00Z", "authors": ["Seth Baum"], "summary": [], "id": "18b0f3fe5592a588c3fb274e4e10db2a"} {"text": "Created: 2018-11-08 | Updated: 2019-11-02 | Suggestions: please make suggestions directly in this Doc | List maintainer: Mati Roy ([contact@matiroy.com](mailto:contact@matiroy.com))\n\nAI Safety Open Problems\n\nTechnical AGI safety research outside AI: [https://forum.effectivealtruism.org/posts/2e9NDGiXt8PjjbTMC/technical-agi-safety-research-outside-ai](https://www.google.com/url?q=https://forum.effectivealtruism.org/posts/2e9NDGiXt8PjjbTMC/technical-agi-safety-research-outside-ai&sa=D&source=editors&ust=1688249706070795&usg=AOvVaw23JRJ44mJrwWTghIfR4UaR)\n\nConcrete problems in AI safety: [https://arxiv.org/abs/1606.06565](https://www.google.com/url?q=https://arxiv.org/abs/1606.06565&sa=D&source=editors&ust=1688249706071215&usg=AOvVaw1DglOrGmsnSxUbmNG_hPFU)\n\n \n\nMIRI: Agent Foundations for Aligning Superintelligence with Human Interests: [https://intelligence.org/files/TechnicalAgenda.pdf](https://www.google.com/url?q=https://intelligence.org/files/TechnicalAgenda.pdf&sa=D&source=editors&ust=1688249706071526&usg=AOvVaw3gK-jp7Ha-fPK6NKp9_Mxb)\n\nMIRI: Alignment for Advanced Machine Learning Systems: [https://intelligence.org/files/AlignmentMachineLearning.pdf](https://www.google.com/url?q=https://intelligence.org/files/AlignmentMachineLearning.pdf&sa=D&source=editors&ust=1688249706071809&usg=AOvVaw1lWLsTgWxtlL855S70Oa8Y)\n\nResearch Priorities for Robust and Beneficial Artificial Intelligence: [ttps://arxiv.org/pdf/1602.03506.pdf](https://www.google.com/url?q=https://arxiv.org/pdf/1602.03506.pdf&sa=D&source=editors&ust=1688249706072091&usg=AOvVaw2_p0Axl6srzM97AbHWxT_9)\n\nLuke Muehlhauser: How to study superintelligence strategy: [http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/](https://www.google.com/url?q=http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/&sa=D&source=editors&ust=1688249706072405&usg=AOvVaw2WBwCPRNkNTgLINfKZN3Mr)\n\nAndrew Critch: Abstract open problems in AI alignment: [http://acritch.com/abstract-open-problems/](https://www.google.com/url?q=http://acritch.com/abstract-open-problems/&sa=D&source=editors&ust=1688249706072674&usg=AOvVaw3UvOg4HK0YGTLpgvfJhtfY)\n\nFoundational Research Institute: Open Research Questions: [https://foundational-research.org/open-research-questions/](https://www.google.com/url?q=https://foundational-research.org/open-research-questions/&sa=D&source=editors&ust=1688249706072985&usg=AOvVaw2V6AcTPqkyECKOJtn0VnBv)\n\nAI Impacts: List of multipolar research projects: [https://aiimpacts.org/multipolar-research-projects/](https://www.google.com/url?q=https://aiimpacts.org/multipolar-research-projects/&sa=D&source=editors&ust=1688249706073276&usg=AOvVaw1s0mrv1Vk8D_RZL0zvAOwZ)\n\nAI Impacts: Promising research projects: 
[https://aiimpacts.org/promising-research-projects/](https://www.google.com/url?q=https://aiimpacts.org/promising-research-projects/&sa=D&source=editors&ust=1688249706073552&usg=AOvVaw3dK2YkckGL3JxHiplPjKXR)\n\nAI Impacts: Research Problems: [https://aiimpacts.org/category/research-problems/](https://www.google.com/url?q=https://aiimpacts.org/category/research-problems/&sa=D&source=editors&ust=1688249706073829&usg=AOvVaw1eiNSYOKZkQCan2ZMYXYGa)\n\nEffective thesis in computer science: [http://effectivethesis.com/theses/?discipline=computer+science](https://www.google.com/url?q=http://effectivethesis.com/theses/?discipline%3Dcomputer%2Bscience&sa=D&source=editors&ust=1688249706074120&usg=AOvVaw0kAUPFIlGgMg9-4bMPR_8K)\n\nOther ideas in the comment section here: [http://effective-altruism.com/ea/18p/concrete\\_project\\_lists/](https://www.google.com/url?q=http://effective-altruism.com/ea/18p/concrete_project_lists/&sa=D&source=editors&ust=1688249706074420&usg=AOvVaw1xmjhXtuAslQtPL33m92aH)\n\nOther ideas about Ryan Carey again: Improving long-run civilisational robustness: [http://effective-altruism.com/ea/xg/improving\\_longrun\\_civilisational\\_robustness/](https://www.google.com/url?q=http://effective-altruism.com/ea/xg/improving_longrun_civilisational_robustness/&sa=D&source=editors&ust=1688249706074753&usg=AOvVaw1uf-Ak_YXZ_rNjwD_tSOOQ)\n\nProblems in AI Alignment that philosophers could potentially contribute to: [https://www.lesswrong.com/posts/rASeoR7iZ9Fokzh7L/problems-in-ai-alignment-that-philosophers-could-potentially](https://www.google.com/url?q=https://www.lesswrong.com/posts/rASeoR7iZ9Fokzh7L/problems-in-ai-alignment-that-philosophers-could-potentially&sa=D&source=editors&ust=1688249706075092&usg=AOvVaw1sGhx4Qv-idPA8rKYhrX95)\n\nCognitive Science/Psychology As a Neglected Approach to AI Safety: [https://forum.effectivealtruism.org/posts/WdMnmmqqiP5zCtSfv/cognitive-science-psychology-as-a-neglected-approach-to-ai](https://www.google.com/url?q=https://forum.effectivealtruism.org/posts/WdMnmmqqiP5zCtSfv/cognitive-science-psychology-as-a-neglected-approach-to-ai&sa=D&source=editors&ust=1688249706075430&usg=AOvVaw0b4VbQ0WP8e34yjuBFtn_M) (related: [https://ought.org/](https://www.google.com/url?q=https://ought.org/&sa=D&source=editors&ust=1688249706075582&usg=AOvVaw1ygbxiNNcd8fNKW8mJAMxf))\n\nMachine Learning Projects on IDA: [https://www.alignmentforum.org/posts/Y9xD78kufNsF7wL6f/machine-learning-projects-on-ida](https://www.google.com/url?q=https://www.alignmentforum.org/posts/Y9xD78kufNsF7wL6f/machine-learning-projects-on-ida&sa=D&source=editors&ust=1688249706075896&usg=AOvVaw049J5n1nZxGMwFzakMsN4j)\n\nAddendum\n\nLandscape of current work on potential risks from advanced AI: [https://docs.google.com/document/d/16Te6HnZN2OEviYFA-42Tf9Pal\\_Idovtgr5Y1RGEPW\\_g/](https://www.google.com/url?q=https://docs.google.com/document/d/16Te6HnZN2OEviYFA-42Tf9Pal_Idovtgr5Y1RGEPW_g/&sa=D&source=editors&ust=1688249706076356&usg=AOvVaw0kP7_DM3Nq6_6AEFwtqyaR)", "url": "https://docs.google.com/document/d/1J2fOOF-NYiPC0-J3ZGEfE0OhA-QcOInhlvWjr1fAsS0/edit?usp=embed_facebook", "title": "AI Safety Open Problems", "source": "html_articles", "source_type": "manuscript", "source_filetype": "pdf", "date_published": "2018-12-31T23:00:00Z", "authors": ["Mati Roy"], "summary": [], "id": "54a741a95337fc8fc2d44d20c42d5322"} {"text": "Sharing status:\n\n* This document is currently written with long-termist audience in mind. 
As such, please do not advertise it where the missing context could cause idea inoculation.\n* To prompt the creation of a general-audience version, you can [email me](mailto:vojta.kovarik@gmail.com).\n* Public version (with commenting disabled) is [here](https://www.google.com/url?q=https://docs.google.com/document/d/1Jk2GgJnF9pLIQqg9hgX0Tvdukq6MR2qmXW3FyvqiiUg/edit&sa=D&source=editors&ust=1688249707320174&usg=AOvVaw2SCI7QPxfagbFl8mWT_Olj).\n\n\n\n---\n\nAcknowledgment: This document has been written by myself (Vojta Kovarik), based on ideas I collected from a collaboration with Cara Selvarajah, Chris van Merwijk, Francisco Carvalho, Jan Kulveit, and Tushant Jha during the 2019/2020 [AI Safety Research Program](https://www.google.com/url?q=https://aisrp.org/&sa=D&source=editors&ust=1688249707320714&usg=AOvVaw1A5qyjhpHD8mKmJgIpiyGv). While they gave a lot of feedback to the text and many of the ideas are originally theirs, they might not necessarily agree with all arguments and framing presented here. All mistakes are mine. Additionally, I would also like to thank Michael Dennis for discussing modelling of service systems, and Misha Yagudin, Viliam Lisý, Anna Gajdová, Tomáš Gavenčiak, and others who gave feedback on the draft of this text.\n\n\n\n---\n\nSystems of Services \nas a Paradigm for AI Alignment\n\nVojta Kovarik\n\n31.3. 2020\n\nThis document aims to serve as an introduction for researchers who want to study the long-term impact of AI through the lens of AI services. It introduces basic concepts related to these systems and gives initial observations to enhance their initial study. It points to several relevant research fields that could be leveraged to study AI services, mentions a number of problems that seem specific to this setting, and makes suggestions for future work.\n\n\n\n| |\n| --- |\n| Contents[1. Introduction](#h.mv6h35shwd1v)[2. Basic Concepts](#h.ff9s7afwzb5w)[Tasks and Services](#h.17es1kmxqfk4)[Systems of Services](#h.4g7gdavf65u7)[3. Modelling Approaches](#h.2ukea95tbs03)[Models for Specific Purposes](#h.r8bncl4zhh79)[A Simple Abstract Model of Service Systems](#h.mohtu6txrpn9)[Universal Models](#h.6x54xfsy9zua)[4. Research Questions](#h.4frz7qhnosty)[Classifying Research Questions](#h.ayilk6i3g47b)[List of Research Questions](#h.q2x0ng9anneh)[Problems Introduced by Presence of Multiple Services](#h.y5k859jsfpo7)[Problems Related to Presence of Multiple Users](#h.rfgurt70o28g)[Problems Related to Changes in Environment, System, or Users](#h.x2sjbt8tyvyw)[Problems Related to Presence of Human-Implemented  Services](#h.43z24lqwhti4)[Problems Related to Advanced Capability of Services](#h.yrf1m4lxobz9)[Problems Related to Future Technologies](#h.temt75qcxi2i)[Increasing Effectivity of Research on Service Systems](#h.b53rqxxij9bl)[5. Related Fields of Study](#h.iob8y6oskvcj)[6. Research Suggestions](#h.bmw99ws36gi7)[Getting Familiar with Further Relevant Ideas](#h.c1vq480g8vs)[Promising Topics for Investigation](#h.x50mgy7c9yih)[References](#h.q95xdkejigv5) |\n\n1. Introduction\n===============\n\nAI has become more capable over time and it has been hypothesized that it could pose risks by becoming a highly capable agent. However, an alternative is that AI continues to operate within services, as it currently does, but that these services become sufficiently capable that their use comes to pose a risk. 
The possibility of this scenario (highlighted in Reframing Superintelligence [1]) suggests that studying systems of AI services could be helpful for mitigating AI risk.\n\nTo appreciate the usefulness of the service-systems framework, note that this paradigm is likely to be neither fully equivalent nor completely incompatible with the agent-like[[1]](#ftnt1) view. To the extent that the two are different, understanding service systems could expose new problems and help us determine which type of AI to expect in the long term. Even if we ultimately end up with an agent-like AI, such AI will likely emerge from and be shaped by a world with powerful and widespread AI services. As a result, an increased understanding of service systems might have beneficial consequences even for agent-like AI. To the extent that the two paradigms are equivalent, service-systems could provide a fresh angle on problems already studied in AI alignment. Moreover, both experts and non-experts must rely to some extent on their intuitions, and the current world already contains many AI services but nothing that resembles a highly capable AI agent. Our intuitions about AI services are thus likely to be better calibrated than those about AI agents (at least up to a point). In particular, the service-system paradigm might be more accessible to experts from relevant less-technical fields (e.g., AI governance) and more suited for communication with the general public.\n\nWhich types of systems should we consider when studying AI risk? First, it is critical to realize that AI-related existential risk does not come only from futuristic versions of AI that vastly exceed our abilities. Indeed, an artificial intelligence could cause existential risk as soon as it is radically transformative [5], a point we might encounter much earlier than AGI. The implication for systems of AI services is that our study should include systems whose range of AI services is still far from comprehensive.\n\nSecondly, viewing the world as a huge network of services[[2]](#ftnt2) and users highlights an important fact: As long as a non-trivial portion of services is implemented by humans, the AI services are likely to be closely interlinked with humans. We thus believe that even if we only care to study AI services, we cannot fully understand their impact without understanding the whole system of services - both human and AI. Consequently, we focus on hybrid systems of services, i.e., ones where services can be implemented by both AI and humans and that might contain large sectors of human-only services and institutions. And while many dynamics can (and should) be studied in the context of a much smaller system, our default example of a system of services is going to be “all the services that exist in the world”. Note that this is an important distinction between our text and Reframing Superintelligence [1], which primarily focuses on comprehensive AI services (thus having less of a need to discuss human services).\n\nThe purpose of this document is somewhat different from that of [1]. Indeed, [1] introduces an important new paradigm and provides many deep insights about the nature of AI services. However, as far as we are aware, neither of these has yet led to follow-up technical work by other researchers. We believe this is because it is not yet sufficiently clear how to use this paradigm to study potential problems with AI, or indeed what the problems in this paradigm are. 
Rather than focusing on object-level research, this document therefore aims to provide the initial directions for researchers interested in studying AI risk in the framework of service systems.\n\nWe start out by describing the basic terminology, examples, and modelling approaches relevant to service systems (Sections 2 and 3). We then introduce a way of classifying potential issues with service systems and describe a number of relevant research problems (Section 4). To reduce potential duplication of effort, we highlight several existing research fields that can be utilized to address these problems (Section 6). We conclude with suggestions for future research (Section 7).\n\n2. Basic Concepts\n=================\n\nThe concept of a service is central to our topic. However, this term is very general and different people might understand it very differently. To preempt possible misunderstandings, this section discusses services, tasks, and service systems. To bootstrap intuitions one might have about service systems, we also provide many examples of services and service systems. Additionally, we discuss several concepts - generality, capability, task-coverage, granularity, and degree of automatization - which allow for having more nuanced conversations about service systems.\n\nTasks and Services\n------------------\n\nTo talk about services, we first need the concept of a task. Informally, we can assume that some description of the world is given by context and view a task as something specified by an initial state and an end state (or states) - accomplishing the task amounts to causing a transformation from the starting state to one of the desired end states. Since this concept has been frequently used and formalised in classical AI - in particular in planning [30] - we will not go into further details here. We should, however, point out that the critical problem here is finding a suitable world description. A related challenge is the [frame problem](https://www.google.com/url?q=https://plato.stanford.edu/entries/frame-problem/&sa=D&source=editors&ust=1688249707328571&usg=AOvVaw32v3KSu3hzRGRkUm6lmVS3) - identifying which consequences of accomplishing a task are important, and which can be ignored.\n\nApart from chaining tasks together, composing them into higher-level tasks, or decomposing them into lower-level tasks, we can group similar tasks into domains. While a domain can, by default, be just an arbitrary collection of tasks, oftentimes it will be endowed with some partial ordering over task solutions (i.e., some way of evaluating performance). For example, potatoes cooked just right are clearly a better culinary output than overcooked ones, but our preference between potatoes and rice might depend on other variables such as the main dish. This additional structure will allow us to talk about capability of services.\n\nWe use the word “service” to refer to anything that can be used to perform tasks[[3]](#ftnt3), such as\n\n* a horse, a plane, a train, a self-driving car, (hypothetically) a portable teleportation device,\n* an encyclopedia, Google search engine, (hypothetically) an automated digital secretary,\n* a trained AlphaGo algorithm, human Go player, a Go-playing algorithm that takes random moves,\n* a cleaner, a restaurant attendant, a Roomba robot,\n* a mathematician, a calculator, an automated theorem-prover.[[4]](#ftnt4)\n\nTypically, a service will be implicitly associated with a specific set of tasks - its domain. 
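To ground this vocabulary, the snippet below sketches tasks, domains, and services in code. It is a minimal toy formalization under our own assumptions — states as dictionaries, tasks as goal predicates, and the hypothetical `more_general` helper anticipating the generality definition given just below — not a formalism proposed in the text.

```python
# A toy formalization of the vocabulary above (all representation choices are
# illustrative): a task is an initial state plus a predicate accepting the
# desired end states, and a service is something that can perform the tasks
# in its domain.
from dataclasses import dataclass
from typing import Callable, Dict, Set

State = Dict[str, object]

@dataclass
class Task:
    name: str
    initial: State
    is_done: Callable[[State], bool]   # accepts any of the desired end states

@dataclass
class Service:
    name: str
    domain: Set[str]                   # names of the tasks it can attempt
    perform: Callable[[Task], State]

# Two toy tasks from a "transportation" domain.
commute = Task("commute", {"location": "home"}, lambda s: s["location"] == "office")
errand = Task("errand", {"location": "home"}, lambda s: s["location"] == "store")

# A broad service (walking) and a narrower one (a commuter train).
walking = Service(
    "walking",
    {"commute", "errand"},
    lambda t: {**t.initial, "location": "office" if t.name == "commute" else "store"},
)
train = Service("train", {"commute"}, lambda t: {**t.initial, "location": "office"})

def more_general(a: Service, b: Service) -> bool:
    """A is (weakly) more general than B when B's domain is a subset of A's."""
    return b.domain <= a.domain

print(more_general(walking, train))             # True: walking covers a superset of the train's tasks
print(commute.is_done(train.perform(commute)))  # True: the train accomplishes the commute task
```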
However, sometimes it makes sense to restrict a service to some subdomain (e.g., when evaluating the cleaning skills of a restaurant attendant), or consider whether the service can be used outside of its default domain (e.g., when attempting to get a mathematician in cleaning tasks). Correspondingly, one way of comparing services is in terms of generality:\n\nDefinition (Service generality): When the domain of service A is a subset of the domain of service B, we say that B is more general than A.\n\nFor any domain D, the partial ordering between solutions of tasks in D allows us to define a corresponding partial ordering on services that operate in D.\n\nDefinition (Service capability): Let A and B be two services whose domains are at least D. We say that a service A is more capable[[5]](#ftnt5) at D than service B if it performs each task from D at least as well as B. If D is clear from context, we simply say that A is more capable than B.\n\nAs an example to illustrate these definitions, we can see that, among transportation services, walking is more general than taking a train, but not more capable, and a hypothetical teleporter device that you could carry in your pocket would be more capable than both. A mathematician is more general than a calculator, but the calculator will be more capable at routine arithmetics (where what matters is speed and accuracy). Naturally, these results do not hold in all abstractions of reality - if you want to exercise in addition to moving from A to B, teleport is no longer strictly better than walking. Similarly, a mathematician might[[6]](#ftnt6) be preferable to a calculator if you also need the answer explained.\n\nDepending on the context, we might want to specify additional properties of a service such as resource consumption (fuel consumed by a plane), cost (price of a plane ticket), access rights (you can only fly to a foreign country if you have a valid passport[[7]](#ftnt7)), and inspection and modification rights (I am not allowed to view and rewrite the source code of Google’s software).\n\nSystems of Services\n-------------------\n\nBy a system of services, we will typically mean some set of services, together with a (possibly implicit) description of how different services relate to each other, which environment they operate in, and how the system interacts with end-users. Examples of service systems include:\n\n* (i) All the restaurants, shops, and the movie theatre at the local shopping mall.\n* (ii) My smartphone with all the apps currently installed on it. The operating system running on my laptop, together with all programs installed on it.\n* (iii) MS Office. Google’s browser-based services.\n* (iv) All the services available before the industrial revolution, connected as they were at the time. Same thing after the revolution.\n* (v) All the services available in the current world (performed by humans, machines, computers, animals, environment).\n\nTo get a better sense for how systems of services could look like, we can also consider some hypothetical[[8]](#ftnt8) scenarios:\n\n* (a) A hypothetical future version of the current world, where we were able to automate most of the manual labour and tasks like driving, hairdressing, and cooking (say, using devices and humanoid robots).\n* (b) A hypothetical future version of the current world, where we automated 95% of the tasks that humans are currently being paid for doing (incl. 
things like programming and doing research).\n* (c) A hypothetical version of any system, where the services are being centrally controlled by a single authority (e.g., a government, an Agent-like AGI, or a single person who owns the majority of the system).\n\nWe see that some important dimensions along which service systems can vary are task-coverage, capability (of offered services), “granularity” (i.e., how narrow vs general are the individual services)[[9]](#ftnt9), and degree of automatization. Moreover, variants of the last three examples illustrate that “a more advanced system” is not necessarily the same as “a more beneficial system”. Indeed, a “better” system could fail to be beneficial for example if the system from (c) is used to support a totalitarian regime, if we become reliant on the system from (b) to the point of enfeeblement, or if the resulting economic incentives make living within the system from (a) unenjoyable for many people (e.g., due mismanaged massive unemployment).\n\nFinally, we highlight (non-exhaustively) several classes of services that seem relevant for the study of service systems and their interaction with human society.\n\n* Essential services such as: Fruit-bearing trees, grocery stores, supermarkets. Caves (for hunter-gatherers), houses. Shamans, herbalists, hospitals.\n* Entertainment such as: Books, e-books. Theaters, movie theatres, TV. Board games, computer games, virtual-reality games.\n* Service Catalogues. Ways of identifying which service is relevant for my requests and desires. Some examples include asking friends, Yellow pages, or Google search.\n* Research & Development. Various activities related to identifying new technologies and products (and hence services), their study and development, application, monitoring, and improvement. For examples, we can look at most parts of the academia and industry.\n* Security and Policing. Ensuring that the system behaves according to its specification and does not fall prey to an adversarial attack. Some examples include password protection on a computer, an antivirus software, a police department, or a procedure for determining whether a new product (or service) can be safely added to the market.\n* Governance and System Design. Institutions which monitor the system and attempt to set it up in a way that benefits the users. Typical examples include, unsurprisingly, actual governments and governance-related research, while a less obvious example could be the choice of rules for an online discussion forum.\n* Long-Term Planning. Long-term planning should keep track of the big-picture strategy, identify long-term challenges, and devise counter-measures that need to be taken for the system to remain beneficial to its users. Some real-world examples are finding ways to deal with dangerous asteroids, mitigate global warming, and address unemployment from growing automatization of labour.[[10]](#ftnt10)\n* Infrastructure. In some sense, both physical infrastructure (the post office and cargo trains) and digital infrastructure (mobile phones networks, wi-fi, or TCP-protocols) can be viewed as a particular case of services that enhance (or enable) communication between other services.\n\nAs we can see, some of the current services can be viewed as “upscaled” versions of earlier services (e.g., physical calendar → Google calendar with shareable events or encyclopedias → Wikipedia). Similarly, we can hypothesize that some of the current services will get upgraded further yet. 
For example, we can imagine a calendar that automatically tracks my appointments without me having to input them or an automatically maintained database of humanity’s knowledge that presents its contents in a manner that is personalized based on each reader’s knowledge. Similarly, we might eventually see a world where a large portion of the security or R&D sector becomes AI-powered.\n\n3. Modelling Approaches\n=======================\n\nIn an ideal world, there would already be a common-knowledge understanding of what our object of study is, which key problems need to be solved, and how these problems are stated - the “only” thing left to do would be to find solutions to those problems. However, with systems of (AI) services, not a single one of these aspects is understood sufficiently. As such, we might be interested in “modelling” service systems for several different reasons[[11]](#ftnt11):\n\n1. enabling more effective discussions (and thinking) by grounding key concepts,\n2. building basic intuitions and calibrating the existing ones,\n3. identifying technical problems,\n4. describing and solving technical problems,\n5. deriving specialized models in which technical problems are easier to tackle.\n\nNaturally, we should expect different models to be suitable for different purposes. In this section, we will briefly describe three types of models and discuss their uses.\n\nModels for Specific Purposes\n----------------------------\n\nIn classical AI, many specific problems already have models that are particularly suitable for addressing them. For example, Atari games can be effectively tackled using Markov decision processes, while chess and Go can be solved with the help of game-theoretical trees. Ultimately, we would similarly like to have specialized models that are particularly effective at addressing problems such as service-caused value drift, system stability in the presence of malicious actors, and other problems that might arise in systems of services. While some existing models can be used or repurposed towards this goal (see the “Related Fields of Study” section below), many of the hypothesized technical problems with service systems are not yet associated with suitable specific models. For now, we instead describe two types of more general models that might simplify finding the more-specific models later.\n\nA Simple Abstract Model of Service Systems\n------------------------------------------\n\nWhen thinking about some class of objects, it is good to have a readily-available simple abstract model that we can cheaply query to obtain basic intuitions about the behaviour of such objects. For example, whenever we think about how humans behave, we can instead ask “How would I behave?”. While this procedure is bound to lead to mistakes, we can mitigate them by asking “What are the important aspects in which other people are different from me?” and “How does my model of myself diverge from reality?”. 
More generally, we can reason about how a cheap model fails to be accurate, which allows us to determine whether those failures are likely to manifest in the situation at hand, and possibly derive a more suitable model.\n\n![](https://docs.google.com/drawings/d/s416PUwmqIAszO0pDf5NS1g/image?parent=1SYgvWBe1ruDl9dQnxmjll-8COUHPycGOlLvTI68xtLA&rev=22&drawingRevisionAccessToken=i0GMC74Mptd0gw&h=315&w=292&ac=1)\n\nTo give one more example, a simple abstract model of an Agent-like AGI is that of a “box” that, whenever deciding its next action, always selects the option that is optimal for some pre-specified goal. Such a model is highly useful, as long as we remember that not all hypothetical Agent-like AGIs have to be optimizing for some goal (and that even if they were, they wouldn’t be able to achieve optimality).\n\nAs a first shot at a simple abstract model for service systems, we can imagine services as input-output boxes for passing information. For simplicity, we can assume these boxes are arranged into a network, let’s say a directed one with no cycles, that has designated sensors (where messages originate) and actuators (where messages turn into actions that affect the real-world). To complement this description, we can imagine that the world further contains humans and the environment. The system interacts with these by receiving requests from the humans (one for each human) and observations from the environment. These get propagated through the network, resulting in a collection of actions (one for each actuator) which change the state of the environment. Finally, this generates some utilities for the humans.\n\nIf we want to add a notion of optimality into the model, we can associate each of the previously-arbitrary input-output boxes with some loss function, and assume the box operates optimally with respect to it. These losses can take various forms - they can depend only on the input-output pair, but also on outputs of other services. This choice will then influence whether the services compete with each other in a game-theoretical sense or whether they behave as more straightforward optimizers.\n\nThis model makes many unrealistic assumptions such as service optimality, the particular shape of the reward (resp. loss) assignment process, all requests taking the same time to be processed, humans having utility functions, and, critically, the decoupling of humans, the environment, and the system. However, it is a good starting point for discussing those simplifying assumptions, removing them, and thus obtaining more realistic models.\n\nRegarding the comparison between Agent-like AGI and service-systems, note that in their simple abstract model, Agent-like AGI is associated with optimal agents while service systems are associated with systems consisting of optimal sub-agents. In terms of [Dennett’s three stances](https://www.google.com/url?q=https://en.wikipedia.org/wiki/Intentional_stance%23Dennett's_three_levels&sa=D&source=editors&ust=1688249707340708&usg=AOvVaw0MCTfgB_O8O7vdmb90qFkq), Agent-like AGI is closer to the intentional stance (it “wants to achieve its goal”) while service systems are closer to the design stance (services are set up to perform a particular task).\n\nUniversal Models\n----------------\n\nIn a sense, quantum physics can be viewed as a suitable “underlying model” for chemistry. Similarly, rational numbers Q can be viewed as a suitable underlying model for elementary-school calculations, calculus for determining the volume of solids. 
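As an aside, the input-output-box picture from the previous subsection can also be rendered in a few lines of code. The sketch below is our own toy simplification — a single propagation pass over a hand-written DAG of services, using Python’s standard-library `graphlib` for the topological ordering — and is not a model proposed in the text.

```python
# Toy rendering of the box-and-arrows model above: services are input-output
# boxes arranged in a directed acyclic graph; sensors receive a human request
# and an environment observation, internal boxes transform their inputs, and
# an actuator emits an action. All names and the single-pass update are
# illustrative simplifications.
from graphlib import TopologicalSorter

# service name -> (upstream services it reads from, transformation applied to their outputs)
services = {
    "speech_to_text": ([], None),                          # sensor: carries the human request
    "weather_feed": ([], None),                            # sensor: carries an environment observation
    "planner": (["speech_to_text", "weather_feed"],
                lambda request, observation: f"plan({request} | {observation})"),
    "robot_arm": (["planner"], lambda plan: f"execute({plan})"),   # actuator
}

def run_system(request, observation):
    """Propagate one round of messages from the sensors through the DAG to the actuator."""
    outputs = {"speech_to_text": request, "weather_feed": observation}
    dag = {name: set(parents) for name, (parents, _) in services.items()}
    for name in TopologicalSorter(dag).static_order():
        parents, transform = services[name]
        if parents:  # internal service or actuator: apply its transformation to upstream outputs
            outputs[name] = transform(*(outputs[parent] for parent in parents))
    return outputs["robot_arm"]  # the action applied to the environment

print(run_system("water the garden", "rain expected at 5pm"))
# execute(plan(water the garden | rain expected at 5pm))
```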
More generally, for a collection C of problems that one might wish to study, we might hypothesize there is some suitable “universal” model U such that every problem P from C can be accurately described and solved using U. However, a class of problems might have multiple corresponding universal models. For example, instead of using U = Q for C = elementary-school math, we could in principle do all relevant calculations using raw set theory notation (U’). Since we want models that are not just universal but also actually useful, we can restrict our attention to “reasonable” universal models (i.e., those that aren’t “needlessly complicated for C”).\n\nNo reasonable universal model is currently known for service systems. However, we believe that coming up with one would be a useful enterprise. Since C = “problems with service systems” might get dangerously close to subsuming “all things that could happen with the economy and the physics of Earth”, we would also like to have an “[80/20](https://www.google.com/url?q=https://en.wikipedia.org/wiki/Pareto_principle&sa=D&source=editors&ust=1688249707343105&usg=AOvVaw3juJ7QVR7ifDQbqaOXfkk2)” version of such a model, in which a compromise is made between expressiveness and complexity. Unfortunately, even reasonable universal models will generally be too “low-level” and thus impractical for solving any specific problem P ϵ C. However, such models are nonetheless useful for several reasons:\n\n* Universal models can be used to “ground” and formalize all more advanced concepts, objects, and problems relevant to C.\n* When working with a subclass C’ ⊂ C, we can make additional simplifying assumptions, and use U to derive a simpler model U’ universal for C’. Similarly, we can derive a specialized model for a particular problem P ϵ C.\n* If we want to study the interaction of problems P1 and P2 (for which we perhaps have specialized models), the knowledge of a universal model can make it easier to find a model suitable for addressing both P1 and P2.\n* Finally, the universal model is useful even if we never formally use it to derive arguments and models. Indeed, whenever our results seem to be wrong, the fact that the formalization should in principle be possible allows us to use the heuristic “the mistake is more likely to occur in the hard-to-formalize step”.\n\n4. Research Questions\n=====================\n\nWhile there is currently no exhaustive list of research that needs to be done regarding service systems (or of potential problems with them), the generality of the topic suggests that any such list is going to be “rather extensive”. To improve the efficiency of discussing these problems and reasoning about them, we first introduce an informal taxonomy for research questions and identify topics which are already being sufficiently tackled in classical AI and AI Alignment. In the second part of the section, we use the taxonomy to generate and organize a list of research areas for service systems. While this section could also contain a discussion of which tools are likely to be useful for tackling which problems, we defer such analysis to the “Related Fields of Study” section.\n\nClassifying Research Questions\n------------------------------\n\nWe now introduce several axes along which we can carve up the service-system research space. 
As we show in the next subsection, these axes can be used to classify and brainstorming relevant research questions: For example, were traffic congestion a particularly important problem, we could put it into the category of “problems related to having multiple users” and generate further problems by asking what else falls under this category. The axes that seem the most relevant are the following:\n\n* Single human vs multiple humans. For each potential problem with the system, we can ask whether it is crucial that the system interacts with multiple users. Some problems can thus be studied in the single-user setting, while others require the (potentially more complicated) general case.\n* The number of services. We can ask whether the core of the problem is mostly about a single service or a collection of a small number of services or whether it can only be explained on the level of the whole system. While this trichotomy is somewhat false, it nonetheless gives useful hints about which models and tools we should use to attack the problem.\n* Synchronic vs diachronic. Similarly, we can ask whether a problem already occurs in settings that are closer to being stationary (synchronic), or whether it is crucial that the system (or its users, or the environment) develops over time. For example, algorithmic bias is a synchronic problem while value drift is a diachronic one.\n* Problem solving vs theory building. Finally, some important research activities are less about trying to fix something in a specific service system and more about finding ways that will enable us to think about service-systems more effectively. Some examples that fall mostly towards the “theory building” side are Sutton’s [Bitter Lesson](https://www.google.com/url?q=http://www.incompleteideas.net/IncIdeas/BitterLesson.html&sa=D&source=editors&ust=1688249707347866&usg=AOvVaw0XtVkvff7WJmfI04H3kXtS), the paper Concrete Problems in AI Safety [26], the act of coining a new definition, and the present text. In contrast, the AlphaZero algorithm [27] and most applications of deep learning are more on the “problem-solving” side.\n\nThere are other axes worth considering, such as\n\n* how many services are performed by humans,\n* how capable are the AI services (much stronger than the current ones, or not?),\n* whether the problem at hand is closely tied to hypothetical future technologies that haven’t yet been developed (or fully developed; e.g., quantum computers, nano-technology),\n* and how centralized is the system (by default, we assume a situation similar to the current world, where the system is neither fully centralized nor fully decentralized).\n\nSince many of the problems with service systems can already be studied in the framework of classical AI or AI safety in the frame of utility maximization, we use the remainder of this section to briefly sketch how these fields relate to the above categories (and leave problems that “primarily” concern systems of services for the next section).\n\nCurrent AI (and machine learning in particular) predominantly[[12]](#ftnt12) uses the “baseline” assumptions and thus focuses on single-human single-service synchronic scenarios with AI and technologies that do not qualitatively differ from the existing ones. 
Examples of such problems include:\n\n* natural language processing, inverse reinforcement learning,\n* general “capabilities” research,\n* robustness, interpretability, verification,\n* avoiding side-effects, corrigibility.\n\nAnother problem worth highlighting is that of competitiveness: to prevent a system that is known to be safe from being replaced by less safe alternatives, we need to ensure that its capability doesn’t fall too far behind its competitors.\n\nSome problems in “AGI safety” [25] go beyond the baseline assumptions, for example by additionally considering\n\n* diachronic problems such as value drift in humans or ensuring that recursively self-improving AI remains beneficial,\n* theory building such as Bostrom’s classification of ways in which we could create smarter-than-human systems [2] or MIRI’s reconciliation of probabilistic reasoning with uncertainty about logical facts [28],\n* potential problems with more advanced AIs such as mesa-optimization [3] (i.e., the accidental emergence of “agency” as a consequence of optimization pressures), or a “treacherous turn” whereby an advanced AI acts against our interest unexpectedly (sometimes in conjunction with the use of novel technology) [2].\n\nList of Research Questions\n--------------------------\n\nWe now use the tools from the previous subsection to lay out research tasks relevant to systems of services. In particular, we list some problems that are “related to” the service-system setting - that is, those that are either introduced by this setting or for which the interaction with it is sufficiently “game-changing”. The structure of this section is based on the observation that each problem has some “critical assumptions” without which its study would be meaningless (as with the absurdity of investigating congestion in a system with only one vehicle). In this spirit, we go through the (arguably) “non-default” assumptions from the previous section. For each assumption A, we list some of the problems with service-systems for which A is critical in the above sense. When reading the list, keep in mind that it is not meant to be complete and that some problems might fall under multiple categories. We conclude by giving several avenues for increasing the overall effectivity of research in this area.\n\n### Problems Introduced by Presence of Multiple Services\n\nFirst, it matters that the system consists not of a single service, but of multiple services that interact with each other. While many consequential questions are already studied by game theory, economics, or complexity science, the fact that many of the participants are AI algorithms gives these problems a different flavour. Indeed, unlike humans, AIs might be able to act near-optimally, make credible commitments, or even inspect each other’s source code. This multi-agent setting also highlights several other problems:\n\n* Undesired appearance of agency. Often, the stability of the system might rely on services behaving predominantly as tools with no goals of their own. However, it could potentially happen that an agent-like behaviour would arise in the system. Apart from a single service becoming an agent (as a result of mesa-optimization or an unsafe-but-granted user request), this could hypothetically be caused by a symbiosis of several narrow-purpose services[[13]](#ftnt13). Alternatively, agency could “emerge” in some yet-unforeseen manner on the level of the whole system, perhaps akin to how a brain is a general intelligence implemented on neurons. 
\n  A related hypothetical issue is that of “[ascended economy](https://www.google.com/url?q=https://slatestarcodex.com/2016/05/30/ascended-economy/&sa=D&source=editors&ust=1688249707353447&usg=AOvVaw012vg9hpkbTlIucp5ktPjj)”, wherein the market could gradually get decoupled from humanity’s interests while at the same time being difficult or impossible to shut down.\n* Error propagation and correlated failures. Inevitably, some parts of the system will sometimes exhibit errors. Such errors could cascade from service to service or appear in many places at once. Indeed, the latter case is likely if a similar solution is adopted in many different places in the system. The system needs to be able to contain such malfunctions and continue to function well, even in multiple simultaneous failures.\n* Communication along illegal channels. Services might sometimes be required to communicate using a specific protocol. However, they might develop ways of bypassing these rules (e.g., using unrelated events to coordinate action or AI services unexpectedly communicating through non-digital means). Since the desirable properties of the system might depend on these protocols being followed, we need to develop methods of preventing (or disincentivizing) such communication.\n\nMoreover, there are multi-service and system-wide variants of many of the single-service problems. Indeed, some examples of such problems include understanding the system (interpretability, verification), preventing unintended consequences of running the system (avoiding side-effects), “turning off” parts of the system safely if they start behaving undesirably (corrigibility), and the system being replaced by a less-understood Agent-like AGI (competitiveness).\n\n### Problems Related to Presence of Multiple Users\n\nSecondly, an important aspect specific to service-systems is that the multi-user scenario becomes the default. This raises topics such as the following:\n\n* Magnifying human conflict. In a decentralized system, the growing availability of powerful services will likely raise the stakes in existing human conflicts, but will not necessarily offer tools for their mitigation. As the example of social media and social bubbles suggests, services might also cause new conflicts and deepen the existing ones. We will require methods for improving the offense-defense balance [29] and diffusing conflicts between humans.\n* Resistance to blackmail. Despite the best efforts of the security services, actors might appear who will attempt to exploit the system. Indeed, such actors could be powerful users, come from outside of the system, or arise within it (for example via a random mutation of services). Apart from the risks listed elsewhere, the system should be prepared to deal with the possibility that such actors could attempt to blackmail users or services.\n* Vastly increased importance of cyber-security. As automated services become more comprehensive and ubiquitous, any security holes in the system could have far greater consequences. A particular case of this issue is the need for careful management of “access rights” in future systems - determining whether a given user (or service) should be granted a particular request or not.\n\n### Problems Related to Changes in Environment, System, or Users\n\nThirdly, systems of services offer a new perspective on the diachronic problems of ensuring that the system, the environment, and humanity develop desirably. 
\n\nMoreover, there are multi-service and system-wide variants of many of the single-service problems. Indeed, some examples of such problems include understanding the system (interpretability, verification), preventing unintended consequences of running the system (avoiding side-effects), “turning off” parts of the system safely if they start behaving undesirably (corrigibility), and the system being replaced by a less-understood Agent-like AGI (competitiveness).\n\n### Problems Related to Presence of Multiple Users\n\nSecondly, an important aspect specific to service-systems is that the multi-user scenario becomes the default. This raises topics such as the following:\n\n* Magnifying human conflict. In a decentralized system, the growing availability of powerful services will likely raise the stakes in existing human conflicts, but will not necessarily offer tools for their mitigation. As the example of social media and social bubbles suggests, services might also cause new conflicts and deepen the existing ones. We will require methods for improving the offense-defense balance [29] and defusing conflicts between humans.\n* Resistance to blackmail. Despite the best efforts of the security services, actors might appear who will attempt to exploit the system. Indeed, such actors could be powerful users, come from outside the system, or arise within it (for example via a random mutation of services). Apart from the risks listed elsewhere, the system should be prepared to deal with the possibility that such actors could attempt to blackmail users or services.\n* Vastly increased importance of cyber-security. As automated services become more comprehensive and ubiquitous, any security holes in the system could have far greater consequences. A particular case of this issue is the need for careful management of “access rights” in future systems - determining whether a given user (or service) should be granted a particular request or not.\n\n### Problems Related to Changes in Environment, System, or Users\n\nThirdly, systems of services offer a new perspective on the diachronic problems of ensuring that the system, the environment, and humanity develop desirably. Indeed, in an Agent-like AGI scenario, we can transform the problem into ensuring that the entity with a decisive strategic advantage is sufficiently capable and its intent is aligned with our goals. In contrast, sufficiently complicated systems of services will likely lack a single actor (or service) that can oversee the whole system. The problem thus changes to designing the system such that it remains beneficial even as it, its environment, or its users change over time.[[14]](#ftnt14) Some problems in this vein include:\n\n* Safe R&D services. As new services are developed, either to satisfy a direct request of some user or as part of the system improving itself, we need to prevent the addition of services that are directly harmful or that would break the system in some other way.\n* Maintenance of the system. The system needs to be designed in such a way that it keeps itself functional even as new services get added, without ever posing a catastrophic risk.\n* Long-term planning. If we are to safely navigate through potential future problems, the system needs to contain procedures that identify such problems and plan accordingly. For example, it needs to be able to deal with issues such as dangerous asteroids, global warming, overpopulation, and human enfeeblement. Moreover, there also needs to be a process of turning these plans into action, even if they require large-scale coordination. (Notably, this is something that humanity has not yet mastered.) Whether this can be achieved without invoking some form of “agency” is an important open question.\n* Value drift and addictive or manipulative services. One side effect of new technologies is that our interaction with them can cause lasting changes to our values. For example, not only have we gotten worse at handwriting as computers became commonplace, we are now much less likely to consider handwriting a valuable skill. Moreover, as witnessed by the issue of fake news and, historically, the ultimate fate of fire-water-consuming Native Americans[[15]](#ftnt15), the side effects of introducing novel, attractive services can sometimes be undesirable or even fatal. And while this problem isn’t specific to AI services, their inclusion makes it significantly more threatening. Indeed, it is plausible that unless this issue is explicitly dealt with, superintelligent variants of present-day services could drastically reshape society in a matter of days.\n* System stability. Unless the system is sufficiently secure, a powerful service or a user with a high access level could gradually co-opt the majority of services towards its goals, leading to an undesirable state for most users. A similar outcome could happen as a result of a powerful company (i.e., a group of services with a human component) gaining dominance over the system through economic means.\n\n### Problems Related to Presence of Human-Implemented Services\n\nFourth, it is critical to keep in mind that present-day services are still mostly performed by humans, and that the hypothetical transition to fully- or mostly-automated systems is going to proceed in multiple (or rather, many) steps[[16]](#ftnt16). This consideration highlights several concerns:\n\n* Labour displacement. New work opportunities will need to be created for those people whose jobs have been replaced by automated services.\n* Eventual unemployability. It is likely that many people will eventually become unable to provide enough value to sustain themselves. 
As an upper bound, this will naturally happen once a majority of tasks get automated (with the remaining ones requiring an exceptional amount of skill or intelligence). However, this problem is likely to arise much earlier, once the rate of automation and change surpasses people’s ability to adapt and requalify for new tasks. It is prudent to provide all affected people with an alternative source of income. Additionally, the “inability to contribute” might have undesirable psychological consequences for many people. These should be dealt with or, preferably, prevented.\n* The loss of dignity. As a twist on the problem of unemployability, it might happen that before running out of economically viable tasks for less qualified or adaptable (e.g., due to age) people, we will run out of tasks that are compatible with their happiness or dignity. (As an example, consider the scenario where the only job available to a former high-school teacher is “providing training data for ML algorithms”, which consists of repetitively performing trivial manual tasks on camera.) While such situations might have been unavoidable in the past (or perhaps they still are at present), we might encounter them even at a point where not having anybody perform those tasks would only constitute a negligible economic cost. In such situations, we should make sure to remove economic pressures that might effectively force people into performing such tasks.\n* Adaptability of the humans in the system. If the rate of progress keeps increasing, it could happen that people in key (and irreplaceable) positions - e.g., lawmakers - would become unable to keep up with the system’s changes. This effect could become particularly pronounced with economies of scale and discontinuous automation, as technologies go from “just below economically feasible” to “just above net positive and adopted everywhere”. We need to develop methods for mitigating these effects as much as possible.\n\n### Problems Related to Advanced Capability of Services\n\nFifth, the services in future systems might become significantly more advanced than their current analogues. For example, AI might be able to create perfect fake video recordings, or get so good at targeted advertising that the current democratic system becomes obsolete (unless countermeasures are devised in time). It is currently unclear to us which problems this will introduce. However, one particular consequence of this effect is that we should take care to aim either for “future-proof” solutions (to the extent this is possible), or at least for near-term solutions that will put us in a better position once it comes to solving the longer-term issues. To see why, note that adopting “whatever does the job” solutions now might cause the entrenchment of a poor solution as more and more other tools are built on top of it. Once firmly in place, replacing such a broken solution might often be much harder than devising a brand new one from scratch. Indeed, an excellent example of this phenomenon is the null pointer that is present in many programming languages despite being the cause of many security risks[[17]](#ftnt17).\n\n### Problems Related to Future Technologies\n\nSixth, the solutions we devise should account for technologies beyond what we currently have, and for the possibility of encountering new technologies that significantly change the dynamics of the system. Some problems related to this are:\n\n* Physical security. 
With further progress in areas such as robotics, automated services will become able to significantly affect the physical world. Apart from creating laws to govern the digital realm, we should thus ensure that services do not circumvent these laws by physical means. A related problem is that services that aren’t allowed to communicate with each other could bypass this restriction through the real world (say, by sending postcards or using unrelated real-world events as a basis for a learned coordination mechanism).\n* Preparedness for game-changing technologies. The introduction of significantly novel technologies could break solutions that have worked until now - an example of a potential concern is the impact of quantum computing on cryptography [23]. Moreover, novel technologies could significantly shift the demand for existing services and thus pull the system into a configuration far from the one for which it has been optimized. We should ensure that the system remains beneficial (and as stable as possible) despite events such as these.\n\n### Increasing the Effectiveness of Research on Service Systems\n\nFinally, we consider the “theory-building” (and field-building) issues around increasing our understanding of service systems and directing future research more effectively. Some of the topics here are:\n\n* Mapping out potential problems. We need better tools for identifying potential downsides of service systems and talking about them. In particular, it would be useful to have models and terminology that allow us to have better discussions about this topic.\n* Identifying truly novel problems vs connections to existing results. To better leverage the existing knowledge on the one hand and effectively pursue novel research directions on the other, we should map out which problems can and cannot be tackled using existing methods.\n* Clarifying specific problems. To solve the problems we identify effectively, we need to find ways to talk about them in as precise a language as possible. In particular, any problem should have a corresponding model which describes it well without incurring significant overhead by talking about aspects irrelevant to the issue.\n* Forecasting future development. To focus research, governance, and strategic efforts on problems that will actually matter, we should reduce our uncertainty about the hypothetical development of service systems as much as possible. To this end, we should develop better methods for forecasting AI (and service systems in general).\n* Devising terminology accessible to the target audience. Various problems in service systems are either directly interdisciplinary or at least spread over different research fields (ranging from theoretical computer science to psychology and governance). As a result, it is important to formulate problems and communicate ideas in ways that allow and incentivize the participation of people with the appropriate expertise. \n  This problem is particularly important to address since the paradigm of systems of services is, arguably, more accessible to the communities around AI ethics, strategy, governance, and forecasting than the paradigm of superintelligent agent-like AIs. Indeed, unlike the issues discussed, for example, in Bostrom’s Superintelligence, many potential problems with powerful service systems already manifest, in diminished forms, with present-day technologies. Such problems are thus more suited for discussion (and analysis) by more general audiences.\n\n5. 
Related Fields of Study\n==========================\n\nAI services are a broad and important phenomenon, and there will likely be many different fields outside of AI that have something to contribute to their study. To prevent a duplication of effort, this section suggests some research fields that may contribute to the study of systems of services. Additionally, it presents an initial commentary on why these fields might be relevant and why their interaction with systems of (AI) services is meaningful. However, please note that the list is not meant to be exhaustive, and not all parts have been run through experts in the corresponding fields.\n\nFor readers who already have expertise in one of these fields, a worthwhile future work would be to look in more detail at the connection between their field and service systems. This effort could have two specific outputs: First, one could create, e.g., a summary of relevant existing results to make those results more accessible to researchers interested in service-systems. Second, if one can explain the importance of some service-systems problem in terms of a different research field, the work on that problem can be accelerated by leveraging the community that already exists around that field. (These fields are not listed in any particular order.)\n\nAI strategy and policy research. An interaction that seems particularly relevant is the one between AI and governance of AI [21] (and fields related to it). In one direction, laws, standards, and other norms directly influence interactions between services, thus influencing the dynamics of the service-system. There are also indirect effects, such as funding strategies differentially affecting which research gets done, or relationships between different nations changing how service-systems are applied (e.g., whether more emphasis is placed on military power or public good). In the opposite direction, the paradigm of AI services should be much more accessible and credible to a more general public than that of Agent-like AI (since AI currently takes the form of services). As a result, it seems essential to frame potential problems with advanced AI (also) in terms of AI services, such that they can be effectively communicated to more general audiences (once this becomes desirable).\n\nAI ethics. The emerging field of AI ethics focuses on ensuring that applications of AI algorithms do not lead to morally-undesirable outcomes. At the moment, the field’s primary interest lies in near-term issues such as algorithmic bias and morally-relevant decisions made by self-driving cars. On their own, these topics are undoubtedly important. However, we believe the field’s potential is far greater. Indeed, the key insight is that many present-day issues are in fact “scaled-down” variants of more general problems. For example, the expected labour displacement caused by the arrival of self-driving cars foreshadows the general issue of what to do once human work becomes economically unprofitable. Once these interactions between near- and long-term issues become clearer, we can prioritize solving near-term problems in ways that put us into a better position to tackle the long-term challenges. Conversely, the community’s failure to acknowledge these connections could lead to the entrenchment of near-term solutions that make it much more difficult to adopt good solutions in the future. 
Since the AI ethics community is plausibly larger than the long-termist community, communication with this field seems to be of particularly high importance. We believe that the service-systems framework is superior to the Agent-like AGI framework for this communication (due to its potentially higher accessibility and credibility).\n\nNeuroscience, predictive processing, and, to some extent, psychology and sociology. To determine whether the impact of the service system is beneficial, we should improve our understanding of what humans “want” (in the many different meanings of the word). Moreover, humans have many cognitive biases and other limitations that sometimes make it impossible for them to take actions that are in their interest. By default, services tend to exploit these weaknesses whenever possible - not out of malice, but as a result of competitive pressures within the system. To prevent such exploitation by progressively stronger services, we need to understand human limitations and design appropriate safeguards. While many of our historical attempts to understand human wants and limitations come from the fields of psychology and sociology, we hope that a grounded understanding can come from neuroscience. Relatedly, some researchers believe that the predictive processing model is particularly relevant in this mission.\n\nEconomics. To understand the interactions between different services (and groups of services), we can draw inspiration from economics. To some extent, economics could also be useful directly, for modelling service systems; however, it is possible that the specifics of AI service systems will be sufficiently different to prevent a direct application. For example, AI service systems might revolve around currencies that behave differently from money (compute, access rights) and be subject to more regulation (or to a very different kind of regulation) than classical markets. However, we might nonetheless benefit a lot from the field’s tacit know-how for distilling real-world situations into models that can be analyzed formally and from its insight into topics such as the principal-agent problem or advertising.\n\nBehavioural economics. An important limitation of economics is that humans are often far from fully rational, and behave nothing like the idealized “homo economicus”. Indeed, it is likely fundamentally flawed to think of humans as even trying to optimize some utility function. Since the field of behavioural economics deals with precisely these complications, we believe we can draw upon it to gain a more realistic picture of interactions between the system of services and its human users.\n\nGame theory, multiagent systems, mechanism design. Once we have formally described situations of interest within the system, we can use tools from game theory and multiagent systems [20] to predict the likely outcomes. Conversely, we can use the knowledge of mechanism design to set up the system in a way that leads to beneficial outcomes. However, we should keep in mind that the main historical motivation behind these fields (perhaps except for multiagent systems) is the study of interactions between human actors. As a result, it might be necessary to ask slightly different questions from those that have been asked before and rethink the assumptions made when addressing them. To give one example, while humans often have trouble trusting each other’s promises (and justifiably so), AI services might be capable of making their commitments credible. When widespread, this new assumption would significantly shift the questions of interest in bargaining scenarios. (A toy illustration of this shift is sketched below.)
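\n\nAs a purely illustrative sketch (the discretized ultimatum game, the surplus of 10 units, and the thresholds are our own hypothetical choices, not something prescribed by the literature discussed here): without commitment, a rational responder accepts any positive offer, so the proposer keeps almost the entire surplus; if the responder can credibly pre-commit (say, because its source code and decision procedure are inspectable) to rejecting offers below a threshold, the proposer’s best response changes accordingly.\n\n```python\n# Toy discretized ultimatum game over a surplus of 10 units.\n# Without commitment: rejecting a positive offer is never rational ex post,\n# so the responder accepts any offer > 0 and the proposer offers the minimum.\n# With commitment: the responder credibly commits to a rejection threshold\n# in advance, and the proposer best-responds to that commitment.\nSURPLUS = 10\nOFFERS = range(SURPLUS + 1)  # amount offered to the responder\n\n\ndef proposer_payoff(offer, accepted):\n    return SURPLUS - offer if accepted else 0\n\n\ndef best_offer_without_commitment():\n    # The responder accepts any positive offer, so the proposer picks the\n    # smallest offer that is still accepted.\n    acceptable = [offer for offer in OFFERS if offer > 0]\n    return max(acceptable, key=lambda offer: proposer_payoff(offer, True))\n\n\ndef best_offer_against_commitment(threshold):\n    # The responder is committed to rejecting any offer below the threshold.\n    candidates = [(proposer_payoff(offer, offer >= threshold), offer)\n                  for offer in OFFERS]\n    return max(candidates)[1]\n\n\nprint('No commitment: proposer offers', best_offer_without_commitment())\nfor threshold in (3, 5, 8):\n    offer = best_offer_against_commitment(threshold)\n    print(f'Responder committed to threshold {threshold}: proposer offers {offer}')\n```\n\nThe specific numbers do not matter; the qualitative point is that an actor whose commitments are verifiable can secure outcomes that a merely verbal threat could not, which changes which equilibria a mechanism designer should expect and design for.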
\n\nDesign of operating systems. Understanding operating systems and their design seems relevant to service systems for two reasons. Firstly, since traditional computers, smartphones, and other similar devices are likely to remain a part of the global system of services for some time, knowledge of operating systems is likely to be relevant directly. Secondly, we can draw insight from analogies between large-scale service-systems and operating systems. This might help us identify service-system analogues of issues (and possible remedies) such as those with the management of access rights for different users, resource sharing, and security tools like firewalls and antivirus software. Moreover, operating systems can be used as a conceptual test-bed for proposed solutions to some problems with service-systems (e.g., those related to security services).\n\nCybersecurity. As service systems get progressively more automated, it is likely that cybersecurity will accordingly grow in importance. Similarly to the design of operating systems, we should expect cybersecurity to be relevant both through direct application and through analogies between present-day cybersecurity concepts and issues with more advanced systems.\n\nProcess algebras. Similarly to how lambda calculus can be used as an underlying formalism for computable functions and functional programming, process algebras [19] can be used to model processes that run in parallel. Consequently, they might be useful for formalizing systems of services that operate in the digital domain (i.e., many of them). A limitation of this approach is that the resulting description might be too “low-level” to be practically useful, and might not apply to services that aren’t fully automated.\n\nComplex systems. The analysis of complex systems [7] investigates how relationships between a system’s components give rise to its collective behaviours and how the system interacts and forms relationships with its environment. Since networks of interacting AI services might exhibit properties that would not be apparent from observation of their isolated components, they are amenable to this type of analysis. Complexity science might thus inform our decisions when designing the system, by predicting how the system’s parameters influence its structure and properties (such as resource expenditure or robustness to failures). (A minimal illustration of this kind of analysis is sketched below.)
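\n\nThe following snippet is such a minimal sketch (the random-graph model of a service network and all parameter values are our own hypothetical choices, used purely for illustration); it uses networkx to compare how well sparsely and densely connected random “service networks” keep functioning as randomly chosen services fail:\n\n```python\nimport random\n\nimport networkx as nx\n\n\ndef surviving_fraction(n_services, link_prob, failure_rate, seed=0):\n    # Fraction of services still in the largest connected component\n    # after a random subset of services has failed.\n    rng = random.Random(seed)\n    graph = nx.erdos_renyi_graph(n_services, link_prob, seed=seed)\n    failed = [node for node in graph.nodes if rng.random() < failure_rate]\n    graph.remove_nodes_from(failed)\n    if graph.number_of_nodes() == 0:\n        return 0.0\n    largest = max(nx.connected_components(graph), key=len)\n    return len(largest) / n_services\n\n\nfor link_prob in (0.02, 0.05, 0.10):  # sparser vs. denser service networks\n    fractions = [surviving_fraction(200, link_prob, failure_rate=0.3, seed=s)\n                 for s in range(20)]\n    average = sum(fractions) / len(fractions)\n    print(f'link probability {link_prob:.2f}: '\n          f'on average {average:.0%} of services remain connected')\n```\n\nEven this crude model makes the trade-off concrete: denser inter-service connectivity buys robustness to random failures, presumably at a higher resource cost - precisely the kind of parameter-versus-property relationship one would want to be able to predict before building (or modifying) the real system.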
\n\nAI forecasting. To ensure that research on AI risk focuses on areas that end up being relevant, we need to know which scenarios are likely to occur. Providing this clarity is one of the goals of the emerging field of AI forecasting [22]. To see how this field might interact with service-systems, note that AI is currently affecting the world through collections of AI services (rather than agent-like AIs). One way in which AI forecasting could be helpful is in determining how long this situation is likely to continue. Moreover, if AI keeps the form of services for a substantial amount of time, the interaction between services and AI forecasting might grow in importance. In such a world, we should make more service-centered forecasts (to better inform our decisions) and improve our understanding of service-systems (to enable better forecasts).\n\nNon-exhaustive list of other related topics. First, while organizational theory traditionally studies human institutions, it could likely offer a lot of insight into systems that include AI services.[[18]](#ftnt18) To understand how systems of AI services get developed, behave, and get maintained, we can look to the extensive knowledge accumulated in software engineering.[[19]](#ftnt19) Another problem where we expect to find pre-existing results is service specification - that is, describing the intended behaviour of a service such that no matter how it ends up being implemented, it will robustly and beneficially perform its intended purpose. This is likely relevant to formal verification (see, e.g., the [DeepSpec](https://www.google.com/url?q=https://deepspec.org/main&sa=D&source=editors&ust=1688249707366211&usg=AOvVaw3H753XaeAdSclkpjlmC2Jv) initiative). Note also that in some sense, service specification is precisely the goal of contracts created by law. As a result, we can look to this field for inspiration and intuitions for which problems to expect. Another area with a notable amount of existing literature is “service ecosystems” [8-18]. While the area’s usage of the term “service (eco)systems” is somewhat different from ours, some of the literature might be relevant. Finally, advances in computer science and AI give us access to tools such as blockchain, smart contracts, and zero-knowledge proofs. These methods could transform the dynamics of service systems, for example by serving as cooperation-enhancement tools. \n\n6. Research Suggestions\n=======================\n\nIn this section, we present our subjective ideas (Vojta’s in particular) on which steps might be useful to take towards making progress in the area of service systems. Most of the items below can also be viewed as bookmarks for “all the team-members from AISRP (Cara, Chris, Francisco, Jan, Tushant, Vojta) have some understanding of these ideas, and will probably be happy to discuss them”. Note that this doesn’t necessarily mean that they agree with the arguments, conclusions, or framing of the presented ideas.\n\nGetting Familiar with Further Relevant Ideas\n--------------------------------------------\n\nSince the present text cannot capture all the existing relevant ideas, we start by giving a few pointers to ideas that can make further investigation of AI services more effective.\n\nComprehensive AI services. An important resource is Drexler’s extensive technical report [1] on reframing superintelligence in terms of AI services. As we mentioned, our text describes systems of varying degrees of automation, while [1] focuses on systems that automate the whole R&D process, possibly soon resulting in comprehensive coverage of the task-space. We believe that studying this report can be useful for two reasons: First, taken as a whole, the report paints a coherent picture of how advanced AI-service systems could look in the future. It also lists some risks associated with such systems (see, e.g., Section 14). Second, the overall text identifies and informally states many important hypotheses about the nature of the AI systems, such as the claim that there will be no compelling incentives to replace comprehensive narrow services with an Agent-like AGI (Section 12). We believe it would be valuable to (a) map out the different hypotheses and assumptions made in the report and (b) formalize specific hypotheses and explore them further.\n\nAgency as an accidental by-product of optimization. Apart from discussions about the economic feasibility of narrow AI services (vs. 
general AI agents), a recent paper [3] by Hubinger et al. raises the concern of “mesa-optimization”[[20]](#ftnt20), i.e. that agency might also appear as a result of optimization pressures, somewhat akin to how human intelligence is a result of evolutionary pressures. This issue seems particularly relevant in the context of AI services, and we recommend reading the paper for more details.\n\nRadically-transformative AI. While artificial general intelligence certainly has the potential to have an extreme impact on humanity, even much narrower systems could already result in existential risk and other high-impact events. To get more clarity on this topic, we have found it helpful to get familiar with the terminology (and discussion) presented in [5].\n\nUndesirable stable equilibria. With agent-like superintelligent AI, the main concern is that the goal of such AI would conflict with humanity’s interests. The situation is different in the context of systems of AI services since services might often not have goals in any meaningful sense - they might “just do things”. However, the service-system paradigm has its own nemesis in the form of economic (or other) pressures perpetuating situations that nobody is happy with, but nobody can escape. An example of such a dynamic is the “[race to the bottom](https://www.google.com/url?q=https://en.wikipedia.org/wiki/Race_to_the_bottom&sa=D&source=editors&ust=1688249707368687&usg=AOvVaw2IK5uEOPefOr3pwxaOMG4F)” between states as they underbid each other on environmental policies to become more attractive to outside investors. To get an impression of a range (and the overwhelming magnitude) of other issues of this type, we refer the reader to a post [Meditations on Moloch](https://www.google.com/url?q=https://slatestarcodex.com/2014/07/30/meditations-on-moloch/&sa=D&source=editors&ust=1688249707369028&usg=AOvVaw3RopeTCH4aX_ss1yD8IgDQ) by Scott Alexander, or Yudkowsky’s (short and accessible) book Inadequate Equilibria. Somewhat related to this topic is the recent research agenda on [Cooperation, Conflict, and Transformative AI](https://www.google.com/url?q=https://www.alignmentforum.org/s/p947tK8CoBbdpPtyK&sa=D&source=editors&ust=1688249707369328&usg=AOvVaw3Uh-JUW_UzC4oWKK7w5xnF).\n\nMathematical modelling. When attempting to come up with useful mathematical models for systems of AI services, we kept going back to questions such as “What purpose is the model supposed to serve?” and “How should a good model even look like?”. While there are likely further resources unknown to us, the following series of posts on Artem Kaznatcheev’s blog was quite illuminating: [Methods and morals for mathematical modeling](https://www.google.com/url?q=https://egtheory.wordpress.com/2018/10/06/metamodel-linkdex/&sa=D&source=editors&ust=1688249707369854&usg=AOvVaw2CrjzH0H7bEtBBAALJ-IvV).\n\nPromising Topics for Investigation\n----------------------------------\n\nIn this subsection, we list some of the directions for future research that seem particularly important to us. (For a more extensive list of useful topics, recall that: (i) the “Problems” section contains many potential issues with service systems, (ii) the “Related Fields of Study” section lists many areas whose connection to service-system should be understood better, and (iii) the extensive technical report [1] should provide fertile ground for further investigation.)\n\nTools or agents? 
Many people seem to believe that while Agent-like AGI and “fully-automated comprehensive system of services” might have similar capabilities, there is some fundamental difference between the two types of AI. At the same time, there seems to be a general confusion around this topic. Some relevant questions are:\n\n* Can we find a framework in which these similarities and differences could be explained or dissolved?\n* In particular, does there perhaps exist a formalization of “agency” that can differentiate between the two?\n* If there are meaningful distinctions, how do they translate into what it means to “align” each type of AI?\n* How does the effectivity of narrow-purpose algorithms differ from the effectivity of general-purpose algorithms? Should we expect economic pressures towards generality (and maybe even Agent-like AGI)?\n\nClarifying the connection between near-term AI issues and X-risk. We believe that the link between near- and long-term AI issues is currently insufficiently understood and, as a result, underappreciated by the AI alignment community. For example, we would like to have clearer answers to questions such as: Which AI risks are associated with artificial general intelligence, and which are already present with advanced narrow systems? Which problems scale from the present-day AI systems to the superintelligent ones? For which of these problems is it the case that solving the near-term issue in the wrong way will make us much worse off when we face the superintelligent version later (due to, e.g., a lock-in of the bad approach)? Which “problems with AI” are in fact “merely” general problems with our society, magnified by the power of AI? Understanding these questions should make our discussions more effective, enable us to focus on problems that matter the most, and - we predict - diminish the tension between long-termist views and the more general AI community. (We recommend the recent paper [24] which clarifies some key concepts relevant to this topic and argues against viewing the near-term vs long-term axis as a “dichotomy”.)\n\nUtilizing the field of AI ethics. It is plausible that the research community around AI ethics is currently larger than the long-termist community in AI. As a result, it might be having a more substantial impact on shaping the trajectory of the whole AI field, potentially in ways that are sub-optimal from the long-termist perspective. At the same time, the stated goals of both communities are similar: ensuring that AI has a beneficial impact on society. We believe that a potentially high-impact action would be to clearly lay out the connections between AI alignment and AI ethics and bring them to the attention of the AI ethics community in the right way. However, if unsuccessful, the “high impact” of this endeavour could easily turn out to be a highly-negative one (due to effects such as the [unilateralist’s curse](https://www.google.com/url?q=https://concepts.effectivealtruism.org/concepts/unilateralists-curse/&sa=D&source=editors&ust=1688249707371705&usg=AOvVaw0-9oZZ-mjvqSofvIzKoiV3) and [idea inoculation](https://www.google.com/url?q=https://en.wikipedia.org/wiki/Inoculation_theory&sa=D&source=editors&ust=1688249707371950&usg=AOvVaw1mEa0bO0gOR_vVcgJcAirM)). As such, we also think that this task should only be attempted by people who are experienced at communicating ideas on this level and well-positioned to do so.\n\nReferences\n==========\n\n1. Drexler, K. Eric. \"Reframing Superintelligence.\" (2019).\n2. Bostrom, Nick. 
Superintelligence. Dunod, 2017.\n3. Hubinger, Evan, et al. \"Risks from Learned Optimization in Advanced Machine Learning Systems.\" arXiv preprint arXiv:1906.01820 (2019).\n4. Yudkowsky, Eliezer. Inadequate Equilibria: Where and How Civilizations Get Stuck. Machine Intelligence Research Institute, 2017.\n5. Gruetzemacher, Ross, and Jess Whittlestone. \"Defining and Unpacking Transformative AI.\" arXiv preprint arXiv:1912.00747 (2019).\n6. Simler, Kevin, and Robin Hanson. The elephant in the brain: Hidden motives in everyday life. Oxford University Press, 2017.\n7. Barabási, Albert-László. Network science. Cambridge university press, 2016.\n8. [Considerations on Modeling Service Ecosystems](https://www.google.com/url?q=https://www.ceeol.com/search/article-detail?id%3D668918&sa=D&source=editors&ust=1688249707373365&usg=AOvVaw0lGZd3o4N8C6K-PwK8LNZf) (2018)\n9. [The Application of a Service Ecosystems Lens to Public Policy Analysis and Design: Exploring the Frontiers](https://www.google.com/url?q=https://journals.sagepub.com/doi/10.1177/0743915618818566&sa=D&source=editors&ust=1688249707373767&usg=AOvVaw0m15ANLALtuZHZ1s0941bN) (2018)\n10. [Business modeling for service ecosystems](https://www.google.com/url?q=https://www.researchgate.net/publication/220884279_Business_modeling_for_service_ecosystems&sa=D&source=editors&ust=1688249707374094&usg=AOvVaw34vAvAjqhY8PKhiHg1TFRo) (2010)\n11. [Modeling Service Ecosystems Innovation](https://www.google.com/url?q=https://www.researchgate.net/publication/282495905_Modeling_Service_Ecosystems_Innovation&sa=D&source=editors&ust=1688249707374463&usg=AOvVaw2hfca7sU4vlTIHDhzpNf6B) (2015)\n12. [Handbook of Research on Service-Oriented Systems and Non-functional Properties: Future Directions](https://www.google.com/url?q=https://www.amazon.com/Handbook-Research-Service-Oriented-Non-functional-Properties/dp/1613504322?SubscriptionId%3DAKIAILSHYYTFIVPWUY6Q%26tag%3Dduckduckgo-brave-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3D1613504322&sa=D&source=editors&ust=1688249707374894&usg=AOvVaw0CJOjl4ygHG2e3dGr3seJS) (2011)\n13. [The User Perspective on Service Ecosystems: Key Concepts and Models](https://www.google.com/url?q=https://link.springer.com/chapter/10.1007/978-3-319-65151-4_34&sa=D&source=editors&ust=1688249707375259&usg=AOvVaw1qzaPgPC8kIBWMsCBS1VSJ) (2017)\n14. [A Service Description Method for Service Ecosystems - Meta Models, Modeling Notations, and Model Transformations](https://www.google.com/url?q=https://fis.uni-bamberg.de/handle/uniba/286&sa=D&source=editors&ust=1688249707375583&usg=AOvVaw1RfTvprZdnYO--LpOUZ6S7) (2011)\n15. [Business Modeling for Service Descriptions: A Meta Model and a UML Profile](https://www.google.com/url?q=https://www.researchgate.net/publication/221592291_Business_Modeling_for_Service_Descriptions_A_Meta_Model_and_a_UML_Profile&sa=D&source=editors&ust=1688249707375963&usg=AOvVaw1vnVq9y1tH7mM_HdFsOczJ) (2010)\n16. [IBM - Service-oriented modeling and architecture](https://www.google.com/url?q=https://www.ibm.com/developerworks/library/ws-soa-design1/index.html&sa=D&source=editors&ust=1688249707376337&usg=AOvVaw1gK8dqvMYTQbfFS4sk19gi) (2004)\n17. [Service Value Properties for Service Ecosystems: A Reference Model and a Modeling Guideline](https://www.google.com/url?q=https://www.researchgate.net/publication/247927245_Service_Value_Properties_for_Service_Ecosystems_A_Reference_Model_and_a_Modeling_Guideline&sa=D&source=editors&ust=1688249707376785&usg=AOvVaw1Db9x0thnZNsucZxRgSE3g) (2009)\n18. 
[Reflexive and Evolutional Digital Service Ecosystems with Models at Runtime](https://www.google.com/url?q=http://ceur-ws.org/Vol-2019/mrt_2.pdf&sa=D&source=editors&ust=1688249707377210&usg=AOvVaw1yoQd_RwbEB34hPKBHLpE9) (2017+)\n19. Introduction to Process Algebra, Baeten-Beek-Rooda ([link](https://www.google.com/url?q=http://mate.tue.nl/mate/pdfs/8509.pdf&sa=D&source=editors&ust=1688249707377564&usg=AOvVaw2ak_fy3b5TilohoXQsliGF))\n20. Shoham, Yoav, and Kevin Leyton-Brown. Multiagent systems: Algorithmic, game-theoretic, and logical foundations. Cambridge University Press, 2008.\n21. Dafoe, Allan. \"AI governance: A research agenda.\" Governance of AI Program, Future of Humanity Institute, University of Oxford: Oxford, UK (2018).\n22. Gruetzemacher, Ross. \"A Holistic Framework for Forecasting Transformative AI.\" Big Data and Cognitive Computing 3.3 (2019): 35.\n23. De Wolf, Ronald. \"The potential impact of quantum computers on society.\" Ethics and Information Technology 19.4 (2017): 271-276.\n24. Prunkl, Carina, and Jess Whittlestone. \"Beyond Near-and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society.\" arXiv preprint arXiv:2001.04335 (2020).\n25. Everitt, Tom, Gary Lea, and Marcus Hutter. \"AGI safety literature review.\" arXiv preprint arXiv:1805.01109 (2018).\n26. Amodei, Dario, et al. \"Concrete problems in AI safety.\" arXiv preprint arXiv:1606.06565 (2016).\n27. Silver, David, et al. \"A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play.\" Science 362.6419 (2018): 1140-1144.\n28. Garrabrant, Scott, et al. \"Logical induction.\" arXiv preprint arXiv:1609.03543 (2016).\n29. Garfinkel, Ben, and Allan Dafoe. \"How does the offense-defense balance scale?.\" Journal of Strategic Studies 42.6 (2019): 736-763.\n30. Intelligence, Artificial. \"Rich E., Knight K.\" (1991).\n\n\n\n---\n\n[[1]](#ftnt_ref1)  We frequently need to distinguish between (i) AI that takes the form of a collection of AI services and (ii) monolithic AI associated with utility functions, instrumental goals, etc. While this distinction might ultimately turn out to be too simplistic, we believe it is nevertheless useful as a first approximation. We use the terms “system of AI services” and “agent-like” AI to point to the corresponding clusters of AI.\n\n[[2]](#ftnt_ref2) Apart from services, an important role in our society is played by institutions and companies. For the purpose of this document, we view these as “clusters” of services connected in a particular manner.\n\n[[3]](#ftnt_ref3) Suggestions for a more suitable definition are welcome.\n\n[[4]](#ftnt_ref4) It is crucial to note that many services have important functions that are distinct from their “official” tasks. For example, if a part of my reason for visiting the coffee shop is to socialize with the waiter, I might stop coming if the waiter is replaced by a robot. As Simler and Hanson point out in [6], these “secondary” functions might be particularly difficult to notice when acknowledging them would put us into a bad light. Indeed, I might be reluctant to admit that I only frequent the coffee shop to flirt with the waiter. Similarly employers might be uncomfortable admitting that one (among many) benefit of having formal education is a hard-to-fake certificate of being willing to perform routine tasks that one considers mostly pointless. Ultimately, whether we approve of these secondary functions or not, we might soon see a large degree of automation. 
As a result, it seems unwise to ignore these functions or, worse yet, remain blind to them.\n\n[[5]](#ftnt_ref5) The above definition can be viewed as solving an important general problem of defining “a capability of an algorithm”. Arguably, the definition does not so much solve the problem as transforms it into a different problem to be solved elsewhere (“define a partial ordering on task-solutions in the given domain”). However, in this particular case, this “delegation trick” is extremely useful since most domains do naturally come with some means of comparing solutions.\n\n[[6]](#ftnt_ref6) Then again, maybe they will not.\n\n[[7]](#ftnt_ref7) And if the borders aren’t closed...\n\n[[8]](#ftnt_ref8) In the author’s opinion, it is a useful “mental move” to sometimes consider scenarios that are far-off or unlikely, or even impossible. One of the benefits is that this allows us to identify hidden assumptions and intuitions. For example, my intuitions might say to not expect global chaos ensue as AI technology becomes more capable and wide-spread. At the same time, consider the thought-experiment of immediately turning everybody into a wizard from J.K.Rowling’s books (as impossible as it is given our laws of physics). My intuitions say that in this scenario, the world probably would erupt into chaos. Noting this, I can ask why the two versions of “increased capabilities” are different, and start examining my intuitions about the former to see whether they are justified.\n\n[[9]](#ftnt_ref9) In this terminology, both the stereotypical images of monolithic AGI and comprehensive AI services (CAIS) of [1] can be viewed as fully-automated systems that encompass most of the tasks currently performed by humans. The distinction is that CAIS consists of many narrow services, while the monolithic AGI consists of a single general-purpose service.\n\n[[10]](#ftnt_ref10) Technically, long-term planning should fall under governance and system design. We list it separately since it is quite possible to deal with the more immediate concerns of policy-making and yet fail at the long-term challenges (“Let’s have the guys after us deal with the pension reform.”).\n\n[[11]](#ftnt_ref11) This list isn’t meant to be complete, mutually exclusive, nor final. For example, the item (4) should be further split depending on whether we are more interested in formal proofs or simulation-based arguments, the suitability of a model for (1) will vary depending on the target audience, etc.\n\n[[12]](#ftnt_ref12) There are certainly exceptions, some of which include game theory, multiagent systems, and cybersecurity.\n\n[[13]](#ftnt_ref13) In a vague analogy to how cells are made of smaller components, some of which previously existed independently, a world-model could hypothetically join up with a planner, a learning module, and some sensors and actuators, and thus create an agent.\n\n[[14]](#ftnt_ref14) We only mean to imply that each of the settings is naturally associated with a specific [stance](https://www.google.com/url?q=https://en.wikipedia.org/wiki/Intentional_stance&sa=D&source=editors&ust=1688249707381787&usg=AOvVaw0WsKqeaIGFzGPM-oFCjdRE) (intentional vs design), not that any of the settings makes the problem easier to solve than the other.\n\n[[15]](#ftnt_ref15) The case of native Americans is meant to illustrate the impact that change of environment can have on society. 
We do not mean to suggest that alcohol was the primary problem here.\n\n[[16]](#ftnt_ref16) This does not contradict the possibility of those steps eventually becoming bigger or happening in rapid succession, nor the possibility of the creation of “monolithic” AGI in the middle of this process.\n\n[[17]](#ftnt_ref17) Hoare, Tony (25 August 2009). \"Null References: The Billion Dollar Mistake\". InfoQ.com.\n\n[[18]](#ftnt_ref18) Organizational theory offers many relevant concepts such as authority and delegation, monitoring outputs and audits, and hiring employees and terminating contracts.\n\n[[19]](#ftnt_ref19) Software engineering deals with many relevant topics such as architecture choice, communication, and bug detection and removal.\n\n[[20]](#ftnt_ref20) It originally came as a surprise to the author of this text that, rather than being a typo, the prefix “mesa-” comes from Greek, and broadly has the [opposite meaning to “meta”](https://www.google.com/url?q=https://www.gwiznlp.com/wp-content/uploads/2014/08/Whats-the-opposite-of-meta.pdf&sa=D&source=editors&ust=1688249707380353&usg=AOvVaw0ONmTw3mfk8lXPQARCmXTc).", "url": "https://docs.google.com/document/d/1SYgvWBe1ruDl9dQnxmjll-8COUHPycGOlLvTI68xtLA/edit?pli=1&usp=embed_facebook", "title": "AI Services: Introduction v1.3", "source": "html_articles", "source_type": "manuscript", "source_filetype": "pdf", "date_published": "2020-03-30T22:00:00Z", "authors": ["Vojta Kovarik"], "summary": [], "id": "1660f086bed7026e0d4a6e0f17648a49"} {"text": "![](http://gcrinstitute.org/wp-content/uploads/2021/11/Plant-Protection-Drone-Dji-Uav-Farmland-Agriculture-4204798-1024x609.jpg)\n[View the paper “The Ethics of Sustainability for Artificial Intelligence”](https://gcrinstitute.org/papers/060_sustainability-ai.pdf)\n\n\n[Access the data used in the paper.](https://gcrinstitute.org/papers/060_sustainability-ai-data.xlsx)\n\n\nAI technology can have significant effects on domains associated with sustainability, such as certain aspects of human society and the natural environment. Sustainability itself is widely regarded as a good thing, including in recent initiatives on AI and sustainability. There is therefore a role for ethical analysis to clarify what is meant by sustainability and the ways in which sustainability in the context of AI might or might not be good. 
This paper provides a foundational ethical analysis of sustainability for AI, describes the ethical basis of the existing body of work on AI and sustainability, and presents an argument for a specific ethical view on AI and sustainability. The paper is part of the conference [AI for People: Towards Sustainable AI, CAIP’21](https://aiforpeople.org/conference).\n\n\nAs the paper explains, sustainability is not an inherently ethical concept. “Sustainability” simply refers to the ability of something to continue over time; the thing to be sustained can be good, bad, or neutral. Common usage of the term “sustainability” assumes that the thing to be sustained is some combination of social and ecological systems. The term is sometimes also used in other ways, such as to refer to the sustainability of a business or organization, or the sustainability of an AI system. The paper argues that usage of the term “sustainability” should address three ethics questions. First, what should be able to be sustained, and why? Second, for how long should it be able to be sustained? Third, how much effort should be made for sustainability?\n\n\nThe paper further distinguishes between sustainability and optimization. Making something sustainable means giving it the potential to continue existing in at least some minimal form. In contrast, optimizing something means putting it in the best form that it can have. Therefore, sustainability may be considered a basic minimum standard of conduct toward future time periods, whereas optimization may be considered a more substantial goal. In common usage, sustainability is treated as a good thing, but it may be better understood as a not-terrible thing. If human civilization has to focus on sustaining itself rather than on loftier goals like optimization, then it is in a very bad situation.\n\n\nWith this theoretical perspective in place, the paper surveys prior work on AI and sustainability. It examines published sets of AI ethics principles and academic research on AI and sustainability. The paper finds that most work on AI and sustainability focuses on common conceptions of environmental sustainability, although some work has been done on the sustainability of AI systems and other things. Additionally, most work is ultimately oriented toward sustaining human populations, with AI and the environment having value insofar as they support human populations. Finally, most work lacks well-specified the ethical foundations, with no clear answers to the three questions listed above.\n\n\nThe paper then provides its own answers to the three questions. First, it argues for sustaining both humans and nonhumans. Second, it argues for sustainability over long time scales, including the astronomically distant future. Third, it argues for a large amount of effort toward sustainability. It additionally calls for emphasizing optimization over sustainability in cases where the two diverge.\n\n\nFinally, the paper presents implications for AI. One is that AI should be used to improve long-term sustainability and optimization, such as by reducing global catastrophic risk. Another is that attention should be paid to long-term forms of AI, which could be particularly consequential for long-term sustainability and optimization. 
These AI topics only partial overlap with what is typically considered within the realm of AI and sustainability, but the paper argues that these topics are a more appropriate focus for work on AI and sustainability.\n\n\nThe paper extends GCRI’s research on AI ethics, especially the papers [Moral consideration of nonhumans in the ethics of artificial intelligence](https://gcrinstitute.org/moral-consideration-of-nonhumans-in-the-ethics-of-artificial-intelligence) and [Reconciliation between factions focused on near-term and long-term artificial intelligence](https://gcrinstitute.org/reconciliation-between-factions-focused-on-near-term-and-long-term-artificial-intelligence). It additionally builds on GCRI’s research on sustainability and environmental risks, especially [Integrating the planetary boundaries and global catastrophic risk paradigms](https://gcrinstitute.org/integrating-the-planetary-boundaries-and-global-catastrophic-risk-paradigms).\n\n\nThis paper has also been [summarized](https://montrealethics.ai/the-ethics-of-sustainability-for-artificial-intelligence/) in the [AI Ethics Brief #85](https://brief.montrealethics.ai/p/queer-china-sustainability-unesco-ai-ethics) of the Montreal AI Ethics Institute and is included in the 2022 [The State of AI Ethics Report](https://montrealethics.ai/volume6/). The paper is also discussed in the MEDIUM article [“Is 2022 the Year that AI Ethics Takes Sustainability Seriously?”](https://josh-gellers.medium.com/is-2022-the-year-that-ai-ethics-takes-sustainability-seriously-8a10953105e9). \n\n\nAcademic citation: \nOwe, Andrea and Seth D. Baum, 2021. [The ethics of sustainability for artificial intelligence](https://gcrinstitute.org/papers/060_sustainability-ai.pdf). In Philipp Wicke, Marta Ziosi, João Miguel Cunha, and Angelo Trotta (Editors), *Proceedings of the 1st International Conference on AI for People: Towards Sustainable AI (CAIP 2021),*Bologna, pages 1-17, [DOI 10.4108/eai.20-11-2021.2314105](http://dx.doi.org/10.4108/eai.20-11-2021.2314105). \n\n\n*Image credit:* [*Max Pixel*](https://www.maxpixel.net/Plant-Protection-Drone-Dji-Uav-Farmland-Agriculture-4204798)\n\n\n\n\n Tagged with [artificial intelligence](https://gcrinstitute.org/tag/artificial-intelligence/), [ethics](https://gcrinstitute.org/tag/ethics/)", "url": "https://gcrinstitute.org/the-ethics-of-sustainability-for-artificial-intelligence/", "title": "The Ethics of Sustainability for Artificial Intelligence", "source": "html_articles", "source_type": "conferencePaper", "source_filetype": "pdf", "date_published": "2020-12-31T23:00:00Z", "authors": ["Andrea Owe", "Seth Baum"], "summary": [], "id": "a91555b385a6681385b737c957b6c8b4"} {"text": "Throughout my studies in alignment and AI-related existential risks, I’ve found it helpful to build a mental map of the field and how its various questions and considerations interrelate, so that when I read a new paper, a post on the [Alignment Forum](https://www.alignmentforum.org/), or similar material, I have some idea of how it might contribute to the overall goal of making our deployment of AI technology go as well as possible for humanity. I’m writing this post to communicate what I’ve learned through this process, in order to help others trying to build their own mental maps and provide them with links to relevant resources for further, more detailed information. 
This post was largely inspired by (and would not be possible without) [two](https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38) [talks](https://www.youtube.com/watch?v=AMSKIDEbjLY) by Paul Christiano and Rohin Shah, respectively, that give very similar overviews of the field,[1](#fn:1) as well as a few posts on the Alignment Forum that will be discussed below. This post is not intended to replace these talks but is instead an attempt to coherently integrate their ideas with ideas from other sources attempting to clarify various aspects of the field. You should nonetheless watch these presentations and read some of the resources provided below if you’re trying to build your mental map as completely as possible.\n\n(**Primer**: If you’re not already convinced of the possibility that advanced AI could represent an existential threat to humanity, it may be hard to understand the motivation for much of the following discussion. In this case, a good starting point might be Richard Ngo’s sequence [AGI Safety from First Principles](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ) on the Alignment Forum, which makes the case for taking these issues seriously without taking any previous claims for granted. Others in the field might make the case differently or be motivated by different considerations,[2](#fn:2) but this still provides a good starting point for newcomers.)\n\n### Clarifying the objective\n\nFirst, I feel it is important to note that both the scope of the discussion and the relative importance of different research areas change somewhat depending on whether our high-level objective is “reduce or eliminate AI-related existential risks” or “ensure the best possible outcome for humanity as it deploys AI technology.” Of course, most people thinking about AI-related existential risks are probably doing so because they care about ensuring a good long-term future for humanity, but the point remains that avoiding extinction is a necessary but not sufficient condition for humanity being able to flourish in the long term.\n\n[Paul Christiano’s roadmap](https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38), as well as the one I have adapted from Paul’s for this post in an attempt to include some ideas from other sources, have “make AI go well” as the top-level goal, and of course, technical research on ensuring existential safety will be necessary in order to achieve this goal. However, some other research areas under this heading, such as “make AI competent,” arguably contribute more to existential risk than to existential safety, despite remaining necessary for ensuring the most beneficial overall outcomes. (To see this, consider that AI systems below a certain level of competence, such as current machine learning systems, pose no existential threat at all, and that with increasing competence comes increasing risk in the case of that competence being applied in undesirable ways.) 
I want to credit Andrew Critch and David Krueger’s paper [AI Research Considerations for Human Existential Safety (ARCHES)](https://arxiv.org/abs/2006.04948) for hammering this point home for me (see also the [blog post](https://jbkjr.com/posts/2020/10/better_terminology_for_ai_x_risks/) I wrote about ARCHES).\n\n### The map\n\nThe rest of this post will discuss various aspects of this diagram and its contents:\n\n![make-ai-go-well-map](/images/mapping_territory/map.png)\n\nI have to strongly stress that this is only marginally different from Paul’s original breakdown (the highlighted boxes are where he spends most of his time):\n\n![paul-map](/images/mapping_territory/paul_map.png) ([source](https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38))\n\nIn fact, I include Paul’s tree here because it is informative to consider where I chose to make small edits to it in an attempt to include some other perspectives, as well as clarify terminological or conceptual distinctions that are needed to understand some smaller but important details of these perspectives. Clearly, though, this post would not be possible without Paul’s insightful original categorizations.\n\nIt might be helpful to have these diagrams pulled up separately while reading this post, in order to zoom as needed and to avoid having to scroll up and down while reading the discussion below.\n\n### Competence\n\nI mostly mention the competence node here to note that depending how terms are defined, “capability robustness” (performing robustly in environments or on distributions different from those an algorithm was trained or tested in) is arguably a necessary ingredient for solving the “alignment problem” ~in full~, but more on this later. In the end, I don’t think there’s too much consequence to factoring it like Paul and I have; to “make AI go well,” our AI systems will need to be trying not to act against our interests and do so robustly in a myriad of unforeseeable situations.\n\n(Also, remember that while competence is necessary for AI to go as well as possible, this is generally not the most differentially useful research area for contributing to this goal, since the vast majority of AI and ML research is already focused on increasing the capabilities of systems.)\n\n### Coping with impacts\n\nAnother area that is mostly outside the scope of our discussion here but still deserves mentioning is what Paul labels “cope with impacts of AI,” which would largely fall under the typical heading of AI “policy” or “governance” (although some other parts of this diagram might also typically count as “governance,” such as those under the “pay alignment tax” node). Obviously, good governance and policies will be critical, both to avoiding existential risks from AI and to achieving best possible outcomes, but much of my focus is on technical work aimed at developing what the Center for Human-Compatible Artificial Intelligence at Berkeley calls “provably beneficial systems,” as well as systems that reliably avoid bad behavior.\n\n### Deconfusion research\n\nI added this node to the graph because I believe it represents an important area of research in the project of making AI go well. What is “deconfusion research”? As far as I’m aware, the term comes from [MIRI’s](https://intelligence.org/) [2018 Research Agenda blog post](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section2). 
As Nate Soares (the author of the post) puts it, “By deconfusion, I mean something like ‘making it so that you can think about a given topic without continuously accidentally spouting nonsense.’” [Adam Shimi explains](https://www.alignmentforum.org/posts/q9BmNh35xgXPRgJhm/why-you-should-care-about-goal-directedness): “it captures the process of making a concept clear and explicit enough to have meaningful discussions about it.” This type of research corresponds to the “What even is going on with AGI?” research category Rohin discusses in [his talk](https://www.youtube.com/watch?v=AMSKIDEbjLY). Solutions to problems in this category will not directly enable us to build provably beneficial systems or reliably avoid existential risk but instead aim to resolve confusion around the underlying concepts themselves, in order for us to then be able to meaningfully address the “real” problem of making AI go well. As Nate writes on behalf of MIRI:\n\n\n> From our perspective, the point of working on these kinds of problems isn’t that solutions directly tell us how to build well-aligned AGI systems. Instead, the point is to resolve confusions we have around ideas like “alignment” and “AGI,” so that future AGI developers have an unobstructed view of the problem. Eliezer illustrates this idea in “[The Rocket Alignment Problem](https://intelligence.org/2018/10/03/rocket-alignment/),” which imagines a world where humanity tries to land on the Moon before it understands Newtonian mechanics or calculus.\n> \n> \n\nResearch in this category includes MIRI’s [Agent Foundations Agenda](https://intelligence.org/files/TechnicalAgenda.pdf) (and their work on [embedded agency](https://intelligence.org/files/TechnicalAgenda.pdf)), Eric Drexler’s work on [Comprehensive AI Services (CAIS)](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf), which considers increased automation of bounded services as a potential path to AGI that doesn’t require building opaquely intelligent agents with a capacity for self-modification, Adam Shimi’s [work](https://www.alignmentforum.org/s/DTnoFhDm7ZT2ecJMw) on [understanding goal directedness](https://www.alignmentforum.org/s/o58ZMNaovdztbLfvN), MIRI/Evan Hubinger’s work on [mesa-optimization](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB) and [inner alignment](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology), and David Krueger and Andrew Critch’s attempt to deconfuse topics surrounding existential risk, prepotent AI systems, and delegation scenarios in [ARCHES](https://arxiv.org/abs/2006.04948). I won’t go into any of this work in depth here (except for more on mesa-optimization and inner alignment later), but all of it is worth looking into as you build up a picture of what’s going on in the field.\n\nThis post, the talks by Christiano and Shah by which it was inspired, and many of the clarifying posts from the Alignment Forum linked to throughout this post were also created with at least some degree of deconfusional intent. I found [this post](https://www.alignmentforum.org/posts/mJ5oNYnkYrd4sD5uE/clarifying-some-key-hypotheses-in-ai-alignment) on clarifying some key hypotheses helpful in teasing apart various assumptions made in different areas and between groups of people with different perspectives. I also think Jacob Steinhardt’s [AI Alignment Research Overview](https://docs.google.com/document/d/1FbTuRvC4TFWzGYerTKpBU7FJlyvjeOvVYF2uYNFSlOc/edit) is worth mentioning here. 
It has a somewhat different flavor from, and covers somewhat different topics than, this/Paul’s/Rohin’s overview but still goes into a breadth of topics with some depth.\n\n### Delegation\n\nThis was another small distinction I believed was important to make in adapting Paul’s factorization of problems for this post. As proposed by Andrew Critch and David Krueger in [ARCHES](https://arxiv.org/abs/2006.04948), and as I discussed in my [blog post](https://jbkjr.com/posts/2020/10/better_terminology_for_ai_x_risks/) about ARCHES, the concept of “delegation” might be a better and strictly more general concept than “alignment.” Delegation applies naturally to this situation: humans can delegate responsibility for some task they want accomplished to one or more AI systems, and doing so successfully clearly involves the systems at least trying to accomplish these tasks in the way we intend (“intent alignment,” more on this soon). However, “alignment,” as typically framed for technical clarity, is about aligning the values or behavior of a single AI system with a single human.[3](#fn:3) It is not particularly clear what it would mean for multiple AI systems to be “aligned” with multiple humans, but it is at least somewhat clearer what it might mean for a group of humans to successfully delegate responsibility to a group of AI systems, considering we have some sense of what it means for groups of humans to successfully delegate to other groups of humans (e.g. through organizations). Within this framework, “alignment” can be seen as a special case of delegation, what Critch and Krueger call “single/single” delegation (delegation from one human to one AI system). See below (“Single/single delegation (alignment)”) for more nuance on this point, however. I believe this concept largely correlates with Shah’s “Helpful AGI” categorization in his [overview talk](https://www.youtube.com/watch?v=AMSKIDEbjLY); successful delegation certainly depends in part on the systems we delegate to being helpful (or, at minimum, trying to be).\n\n### Delegation involving multiple stakeholders and/or AIs\n\nOne of the reasons ARCHES makes the deliberate point of distinguishing alignment as a special case of delegation is to show that solving alignment/successfully delegating from one user to one system is insufficient for addressing AI-related existential risks (and, by extension, for making AI go well). Risk-inducing externalities arising from the interaction of individually-aligned systems can still pose a threat and must be addressed by figuring out how to successfully delegate in situations involving multiple stakeholders and/or multiple AI systems. This is the main reason I chose to make Paul’s “alignment” subtree a special case of delegation more generally. I won’t go into too much more detail about these “multi-” situations here, partially because there’s not a substantial amount of existing work to be discussed. However, it is worth looking at [ARCHES](https://arxiv.org/abs/2006.04948), as well as [this blog post](https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1) by Andrew Critch and my own [blog post](https://jbkjr.com/posts/2020/10/better_terminology_for_ai_x_risks/) summarizing ARCHES, for further discussion and pointers to related material.\n\nI would be interested to know to what extent Christiano thinks this distinction is or is not helpful in understanding the issues and contributing to the goal of making AI go well. 
It is clear by his own diagram that “making AI aligned” is not sufficient for this goal, and he says as much in [this comment](https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1?commentId=DKMkszP4qY9ESbaT7) in response to the aforementioned blog post by Critch: “I totally agree that there are many important problems in the world even if we can align AI.” But the rest of that comment also seems to somewhat question the necessity of separately addressing the multi/multi case before having a solution for the single/single case, if there might be some “‘default’ ways” of approaching the multi/multi case once armed with a solution to the single/single case. To me, this seems like a disagreement on the differential importance between research areas rather than a fundamental difference about the underlying concepts in principle, but I would be interested in more discussion on this point from the relevant parties. And it is nonetheless possible that solving single/single delegation or being able to align individual systems and users could be a necessary prerequisite to solving the multi- cases, even if we can begin to probe the more general questions without a solution for the single/single case.\n\n\n> (**ETA 12/30/20**: Rohin graciously gave me some feedback on this post and had the following to say on this point)\n> \n> I’m not Paul, but I think we have similar views on this topic – the basic thrust is:\n> \n> 1. Yes, single-single alignment does not guarantee that AI goes well; there are all sorts of other issues that can arise (which ARCHES highlights).\n> 2. We’re focusing on single-single alignment because it’s a particularly crisp technical problem that seems amenable to technical work in advance – you don’t have to reason about what governments will or won’t do, or worry about how people’s attitudes towards AI will change in the future. You are training an AI system in some environment, and you want to make sure the resulting AI system isn’t trying to hurt you. This is a more “timeless” problem that doesn’t depend as much on specific facts about e.g. the current political climate.\n> 3. A single-single solution seems very helpful for multi-multi alignment; if you care about e.g. fairness for the multi-multi case, it would really help if you had a method of building an AI system that aims for the human conception of fairness (which is what the type of single-single alignment that I work on can hopefully do).\n> 4. The aspects of multi-multi work that aren’t accounted for by single-single work seem better handled by existing institutions like governments, courts, police, antitrust, etc rather than technical research. Given that I have a huge comparative advantage at technical work, that’s what I should be doing. 
It is still obviously important to work on the multi-multi stuff, and I am very supportive of people doing this (typically under the banner of AI governance, as you note).\n> \n> (In Paul’s diagram, the multi-multi stuff goes under the “cope with the impacts of AI” bucket.)\n> \n> I suspect Critch would disagree most with point 4 and I’m not totally sure why.\n> \n> \n\n### Single/single delegation (alignment)\n\nIt’s important to make clear what we mean by “alignment” and “single/single delegation” in our discussions, since there are a number of related but distinct formulations of this concept that are important to disambiguate in order to bridge [inferential gaps](https://www.readthesequences.com/Expecting-Short-Inferential-Distances), combat the [illusion of transparency](https://www.readthesequences.com/Illusion-Of-Transparency-Why-No-One-Understands-You), and [deconfuse](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section2) the concept. Perhaps the best starting point for this discussion is David Krueger’s [post on disambiguating “alignment”](https://www.alignmentforum.org/posts/FTpPC4umEiREZMMRu/disambiguating-alignment-and-related-notions-1), where he distinguishes between several variations of the concept:\n\n* **Holistic alignment**: “*Agent R is **holistically aligned** with agent H iff R and H have the same terminal values*. This is the ‘traditional AI safety (TAIS)’ (as exemplified by Superintelligence) notion of alignment, and the TAIS view is roughly: ‘a superintelligent AI (ASI) that is not holistically aligned is an Xrisk’; this view is supported by [the instrumental convergence thesis](https://en.wikipedia.org/wiki/Instrumental_convergence#Instrumental_convergence_thesis).”\n* **Parochial alignment**: “I’m lacking a satisfyingly crisp definition of parochial alignment, but intuitively, it refers to how you’d want a ‘[genie](https://arbital.com/p/task_agi/)’ to behave: *R is **parochially aligned** with agent H and task T iff R’s terminal values are to accomplish T in accordance to H’s preferences over the intended task domain*… parochially aligned ASI is not safe by default (it might [paperclip](https://wiki.lesswrong.com/wiki/Paperclip_maximizer)), but it might be possible to make one safe using various capability control mechanisms”\n* **Sufficient alignment**: “*R is **sufficiently aligned** with H iff optimizing R’s terminal values would not induce a nontrivial Xrisk (according to H’s definition of Xrisk)*. For example, an AI whose terminal values are ‘maintain meaningful human control over the future’ is plausibly sufficiently aligned. It’s worth considering what might constitute sufficient alignment short of holistic alignment. For instance, [Paul seems to argue that corrigible agents are sufficiently aligned](https://ai-alignment.com/corrigibility-3039e668638).”\n* **Intent alignment** (Paul Christiano’s [version of alignment](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6)): “*R is **intentionally aligned** with H if R is trying to do what H wants it to do*.”\n* “Paul also talks about [benign AI](https://ai-alignment.com/benign-ai-e4eb6ec6d68e) which is about what an AI is optimized for (which is closely related to what it ‘values’). 
Inspired by this, I’ll define a complementary notion to Paul’s notion of alignment: *R is **benigned** with H if R is not actively trying to do something that H doesn’t want it to do*.”\n\nEach of these deserves attention, but let’s zoom in on intent alignment, as it is the version of alignment that Paul uses in his map and that he seeks to address with his research. First, I want to point out that each of Krueger’s definitions pertains only to agents. However, I think we still want a definition of alignment that can apply to non-agential AI systems, since it is an open question whether the first AGI will be agentive. [Comprehensive AI Services (CAIS)](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf) explicitly pushes back against this notion, and [ARCHES](https://arxiv.org/abs/2006.04948) frames its discussion around AI “systems” in order to be “intentionally general and agent-agnostic.” (See also [this post](https://www.alignmentforum.org/posts/mJ5oNYnkYrd4sD5uE/clarifying-some-key-hypotheses-in-ai-alignment) on clarifying some key hypotheses for more on this point.) It is clear that we want to have some notion of alignment that applies just as well to AI systems that are not agents or agent-like. In fact, [Paul’s original definition](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6) does not seem to explicitly rely on agency:\n\n\n> When I say an AI A is *aligned with* an operator H, I mean:\n> \n> *A is trying to do what H wants it to do.*\n> \n> \n\nAnother characterization of intent alignment [comes from Evan Hubinger](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology): “An agent is [intent aligned](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6) if its [behavioral objective](https://intelligence.org/learned-optimization/#glossary)[4](#fn:4) is aligned with humans” (presumably he means “aligned” in this same sense: that its behavioral objective incentivizes trying to do what we want). I like that this definition uses the more technically clear notion of a behavioral objective because it allows the concept to be placed more precisely in a framework with outer and inner alignment (more on this later), but I still wish it did not depend on a notion of agency like Krueger’s definitions do. Additionally, all of these definitions lack the formal rigor that we need if we want to be able to “use mathematics to formally verify if a proposed alignment mechanism would achieve alignment,” as noted by [this sequence](https://www.alignmentforum.org/s/sv2CwqTCso8wDdmmi) on the Alignment Forum. David Krueger makes a similar point in his post, writing, “Although it feels intuitive, I’m not satisfied with the crispness of this definition [of intent alignment], since we don’t have a good way of determining a black box system’s intentions. We can apply [the intentional stance](https://en.wikipedia.org/wiki/Intentional_stance), but that doesn’t provide a clear way of dealing with irrationality.” And Paul himself makes very similar points in his [original post](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6):\n\n* “This definition of ‘alignment’ is extremely imprecise. I expect it to correspond to some more precise concept that cleaves reality at the joints. 
But that might not become clear, one way or the other, until we’ve made significant progress.”\n* “One reason the definition is imprecise is that it’s unclear how to apply the concepts of ‘intention,’ ‘incentive,’ or ‘motive’ to an AI system. One naive approach would be to equate the incentives of an ML system with the objective it was optimized for, but this seems to be a mistake. For example, humans are optimized for reproductive fitness, but it is wrong to say that a human is incentivized to maximize reproductive fitness.”[5](#fn:5)\n\nAll of these considerations indicate that intent alignment is itself a concept in need of [deconfusion](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section2), perhaps to avoid a reliance on agency, to make the notion of “intent” for AI systems more rigorous, and/or for other reasons entirely.\n\nLeaving this need aside for the moment, there are a few characteristics of the “intent alignment” formulation of alignment that are worth mentioning. The most important point to emphasize is that an intent-aligned system is *trying* to do what its operator wants it to, and not necessarily *actually* doing what its operator wants it to do. This allows competence/capabilities to be factored out as a separate problem from (intent) alignment; an intent-aligned system might make mistakes (for example, by misunderstanding an instruction or by misunderstanding what its operator wants[6](#fn:6)), but as long as it is *trying* to do what its operator wants, the hope is that catastrophic outcomes can be avoided with a relatively limited amount of understanding/competence. However, if we instead define “alignment” only as a function of what the AI actually does, an aligned system would need to be both trying to do the right thing *and actually accomplishing this objective with competence*. As Paul says in his [overview presentation](https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38), “in some sense, [intent alignment] might be the minimal thing you want out of your AI: at least it is trying.” This highlights why intent alignment might be an instrumentally more useful concept for working on making AI go well: while the (much) stronger condition of holistic alignment would almost definitionally guarantee that a holistically aligned system will not induce existential risks by its own behavior, it seems much harder to verify that a system and a human share the same terminal values than to verify that a system is trying to do what the human wants.\n\nIt’s worth mentioning here the concept of [corrigibility](https://ai-alignment.com/corrigibility-3039e668638). The [page on Arbital](https://arbital.com/p/corrigibility/) provides a good definition:\n\n\n> A ‘corrigible’ agent is one that [doesn’t interfere](https://arbital.com/p/nonadversarial/) with what [we](https://arbital.com/p/value_alignment_programmer/) would intuitively see as attempts to ‘correct’ the agent, or ‘correct’ our mistakes in building it; and permits these ‘corrections’ despite the apparent [instrumentally convergent](https://arbital.com/p/instrumental_convergence/) reasoning saying otherwise.\n> \n> \n\nThis intuitively feels like a property we might like the AI systems we build to have as they get more powerful. In [his post](https://ai-alignment.com/corrigibility-3039e668638), Paul argues:\n\n\n> 1. 
A [benign](https://ai-alignment.com/benign-ai-e4eb6ec6d68e) [act-based](https://ai-alignment.com/act-based-agents-8ec926c79e9c) agent will be robustly corrigible if we want it to be.\n> 2. A sufficiently corrigible agent will tend to become more corrigible and benign over time. Corrigibility marks out a broad basin of attraction towards acceptable outcomes.\n> \n> As a consequence, we shouldn’t think about alignment as a narrow target which we need to implement exactly and preserve precisely. We’re aiming for a broad basin, and trying to avoid problems that could kick [us] out of that basin.\n> \n> \n\nWhile Paul links corrigibility to benignment explicitly here, how it relates to intent alignment is somewhat less clear to me. I think it’s clear that intent alignment (plus a certain amount of capability) entails corrigibility: if a system is trying to “do what we want,” and is at least capable enough to figure out that we want it to be corrigible, then it will do its best to be corrigible. I don’t think the opposite direction holds, however: I can imagine a system that doesn’t interfere with attempts to correct it and yet isn’t trying to “do what we want.” The point remains, though, that if we’re aiming for intent alignment, it seems that corrigibility will be a necessary (if not sufficient) property.\n\nReturning to the other definitions of alignment put forth by Krueger, one might wonder if there is any overlap between these different notions of alignment. Trivially, a holistically aligned AI would be parochially aligned for any task T, as well as sufficiently aligned. David also mentions that “[Paul seems to argue that corrigible agents are sufficiently aligned](https://ai-alignment.com/corrigibility-3039e668638),” which does seem to be a fair interpretation of the above “broad basin” argument. The one point I’ll raise, though, is that Paul specifically argues that “benign act-based agents will be robustly corrigible” and “a sufficiently corrigible agent will tend to become more corrigible and benign over time,” which seems to imply corrigibility can give you benignment. By David’s definition of benignment (“not actively trying to do something that H doesn’t want it to do”), this would represent sufficient alignment, but Paul [defined benign AI](https://ai-alignment.com/benign-ai-e4eb6ec6d68e) in terms of what it was optimized for. If such an optimization process were to produce a misaligned mesa-optimizer, it would clearly not be sufficiently aligned. Perhaps the more important point, however, is that it seems Paul would argue that intent alignment would in all likelihood represent sufficient alignment (others may disagree).\n\nI would also like to consider if and how the concept of single/single delegation corresponds to any of these specific types of alignment. As put forth in [ARCHES](https://arxiv.org/abs/2006.04948):\n\n\n> **Single(-human)/single(-AI system) delegation** means delegation from a *single human stakeholder* to a *single AI system* (to pursue one or more objectives).\n> \n> \n\nFirstly, it is probably important to note that “single/single delegation” refers to a task, and “alignment,” however it is defined, is a property that we want our AI systems to have. However, to *solve* single/single delegation (or to do single/single delegation *successfully*), we will require a solution to the “alignment problem,” broadly speaking. 
From here, it’s a question of defining what would count as a “solution” to single/single delegation (or what it would mean to do it “successfully”). If we can build intent aligned systems, will we have solved single/single delegation? If they are sufficiently capable, probably. The same goes for parochially aligned and holistically aligned systems: if they’re sufficiently capable, the users they’re aligned with can probably successfully delegate to them. It is unclear to me whether this holds for a sufficiently aligned system, however; knowing that “optimizing R’s terminal values would not induce a nontrivial Xrisk” doesn’t necessarily mean that R will be any good at doing the things H wants it to.\n\nAs I mentioned before, I like the concept of “delegation” because it generalizes better to situations involving multiple stakeholders and/or AI systems. However, I believe it is still necessary to understand these various notions of “alignment,” because it remains a necessary property for successfully delegating in the single/single case and because understanding the differences between them is helpful for understanding others’ work and in communicating about the subject.\n\n### Alignment tax and alignable algorithms\n\nOne compelling concept Paul used that I had not heard before was the “alignment tax”: the cost incurred from insisting on (intent) alignment. This is intended to capture the tension between safety and competence. We can either pay the tax, e.g. by getting policymakers to care enough about the problem, negotiating agreements to coordinate to pay the tax, etc., or we can reduce the tax with technical safety and alignment research that produces aligned methods that are roughly competitive with unaligned methods.\n\nTwo ways that research can reduce the alignment tax are 1) advancing alignable algorithms (perhaps algorithms that have beliefs and make decisions that are easily interpretable by humans) by making them competitive with unaligned methods and 2) making existing algorithms alignable:\n\n![aligning-algorithms](/images/mapping_territory/aligning_algos.png) ([source](https://drive.google.com/file/d/1QO11xtWSvtD8nS1SU4XukGF1WWG6O8-6/view))\n\nPaul then considers different types of algorithms (or, potentially, different algorithmic building blocks in an intelligent system) we might try and align, like algorithms for planning, deduction, and learning. With planning, we might have an alignment failure if the standard by which an AI evaluates actions doesn’t correspond to what we want, or if the algorithm is implicitly using a decision theory that we don’t think is correct. The former sounds much like traditional problems in (mis)specifying reward or objective functions for learners. I think problems in decision theory are very interesting, but unfortunately I have not yet been able to learn as much about the subject as I’d like to. The main thrust of this research is to try and solve perceived problems with traditional decision theories (e.g. causal decision theory and evidential decision theory) in scenarios like [Newcomb’s problem](https://www.lesswrong.com/tag/newcomb-s-problem). Two decision theory variants I’ve seen mentioned in this context are [functional decision theory](https://arxiv.org/abs/1710.05060) and [updateless decision theory](https://www.lesswrong.com/tag/updateless-decision-theory). 
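To make the divergence between causal and evidential decision theory concrete, here is a minimal worked example of Newcomb's problem in Python. The 99% predictor accuracy, the $1,000/$1,000,000 payoffs, and all of the function names are illustrative assumptions of mine rather than anything taken from the linked sources; the sketch only shows why the two theories recommend different actions.

```python
# Illustrative sketch of Newcomb's problem (assumed numbers, not from any linked source).
# The opaque box contains $1,000,000 iff the predictor predicted one-boxing;
# the transparent box always contains $1,000.

ACCURACY = 0.99
BIG, SMALL = 1_000_000, 1_000

def edt_value(action: str) -> float:
    """EDT: treat your own action as evidence about what was predicted."""
    p_big = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_big * BIG + (SMALL if action == "two-box" else 0)

def cdt_value(action: str, p_big: float) -> float:
    """CDT: the prediction is causally fixed, so evaluate actions with p_big held constant."""
    return p_big * BIG + (SMALL if action == "two-box" else 0)

print(edt_value("one-box"), edt_value("two-box"))  # ~990000 vs ~11000: EDT prefers one-boxing
for p_big in (0.0, 0.5, 1.0):
    # For any fixed prediction, two-boxing gains exactly $1,000, so CDT always two-boxes.
    print(cdt_value("two-box", p_big) - cdt_value("one-box", p_big))
```

Proposals like functional and updateless decision theory can be read, very roughly, as attempts to recover the one-boxing verdict without simply conditioning on one's own action the way EDT does.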
(This type of research could also be considered deconfusion work.)\n\nAs for aligning deduction algorithms, Paul only asks “is there some version of deduction that avoids alignment failures?” and mentions “maybe the alignment failures in deduction are a little more subtle” but doesn’t go into any more detail. After a search for posts on the Alignment Forum and LessWrong about how deduction could be malign failed to surface anything, I can’t help but wonder if he really might be referring to induction. For one, I’m having trouble imagining what it would mean for a deductive process to be malign. From my understanding, the axioms and rules of inference that define a formal logical system completely determine the set of theorems that can be validly derived from them, so if we were unhappy with the outputs of a deductive process that is validly applying its rules of inference, wouldn’t that mean that we really just have a problem with our own choice of axioms and/or inference rules? I can’t see where a notion of “alignment” would fit in here (but somebody please correct me if I’m wrong here… I would love to hear Paul’s thoughts about these potentially “subtle” misalignment issues in deduction).\n\nThe other reason I’m suspicious Paul might’ve actually meant induction is that Paul himself wrote the original post arguing that the [universal prior in Solomonoff induction is malign](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/). I won’t discuss this concept too much here because it still confuses me somewhat (see [here](https://www.alignmentforum.org/posts/5bd75cc58225bf067037534c/some-problems-with-making-induction-benign-and-approaches-to-them), [here](https://www.lesswrong.com/posts/jP3vRbtvDtBtgvkeb/clarifying-consequentialists-in-the-solomonoff-prior), and [here](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign) for more discussion), but it certainly seems to fit the description of being a “subtle” failure mode. I’ll also mention MIRI’s paper on [logical induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/) (for dealing with reasoning under logical uncertainty) here, as it seems somewhat relevant to the idea of alignment as it corresponds to deduction and/or induction.\n\n\n> (**ETA 12/30/20**: Rohin also had the following to say about deduction and alignment)\n> \n> I’m fairly confident he does mean deduction. And yes, if we had a perfect and valid deductive process, then a problem with that would imply a problem with our choice of axioms and inference rules. But that’s still a problem!\n> \n> Like, with RL-based AGIs, if we had a perfect reward-maximizing policy, then a problem with that would imply a problem with our choice of reward function. Which is exactly the standard argument for AI risk.\n> \n> There’s a general argument for AI risk, which is that we don’t know how to give an AI instructions that it actually understands and acts in accordance to – we can’t “[translate](https://www.alignmentforum.org/posts/42YykiTqtGMyJAjDM/alignment-as-translation)” from our language to the AI’s language. If the AI takes high impact actions, but we haven’t translated properly, then those large impacts may not be the ones we want, and could be existentially bad. 
This argument applies whether our AI gets its intelligence from induction or deduction.\n> \n> Now an AI system that just takes mathematical axioms and finds theorems is probably not dangerous, but that’s because such an AI system doesn’t take high impact actions, not because the AI system is aligned with us.\n> \n> \n\n### Outer alignment and objective robustness/inner alignment\n\nFor learning algorithms, Paul breaks the alignment problem into two parts: outer alignment and inner alignment. This was another place where I felt it was important to make a small change to Paul’s diagram, as a result of [some recent clarification](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology#fn-wy6RgjzHCyHCXi7M3-1) on terminology relating to inner alignment by Evan Hubinger. It’s probably best to first sketch the concepts of objective robustness, mesa-optimization, and inner alignment for those who may not already be familiar with these concepts.\n\nFirst, recall that the *base objective* for a learning algorithm is the objective we use to search through models in an optimization process and that the *behavioral objective* is what the model (produced by this process) itself appears to be optimizing for: the objective that would be recovered from perfect inverse reinforcement learning. If the behavioral objective is aligned with the base objective, we say that the model is *objective-robust*; if there is a gap between the behavioral objective and the base objective, the model will continue to appear to pursue the behavioral objective, which could result in bad behavior off-distribution (even as measured by the base objective). As a concrete (if simplistic) example, imagine that a maze-running reinforcement learning agent is trained to reach the end of the maze with a base objective given by a reward it receives upon completing a maze. Now, imagine that in every maze the agent was trained on, there was a red arrow marking the end of the maze, and that in every maze in the test set, this red arrow is at a random place within the maze (but not the end). Do we expect our agent will navigate to the end of the maze, or will it instead navigate to the red arrow? If the training process produces an agent that learned the behavioral objective “navigate to the red arrow,” because red arrows were a very reliable proxy for/predictor of reward during the training process, it will navigate to the red arrow, *even though this behavior is now rated poorly by the reward function and the base objective*.\n\nOne general way we can imagine failing to achieve objective robustness is if our optimization process itself produces an optimizer (a *mesa-optimizer*)—in other words, when that which is optimiz*ed* (the model) becomes an optimiz*er*. In the above example, we might imagine that such a model, trained with something like SGD, could actually learn something like depth- or breadth-first search to optimize its search for paths to the red arrow (or the end of the maze). We say that the *mesa-objective* is the objective the mesa-optimizer is optimizing for. (In the case of a mesa-optimizer, its mesa-objective is definitionally its behavioral objective, but the concept of a behavioral objective remains applicable even when a learned model is not a mesa-optimizer.) We also say that a mesa-optimizer is *inner aligned* if its mesa-objective is aligned with the base objective. 
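A tiny simulation can make the red-arrow example concrete. Everything below is hypothetical scaffolding of my own (the one-dimensional "maze," the hard-coded stand-in policies, the evaluation loop), not an implementation from Risks from Learned Optimization or any other linked work; it only illustrates how a proxy objective can look perfect on the training distribution and fail off-distribution.

```python
import random

# Hypothetical toy setup: positions 0-9, a base objective that rewards reaching the exit,
# and a training distribution in which a red arrow always happens to mark the exit.

def make_maze(train: bool):
    exit_pos = random.randrange(10)
    arrow_pos = exit_pos if train else random.choice([p for p in range(10) if p != exit_pos])
    return exit_pos, arrow_pos

def base_objective(final_pos: int, exit_pos: int) -> float:
    return 1.0 if final_pos == exit_pos else 0.0  # reward only for reaching the exit

# Hard-coded stand-ins for two behavioral objectives a trained policy might have learned:
policies = {
    "navigate to the exit":      lambda exit_pos, arrow_pos: exit_pos,
    "navigate to the red arrow": lambda exit_pos, arrow_pos: arrow_pos,  # training-set proxy
}

def average_reward(policy, train: bool, n: int = 10_000) -> float:
    total = 0.0
    for _ in range(n):
        exit_pos, arrow_pos = make_maze(train)
        total += base_objective(policy(exit_pos, arrow_pos), exit_pos)
    return total / n

for name, policy in policies.items():
    print(name, average_reward(policy, train=True), average_reward(policy, train=False))
# Both policies score 1.0 on the training distribution; only "navigate to the red arrow"
# collapses to 0.0 at test time, when the arrow and the exit come apart: an objective
# robustness failure that the base objective could not detect from training behavior alone.
```

A real failure would of course involve a learned policy rather than a hard-coded one; the point is only that the two behavioral objectives are indistinguishable on the training distribution.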
*Outer alignment*, correspondingly, is the problem of eliminating the gap between the base objective (what we optimize our models for) and the intended goal (what we actually want from our model).\n\nI write all this to emphasize one of the main points of Evan Hubinger’s aforementioned [clarification of terminology](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology): that we need *outer alignment and objective robustness* to achieve intent alignment, and that inner alignment is a way of achieving objective robustness *only in the cases where we’re dealing with a mesa-optimizer*. Note that Paul defines inner alignment in his talk as the problem of “mak[ing] sure that policy is robustly pursuing that objective”; I hope that this section makes clear that this is actually the problem of *objective robustness*. Even in the absence of mesa-optimization, we still have to ensure objective robustness to get intent alignment. This is why I chose to modify this part of Paul’s graph to match this nice tree from Evan’s post:\n\n![evan-map](/images/mapping_territory/evan_map.png) ([source](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology))[7](#fn:7)[8](#fn:8)\n\nPaul mentions adversarial training, transparency, and verification as potential techniques that could help ensure objective robustness/inner alignment. These have more typically been studied in the context of robustness generally, but the hope here is that they can also be applied usefully in the context of objective robustness. Objective robustness and inner alignment are still pretty new areas of study, however, and how we might go about guaranteeing them is a very open question, especially considering that nobody has yet been able to concretely produce/demonstrate a mesa-optimizer in the modern machine learning context. It might be argued that humanity can be taken as an existence proof of mesa-optimization, since, if we are optimizing for anything, it is certainly not what evolution optimized us for (reproductive fitness). But, of course, we’d like to be able to study the phenomenon in the context in which it was originally proposed (learning algorithms). For more details on inner alignment and mesa-optimization, see [Risks from Learned Optimization](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB), Evan’s [clarifying blog post](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology), and this [ELI12 post](https://www.alignmentforum.org/posts/AHhCrJ2KpTjsCSwbt/inner-alignment-explain-like-i-m-12-edition) on the topic.\n\n### Approaches to outer alignment\n\nPaul subdivides work on outer alignment into two categories: cases where we want an AI system to learn (aligned) behavior from a teacher and cases where we want an AI system to go beyond the abilities of any teacher (but remain aligned). According to Paul, these cases roughly correspond to the easy and hard parts of outer alignment, respectively. 
In the short term, there are obviously many examples of tasks that humans already perform that we would like AIs to be able to perform more cheaply/quickly/efficiently (and, as such, would benefit from advances in “learn from teacher” techniques), but in the long term, we want AIs to be able to exceed human performance and continue to do well (and remain aligned) in situations that no human teacher understands.\n\n### Learning from teacher\n\nIf we have a teacher that understands the intended behavior and can demonstrate and/or evaluate it, we can 1) imitate behavior demonstrated by the teacher, 2) learn behavior the teacher thinks is good, given feedback, or 3) infer the values/preferences that the teacher seems to be satisfying (e.g. with inverse reinforcement learning)[9](#fn:9), and then optimize for these inferred values. Paul notes that a relative advantage of the latter two approaches is that they tend to be more sample-efficient, which becomes more relevant as acquiring data from the teacher becomes more expensive. I should also mention here that, as far as I’m aware, most “imitation learning” is really “[apprenticeship learning via inverse reinforcement learning](https://ai.stanford.edu/~ang/papers/icml04-apprentice.pdf),” where the goal of the teacher is inferred in order to be used as a reward signal for learning the desired behavior. So, I’m not exactly sure to what degree categories 1) and 3) are truly distinct, since it seems rare to do “true” imitation learning, where the behavior of the teacher is simply copied as closely as possible (even behaviors that might not contribute to accomplishing the intended task).\n\nFor further reading on techniques that learn desired behavior from a teacher, see OpenAI’s “[Learning from Human Preferences](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/)” and DeepMind’s “[Scalable agent alignment via reward modeling](https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84)” on the “learn from feedback” side of things. On the infer preferences/IRL side, start with Rohin Shah’s [sequence on value learning](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc) on the Alignment Forum and Dylan Hadfield-Mennell’s papers “[Cooperative Inverse Reinforcement Learning](https://arxiv.org/abs/1606.03137)” and “[Inverse Reward Design](https://arxiv.org/abs/1711.02827).”\n\n### Going beyond teacher\n\nIf we want our AI systems to exceed the performance of the teacher, making decisions that no human could or understanding things that no human can, alignment becomes more difficult. In the previous setting, the hope is that the AI system can learn aligned behavior from a teacher who understands the desired (aligned) behavior well enough to demonstrate or evaluate it, but here we lack this advantage. Three potential broad approaches Paul lists under this heading are 1) an algorithm that has learned from a teacher successfully extrapolates from this experience to perform at least as well as the teacher in new environments, 2) infer robust preferences, i.e. 
infer the teacher’s *actual* preferences or values (not just stated or acted-upon preferences), in order to optimize them (this approach also goes by the name of *ambitious value learning*), and 3) build a better teacher, so you can fall back to approaches from the “learn from teacher” setting, just with a more capable teacher.\n\nOf the three, the first seems the least hopeful; machine learning algorithms have historically been pretty notoriously bad at extrapolating to situations that are meaningfully different from those they encountered in the training environment. Certainly, the ML community will continue to search for methods that generalize increasingly well, and, in turn, progress here could make it easier for algorithms to learn aligned behavior and extrapolate to remain aligned in novel situations. However, this does not seem like a reasonable hope at this point for keeping algorithms aligned as they exceed human performance.\n\nThe allure of the second approach is obvious: if we could infer, essentially, the “true human utility function,” we could then use it to train a reinforcement learning agent without fear of outer alignment failure/being [Goodharted](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) as a result of misspecification error. This approach is not without substantial difficulties, however. For one, in order to exceed human performance, we need to have a model of the mistakes that we make, and this error model [cannot be inferred alongside the utility function without additional assumptions](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/ANupXf8XfZo2EJxGv). We might try to specify a specific error model ourselves, but this seems as prone to misspecification as the original utility function itself. For more information on inferring robust preferences/ambitious value learning, see the “Ambitious Value Learning” section of the [value learning sequence](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc). Stuart Armstrong also seems to have a particular focus in this area, e.g. [here](https://arxiv.org/pdf/1712.05812.pdf) and [here](https://www.alignmentforum.org/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into).\n\nThe two most common “build a better teacher” approaches are amplification and debate. Amplification is what Paul spends most of his time on and the approach of which he’s been the biggest proponent. The crux of the idea is that a good starting point for a smarter-than-human teacher is a group of humans. We assume that even if a human cannot answer a question, they can decompose the question into sub-questions such that knowing the answers to the sub-questions would enable them to construct the answer to the original question. The hope, then, is to build increasingly capable AI systems by training a question-answering AI to imitate the output of a group of humans answering questions in this decompositional fashion, then recursively building stronger AIs using a group of AIs from the last iteration answering decomposed questions as an overseer:\n\n![amplification](/images/mapping_territory/amplification.png) ([source](https://drive.google.com/file/d/1QO11xtWSvtD8nS1SU4XukGF1WWG6O8-6/view))\n\nThe exponential tree that this recursive process tries to approximate in the limit is called [HCH](https://www.alignmentforum.org/tag/humans-consulting-hch) (for Humans Consulting HCH). There is much more detail and many more important considerations in this scheme than I can address here, e.g. 
the distillation step, how this scheme hopes to maintain intent alignment throughout the recursive process, and (importantly) if this exponential tree can answer any question in the limit.[10](#fn:10) There are also two distinct types of amplification: imitative amplification, where the AI systems are trained to imitate the outputs of the last tree in the amplification step, and approval-based amplification, where the AI systems are trained to produce outputs or perform actions of which this tree would approve. For more on amplification, see the [iterated amplification sequence](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd) on the Alignment Forum, the [original paper](https://openai.com/blog/amplifying-ai-training/) and Paul Christiano’s writings and papers, more generally. See also [recursive reward modeling](https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84), another “build a better teacher” approach which “can be thought of as an instance of iterated amplification.” For basic research into this method of solving complex problems by recursively solving subproblems, see [Ought](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd)’s work on [factored cognition](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd/p/DFkGStzvj3jgXibFG).\n\n[Debate](https://openai.com/blog/debate/) (more recent progress update [here](https://www.alignmentforum.org/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1)) builds a better teacher by training agents to debate a topic against each other in an adversarial game and have a human decide the winner based on the sequence of arguments made by the agents, so the agents incrementally build each other into both better teachers and more competent students (in practice, it would likely be just one agent trained via self-play). For this to work, it needs to be true that 1) judging debates is easier than debating (so humans can judge the quality of arguments that they could not have come up with themselves) and 2) lying is harder than arguing against a lie, given this human judge (so the adversarial game will reliably reward truth-telling).\n\nThere is a close relationship between amplification and debate. Section 7 in the [original debate paper](https://arxiv.org/abs/1805.00899) explores this relationship in detail and is worth reading, but the basic connection can be highlighted by seeing the amplification setup in terms of three modules: an answerer, a questioner, and a checker. The answerer is the ML system being iteratively trained to answer questions by imitating the output of the overseer answering questions recursively, the questioner decomposes the question into the relevant sub-questions, and the checker takes the sub-questions and their answers and generates an answer to the original question. (Christiano’s setup treats the questioner and the checker as the same module—in the base case, a human.) Debate differs from this setup by training the answerer and the questioner against each other adversarially while keeping the human judge/checker. This gives debate a potential advantage if generating sub-questions is superhumanly difficult (e.g. if the branching factor of HCH is too high). Again, section 7 of the [original paper](https://arxiv.org/abs/1805.00899) covers more of the differences and similarities between these two approaches and should be read in full. 
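For readers who find the answerer/questioner/checker framing easier to see in code, here is a minimal, purely illustrative sketch of one amplification-style step. The function signatures, the depth-limited recursion, and the idea of treating the three modules as interchangeable callables are my own scaffolding, not an API from the debate paper or from Christiano's implementation.

```python
from typing import Callable, List, Tuple

# Purely illustrative: one amplification-style step, written around the three-module framing
# above (a questioner decomposes, an answerer answers sub-questions, a checker recombines).

Questioner = Callable[[str], List[str]]                 # question -> sub-questions
Answerer = Callable[[str], str]                         # question -> answer
Checker = Callable[[str, List[Tuple[str, str]]], str]   # (question, sub-Q&A pairs) -> answer

def amplify(question: str, questioner: Questioner, answerer: Answerer,
            checker: Checker, depth: int) -> str:
    """Answer `question` by recursive decomposition, bottoming out in the current answerer."""
    if depth == 0:
        return answerer(question)
    sub_questions = questioner(question)
    sub_answers = [(q, amplify(q, questioner, answerer, checker, depth - 1))
                   for q in sub_questions]
    return checker(question, sub_answers)

# In iterated amplification, the answerer would then be retrained (distilled) to imitate the
# outputs of amplify(...), and the loop repeats with the stronger answerer as the new base case.
# In Christiano's setup the questioner and checker are the same module (a human in the base
# case); debate instead trains the questioner and answerer against each other and keeps the
# human as the checker/judge.
```

Nothing here captures the training dynamics of either scheme; the sketch is only meant to make the module boundaries explicit.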
Relatedly, Evan Hubinger has written a [post on synthesizing amplification and debate](https://www.alignmentforum.org/posts/dJSD5RK6Qoidb3QY5/synthesizing-amplification-and-debate) that might be of interest.\n\nOne final resource I want to mention while discussing techniques for going beyond a teacher is Evan Hubinger’s [overview of 11 proposals for safe advanced AI](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai), which includes many of the basic techniques already mentioned here but goes into more depth discussing the relative advantages and disadvantages of each approach in the contexts of outer and inner alignment. In practice, an outer alignment approach (e.g. imitative or approval-based amplification) is often paired with some technique aimed at preventing inner alignment failures (e.g. adversarial training, transparency, etc.).\n\n### Conclusion\n\nThat’s about it! We’ve covered a lot of ground here. This post ended up being much longer than I anticipated, but I wanted to give a cursory overview of as many of these ideas as possible and elaborate a little on how they interrelate before providing pointers to further material for the interested reader.\n\nI hope this post has been helpful in giving you a lay of the land in ongoing work in AI existential safety and alignment and (more importantly) in helping you build or refine your own mental map of the field (or simply check it, if you’re one of the many people who has a better map than mine!). Building this mental map has already been helpful to me as I assimilate new information and research and digest discussions between others in the field. It’s also been helpful as I start thinking about the kinds of questions I’d like to address with my own research.\n\n1. Rohin also did a [two](https://futureoflife.org/2019/04/11/an-overview-of-technical-ai-alignment-with-rohin-shah-part-1/) [part](https://futureoflife.org/2019/04/25/an-overview-of-technical-ai-alignment-with-rohin-shah-part-2/) podcast with the Future of Life Institute discussing the contents of his presentation in more depth, both of which are worth listening to. [↩](#fnref:1)\n2. See [this post](https://www.alignmentforum.org/posts/oiuZjPfknKsSc5waC/commentary-on-agi-safety-from-first-principles) for specific commentary on this sequence from others in the field. [↩](#fnref:2)\n3. Sometimes, people use “alignment” to refer to the overall project of making AI go well, but I think this is misguided for reasons I hope are made clear by this post. From what I’ve seen, I believe my position is shared by most in the community, but please feel free to disagree with me on this so I can adjust my beliefs if needed. [↩](#fnref:3)\n4. “**Behavioral objective**: The *behavioral objective* is what an optimizer appears to be optimizing for. Formally, the behavioral objective is the objective recovered from perfect inverse reinforcement learning.” [↩](#fnref:4)\n5. Here, Paul seems to have touched upon the concept of mesa-optimization before it was so [defined](https://arxiv.org/abs/1906.01820). More on this topic to follow. [↩](#fnref:5)\n6. That an intent-aligned AI can be mistaken about what we want is a consequence of the definition being intended *de dicto* rather than *de re*; as [Paul writes](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6), “an aligned A is trying to ‘do what H wants it to do’” (not trying to do “that which H actually wants it to do”). [↩](#fnref:6)\n7. 
Arrows are implications: “for any problem, if its direct subproblems are solved, then it should be solved as well (though not necessarily vice versa).” [↩](#fnref:7)\n8. Note that Evan also has capability robustness as a necessary component, along with intent alignment, for achieving “alignment.” This fits well with my tree, where we need both alignment (which, in the context of both my and Paul’s trees, is intent alignment) and capability robustness to make AI go well; the reasoning is much the same even if the factorization is slightly different. [↩](#fnref:8)\n9. Paul comments that this type of approach involves some assumption that relates the teacher’s behavior to their preferences (e.g. an approximate optimality assumption: the teacher acts to satisfy their preferences in an approximately optimal fashion). [↩](#fnref:9)\n10. I want to mention here that Eliezer Yudkowsky wrote a [post challenging Paul’s amplification proposal](https://intelligence.org/2018/05/19/challenges-to-christianos-capability-amplification-proposal/) (which includes responses from Paul), in case the reader is interested in exploring pushback against this scheme. [↩](#fnref:10)", "url": "https://jbkjr.me/posts/2020/12/mapping_conceptual_territory_AI_safety_alignment/", "title": "Mapping the Conceptual Territory in AI Existential Safety and Alignment", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-12-16T23:00:00Z", "authors": ["Jack Koch"], "summary": [], "id": "965a51c2133dfe11d41f79549daf42da"} {"text": "A Dialogue on Suffering Subroutines\n===================================\n\n\n\n29 August 2015\nby [Brian Tomasik](https://longtermrisk.org/author/brian-tomasik/ \"Posts by Brian Tomasik\")\n\nFirst written: 20 Dec. 2013; last update: 27 Apr. 2017\n\n This piece presents a hypothetical dialogue that explains why instrumental computational processes of a future superintelligence might evoke moral concern. I give some examples of present-day systems that we may consider at least somewhat conscious, such as news reporting or automated stock trading. Agent-like components seem to emerge in many places, and it's plausible this would continue in the computing processes of a future civilization. Whether these subroutines matter, how much they matter, and how to even count them are questions for future generations to figure out, but it's good to keep an open mind to the possibility that our intuitions about what suffering is may change dramatically with new insights.\n\n\nContents\n\n* [Dialogue](#Dialogue)\n* [Further analogies](#Further_analogies)\n* [Schwitzgebel's view](#Schwitzgebels_view)\n* [Suffering can be simple: A reply to Metzinger](#Suffering_can_be_simple_A_reply_to_Metzinger)\n* [Onion piece](#Onion_piece)\n* [Acknowledgements](#Acknowledgements)\n* [A note on terminology](#A_note_on_terminology)\n* [Footnotes](#Footnotes)\n\nDialogue\n--------\n\n\n*Alice*: Greetings, Brian. I heard that you're concerned about the possibility of what you call \"suffering subroutines.\" You say that artificial intelligences (AIs) in the future -- whether human-inspired or [paperclipping](http://wiki.lesswrong.com/wiki/Paperclip_maximizer) -- might run immense numbers of computations that we may consider to be conscious suffering. I find this hard to believe. I mean, why wouldn't instrumental computations be just dumb components of an unfeeling system?\n\n\n*Brian*: They might be, but they might not be, and the latter possibility is important to consider. 
As one general point, note that sentience evolved on Earth ([possibly more than once](https://www.facebook.com/brian.tomasik/posts/620256828222)), so it seems like a useful algorithmic construct.\n\n\n*Alice*: Sure, sentience was useful on Earth for competitive organisms, but in the subroutines of an AI, every computation is subserving the same goal. Processes are allocated computing resources \"each according to his needs from each according to his talents,\" as Dan Dennett [observed](http://youtu.be/OlRHd-r2LOw?t=14m52s).\n\n\n*Brian*: Possibly. But in that same talk you cite, Dennett goes on to explain that computing processes in the human brain may be competitive rather than cooperative, Darwinian rather than Marxist. Dennett [proffers](http://www.youtube.com/watch?v=OlRHd-r2LOw&feature=youtu.be&t=16m24s) a hypothesis that \"centrally planned economies don't work, and neither do centrally coordinated, top-down brains.\"\n\n\n*Alice*: Meh, if that's true, it's probably a vestige of the way evolution came up with the brain. Surely an orderly process could be designed to reduce the wasted energy of competition, and since this would have efficiency advantages, it would be a convergent outcome.\n\n\n*Brian*: That's not obvious. [Evolutionary algorithms](https://en.wikipedia.org/wiki/Evolutionary_algorithm) are widely useful and not always replaceable by something else. In any event, maybe non-Darwinian processes could also consciously suffer.\n\n\n*Alice*: Umm, example, please?\n\n\n*Brian*: It seems plausible that many accounts of consciousness would include non-evolved agents under their umbrellas. Take the [global-workspace theory](https://en.wikipedia.org/wiki/Global_Workspace_Theory) for instance. Are you familiar with that?\n\n\n*Alice*: Do explain.\n\n\n*Brian*: In the so-called [LIDA implementation](http://ccrg.cs.memphis.edu/tutorial/mindAccordingToLIDA/Brief-Account.pdf) of the global-workspace model, a cognitive cycle includes the following components:\n\n\n* Incoming sensations from the world are momentarily stored by sensory memory.\n* Many parallel unconscious modules operate on these sensations, picking them apart in various ways, and drawing on stored memories for additional insight.\n* These unconscious modules form coalitions, advocating for why their insights are most important.![](https://longtermrisk.org/files/Broadcast-350x233.png \"'Visualisation of broadcast routing scheme.' Uploaded by Easyas12c, who says 'Visual model is based on some similar images hosted at www.gloomfaq.de. They were cached by google image search, but were no longer available, so I don't know the original author. I created these new images from scratch, but the appearance is very similar.' This image is in the public domain worldwide. https://commons.wikimedia.org/wiki/File:Broadcast.svg\")\n* The winning coalition's message is then broadcast globally to the rest of the system; in the global-workspace picture, this broadcast is the 'conscious' step of the cycle.\n* The broadcasted news then produces updates to memories and learning in various parts of the brain, including inclinations to select different actions (i.e., [reinforcement learning](http://www.utilitarian-essays.com/reinforcement-learning.html)). Information may also return from the receiver back to the broadcaster.\n* With these updated activations for various behaviors, the organism then acts on the environment, receives new sensations, and the cycle happens again. 
These cycles are hypothesized to occur at about 10 Hz.\n\n\n[This diagram](http://ccrg.cs.memphis.edu/tutorial/synopsis.html) lays out the various components.\n\n\nNote that Stan Franklin, one of the managers of the LIDA project, believes that the earlier version of his system, IDA, is \"functionally conscious\" but not \"phenomenally conscious,\" as he explains in his 2003 paper, \"[IDA, a Conscious Artifact?](http://ccrg.cs.memphis.edu/assets/papers/IDA-ConsciousArtifact.pdf)\" This seems to stem from his tentative agreement with David Chalmers about the hard problem of consciousness (see p. 10). Because I believe this view is [confused](http://www.utilitarian-essays.com/consciousness.html), I think the functional consciousness under discussion here *is* also phenomenal consciousness.\n\n\n*Alice*: I see. So why is this relevant?\n\n\n*Brian*: If consciousness is this kind of \"global news broadcasting,\" it seems to be a fairly common sort of operation. I mean, one obvious example is the news itself: Stories compete for worthiness to be aired, and the most important news segments are broadcast, one at a time, to viewers who then update their memories and actions in response. Then new things happen in the world, new stories take place, many reporters investigate them in parallel, they compete to get their stories aired, and the cycle happens again. \"Emotions\" and \"dispositions\" may [modulate](https://en.wikipedia.org/wiki/Neuromodulation) this process -- for instance, more conservative news agencies will prefer to air different stories than liberal news agencies, and the resulting messages will be biased in the direction of the given ideology. Likewise, the national mood at a given moment may cause some stories to be more relevant than they would have been given a different mood. People who care a lot about the news stories they hear may get in touch with the news agencies to give feedback and engage in further coordination (\"reentrant signaling\"). And so on.\n\n\nOf course, we see analogous behavior in other places as well:\n\n\n* The various \"unconscious\" computing components of an airplane flight system might update their calculations based on a globally broadcast signal about latitude, longitude, wind resistance, turbulence, etc., and \"memory\" logs of the past data stream may be stored for later reference.![](https://longtermrisk.org/files/Philippine-stock-market-board-350x263.jpg \"'Phillippine stock market board.' By Katrina.Tuliao [CC-BY-2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:Philippine-stock-market-board.jpg\")\n* An updated trading price on a stock exchange is broadcast to vast numbers of individual computer-trading systems, which update their actions and save records of the price history for machine learning in the future. Theoretically, the systems could even perform [online learning](https://en.wikipedia.org/wiki/Online_machine_learning) like animals do.\n* Data about the latest unemployment rates is announced, and this is distributed to various teams doing macroeconomic prediction. Their policy advice is then adjusted, and the data is logged for future reference. 
These policies affect the world, unemployment changes, measurement indicators pick up that information, it's aggregated, and then a broadcast about unemployment rates happens again next month.\n* Software patterns like [publish–subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) and [observer](https://en.wikipedia.org/wiki/Observer_pattern) involve globally broadcasting updates to many receivers who can then act on the new information.\n* [Insulin release](https://en.wikipedia.org/wiki/Insulin#Release) happens when glucose enters beta cells and triggers internal processes (\"computations\") in those cells. Those processes release calcium ions, which unlock stored insulin and \"broadcast\" it throughout the animal's bloodstream. The broadcast insulin signal produces updates in glucose utilization throughout the animal, which changes the internal behavior of various organs and the external behavior of the organism. (In general, hormones are a form of \"information broadcasting\", even though we don't normally think of processes outside the brain as being consciousness-like.)\n* In \"[Global Workspace Dynamics](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3664777/),\" Baars, Franklin, and Ramsoy give the analogy of a fire alarm as a global signal that triggers different local actions for different people. The fire department (one receiver of the alarm) may try to locate the fire alarm itself (the broadcaster) to interact with it better, in analogy with reentrant signaling.\n* The same paper describes brain activity and waves using an analogy with a crowd at a football game. \"Chatting\" is what happens when people/neurons talk with their neighbors in patterned ways that may appear random when seen from a distance. \"Chanting\" is when a global delta wave sweeps through the population in a coordinated fashion. \"Cheering\" is a global wave in response to a strong stimulus.\n* The coordinated communication of large numbers of neurons could also be analogized in phenomena like social movements, where people form coalitions of synchronized activity and compete to gain attention of the media in order to broadcast their message to the whole population and thereby influence action tendencies, memories, learning, etc. This comparison takes the activist goal of \"[raising social consciousness](https://en.wikipedia.org/wiki/Consciousness_raising)\" to a new level.\n\n\nNote that some of these systems are not competitive, and so the claim that lack of Darwinian competition means lack of conscious suffering may not be accurate.\n\n\nThese analogies can actually give insight into why consciousness emerges in brains: It's for a similar reason as why national and global news emerges in human society. Global broadcasting of the most important events and knowledge serves to keep every part of the social system up to date, in sync, aware of required actions (e.g., hurricane warnings, voting days), and alerted to global searches (\"Have you seen this crime suspect?\") and coordination. As the \"[Global Workspace Dynamics](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3664777/)\" paper says: \"What is the use of binding and broadcasting in the [cortico-thalamic] C-T system? 
One function is to update numerous brain systems to keep up with the fleeting present.\" This process of serializing the most salient updates for global broadcasting may ultimately create a more effective society (organism) than if every local community were isolated and reacted independently with parochial (\"unconscious\") reflexes or using only [tribal knowledge](https://en.wikipedia.org/wiki/Tribal_knowledge). When a broadcast becomes globally conscious, it's available to all regions, including the verbal/speech centers of a person for conscious report (or to the writers/bloggers of society for verbalization in text). Events in illiterate farming communities would be \"unconscious\" to the world without journalists who visit to help broadcast those stories. The world can become more conscious of its memories when historians uncover and share information about past events. And the spotlight of attention shifts based on the most emotionally salient events that happen. In general, fast, global network communication over radio, television, and the Internet is making the world more conscious of itself, in a surprisingly literal sense.\n\n\nWhy do we only care about conscious experiences? For instance, we'd be horrified to undergo conscious surgery but don't mind undergoing surgery while anaesthetized. Presumably it's because the parts of us that \"care about\" our experiences -- such as by reacting aversively, triggering stress feelings, planning ways to avoid the experience, encoding traumatic memories, and so on -- only know about the damaging stimuli when they become conscious. Typically a strong negative stimulus will win competitions to be consciously broadcast, but when anaesthesia blocks the pathways by which nociceptor signals gain access to the full brain, it prevents the suite of \"caring about\" responses that would ordinarily be triggered. An analogy in the social realm is that society cares about and responds to negative events when they're reported in the media, but if scandals are covered up or reporters are prevented from talking about atrocities, this is like applying local anaesthesia.\n\n\nMore often, neglect of certain harms and focus on other types of harms are built into the system. For instance, a sliver in your eye would hurt vastly more than a sliver in your leg because your eye has many more nerve endings. Similarly, a rich person killed in the United States would attract far more attention and response than a poor person killed in Africa because there are vastly more reporters covering the former, and the story about the rich American would seem more salient to the readers (neurons) who vote it up on Twitter.\n\n\nAnother analogy between consciousness and news reporting is that in both cases, once an object enters the spotlight of attention, other events in that spotlight can come to attention that would have otherwise remained hidden. For example, suppose your leg itches, causing you to focus your consciousness on your leg. That may allow you to then feel the breeze on your leg as well, whereas you otherwise would have filtered out that information from your awareness. Likewise, when a news story about X surfaces, this often leads to investigations into other stories Y and Z that relate to X, and stories about V and W that previously would have been ignored become \"newsworthy\". As an example, following the [pool party incident](https://en.wikipedia.org/wiki/2015_Texas_pool_party_incident) in McKinney, Texas on 5 Jun.
2015, a series of other news stories about McKinney, Texas also became national headlines, whereas previously those kinds of stories wouldn't have reached beyond the local news.\n\n\nI haven't explored interpretations of the processes mentioned above according to other [models of consciousness](http://www.scholarpedia.org/article/Models_of_consciousness), but I expect you'd find that systems like these would be at least somewhat conscious in those frameworks as well. In general, most accounts of what consciousness is appeal to general principles that don't go away when neurons stop being involved.\n\n\nAnd beyond consciousness, we can see other mind-like processes at play in many systems. Take memory for example. Apparently memories consist of neural connections that become strengthened by repeated use, and they fade as the connections decay. This reminds me of a series of dirt roads through a town. They're first created by some event, they become strengthened with use, and they revert back to wilderness with disuse. A road that hasn't been traveled on in years may become overrun by returning vegetation, but it can still be re-activated more easily than creating a new road from scratch somewhere else. And like with neural connections, a stronger road allows things to flow more easily between the regions it connects.\n\n\n*Alice*: Are you really saying that news reports and stock exchanges are conscious? And that roads have memory?\n\n\n*Brian*: I don't know.[1](#link_ajs-fn-id_1-251) But I think we should take the possibility seriously. In any case, it could be that future computational systems contain more human-like entities. For instance, suppose an AI wants to undertake a research program on string theory, to better update its models of physics. It will partition some fraction of computing power to that project. It may want to parallelize the work for speed, so it might create lots of different \"research teams\" that work on the problem separately and publish their results to others. These teams might compete for \"grant money\" (i.e., additional computing resources) by trying to produce high-quality findings better than the other teams. These components might be sufficiently agent-like as to evoke our moral concern.\n\n\nThe process of intelligently searching the space of possibilities based on value assessments is a general phenomenon. Animals search through a field until they find a lush patch of strawberries; then they experience reward at the discovery and focus their efforts there for a while. Humans, too, feel reward while trying to figure things out. For instance, V.S. Ramachandran's [peekaboo principle](http://jjgallaher.blogspot.com/2009/07/art-in-brain-peekaboo-principle.html) is based on the idea that humans receive little squirts of pleasure every time they unpack a small piece of a puzzle, and these \"mini aha\" moments motivate them to keep going. Perhaps there would be a similar process at play for an AI's research teams. When a small discovery is made, the good news is broadcast throughout the team, and this encourages more actions like what led to the discovery.\n\n\nAs I stated it, this model suggests something akin to David Pearce's [gradients of bliss](http://www.hedweb.com/object33.htm) because the rewards for research discoveries are positive. But perhaps the system would use gradients of agony, with research findings being rewarded by temporary relief from discomfort. 
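One way to see how a "gradients of agony" scheme could do roughly the same computational work as a "gradients of bliss" scheme is that shifting every reward down by a constant leaves the ranking of candidate actions unchanged. The toy sketch below illustrates only that narrow point; the action names and progress numbers are invented for the example.

```python
# Toy comparison of "gradients of bliss" vs. "gradients of agony" reward schemes.
# Both score the same candidate research actions; the agony-style reward is just
# the bliss-style reward shifted down by a constant, so the chosen action is the same.

candidate_progress = {"rerun simulation": 0.2, "prove lemma": 0.7, "stay idle": 0.0}

def reward_bliss(progress):
    return progress          # in [0, 1]: positive "squirts of pleasure"

def reward_agony(progress):
    return progress - 1.0    # in [-1, 0]: mere relief from discomfort

best_under_bliss = max(candidate_progress, key=lambda a: reward_bliss(candidate_progress[a]))
best_under_agony = max(candidate_progress, key=lambda a: reward_agony(candidate_progress[a]))

print(best_under_bliss, best_under_agony)  # both are "prove lemma"
assert best_under_bliss == best_under_agony
```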
If there is a possibility for choice between a \"gradients of bliss\" and a \"gradients of agony\" design to achieve roughly similar computational ends, this suggests room for humane concerns to make a meaningful difference.\n\n\nAs another illustration, consider economics. Under both capitalism and communism, we see the emergence of hierarchical forms of organization. The CEO of a corporation seems like a decent model for the conscious control center of a brain: The workers perform their duties away from its sight, and then the most important news about the company is bubbled up to the CEO's desk. Then the CEO broadcasts updates to all the workers, including compensation rewards, which adjust worker action inclinations. The company also stores records of these events for later use. The most important (\"emotionally salient\") historical memories are better preserved, and less relevant ones slowly decay with time. This whole process mimics the global-workspace theory in broad outline. And the fact that hierarchies of this type have emerged in all kinds of governmental and economic systems suggests that they may be common even among the construction workers and researchers of an AI.\n\n\n*Alice*: Hmm, maybe. But if there are just a few companies that the AI is directing, that's not a lot of conscious minds. Maybe these suffering corporations are then not a big deal, relative to much larger numbers of suffering wild animals, etc. What's more, the [clock speed](http://www.utilitarian-essays.com/computations-i-care-about.html#clock-speed) of a corporate consciousness would be glacial compared with that of an animal.\n\n\n*Brian*: Well, even if we don't weight by brain size, who says corporations are the only parts of this process that are conscious? Hierarchical organization is a recurrent pattern of organized systems in general. It could happen at the highest level -- the executive AI controlling its component corporations -- but it would also happen in a fractal way at many lower layers too: Each corporation is composed of subdivisions, each subdivision has its own subdivisions, etc. At some point we might hit a level of entities analogous to \"workers.\" Even below that might be the subcomponent coalitions of an individual worker's brain, which compete for attention by the worker brain's executive-control system. Each of these could have consciousness-like components. And their clock speeds would be quite fast.\n\n\nOne concept in the LIDA model is that of a \"codelet,\" which [one page](https://web.archive.org/web/20160410233548/https://people.cs.kuleuven.be/~joaquin.vanschoren/Flexo/uml/glossary/codelet.html) defines as\n\n\n\n> tiny agents, carrying small pieces of code (hence the name). They can be interpreted as being a small part of a process, but then leading its own life, very much like an ant is a small part of a \"process\" to gather food, to defend the anthill or to nurture newborns. They run in parallel [...], and none are indispensable.\n> \n> \n> [...] The entity calling the codelet will estimate its urgency (reflecting the promise of further investigation). Highly urgent codelets can preempt lower urgency codelets [...], and if a codelet's urgency sinks well below that of other's, it just dies out, leaving computer resources to more ambitious codelets. 
If a codelet sees it has no real work to do in the current situation (due to a bad estimation or changed situation), it sizzles.\n\n\nIt's [plausible](http://www.utilitarian-essays.com/insect-pain.html) that individual ants are conscious. So too, maybe even tiny components of an individual worker's brain could be seen as conscious.\n\n\n*Alice*: But if a larger consciousness contains many smaller consciousnesses, which each contain many smaller consciousnesses, how do we count them? What are the weights? Do the lowest-level consciousnesses dominate? This discussion is getting curiouser and curiouser!\n\n\n*Brian*: Indeed. But these are issues that we need to resolve at some point. To some extent I'm [punting the question](http://utilitarian-essays.com/robustness-against-uncertainty.html) to our more intelligent descendants. Still, it's useful to realize that suffering subroutines *could* be a big deal in the future, so that we don't naively reach conclusions based on a circumscribed view of what we might care about.\n\n\n*Alice*: From the standpoint of \"consciousness as broadcasting,\" do you think insects are conscious?\n\n\n*Brian*: It's an important question. It certainly seems plausible that insects would have some sort of LIDA-like cognitive cycle: Inputs, unconscious processing, most important insights bubble up and are distributed, and they affect action inclinations. Even if this kind of architecture didn't exist exactly, we might see adumbrations of it in whatever insects do. I mean, for example, if one part of the brain communicates its insights to several other parts of the brain, even if not globally, isn't this like a mini-broadcast? Isn't that sort of like consciousness already? In general, any kind of communication-and-updating process would have shadows of the operation that we think of as consciousness. This illustrates my [more general point](http://www.utilitarian-essays.com/computations-i-care-about.html#graded-sentience) that consciousness comes in gradations -- there's not a single cutoff point where what was unconscious matter suddenly has the lights come on. There are just atoms moving in various ways, and some of them activate our sympathies more than others.\n\n\n*Alice*: Well, that raises a question: If we can care about whatever we like, however much we like, why shouldn't I just care about humans and maybe some animals, and forget about these suffering subroutines entirely?\n\n\n*Brian*: You can do that, and perhaps we would choose to upon reflection. I don't know what the best criteria are for carving out our caring-about function. But it seems plausible that algorithms are a big part of it, and then when we see processes that resemble these algorithms somewhere else, it raises the question of why we care about them in some forms but not others. I don't know where our hearts will ultimately fall on the matter.\n\n\n*Alice*: Do you think even basic physics might contain consciousness?\n\n\n*Brian*: I don't know. I hope not, but I wouldn't rule it out. Giulio Tononi's \"phi\" postulates that even an electron has a nonzero measure of consciousness, for instance.\n\n\nWith the global-workspace model, maybe we could see elementary particles as broadcasting information that then influences other regions -- e.g., the nuclear reactions in the sun broadcast photons, and the sun's mass pulls other objects toward it. But it's not clear that any real \"agent\" process is going on here. Where are the learning, action selection, memories, etc.? 
So naively it seems like these kinds of dead physical things aren't conscious, but maybe I'm not looking at them right, and maybe we'll discover ways in which there are agents even in the math of microscopic physics.\n\n\n*Alice*: Speaking of math, do you think Darwinism could ultimately go the way of the dodo? I mean, Darwinian competition is just an attempt at [hill climbing](https://en.wikipedia.org/wiki/Hill_climbing) in a high-dimensional space. But sometimes we have mathematical tools that let us perform exact optimizations without needing to \"guess and check.\" Could intelligence ultimately be reduced to a series of really big mathematical optimization problems that can be solved (at least somewhat) analytically, thereby averting a lot of this expensive computation of agent-like things? Similarly, reinforcement learning is [direct adaptive optimal control](https://web.archive.org/web/20161223113316/http://webdocs.cs.ualberta.ca/~sutton/papers/sutton-barto-williams-91.pdf), but optimal-control problems can potentially be solved by analytic methods like the Bellman equations if you know the payoffs and transition probabilities ahead of time.\n\n\n*Brian*: Maybe, though it does seem hard to imagine that we could analytically solve some of these really specific, data-driven problems without computing in detail the process being modeled. Perhaps this just reflects lack of imagination on my part, and of course, there are times when macro-scale approximations can remain ignorant of microfoundations. In any case, the actions of the AI on the galaxy to implement its goals would still require lots of real, physical manipulation -- e.g., supervisors to coordinate workers in building solar colonies and such. The possibility you cite is fun to speculate on, but it's not sufficiently probable to substantially affect the concern about suffering subroutines, given that consciousness-like processes seem to be such a convergent feature of organized systems so far.\n\n\n*Alice*: Do ecosystems suffer? Could this broad view of consciousness provide some vindication of the otherwise seemingly absurd idea that nature as a whole can have moral standing apart from the welfare of the individuals it contains?\n\n\n*Brian*: In principle it's certainly possible ecosystems could contain shadows of consciousness, but it's not clear they usually do. Where is the global broadcasting? What are the action components that are being updated? Maybe you could come up with some interpretations. Even if so, it's not clear what an ecosystem wants. Unlike corporations or ants, ecosystems don't have clear goals. Even if we identified a goal, it wouldn't necessarily align with conservationism; it might go the other way. In any event, even if an ecosystem's consciousness did align with conservationism, it's dubious whether the interests of the ecosystem as a whole could outweigh those of [quintillions](http://www.utilitarian-essays.com/number-of-wild-animals.html) of suffering individual animals within it.\n\n\nIf we think ecosystems can suffer, then a natural way to prevent future suffering is to have fewer of them. Even if we adopted the stance from environmental ethics of considering ecosystems objects of intrinsic moral importance regardless of sentience, it's not obvious that ecosystems are inherently *good*. We might think they're inherently *bad*. This kind of \"negative environmental ethics\" seems a natural idea for a negative-leaning consequentialist.\n\n\n*Alice*: Yeah.
Maybe one suggestion could be that the atmospheric CO2 levels are a global signal broadcast to all subcomponents of the biosphere. This then causes (very small) changes in the behavior of animals, plants, and inorganic entities like sea ice. The responses of these entities then have an impact back on CO2 levels, which are then broadcast globally. I guess in this model, the broadcasts are continuous rather than discrete.\n\n\n*Brian*: I suppose that's one interpretation you could make, though what would be the valence of CO2? In the stock-trading example, we could say that for the subset of traders that are net long in the security, an increase in the stock price would have positive valence. What about for CO2?\n\n\n*Alice*: Maybe those organisms that do better with more CO2 would receive it with positive valence, and vice versa? The \"learning\" of the ecosystem would then be strengthening those organisms that do well with higher CO2, just like the dopaminergic learning of an animal involves strengthening connections for action neurons that just fired given the current context.\n\n\n*Brian*: Ok, I can kind of see that, although in the case of dopamine, the action neurons were responsible for bringing the reward; in the case of the atmosphere, a whole bunch of stuff brought the increase in CO2 levels, and it wasn't necessarily the organisms that benefit from CO2 who were responsible for the emissions. Indeed, people often remark how humans in the global South are \"punished\" by the CO2 emissions of those in the global North.\n\n\nAnyway, even if we did consider the carbon cycle somewhat analogous to a brain, keep in mind that the clock speed of this operation is really slow. Of course, since CO2 changes are continuous rather than coming in discrete pulses, the idea of a clock speed isn't really appropriate, but I guess we can still create a rough notion about cycles per year of relevant operations.\n\n\n*Alice*: And of course, at the same time, we could have H2O levels as another currency of the biosphere, and temperature as another, and so on. There would be multiple broadcasting systems at play.\n\n\n*Brian*: Right. In general, we can pattern-match a complex process in many different ways as being composed of many different systems that each have some resemblance to consciousness. This actually returns us to the old puzzle in the philosophy of computationalism: [What is a computation, anyway?](http://utilitarian-essays.com/computations-i-care-about.html#what-is-a-computation) One answer is that we see various physical processes as resembling various computations to various degrees, and we can then care about them in proportion to their resemblance. The same thing is going on here -- only, this is not John Searle's toy Wordstar program in the wall but a genuine instance of seeing consciousness-like operations in various places. It's like [pareidolia](https://en.wikipedia.org/wiki/Pareidolia) for our empathy systems.\n\n\nPersonally, I don't really care intrinsically about the Earth's carbon cycle, water cycle, etc. to any appreciable degree. I think the connection to animal minds is a pretty far stretch.\n\n\n*Alice*: Yes. Moreover, the way we've been discussing consciousness has been pretty simple and crude. There may be important pieces of the puzzle that we've neglected, and we might consider these important as well for making an entity conscious in a way that matters.\n\n\n*Brian*: Agreed! 
This should not be taken as the end of the conversation but only the beginning.\n\n\nFurther analogies\n-----------------\n\n\nThere are many more similarities between operations in our brains and phenomena in the worlds of politics, physics, etc. Sebastian Seung's book *[Connectome](https://en.wikipedia.org/wiki/Connectome:_How_the_Brain%27s_Wiring_Makes_Us_Who_We_Are)* provides several additional comparisons. One of my friends remarked on Seung's work: \"I don't think I've ever read a book with so many illuminating analogies!\" While most of Seung's readers presumably see these analogies as merely didactic aids, I would suggest that they might also have moral significance if we care about brain-like processes in non-brain places.\n\n\nSchwitzgebel's view\n-------------------\n\n\n[Schwitzgebel (2016)](http://www.newappsblog.com/2016/12/is-most-of-the-intelligence-in-the-universe-non-conscious-ai.html \"'Is Most of the Intelligence in the Universe Non-Conscious AI?'\") reaches a similar conclusion as the previous dialogue did:\n\n\n\n> \n> **unity of organization in a complex system plausibly requires some high-level self-representation or broad systemic information sharing.** [...] most current scientific approaches to consciousness [...] associate consciousness with some sort of broad information sharing -- a \"[global workspace](http://cogweb.ucla.edu/CogSci/GWorkspace.html)\" or \"[fame in the brain](http://www.scholarpedia.org/article/Multiple_drafts_model)\" or \"[availability to working memory](https://global.oup.com/academic/product/the-conscious-brain-9780195314595)\" or [\"higher-order\" self-representation](https://plato.stanford.edu/entries/consciousness-higher/). On such views, we would expect a state of an intelligent system to be conscious if its content is available to the entity's other subsystems and/or reportable in some sort of \"introspective\" summary. For example, if a large AI knew, about its own processing of lightwave input, that it was representing huge amounts of light in the visible spectrum from direction alpha, and if the AI could report that fact to other AIs, and if the AI could accordingly modulate the processing of some of its non-visual subsystems (its long-term goal processing, its processing of sound wave information, its processing of linguistic input), then on theories of this general sort, its representation \"lots of visible light from that direction!\" would be conscious. And we ought probably to expect that large general AI systems would have the capacity to monitor their own states and distribute selected information widely. **Otherwise, it's unlikely that such a system could act coherently over the long term. Its left hand wouldn't know what its right hand is doing.**\n> \n> \n> \n\n\nSuffering can be simple: A reply to Metzinger\n---------------------------------------------\n\n\nIn response to the 2015 Edge question, \"What do you think about machines that think?\", Thomas Metzinger [explored](http://edge.org/response-detail/26091 \"\\\"What If They Need To Suffer?\\\" for \\\"2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?\\\"\") a similar question as the above dialogue addressed: Will AIs necessarily suffer, or could they be intelligent without suffering? Metzinger doesn't give a firm answer, but he enumerates four conditions that he believes are necessary for suffering:\n\n\n1. Being conscious\n2. Being self-conscious in the sense of having a sense of ownership of suffering (e.g., \"this is *my* suffering\")\n3. 
The experience has negative valence\n4. The negative valence is transparent in the sense that it can't be doubted.\n\n\nThis list is interesting and provides four helpful criteria that may enhance a holistic conception of suffering, but in my opinion these criteria are neither necessary nor exhaustive. I would consider them like four principles that one might propose for the meaning of \"justice\" -- a concept sufficiently complex that probably no four concrete criteria by themselves can define it.\n\n\nLet's see why each of these conditions is not strictly necessary. The most straightforward is probably #2, since it seems easy to imagine being in pain without engaging sufficient cognition to attribute that pain to yourself. Suffering can be a flood of \"badness\" feeling, which needn't be sufficiently differentiated that one recognizes that the badness is an experience on the part of oneself. For instance, a depressed person might feel that the whole world is bad -- that there's just a general badness going on.\n\n\n#4 also doesn't seem necessary, because it can still be morally disvaluable if someone is uncertain whether he's in agony. For instance, suppose you step on something. You're not sure whether the object has punctured the skin of your foot. You think you might feel some sharp pain in your foot, but you're not sure if it's actually there or just imagined, until you actually look at your foot and see the sharp object. (Michael Tye offers a [similar example](https://web.archive.org/web/20160401145456/http://michaeltye.us/Pain2.pdf \"see the last question of the interview, \\\"MICHAEL TYE ON PAIN\\\" (http://philosophybites.com/2012/08/michael-tye-on-pain.html)\").) I'm not sure what Metzinger would think of this case. In any event, it seems that transparency is actually quite easy to satisfy. It takes a complex cognitive system to produce doubts about experiences. Simple agents should generally have transparent emotions.\n\n\nAs far as #1, I think [all systems are](https://longtermrisk.org/flavors-computation-flavors-consciousness/) at least marginally conscious, so even if condition #1 is necessary, it's always satisfied. Of course, the *degree* of consciousness of a system matters enormously, but Metzinger's piece seems to be asking whether particular AIs would suffer at all.\n\n\nAs far as #3, I agree that valence plays an important, perhaps central, role in human suffering. This valence might prototypically be the [reward part](http://reducing-suffering.org/ethical-issues-artificial-reinforcement-learning/#Valence_networks) of a reinforcement-learning (RL) system. If one insists that valence can only make sense in the context of a rigid definition of RL, then I agree that not all AIs would have valence (although many still would, given the importance of RL for autonomous behavior). But if we interpret negative valence more broadly as \"information indicating that something should be avoided\", or even more compactly as \"information that produces avoidance\", then this kind of operation can be seen in many more systems, including non-learning agents that merely follow fixed stimulus-response rules. Indeed, the basic template of one physical event causing another avoidance-like event runs as deep as the interactions of fundamental physical particles, if we take enough of a high-level view and don't insist on greater complexity in our definition.\n\n\nOverall, I find Metzinger's criteria too narrow. 
They leave out vast numbers of simpler systems that I think still deserve some ethical consideration. Nonetheless, I appreciate that Metzinger's proposals enrich our conceptualization of more complex suffering.\n\n\n*Onion* piece\n-------------\n\n\n*The Onion* has a humorous article, \"[Scientists Confident Artificially Intelligent Machines Can Be Programmed To Be Lenient Slave Masters](http://www.theonion.com/article/scientists-confident-artificially-intelligent-mach-51170),\" in which AI researchers discuss the goal of shaping AI trajectories in such a way that AIs treat their human workers (what I might call \"suffering human subroutines\") more humanely. I find it extremely implausible that AIs would actually use human laborers in the long run, but they plausibly would use conscious worker agents of some sort -- both sophisticated scientist/engineer subroutines and other simpler subroutines of the kind discussed in this piece.\n\n\nUnlike human laborers, these subroutines would presumably enjoy working as hard as possible on the task at hand. Humans evolved to dislike exertion as a way to conserve energy except when required, but robots built to carry out a given task would be optimized to want to carry out exactly that task. That said, more sophisticated digital agents might, like humans, feel mild unpleasantness if they expended time or energy on fruitless activities. For instance, a robot should dislike moving around and thereby draining its battery unless it thinks doing so will conduce to achieving a reward.\n\n\nAcknowledgements\n----------------\n\n\nI learned of the idea that suffering subroutines might be ethically relevant from Carl Shulman in 2009. In response to this piece, Carl [added](https://www.facebook.com/brian.tomasik/posts/641988228322?comment_id=905370&offset=0&total_comments=7):\n\n\n\n> Of course, there can be smiling happy subroutines too! Brian does eventually get around to mentioning \"gradients of bliss\", but this isn't a general reason for expecting the world to be worse, if you count positive experiences too.\n> \n> \n> I would say \"sentient subroutines.\"\n> \n> \n\n\nSome examples in this piece were also partly inspired from a [post](https://www.facebook.com/ben.west.1029/posts/10201401708580746) by Ben West, linking to Eric Schwitzgebel's \"[If Materialism Is True, the United States Is Probably Conscious](https://web.archive.org/web/20200514103014/https://faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconscious-130208.htm),\" which I discuss more [in another piece](http://www.utilitarian-essays.com/hedonistic-vs-preference.html#non-conscious).\n\n\nA note on terminology\n---------------------\n\n\nI coined the phrase \"suffering subroutines\" in a 2011 post on [Felicifia](https://web.archive.org/web/20180808173047/https://felicifia.org/). I chose the alliteration because it went nicely with \"sentient simulations,\" giving a convenient abbreviation (SSSS) to the conjunction of the two concepts. I define sentient simulations as explicit models of organisms that are accurate enough to count as conscious, while suffering subroutines are *incidental* computational processes that nonetheless may matter morally. Sentient synthetic [artificial-life](https://en.wikipedia.org/wiki/Artificial_life) agents are somewhere on the border between these categories, depending on whether they're used for psychology experiments or entertainment (sentient simulations) vs. 
whether they're used for optimization or other industrial processes (suffering subroutines).\n\n\nIt appears that [Meghan Winsby](https://web.archive.org/web/20140220185104/http://www.rotman.uwo.ca/who-we-are/our-members/meghan-winsby/) (coincidentally?) used the same \"suffering subroutines\" phrase in an excellent 2013 paper: \"[Suffering Subroutines: On the Humanity of Making a Computer that Feels Pain](http://www.iacap.org/proceedings_IACAP13/paper_48.pdf).\" It seems that her usage may refer to what I call sentient simulations, or it may refer to general artificial suffering of either type.\n\n\nFootnotes\n---------\n\n\n1. [This summary](http://www.openphilanthropy.org/david-chalmers-professor-philosophy-new-york-university-may-20-2016 \"'David Chalmers, Professor of Philosophy, New York University on May 20, 2016'\") of a conversation with David Chalmers says \"one popular theory is that information in the brain is conscious if and only if it is part of a global workspace; information outside the global workspace is unconscious. But it would be a big leap to conclude from this that any system with a global workspace is conscious and that systems that lack a global workspace are not.\"  [(back)](#back_ajs-fn-id_1-251)", "url": "https://longtermrisk.org/a-dialogue-on-suffering-subroutines/", "title": "A Dialogue on Suffering Subroutines", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2015-08-28T22:00:00Z", "authors": ["Brian Tomasik"], "summary": [], "id": "efde9d5f8ff2f6d76b7e6fc4e29fad9b"} {"text": "A Lower Bound on the Importance of Promoting Cooperation\n========================================================\n\n\n\n29 August 2015\nby [Brian Tomasik](https://longtermrisk.org/author/brian-tomasik/ \"Posts by Brian Tomasik\")\n\nFirst written: 3 Jan. 2014; last update: 7 Jun. 2016\n\n This piece suggests a lower-bound [Fermi calculation](https://en.wikipedia.org/wiki/Fermi_problem) for the cost-effectiveness of working to promote international cooperation based on one specific branch of possible future scenarios. The purpose of this exercise is to make our thinking more concrete about how cooperation might exert a positive influence for suffering reduction and to make its potential more tangible. I do not intend for this estimate to be quoted in comparison with standard DALYs-per-dollar kinds of figures because my parameter settings are so noisy and arbitrary, and more importantly because these types of calculations are not the best ways to compare projects for shaping the far future when many complex possibilities and flow-through effects are at play.
I enumerate other reasons why advancing cooperation seems robustly positive, although I don't claim that cooperation is obviously better than alternate approaches.\n\n\nContents\n\n* [Introduction](#Introduction)\n* [Caveats](#Caveats)\n* [By what fraction could compromise reduce future suffering?](#By_what_fraction_could_compromise_reduce_future_suffering)\n* [How much future suffering is in our hands?](#How_much_future_suffering_is_in_our_hands)\n* [Combining the estimates](#Combining_the_estimates)\n* [Weird physics](#Weird_physics)\n* [Other reasons to support cooperation](#Other_reasons_to_support_cooperation)\n\t+ [A value-of-information argument for future focus](#A_value-of-information_argument_for_future_focus)\n* [Footnotes](#Footnotes)\n\nIntroduction\n------------\n\n\n[Compromise](http://utilitarian-essays.com/compromise.html) has the potential to benefit most value systems in expectation, by allowing each side in a dispute to get more of what it wants than its fractional share of power. This is wonderful, but how much could compromise matter? In this piece I suggest a Fermi calculation for a lower bound on how much suffering might be prevented by working to promote compromise. The estimates that I use for each variable are more conservative than I think is likely to be the case.\n\n\nCaveats\n-------\n\n\nI *do not think* a Fermi calculation like I describe below is the best approach for evaluating relative cost-effectiveness. This calculation traces one specific, highly conjunctive branch in the vast space of possible branches for how the future might unfold. Most of the expected impact of promoting compromise probably comes from branches that I'm ignoring.\n\n\nLikewise, activities other than promoting compromise also have many [flow-through effects](http://blog.givewell.org/2013/05/15/flow-through-effects/) on many different possible future branches. Comparing projects to shape the future requires much more than a single Fermi calculation. We should use additional quantitative and qualitative estimates across many models, as well as general heuristics. One of the strongest arguments for promoting compromise is not that it dominates in a Fermi calculation (probably it doesn't) but that \"increasing the pie\" for many value systems is generally a good idea and seems more [robustly](http://utilitarian-essays.com/robustness-against-uncertainty.html) positive than almost anything else.\n\n\nThat said, explicit and detailed Fermi estimates can help to [clarify our thinking and identify holes](http://lesswrong.com/lw/jfm/another_critique_of_effective_altruism/aagj), and this is one reason for undertaking the exercise.\n\n\nBy what fraction could compromise reduce future suffering?\n----------------------------------------------------------\n\n\nSuppose the following parameter estimates. Remember, these are designed to be *conservatively low*, not most likely. 
The estimates in each bullet are conditional probabilities given the outcomes from the previous bullets.\n\n\n* 40% chance that humanity doesn't go extinct due to causes other than artificial intelligence (AI) in the next few centuries.\n* 20% chance that humanity will develop strong AI in the next few centuries conditional on not going extinct due to non-AI factors.\n* 5% chance that human values will be encapsulated by strong AI.\n* 5% chance that those values, once encapsulated, would be preserved indefinitely rather than changing in arbitrary directions or converging to some inevitable endpoint.\n* 10% chance that, in the default scenario, there is a competition among nations to build the first strong AI to serve that nation's own interests and values.\n* 10% chance that this competition could be turned to compromise if enough people worked on promoting moral tolerance and international cooperation. For example, creating a ![](https://longtermrisk.org/files/United_Nations_Flags_-_cropped-350x234.jpg \"UN flags in Geneva. By Tom Page (Flickr: IMG_1965) [CC-BY-SA-2.0 (http://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:United_Nations_Flags_-_cropped.jpg\")\n* What is \"enough people\" working to promote cooperation? Say, for example, 1 in 100 working adults on the planet devoting their careers to the cause, over the next 200 years. Assuming a population of maybe ~5 billion working people at a given time, that means about 50 million people over 200 years, or 10 billion person-years.\n\t+ Assume that an effort some fraction of this size has a linearly reduced expected impact. For instance, 5 billion person-years of work instead of 10 billion would mean the chance of turning competition to cooperation is 5% instead of 10%.\n\t+ 50 million people is a lot. There are just [~4,500](https://web.archive.org/web/20150501192315/http://blog.inomics.com:80/phd-graduates-disciplines-and-numbers/) PhDs awarded in the social sciences in the US per year. *Foreign Affairs* magazine has [~150,000](https://en.wikipedia.org/wiki/Foreign_Affairs) subscribers. The entire US government employs [just over 4 million](http://www.facethefactsusa.org/facts/that-bloated-federal-workforce-historically-it-looks-buff) people in civilian and military roles.\n* Absent cooperation, suppose there would be two main superpowers in conflict. Of course, there might be more, but I think the analysis would be basically the same in that case. Imagine that one superpower cares slightly more about suffering reduction than the other. (For example, the USA currently cares more about animal welfare than China.) In particular, suppose that if one country's values controlled the future, the amount of suffering would be X, and if the other controlled the future, suffering would be 1.02X. Suppose each side has equal odds of winning this \"Cold War\" race. The expected amount of suffering under winner takes all is (X + 1.02X)/2 = 1.01X. Suppose that because of diminishing returns to additional suffering reduction per unit of resources, a compromise arrangement would allow the sides to reach 1.009X suffering instead -- roughly a 1 in 1000 reduction. 
This specific calculation has been rather detailed, but from a higher level, the suggestion that cooperation could reduce expected suffering in the future by 1 in 1000 due to harmonization of conflicting values across countries seems conservatively low.\n* 10% chance that the difference in suffering between the policies of these two countries would actually be a permanent feature of the AIs that those countries would produce in winner-takes-all scenarios rather than being eliminated by [extrapolation](http://wiki.lesswrong.com/wiki/Coherent_Extrapolated_Volition).\n* Apply a discount for uncertainty about whether people's efforts actually improve or degrade cooperation in the long run. For example, maybe some activists push for changes to nuclear policy that disrupt the stability of mutual deterrence and make conflicts worse. In particular, suppose the chance is 65% that efforts to promote cooperation actually do promote cooperation and 35% that they hinder it by an equal amount. Then the discount factor is 0.65 - 0.35 = 0.3.\n\n\nGiven these parameter settings, a lower bound on the fraction of future suffering reduced per person-year of work to promote cooperation is\n\n\n\n> 40% \* 20% \* 5% \* 5% \* 10% \* 10% \* [1/(10 billion)] \* (1/1000) \* 10% \* 0.3 = 6 \* 10^-21.\n> \n> \n\n\nHow much future suffering is in our hands?\n------------------------------------------\n\n\n* Suppose, conservatively, that only 10^-5 of total, intensity-weighted hedonic experience in the future is negative. This might include [suffering subroutines](http://www.utilitarian-essays.com/suffering-subroutines.html), sentient [simulations](https://en.wikipedia.org/wiki/Life_simulation_game) of [wild animals](http://www.utilitarian-essays.com/suffering-nature.html), etc.\n* Nick Bostrom [estimates](http://www.nickbostrom.com/astronomical/waste.html) potential future hedonic experience as 10^38 humans surviving for ~10^10 years in the Virgo Supercluster, or 10^48 experience-years. Of course, some of these might be animals and suffering subroutines, but I'll keep using the \"currency\" of human experience-years as the reference point.\n* Say the probability that a colonization scenario with this many minds actually happens is 10^-8. Alternatively, we could say that the probability is 10^-6 that a colonization future with this magnitude of computational power happens and has 1% hedonically relevant computations. Any combination of possibilities that leads to an expected 10^-8 fractional multiplier is equivalent.\n* Apply a significant discount factor to account for the incredulity of the proposition that we are in a position to influence this many future experience-years. One might think it's almost impossible that we would happen to be the influential few to affect such a vast future, but model uncertainty suggests we might give some nonzero probability that we are actually in such an incredible period of history. Say the probability that we are is 10^-10.\n\n\nThe expected number of suffering-years in our hands would then be\n\n\n\n> 10^-5 \* 10^48 \* 10^-8 \* 10^-10 = 10^25.\n> \n> \n\n\nCombining the estimates\n-----------------------\n\n\nMultiplying 6 \* 10^-21 by 10^25 gives 60,000 expected suffering-years that we can prevent per year of work to promote compromise.
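For readers who want to check the arithmetic, here is a short script that reproduces the chain of estimates above. The numbers are exactly the ones stated in the bullets; the variable names are just illustrative labels.

```python
# Reproduces the lower-bound Fermi calculation with the parameter values given above.

# Fraction of future suffering reduced per person-year of work on cooperation:
fraction_per_person_year = (
    0.40          # humanity avoids non-AI extinction
    * 0.20        # strong AI is developed
    * 0.05        # human values are encapsulated by the AI
    * 0.05        # those values are preserved indefinitely
    * 0.10        # the default scenario is a national AI race
    * 0.10        # the race could be turned to compromise with enough effort
    * (1 / 1e10)  # spread over 10 billion person-years of effort
    * (1 / 1000)  # compromise reduces suffering by roughly 1 in 1000
    * 0.10        # the value difference persists in winner-takes-all AIs
    * 0.3         # net discount for efforts possibly backfiring
)
print(fraction_per_person_year)            # about 6e-21

# Expected number of suffering-years "in our hands":
suffering_years_at_stake = 1e-5 * 1e48 * 1e-8 * 1e-10
print(suffering_years_at_stake)            # 1e25

# Suffering-years prevented per person-year of work:
print(fraction_per_person_year * suffering_years_at_stake)  # about 6e4, i.e., 60,000
```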
Assuming a year of work means 40 hours per week for 50 weeks, this is (60,000)/(40\*50) = 30 suffering-years per hour, or 0.5 per minute.\n\n\nTo convert this into a per-dollar estimate, suppose it would take $150K per year to pay someone to work on compromise, assuming that person would otherwise have done something unrelated and altruistically neutral. This figure is very high for a nonprofit salary, but if someone is willing to work for a lot less, chances are she's already committed to the cause and would have a high opportunity cost, because she could be [earning to give](http://www.utilitarian-essays.com/make-money.html) instead. In order to attract talented people who would otherwise do altruistically neutral work, a high salary would be required. And remember, this is a conservative calculation. 60,000 expected suffering-years divided by $150K is ~150 suffering-days prevented per dollar. (Here I'm ignoring the fact that future labor-years should be cheaper in present dollars assuming investment returns outpace increases in wages.)\n\n\nIt's important to remember just how imprecise these particular numbers are. For instance, if I had taken the anthropic discount factor to be 10^-5 instead of 10^-10, we would have had 6 billion suffering-years prevented per year of work, or 40,000 suffering-years prevented per dollar.\n\n\nWeird physics\n-------------\n\n\nThis scenario assumed a bound of 10^48 experience-years, but there's some chance physics is other than we think and allows for obscenely higher amounts of computation. Indeed, there's a nonzero probability that [infinite computation](https://en.wikipedia.org/wiki/Hypercomputation) is possible, implying infinite future suffering. Our calculation would then blow up to say that every second spent on promoting compromise prevents infinite expected suffering.\n\n\nA few thoughts on this:\n\n\n1. Blowing up the number of experience-years we affect shouldn't change *relative* comparisons among activities that shape the far future. (And every activity shapes the far future to some extent.) It merely highlights that everything we do has a remote chance of being massively more important than it seems.\n2. Black swans like these are a main reason why Fermi calculations are incomplete and inadequate to capture all the factors we need to consider for choosing policies. It can be much more stable to use heuristics like \"work on positive-sum projects that make it more likely that our descendants can use their vastly greater wisdom to tackle problems that are beyond our grasp.\"\n3. It seems absurd that we would be the lucky few to be in a position to influence infinitely many future minds. The probability of this seems naively like it should tend to 1/infinity. In general, I think anthropic considerations are an Achilles heel for these calculations about the astronomical importance of the far future.\n\n\nOther reasons to support cooperation\n------------------------------------\n\n\nI've taken pains to clarify that the calculation in this piece is hardly exhaustive of why cooperation is important but only scratches the surface with one concrete scenario. There are many other reasons for suffering reducers to support international cooperation:\n\n\n1. Rogue developers. While it seems reasonable to assume that most countries care appreciably about reducing suffering, the same needn't be true for smaller groups of \"rogue\" AI developers.
Stronger global governance would help states enforce coordination rather than letting a bunch of individual groups compete against governments and against each other to build the first strong AI to satisfy their own peculiar ideologies. Note that this kind of enforcement could be desirable *even for the people who would have joined the rogue groups* because they would be forced to cooperate rather than defect on a (multi-player) prisoner's dilemma, which is Pareto-preferred by every prisoner in the game. (Compare with \"[Why do hockey players support helmet rules, even though they choose not to wear helmets when there is no rule?](http://www.econport.org/content/teaching/modules/NFG/Hockey.html)\")\n2. Increased humaneness. Cooperation and tolerance make society more humane. When violence is less of a concern, people have more room to explore [self-expression](https://en.wikipedia.org/wiki/File:Inglehart_Values_Map.svg), and cultural heroes may shift from being focused on military victory to being focused on kindness. Conversely, the expanding circle of compassion can help advance cooperation, by showing people that those in other countries aren't really very different, and we're all citizens of the world.\n3. Maintaining stability and rule of law. Some of the most significant [potential sources](http://www.utilitarian-essays.com/astronomical-suffering.html) of suffering in the future are reinforcement-learning algorithms, artificial-life simulations, and other sentient computational processes. Reducing these forms of suffering would plausibly require machine-welfare laws or norms within a stable society. It's hard to imagine humane concerns carrying currency in a competitive, Wild West environment. International cooperation and other measures to maintain social tranquility are important for enabling more humane standards for industrial and commercial computations.\n4. More time to reflect. Cooperation is expected to slow or avert [AI arms races](http://utilitarian-essays.com/ai-arms-race.html), which means humanity should have more time to improve social institutions and philosophical reflectiveness before making potentially irrevocable decisions about the future of the galaxy.\n5. Being nice. Because cooperation is good for everyone, if suffering reducers promote it, others will appreciate this fact and may be more inclined to reciprocate toward suffering reducers by doing them favors in other ways. In other words, promoting inter*national* cooperation is a form of inter*personal* cooperation with other altruists who have different values from ours.\n6. Common-sense heuristics. Almost everyone on Earth agrees that stronger international cooperation would be good. \"World peace\" is a near universal goal, even though it has a ring of platitude by now.\n7. Robustness. Probably the strongest reason, which generalizes some of the scenarios discussed above, is that cooperation puts our descendants in a better position, both in terms of social institutions and moral values, to be able to tackle issues that we have no hope of addressing today. It's quite plausible that most of the suffering in the future will come from something that we can't even anticipate now. 
We should aim to empower our descendants to handle unknown unknowns, by advancing positive *social* technology -- including institutions for peace and compromise -- [relatively faster than](http://utilitarian-essays.com/differential-intellectual-progress.html) *scientific* technology.\n\n\nPutting our descendants in a better position to address challenges is useful even if strong AI and space colonization never materialize. Even if humans just continue on Earth for a few million years more, cooperation still improves our trajectory. Of course, this case involves vastly less suffering for us to mitigate, and what we do now may not have a significant impact on what happens tens of thousands of years hence absent goal-preserving AI, so this scenario is negligible in the overall calculations, but those who feel nervous about tiny probabilities of massive impacts would appreciate this consideration. That said, if our only concern was about Earth in the very short term, then plausibly other interventions would appear more promising.\n\n\n### A value-of-information argument for future focus\n\n\nThere's a general argument that we should focus on far-future scenarios even if they seem unlikely to materialize due to anthropic considerations because of value of information. In particular, suppose there were two main scenarios to which we assigned equal prior probability before anthropic updating: ShortLived, where humanity lasts only a few more centuries, and LongLived, where humanity lasts billions more years. Say LongLived has N times as many experience-moments as ShortLived and so is N times as important. Correspondingly, the anthropic-adjusted probability of LongLived might, under certain views of anthropics, tend toward 1/N. The expected value of ShortLived is (probability)\\*(value) = (roughly 1)\\*(1) = 1 compared against an expected value for LongLived of (probability)\\*(value) = (1/N)\\*N = 1. So it's not clear whether to focus on short-term actions (e.g., reducing wild-animal suffering in the coming centuries) or long-term actions (e.g., promoting international cooperation, good governance, and philosophical wisdom in order to improve the seed conditions for the AI that colonizes our galaxy).\n\n\nWhen we consider value of information, it pushes toward longer-term actions because they leave open the option of returning to focus on short-term actions if further analysis leads to that conclusion. To make the explanation simple, imagine that halfway through the expected lifetime of humanity given ShortLived, altruists reassessed their plans to decide if they should continue doing actions targeting LongLived futures or if they should focus instead on ShortLived futures. For the sake of clarity, imagine that at this juncture, they have perfect knowledge about whether to focus on short-term or long-term futures. If long-term futures were best to focus on, they would have already been doing the right thing so far and could stick with it. If short-term futures were more important, they could switch to working on short-term futures for the remaining half of humanity's lifetime and still get half the total value as if they had worked on short-term issues from the beginning.[1](#link_ajs-fn-id_1-253)\n\n\nOf course, a reverse situation could also be true: start focusing on short-term futures and then re-evaluate to decide whether to focus on long-term futures halfway. 
The difference is that if people have focused on long-term futures from the beginning, they'll have more wisdom and capacity at the halfway point to make this evaluation. This is an instance of the general argument for frontloading wisdom and analysis early and then acting later. Of course, there are plenty of exceptions to this -- for instance, maybe by not acting early, people lose motivation to act altruistically at all. This general conceptual point is not airtight but merely suggestive.\n\n\nIn personal communication, Will MacAskill made a similar argument about \"option value\" in a related context and thereby partly inspired this section. Needless to say, there are other considerations besides option value in both directions. For instance, there's greater entropy between our actions now and the quality of experience-moments billions of years from now (though a nontrivial probability of a pretty small entropy, assuming we influence a goal-preserving or otherwise politically stable outcome). Meanwhile, experience-moments of the future may have greater intensity, so the stakes may be higher.\n\n\nFinally, as was hinted in the Fermi calculation, we could fudge a way to make the far future dominate by saying there's a nontrivial probability that our anthropic discount is wrong and that the future really is as important as it seems naively. This may work, though it also feels suspicious because similar sorts of model-uncertainty arguments could be invoked to justify lots of weird considerations dominating our calculations. The importance of the far future seems one of the more robust sentiments among intelligent thinkers, though, so the fudge feels less hacky in this case.\n\n\nFootnotes\n---------\n\n\n1. Suppose the value of short-term work is known to be 1. The value of long-term work is either N with probability 1/N or is 0 otherwise. By doing just short-term work, we could guarantee a value of 1. By doing long-term work that includes research about whether long-term work is worthwhile, we could choose -- at the halfway point of humanity's lifetime in the ShortLived case -- to switch to short-term work or not. In the exaggerated scenario where we learn perfectly whether far-future work will pay off, we switch to doing short-term work in (N-1)/N of the cases, garnering value of 0.5 for the remaining time. And in 1/N cases, we stick with long-term work and get a payoff of N. So now our expected value is [(N-1)/N] \\* 0.5 + (1/N) \\* N, which is basically 1.5 if N is large. This is 50% better than just starting with the short-term work.  [(back)](#back_ajs-fn-id_1-253)
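A compact restatement of the arithmetic in footnote 1, under the footnote's own assumptions (short-term work has known value 1; long-term work is worth N with probability 1/N and 0 otherwise):

```latex
% Expected value of the two strategies from footnote 1 (same assumptions as the footnote).
\[
\mathbb{E}[\text{short-term only}] = 1,
\qquad
\mathbb{E}[\text{long-term first, re-evaluate halfway}]
  = \frac{N-1}{N}\cdot\frac{1}{2} \;+\; \frac{1}{N}\cdot N
  \;\approx\; \frac{3}{2} \quad \text{for large } N.
\]
```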
This is 50% better than just starting with the short-term work.", "url": "https://longtermrisk.org/a-lower-bound-on-the-importance-of-promoting-cooperation/", "title": "A Lower Bound on the Importance of Promoting Cooperation", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2015-08-28T22:00:00Z", "authors": ["Brian Tomasik"], "summary": [], "id": "83eef2932687762ed30364fea4f1037c"} {"text": "Artificial Intelligence and Its Implications for Future Suffering\n=================================================================\n\n\n\n9 April 2015\nby [Brian Tomasik](https://longtermrisk.org/author/brian-tomasik/ \"Posts by Brian Tomasik\")\n\nFirst written: 14 May 2014; last update: 3 Jan 2019\n\n Summary\n-------\n\n\nArtificial intelligence (AI) will transform the world later this century. I expect this transition will be a \"soft takeoff\" in which many sectors of society update together in response to incremental AI developments, though the possibility of a harder takeoff in which a single AI project \"goes foom\" shouldn't be ruled out. If a rogue AI gained control of Earth, it would proceed to accomplish its goals by colonizing the galaxy and undertaking some very interesting achievements in science and engineering. On the other hand, it would not necessarily respect human values, including the value of preventing the suffering of less powerful creatures. Whether a rogue-AI scenario would entail more expected suffering than other scenarios is a question to explore further. Regardless, the field of AI ethics and policy seems to be a very important space where altruists can make a positive-sum impact along many dimensions. Expanding dialogue and challenging us-vs.-them prejudices could be valuable.\n\n\n### Other versions\n\n\n\n[![](/files/mp3-icon.png)](https://longtermrisk.org/files/RobotsAI.mp3)\n\n[![](/files/pdf-icon.png)\nPDF](https://longtermrisk.org/files/artificial-intelligence-and-its-implications-for-future-suffering.pdf)\n\n\\*Several of the new written sections of this piece are absent from the podcast because I recorded it a while back.\n\n\nContents\n\n+ [Other versions](#Other_versions)\n\n* [Introduction](#Introduction)\n* [Is \"the singularity\" crazy?](#Is_the_singularity_crazy)\n* [The singularity is more than AI](#The_singularity_is_more_than_AI)\n* [Will society realize the importance of AI?](#Will_society_realize_the_importance_of_AI)\n* [A soft takeoff seems more likely?](#A_soft_takeoff_seems_more_likely)\n* [Intelligence explosion?](#Intelligence_explosion)\n* [Reply to Bostrom's arguments for a hard takeoff](#Reply_to_Bostroms_arguments_for_a_hard_takeoff)\n* [How complex is the brain?](#How_complex_is_the_brain)\n\t+ [One basic algorithm?](#One_basic_algorithm)\n\t+ [Ontogenetic development](#Ontogenetic_development)\n* [Brain quantity vs. quality](#Brain_quantity_vs_quality)\n* [More impact in hard-takeoff scenarios?](#More_impact_in_hard-takeoff_scenarios)\n* [Village idiot vs. Einstein](#Village_idiot_vs_Einstein)\n* [AI performance in games vs. 
the real world](#AI_performance_in_games_vs_the_real_world)\n\t+ [Replies to Yudkowsky on \"local capability gain\"](#Replies_to_Yudkowsky_on_local_capability_gain)\n* [A case for epistemic modesty on AI timelines](#A_case_for_epistemic_modesty_on_AI_timelines)\n* [Intelligent robots in your backyard](#Intelligent_robots_in_your_backyard)\n* [Is automation \"for free\"?](#Is_automation_for_free)\n* [Caring about the AI's goals](#Caring_about_the_AIs_goals)\n* [Rogue AI would not share our values](#Rogue_AI_would_not_share_our_values)\n* [Would a human-inspired AI or rogue AI cause more suffering?](#Would_a_human-inspired_AI_or_rogue_AI_cause_more_suffering)\n* [Would helper robots feel pain?](#Would_helper_robots_feel_pain)\n* [Would paperclip factories be monotonous?](#Would_paperclip_factories_be_monotonous)\n* [How accurate would simulations be?](#How_accurate_would_simulations_be)\n* [Rogue AIs can take off slowly](#Rogue_AIs_can_take_off_slowly)\n\t+ [Are corporations superintelligences?](#Are_corporations_superintelligences)\n* [Would superintelligences become existentialists?](#Would_superintelligences_become_existentialists)\n* [AI epistemology](#AI_epistemology)\n* [Artificial philosophers](#Artificial_philosophers)\n* [Would all AIs colonize space?](#Would_all_AIs_colonize_space)\n* [Who will first develop human-level AI?](#Who_will_first_develop_human-level_AI)\n* [One hypothetical AI takeoff scenario](#One_hypothetical_AI_takeoff_scenario)\n* [How do you socialize an AI?](#How_do_you_socialize_an_AI)\n\t+ [Treacherous turn](#Treacherous_turn)\n\t+ [Following role models?](#Following_role_models)\n* [AI superpowers?](#AI_superpowers)\n* [How big would a superintelligence be?](#How_big_would_a_superintelligence_be)\n* [Another hypothetical AI takeoff scenario](#Another_hypothetical_AI_takeoff_scenario)\n* [AI: More like the economy than like robots?](#AI_More_like_the_economy_than_like_robots)\n* [Importance of whole-brain emulation](#Importance_of_whole-brain_emulation)\n* [Why work against brain-emulation risks appeals to suffering reducers](#Why_work_against_brain-emulation_risks_appeals_to_suffering_reducers)\n* [Would emulation work accelerate neuromorphic AI?](#Would_emulation_work_accelerate_neuromorphic_AI)\n* [Are neuromorphic or mathematical AIs more controllable?](#Are_neuromorphic_or_mathematical_AIs_more_controllable)\n* [Impacts of empathy for AIs](#Impacts_of_empathy_for_AIs)\n\t+ [Slower AGI development?](#Slower_AGI_development)\n\t+ [Attitudes toward AGI control](#Attitudes_toward_AGI_control)\n* [Charities working on this issue](#Charities_working_on_this_issue)\n* [Is MIRI's work too theoretical?](#Is_MIRIs_work_too_theoretical)\n* [Next steps](#Next_steps)\n* [Where to push for maximal impact?](#Where_to_push_for_maximal_impact)\n* [Is it valuable to work at or influence an AGI company?](#Is_it_valuable_to_work_at_or_influence_an_AGI_company)\n* [Should suffering reducers focus on AGI safety?](#Should_suffering_reducers_focus_on_AGI_safety)\n* [Acknowledgments](#Acknowledgments)\n* [Footnotes](#Footnotes)\n\n\nIntroduction\n------------\n\n\nThis piece contains some observations on what looks to be potentially a coming machine revolution in Earth's history. For general background reading, a good place to start is Wikipedia's article on the [technological singularity](https://en.wikipedia.org/wiki/Technological_singularity).\n\n\nI am not an expert on all the arguments in this field, and my views remain very open to change with new information. 
In the face of epistemic disagreements with other very smart observers, it makes sense to grant some credence to a variety of viewpoints. Each person brings unique contributions to the discussion by virtue of his or her particular background, experience, and intuitions.\n\n\nTo date, I have not found a detailed analysis of how those who are moved more by preventing suffering than by other values should approach singularity issues. This seems to me a serious gap, and research on this topic deserves high priority. In general, it's important to expand discussion of singularity issues to encompass a broader range of participants than the engineers, technophiles, and science-fiction nerds who have historically pioneered the field.\n\n\nI. J. Good [observed](http://aitopics.org/sites/default/files/classic/Machine_Intelligence_10/MI10-Ch29-Good.pdf \"\\\"Ethical machines\\\"\") in 1982: \"The urgent drives out the important, so there is not very much written about ethical machines\". Fortunately, this may be changing.\n\n\n\nIs \"the singularity\" crazy?\n---------------------------\n\n\nIn fall 2005, a friend pointed me to Ray Kurzweil's *[The Age of Spiritual Machines](https://en.wikipedia.org/wiki/The_Age_of_Spiritual_Machines)*. This was my first introduction to \"singularity\" ideas, and I found the book pretty astonishing. At the same time, much of it seemed rather implausible to me. In line with the attitudes of my peers, I assumed that Kurzweil was crazy and that while his ideas deserved further inspection, they should not be taken at face value.\n\n\nIn 2006 I discovered Nick Bostrom and Eliezer Yudkowsky, and I began to follow the organization then called the Singularity Institute for Artificial Intelligence (SIAI), which is now [MIRI](https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute). I took SIAI's ideas more seriously than Kurzweil's, but I remained embarrassed to mention the organization because the first word in SIAI's name sets off \"insanity alarms\" in listeners.\n\n\nI began to study machine learning in order to get a better grasp of the AI field, and in fall 2007, I switched my college major to computer science. As I read textbooks and papers about machine learning, I felt as though \"narrow AI\" was very different from the strong-AI fantasies that people painted. \"AI programs are just a bunch of hacks,\" I thought. \"This isn't intelligence; it's just people using computers to manipulate data and perform optimization, and they dress it up as 'AI' to make it sound sexy.\" Machine learning in particular seemed to be just a computer scientist's version of statistics. Neural networks were just an elaborated form of logistic regression. There were stylistic differences, such as computer science's focus on cross-validation and bootstrapping instead of testing parametric models -- made possible because computers can run data-intensive operations that were inaccessible to statisticians in the 1800s. But overall, this work didn't seem like the kind of \"real\" intelligence that people talked about for general AI.\n\n\nThis attitude began to change as I learned more cognitive science. Before 2008, my ideas about human cognition were vague. Like most science-literate people, I believed the brain was a product of physical processes, including firing patterns of neurons. But I lacked further insight into what the black box of brains might contain. 
This led me to be confused about what \"free will\" meant until mid-2008 and about what \"consciousness\" meant until late 2009. Cognitive science showed me that the brain was in fact very much like a computer, at least in the sense of being a deterministic information-processing device with distinct algorithms and modules. When viewed up close, these algorithms could look as \"dumb\" as the kinds of algorithms in narrow AI that I had previously dismissed as \"not really intelligence.\" Of course, animal brains combine these seemingly dumb subcomponents in dazzlingly complex and robust ways, but I could now see that the difference between narrow AI and brains was a matter of degree rather than kind. It now seemed plausible that broad AI could emerge from lots of work on narrow AI combined with stitching the parts together in the right ways.\n\n\nSo the singularity idea of artificial general intelligence seemed less crazy than it had initially. This was one of the rare cases where a bold claim turned out to look *more* probable on further examination; usually extraordinary claims lack much evidence and crumble on closer inspection. I now think it's quite likely (maybe ~75%) that humans will produce at least a human-level AI within the next ~300 years conditional on no major disasters (such as sustained world economic collapse, global nuclear war, large-scale nanotech war, etc.), and also ignoring [anthropic considerations](http://www.anthropic-principle.com/?q=anthropic_bias).\n\n\n\nThe singularity is more than AI\n-------------------------------\n\n\nThe \"singularity\" concept is broader than the prediction of strong AI and can refer to [several](http://yudkowsky.net/singularity/schools) distinct sub-meanings. Like with most ideas, there's a lot of fantasy and exaggeration associated with \"the singularity,\" but at least the core idea that technology will progress at an accelerating rate for some time to come, absent major setbacks, is not particularly controversial. Exponential growth is the standard model in economics, and while this can't continue forever, it has been a robust pattern throughout human and even pre-human history.\n\n\nMIRI emphasizes AI for a good reason: At the end of the day, the long-term future of our galaxy will be dictated by AI, not by biotech, nanotech, or other lower-level systems. AI is the \"brains of the operation.\" Of course, this doesn't automatically imply that AI should be the primary focus of our attention. Maybe other revolutionary technologies or social forces will come first and deserve higher priority. In practice, I think focusing on AI specifically seems quite important even relative to competing scenarios, but it's good to explore many areas in parallel to at least a shallow depth.\n\n\nIn addition, I don't see a sharp distinction between \"AI\" and other fields. Progress in AI software relies heavily on computer hardware, and it depends at least a little bit on other fundamentals of computer science, like programming languages, operating systems, distributed systems, and networks. AI also shares significant overlap with neuroscience; this is especially true if [whole brain emulation](https://en.wikipedia.org/wiki/Whole_brain_emulation) arrives before bottom-up AI. And everything else in society matters a lot too: How intelligent and engineering-oriented are citizens? How much do governments fund AI and cognitive-science research? (I'd encourage [less](http://utilitarian-essays.com/differential-intellectual-progress.html) rather than more.) 
What kinds of military and commercial applications are being developed? Are other industrial backbone components of society stable? What memetic lenses does society have for understanding and grappling with these trends? And so on. The AI story is part of a larger story of social and technological change, in which one part influences other parts.\n\n\nSignificant trends in AI may not look like the AI we see in movies. They may not involve animal-like cognitive agents as much as more \"boring\", business-oriented computing systems. Some of the most transformative computer technologies in the period 2000-2014 have been drones, smart phones, and social networking. These all involve some AI, but the AI is mostly used as a component of a larger, non-AI system, in which many other facets of software engineering play at least as much of a role.\n\n\nNonetheless, it seems nearly inevitable to me that digital intelligence in some form will eventually leave biological humans in the dust, *if* technological progress continues without faltering. This is almost obvious when we zoom out and notice that the history of life on Earth consists in one species outcompeting another, over and over again. Ecology's [competitive exclusion principle](https://en.wikipedia.org/wiki/Competitive_exclusion_principle) suggests that in the long run, either humans or machines will ultimately occupy the role of the most intelligent beings on the planet, since \"When one species has even the slightest advantage or edge over another then the one with the advantage will dominate in the long term.\"\n\n\n\nWill society realize the importance of AI?\n------------------------------------------\n\n\nThe basic premise of superintelligent machines who have different priorities than their creators has been in public consciousness for many decades. [Arguably](http://dx.doi.org/10.1609/aimag.v7i2.540) even *Frankenstein*, published in 1818, expresses this basic idea, though more modern forms include *2001: A Space Odyssey* (1968), *The Terminator* (1984), *I, Robot* (2004), and [many more](https://en.wikipedia.org/wiki/Artificial_intelligence_in_fiction). Probably most people in Western countries have at least heard of these ideas if not watched or read pieces of fiction on the topic.\n\n\nSo why do most people, including many of society's elites, ignore strong AI as a serious issue? One reason is just that the world is really big, and there are many important (and not-so-important) issues that demand attention. Many people think strong AI is too far off, and we should focus on nearer-term problems. In addition, it's possible that science fiction itself is part of the reason: People may write off AI scenarios as \"just science fiction,\" as I would have done prior to late 2005. (Of course, this is partly for good reason, since depictions of AI in movies are usually very unrealistic.) Often, citing Hollywood is taken as a thought-stopping deflection of the possibility of AI getting out of control, without much in the way of substantive argument to back up that stance. [For example](http://www.businessinsider.com/artificial-intelligence-not-danger-to-humanity-2015-2 \"\\\"Intelligent machines aren't going to overthrow humans\\\"\"): \"let's please keep the discussion firmly within the realm of reason and leave the robot uprisings to Hollywood screenwriters.\"\n\n\nAs AI progresses, I find it hard to imagine that mainstream society will ignore the topic forever. 
Perhaps awareness will accrue gradually, or perhaps an [AI Sputnik moment](http://wiki.lesswrong.com/wiki/AGI_Sputnik_moment) will trigger an avalanche of interest. Stuart Russell [expects](http://www.cs.berkeley.edu/~russell/research/future/ \"\\\"The long-term future of AI\\\"\") that\n\n\n\n> Just as nuclear fusion researchers consider the problem of *containment* of fusion reactions as one of the primary problems of their field, it seems inevitable that issues of control and safety will become central to AI as the field matures.\n> \n> \n\n\nI think it's likely that issues of AI policy will be debated heavily in the coming decades, although it's possible that AI will be like nuclear weapons -- something that everyone is afraid of but that countries can't stop because of arms-race dynamics. So even if AI proceeds slowly, there's probably value in thinking more about these issues well ahead of time, though I wouldn't consider the counterfactual value of doing so to be astronomical compared with other projects in part [because](http://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/#Returns_look_high_before_big_players_enter) society will pick up the slack as the topic becomes more prominent.\n\n\n[*Update, Feb. 2015*: I wrote the preceding paragraphs mostly in May 2014, before Nick Bostrom's *Superintelligence* book was released. Following Bostrom's book, a wave of discussion about AI risk emerged from Elon Musk, Stephen Hawking, Bill Gates, and many others. AI risk suddenly became a mainstream topic discussed by almost every major news outlet, at least with one or two articles. This foreshadows what we'll see more of in the future. The outpouring of publicity for the AI topic happened far sooner than I imagined it would.]\n\nA soft takeoff seems more likely?\n---------------------------------\n\n\nVarious thinkers have debated the likelihood of a \"hard\" takeoff -- in which a single computer or set of computers rapidly becomes superintelligent on its own -- compared with a \"soft\" takeoff -- in which society as a whole is transformed by AI in a more distributed, continuous fashion. \"[The Hanson-Yudkowsky AI-Foom Debate](http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate)\" discusses this in great detail. The topic has also been considered by many others, such as [Ramez Naam](http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html) vs. [William Hertling](http://www.williamhertling.com/2014/02/the-singularity-is-still-closer-than-it.html).\n\n\nFor a long time I inclined toward Yudkowsky's vision of AI, because I respect his opinions and didn't ponder the details too closely. This is also the more prototypical example of rebellious AI in science fiction. In early 2014, a friend of mine challenged this view, noting that computing power is a severe limitation for human-level minds. My friend suggested that AI advances would be slow and would diffuse through society rather than remaining in the hands of a single developer team. As I've read more AI literature, I think this soft-takeoff view is pretty likely to be correct. Science is always a gradual process, and almost all AI innovations historically have moved in tiny steps. I would guess that even the evolution of humans from their primate ancestors was a \"soft\" takeoff in the sense that no single son or daughter was vastly more intelligent than his or her parents. The evolution of technology in general has been fairly continuous. 
I probably agree with Paul Christiano [that](http://web.archive.org/web/20150317142946/http://paulfchristiano.com/ai-impacts/) \"it is unlikely that there will be rapid, discontinuous, and unanticipated developments in AI that catapult it to superhuman levels [...].\"\n\n\nOf course, it's not guaranteed that AI innovations will diffuse throughout society. At some point perhaps governments will take control, in the style of the Manhattan Project, and they'll keep the advances secret. But even then, I expect that the internal advances by the research teams will add cognitive abilities in small steps. Even if you have a theoretically optimal intelligence algorithm, it's constrained by computing resources, so you either need lots of hardware or approximation hacks (or most likely both) before it can function effectively in the high-dimensional state space of the real world, and this again implies a slower trajectory. Marcus Hutter's AIXI(tl) is an example of a theoretically optimal general intelligence, but most AI researchers feel it won't work for artificial general intelligence (AGI) because it's astronomically expensive to compute. Ben Goertzel [explains](https://www.youtube.com/watch?v=IyjoU2JunJQ&t=29m43s): \"I think that tells you something interesting. It tells you that dealing with resource restrictions -- with the boundedness of time and space resources -- is actually critical to intelligence. If you lift the restriction to do things efficiently, then AI and AGI are trivial problems.\"[1](#link_ajs-fn-id_1-33)\n\n\nIn \"[I Still Don’t Get Foom](http://www.overcomingbias.com/2014/07/30855.html)\", Robin Hanson contends:\n\n\n\n> Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right.\n> \n> \n\n\nThis suggests that it's unlikely that a single insight will make an astronomical difference to an AI's performance.\n\n\nSimilarly, my experience is that machine-learning algorithms matter less than the data they're trained on. I think this is a general [sentiment](http://anand.typepad.com/datawocky/2008/03/more-data-usual.html \"\\\"More data usually beats better algorithms\\\" by Anand Rajaraman\") among data scientists. There's a famous slogan that \"More data is better data.\" A main reason Google's performance is so good is that it has so many users that even obscure searches, spelling mistakes, etc. will appear somewhere in its logs. But if many performance gains come from data, then they're constrained by hardware, which generally grows steadily.\n\n\nHanson's \"I Still Don’t Get Foom\" post continues: \"To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.\" Anders Sandberg [makes](http://hanson.gmu.edu/vc.html#sandberg \"\\\"Singularity and the growth of differences\\\"\") a similar point:\n\n\n\n> As the amount of knowledge grows, it becomes harder and harder to keep up and to get an overview, necessitating specialization. [...] This means that a development project might need specialists in many areas, which in turns means that there is a lower size of a group able to do the development. 
In turn, this means that it is very hard for a small group to get far ahead of everybody else in all areas, simply because it will not have the necessary know how in all necessary areas. The solution is of course to hire it, but that will enlarge the group.\n> \n> \n\n\nOne of the more convincing anti-\"foom\" arguments is J. Storrs Hall's [point](http://www.agiri.org/takeoff_hall.pdf \"\\\"Engineering Utopia\\\", AGI 2008\") that an AI improving itself to a world superpower would need to outpace *the entire world economy* of 7 billion people, plus natural resources and physical capital. It would do much better to specialize, sell its services on the market, and acquire power/wealth in the ways that most people do. There are plenty of power-hungry people in the world, but usually they go to Wall Street, K Street, or Silicon Valley rather than trying to build world-domination plans in their basement. Why would an AI be different? Some possibilities:\n\n\n1. By being built differently, it's able to concoct an effective world-domination strategy that no human has thought of.\n2. Its non-human form allows it to diffuse throughout the Internet and make copies of itself.\n\n\nI'm skeptical of #1, though I suppose if the AI is very alien, these kinds of unknown unknowns become more plausible. #2 is an interesting point. It [seems](https://web.archive.org/web/20200103191437/https://hplusmagazine.com/2014/09/26/superintelligence-semi-hard-takeoff-scenarios/ \"\\\"Superintelligence — Semi-hard Takeoff Scenarios\\\" by Ben Goertzel\") like a pretty good way to spread yourself as an AI is to become a useful software product that lots of people want to install, i.e., to sell your services on the world market, as Hall said. Of course, once that's done, perhaps the AI could find a way to take over the world. Maybe it could silently quash competitor AI projects. Maybe it could hack into computers worldwide via the Internet and Internet of Things, as the AI did in the *[Delete](http://www.imdb.com/title/tt2316306/)* series. Maybe it could devise a way to convince humans to give it access to sensitive control systems, as Skynet did in *[Terminator 3](https://en.wikipedia.org/wiki/Terminator_3:_Rise_of_the_Machines)*.\n\n\nI find these kinds of scenarios for AI takeover more plausible than a rapidly self-improving superintelligence. Indeed, even a human-level intelligence that can distribute copies of itself over the Internet might be able to take control of human infrastructure and hence take over the world. No \"foom\" is required.\n\n\nRather than discussing hard-vs.-soft takeoff arguments more here, I added discussion to Wikipedia where the content will receive greater readership. See \"Hard vs. soft takeoff\" in \"[Intelligence explosion](https://en.wikipedia.org/wiki/Intelligence_explosion)\".\n\n\nThe hard vs. soft distinction is obviously a matter of degree. And maybe *how long* the process takes isn't the most relevant way to slice the space of scenarios. For practical purposes, the more relevant question is: Should we expect control of AI outcomes to reside primarily in the hands of a few \"seed AI\" developers? In this case, altruists should focus on influencing a core group of AI experts, or maybe their military / corporate leaders. Or should we expect that society as a whole will play a big role in shaping how AI is developed and used? 
In this case, governance structures, social dynamics, and non-technical thinkers will play an important role not just in influencing how much AI research happens but also in how the technologies are deployed and incrementally shaped as they mature.\n\n\nIt's possible that one country -- perhaps the United States, or maybe China in later decades -- will lead the way in AI development, especially if the research becomes nationalized when AI technology grows more powerful. Would this country then take over the world? I'm not sure. The United States had a monopoly on nuclear weapons for several years after 1945, but it didn't bomb the Soviet Union out of existence. A country with a monopoly on artificial superintelligence might refrain from destroying its competitors as well. On the other hand, AI should enable vastly more sophisticated surveillance and control than was possible in the 1940s, so a monopoly might be sustainable even without resorting to drastic measures. In any case, perhaps a country with superintelligence would just economically outcompete the rest of the world, rendering military power superfluous.\n\n\nBesides a single country taking over the world, the other possibility (perhaps more likely) is that AI is developed in a distributed fashion, either openly as is the case in academia today, or in secret by governments as is the case with other weapons of mass destruction.\n\n\nEven in a soft-takeoff case, there would come a point at which humans would be unable to keep up with the pace of AI thinking. (We already see an instance of this with algorithmic stock-trading systems, although human traders are still needed for more complex tasks right now.) The reins of power would have to be transitioned to faster human uploads, trusted AIs built from scratch, or some combination of the two. In a slow scenario, there might be many intelligent systems at comparable levels of performance, maintaining a balance of power, at least for a while.[2](#link_ajs-fn-id_2-33) In the long run, a [singleton](http://www.nickbostrom.com/fut/singleton.html) seems plausible because computers -- unlike human kings -- can reprogram their servants to want to obey their bidding, which means that as an agent gains more central authority, it's not likely to later lose it by internal rebellion (only by external aggression). Also, evolutionary competition is not a stable state, while a singleton is. It seems likely that evolution will eventually lead to a singleton at one point or other—whether because one faction takes over the world or because the competing factions form a stable cooperation agreement—and competition won't return after that happens. (That said, if the singleton is merely a contingent cooperation agreement among factions that still disagree, one can imagine that cooperation breaking down in the future....)\n\n\nMost of humanity's problems are fundamentally coordination problems / selfishness problems. If humans were perfectly altruistic, we could easily eliminate poverty, overpopulation, war, arms races, and other social ills. There would remain \"man vs. nature\" problems, but these are increasingly disappearing as technology advances. 
Assuming a digital singleton emerges, the chances of it going extinct seem very small (except due to alien invasions or other external factors) because unless the singleton has a very myopic utility function, it should consider carefully all the consequences of its actions -- in contrast to the \"fools rush in\" approach that humanity currently takes toward most technological risks, due to wanting the benefits of and profits from technology right away and not wanting to lose out to competitors. For this reason, I suspect that most of George Dvorsky's \"[12 Ways Humanity Could Destroy The Entire Solar System](http://io9.com/12-ways-humanity-could-destroy-the-entire-solar-system-1696825692)\" are unlikely to happen, since most of them presuppose blundering by an advanced Earth-originating intelligence, but probably by the time Earth-originating intelligence would be able to carry out interplanetary engineering on a nontrivial scale, we'll already have a digital singleton that thoroughly explores the risks of its actions before executing them. That said, this might not be true if competing AIs begin astroengineering before a singleton is completely formed. (By the way, I should point out that I prefer it if the cosmos isn't successfully colonized, because doing so is likely to [astronomically multiply](https://longtermrisk.org/publications/risks-of-astronomical-future-suffering/) sentience and therefore suffering.)\n\n\nIntelligence explosion?\n-----------------------\n\n\nSometimes it's claimed that we should expect a hard takeoff because AI-development dynamics will fundamentally change once AIs can start improving themselves. One stylized way to explain this is via differential equations. Let I(t) be the intelligence of AIs at time t.\n\n\n* While humans are building AIs, we have dI/dt = c, where c is some constant level of human engineering ability. This implies I(t) = ct + constant, a linear growth of I with time.\n* In contrast, once AIs can design themselves, we'll have dI/dt = kI for some k. That is, the rate of growth will be faster as the AI designers become more intelligent. This implies I(t) = Ae^(kt) for some constant A.\n\n\nLuke Muehlhauser [reports](http://lesswrong.com/r/discussion/lw/8ib/connecting_your_beliefs_a_call_for_help/) that the idea of intelligence explosion once machines can start improving themselves \"ran me over like a train. Not because it was absurd, but because it was clearly true.\" I think this kind of exponential feedback loop is the basis behind many of the intelligence-explosion arguments.\n\n\nBut let's think about this more carefully. What's so special about the point where machines can understand and modify themselves? Certainly understanding your own source code helps you improve yourself. But humans *already* understand the source code of present-day AIs with an eye toward improving *it*. Moreover, present-day AIs are vastly simpler than human-level ones will be, and present-day AIs are far less intelligent than the humans who create them. Which is easier: (1) improving the intelligence of something as smart as you, or (2) improving the intelligence of something far dumber? (2) is usually easier. So if anything, AI intelligence should be \"exploding\" faster now, because it can be lifted up by something vastly smarter than it.
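For reference, the two regimes in the bullet points above have the standard closed-form solutions below; this is just a worked restatement, writing I(0) for the starting intelligence level (which plays the role of the constant A):

```latex
% Closed-form solutions of the two growth regimes sketched above.
\[
\frac{dI}{dt} = c \;\Rightarrow\; I(t) = I(0) + c\,t,
\qquad\qquad
\frac{dI}{dt} = kI \;\Rightarrow\; I(t) = I(0)\,e^{kt}.
\]
% If k or I(0) is small, the exponential curve can stay below a fast linear curve for a
% long time before overtaking it -- the "constants matter" caveat discussed below.
```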
Once AIs need to improve themselves, they'll have to pull up on their own bootstraps, without the guidance of an already existing model of far superior intelligence on which to base their designs.\n\n\nAs an analogy, it's harder to produce novel developments if you're the market-leading company; it's easier if you're a competitor trying to catch up, because you know what to aim for and what kinds of designs to reverse-engineer. AI right now is like a competitor trying to catch up to the market leader.\n\n\nAnother way to say this: The constants in the differential equations might be important. Even if human AI-development progress is linear, that progress might be faster than a slow exponential curve until some point far later where the exponential catches up.\n\n\nIn any case, I'm cautious of simple differential equations like these. Why should the rate of intelligence increase be proportional to the intelligence level? Maybe the problems become much harder at some point. Maybe the systems become fiendishly complicated, such that even small improvements take a long time. Robin Hanson [echoes](http://hanson.gmu.edu/vc.html#hanson \"\\\"Some Skepticism\\\", in \\\"A Critical Discussion of Vinge's Singularity Concept\\\"\") this suggestion:\n\n\n\n> Students get smarter as they learn more, and learn how to learn. However, we teach the most valuable concepts first, and the productivity value of schooling eventually falls off, instead of exploding to infinity. Similarly, the productivity improvement of factory workers typically slows with time, following a power law. \n> \n> At the world level, average IQ scores have increased dramatically over the last century (the Flynn effect), as the world has learned better ways to think and to teach. Nevertheless, IQs have improved steadily, instead of accelerating. Similarly, for decades computer and communication aids have made engineers much \"smarter,\" without accelerating Moore's law. While engineers got smarter, their design tasks got harder.\n> \n> \n\n\nAlso, ask yourself this question: Why do startups exist? Part of the answer is that they can innovate faster than big companies due to having less institutional baggage and legacy software.[3](#link_ajs-fn-id_3-33) It's harder to make radical changes to big systems than small systems. Of course, like the economy does, a self-improving AI could create its own virtual startups to experiment with more radical changes, but just as in the economy, it might take a while to prove new concepts and then transition old systems to the new and better models.\n\n\nIn discussions of intelligence explosion, it's common to approximate AI productivity as scaling linearly with number of machines, but this may or may not be true depending on the degree of parallelizability. Empirical examples for human-engineered projects [show diminishing returns](https://en.wikipedia.org/wiki/The_Mythical_Man-Month) with more workers, and while computers may be better able to partition work due to greater uniformity and speed of communication, there will remain some overhead in parallelization. 
Some tasks may be inherently non-parallelizable, [preventing](https://en.wikipedia.org/wiki/Amdahl%27s_law) the kinds of ever-faster performance that the most extreme explosion scenarios envisage.\n\n\nFred Brooks's \"[No Silver Bullet](https://en.wikipedia.org/wiki/No_Silver_Bullet)\" paper argued that \"there is no single development, in either technology or management technique, which by itself promises even one order of magnitude improvement within a decade in productivity, in reliability, in simplicity.\" Likewise, [Wirth's law](https://en.wikipedia.org/wiki/Wirth%27s_law) reminds us of how fast software complexity can grow. These points make it seem less plausible that an AI system could rapidly bootstrap itself to superintelligence using just a few key as-yet-undiscovered insights.\n\n\n[Chollet (2017)](https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec \"'The impossibility of intelligence explosion – François Chollet – Medium'\") notes that \"even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks.\" We might compare this with [Liebig's law of the minimum](https://en.wikipedia.org/wiki/Liebig%27s_law_of_the_minimum \"'Liebig's law of the minimum - Wikipedia'\"): \"growth is dictated not by total resources available, but by the scarcest resource (limiting factor).\" Individual sectors of the human economy can show rapid growth at various times, but the growth rate of the entire economy is more limited.\n\n\nEventually there has to be a leveling off of intelligence increase if only due to physical limits. On the other hand, one argument in favor of differential equations is that the economy has fairly consistently followed exponential trends since humans evolved, though the exponential growth rate of today's economy remains small relative to what we typically imagine from an \"intelligence explosion\".\n\n\nI think a stronger case for intelligence explosion is the clock-speed difference [between biological and digital minds](http://philpapers.org/archive/SOTAOA.pdf \"\\\"Advantages of Artificial Intelligences, Uploads, and Digital Minds\\\" by Kaj Sotala\"). Even if AI development becomes very slow in subjective years, once AIs take it over, in objective years (i.e., revolutions around the sun), the pace will continue to look blazingly fast. But if enough of society is digital by that point (including human-inspired subroutines and maybe [full digital humans](#Importance_of_whole_brain_emulation)), then digital speedup won't give a unique advantage to a single AI project that can then take over the world. Hence, hard takeoff in the sci fi sense still isn't guaranteed. Also, Hanson [argues](http://hanson.gmu.edu/vc.html#hanson \"\\\"Some Skepticism\\\", in \\\"A Critical Discussion of Vinge's Singularity Concept\\\"\") that faster minds would produce a one-time jump in economic output but not necessarily a sustained higher *rate* of growth.\n\n\nAnother case for intelligence explosion is that intelligence growth might not be driven by the intelligence of a given agent so much as by the collective man-hours (or machine-hours) that would become possible with more resources. I suspect that AI research could accelerate at least 10 times if it had 10-50 times more funding. (This is not the same as saying I want funding increased; in fact, I probably want funding decreased to give society more time to sort through these issues.)
The population of digital minds that could be created in a few decades might exceed the biological human population, which would imply faster progress if only by numerosity. Also, the digital minds might not need to sleep, would focus intently on their assigned tasks, etc. However, once again, these are advantages in objective time rather than collective subjective time. And these advantages would not be uniquely available to a single first-mover AI project; any wealthy and technologically sophisticated group that wasn't too far behind the cutting edge could amplify its AI development in this way.\n\n\n(A few weeks after writing this section, I learned that Ch. 4 of Nick Bostrom's *Superintelligence: Paths, Dangers, Strategies* contains surprisingly similar content, even up to the use of dI/dt as the symbols in a differential equation. However, Bostrom comes down mostly in favor of the likelihood of an intelligence explosion. I reply to Bostrom's arguments in the next section.)\n\n\nReply to Bostrom's arguments for a hard takeoff\n-----------------------------------------------\n\n\n*Note: Søren Elverlin replies to this section in a [video presentation](https://www.youtube.com/watch?v=04nu87UnslI \"'81: Reply to Bostrom's Arguments for a Hard Takeoff', AI Safety Reading Group, published on Jan 31, 2018\"). I agree with some of his points and disagree with others.*\n\n\nIn Ch. 4 of *Superintelligence*, Bostrom suggests several factors that might lead to a hard or at least semi-hard takeoff. I don't fully disagree with his points, and because these are difficult issues, I agree that Bostrom might be right. But I want to play devil's advocate and defend the soft-takeoff view. I've distilled and paraphrased what I think are 6 core arguments, and I reply to each in turn.\n\n\n*#1: There might be a key missing algorithmic insight that allows for dramatic progress.*\n\n\nMaybe, but do we have much precedent for this? As far as I'm aware, all individual AI advances -- and indeed, most technology advances in general -- have not represented astronomical improvements over previous designs. Maybe connectionist AI systems represented a game-changing improvement *relative to* symbolic AI for messy tasks like vision, but I'm not sure how much of an improvement they represented relative to the best alternative technologies. After all, neural networks are in some sense just fancier forms of pre-existing statistical methods like logistic regression. And even neural networks came in stages, with the perceptron, multi-layer networks, backpropagation, recurrent networks, deep networks, etc. The most groundbreaking machine-learning advances may reduce error rates by a half or something, which may be commercially very important, but this is not many orders of magnitude as hard-takeoff scenarios tend to assume.\n\n\nOutside of AI, the Internet changed the world, but it was an accumulation of many insights. Facebook has had massive impact, but it too was built from many small parts and grew in importance slowly as its size increased. Microsoft became a virtual monopoly in the 1990s but perhaps more for business than technology reasons, and its power in the software industry at large is probably not growing. Google has a quasi-monopoly on web search, kicked off by the success of PageRank, but most of its improvements have been small and gradual. 
Google has grown very powerful, but it hasn't maintained a permanent advantage that would allow it to take over the software industry.\n\n\nAcquiring nuclear weapons might be the closest example of a single discrete step that most dramatically changes a country's position, but this may be an outlier. Maybe other advances in weaponry (arrows, guns, etc.) historically have had somewhat dramatic effects.\n\n\nBostrom doesn't present specific arguments for thinking that a few crucial insights may produce radical jumps. He suggests that we might not notice a system's improvements until it passes a threshold, but this seems absurd, because at least the AI developers would need to be intimately acquainted with the AI's performance. While not strictly accurate, there's a slogan: \"You can't improve what you can't measure.\" Maybe the AI's progress wouldn't make world headlines, but the academic/industrial community would be well aware of nontrivial breakthroughs, and the AI developers would live and breathe performance numbers.\n\n\n*#2: Once an AI passes a threshold, it might be able to absorb vastly more content (e.g., by reading the Internet) that was previously inaccessible.*\n\n\nAbsent other concurrent improvements I'm doubtful this would produce take-over-the-world superintelligence, because the world's current superintelligence (namely, humanity as a whole) already has read most of the Internet -- indeed, has written it. I guess humans haven't read all automatically generated text or vast streams of numerical data, but the insights gleaned purely from reading such material would be low without doing more sophisticated data mining and learning on top of it, and presumably such data mining would have already been in progress well before Bostrom's hypothetical AI learned how to read.\n\n\nIn any case, I doubt reading with understanding is such an all-or-nothing activity that it can suddenly \"turn on\" once the AI achieves a certain ability level. As Bostrom says (p. 71), reading with the comprehension of a 10-year-old is probably AI-complete, i.e., requires solving the general AI problem. So assuming that you can switch on reading ability with one improvement is equivalent to assuming that a single insight can produce astronomical gains in AI performance, which we discussed above. If that's not true, and if before the AI system with 10-year-old reading ability was an AI system with a 6-year-old reading ability, why wouldn't that AI have already devoured the Internet? And before that, why wouldn't a proto-reader have devoured a version of the Internet that had been processed to make it easier for a machine to understand? And so on, until we get to the present-day TextRunner system that Bostrom cites, which is already devouring the Internet. It doesn't make sense that massive amounts of content would only be added after lots of improvements. Commercial incentives tend to yield exactly the opposite effect: converting the system to a large-scale product when even modest gains appear, because these may be enough to snatch a market advantage.\n\n\nThe fundamental point is that I don't think there's a crucial set of components to general intelligence that all need to be in place before the whole thing works. It's hard to evolve systems that require all components to be in place at once, which suggests that human general intelligence probably evolved gradually. 
I expect it's possible to get partial AGI with partial implementations of the components of general intelligence, and the components can gradually be made more general over time. Components that are lacking can be supplemented by [human-based computation](https://en.wikipedia.org/wiki/Human-based_computation) and narrow-AI hacks until more general solutions are discovered. Compare with [minimum viable products](https://en.wikipedia.org/wiki/Minimum_viable_product) and [agile software development](https://en.wikipedia.org/wiki/Agile_software_development). As a result, society should be upended by partial AGI innovations many times over the coming decades, well before fully human-level AGI is finished.\n\n\n*#3: Once a system \"proves its mettle by attaining human-level intelligence\", funding for hardware could multiply.*\n\n\nI agree that funding for AI could multiply manyfold due to a sudden change in popular attention or political dynamics. But I'm thinking of something like a factor of 10 or *maybe* 50 in an all-out Cold War-style arms race. A factor-of-50 boost in hardware isn't obviously that important. If before there was one human-level AI, there would now be 50. In any case, I expect the Sputnik moment(s) for AI to happen well before it achieves a human level of ability. Companies and militaries aren't stupid enough not to invest massively in an AI with almost-human intelligence.\n\n\n*#4: Once the human level of intelligence is reached, \"Researchers may work harder, [and] more researchers may be recruited\".*\n\n\nAs with hardware above, I would expect these \"shit hits the fan\" moments to happen before fully human-level AI. In any case:\n\n\n* It's not clear there would be enough AI specialists to recruit in a short time. Other quantitatively minded people could switch to AI work, but they would presumably need years of experience to produce cutting-edge insights.\n* The number of people thinking about AI safety, ethics, and social implications should also multiply during Sputnik moments. So the ratio of AI policy work to total AI work might not change relative to slower takeoffs, even if the physical time scales would compress.\n\n\n*#5: At some point, the AI's self-improvements would dominate those of human engineers, leading to exponential growth.*\n\n\nI discussed this in the \"Intelligence explosion?\" section above. A main point is that we see many other systems, such as the world economy or Moore's law, that also exhibit positive feedback and hence exponential growth, but these aren't \"fooming\" at an astounding rate. It's not clear why an AI's self-improvement -- which [resembles](https://en.wikipedia.org/wiki/Software_entropy) economic growth and other [complex phenomena](http://www.overcomingbias.com/2014/07/limits-on-generality.html \"\\\"Irreducible Detail\\\" by Robin Hanson\") -- should suddenly explode faster (in subjective time) than humanity's existing recursive-self improvement of its intelligence via digital computation.\n\n\nOn the other hand, maybe the difference between subjective and objective time is important. If a human-level AI could think, say, 10,000 times faster than a human, then assuming linear scaling, it would be worth 10,000 engineers. By the time of human-level AI, I expect there would be far more than 10,000 AI developers on Earth, but given enough hardware, the AI could copy itself manyfold until its subjective time far exceeded that of human experts. 
The speed and copiability advantages of digital minds seem perhaps the strongest arguments for a takeoff that happens rapidly relative to human observers. Note that, as Hanson said above, this digital speedup might be just a one-time boost, rather than a permanently higher rate of growth, but even the one-time boost could be enough to radically alter the power dynamics of humans vis-à-vis machines. That said, there should be plenty of slightly sub-human AIs by this time, and maybe they could fill some speed gaps on behalf of biological humans.\n\n\nIn general, it's a mistake to imagine human-level AI against a backdrop of our current world. That's like [imagining](https://en.wikipedia.org/wiki/The_Lost_World:_Jurassic_Park#Plot) a *Tyrannosaurus rex* in a human city. Rather, the world will look very different by the time human-level AI arrives. Before AI can exceed human performance in all domains, it will exceed human performance in many narrow domains gradually, and these narrow-domain AIs will help humans respond quickly. For example, a narrow AI that's an expert at military planning based on war games can help humans with possible military responses to rogue AIs.\n\n\nMany of the intermediate steps on the path to general AI will be commercially useful and thus should diffuse widely in the meanwhile. As user \"HungryHobo\" [noted](http://lesswrong.com/r/discussion/lw/lhm/inverse_relationship_between_belief_in_foom_and/btee \"\\\"HungryHobo comments on Inverse relationship between belief in foom and years worked in commercial software - Less Wrong Discussion\\\"\"): \"If you had a near human level AI, odds are, everything that could be programmed into it at the start to help it with software development is already going to be part of the suites of tools for helping normal human programmers.\" Even if AI research becomes nationalized and confidential, its developers should still have access to almost-human-level digital-speed AI tools, which should help smooth the transition. For instance, Bostrom mentions how in the [2010 flash crash](https://en.wikipedia.org/wiki/2010_Flash_Crash) (Box 2, p. 17), a high-speed positive-feedback spiral was terminated by a high-speed \"circuit breaker\". This is already an example where problems happening faster than humans could comprehend them were averted due to solutions happening faster than humans could comprehend them. See also the discussion of \"tripwires\" in *Superintelligence* (p. 137).\n\n\nConversely, many globally disruptive events may happen well before fully human AI arrives, since even sub-human AI may be prodigiously powerful.\n\n\n*#6: \"even when the outside world has a greater total amount of relevant research capability than any one project\", the optimization power of the project might be more important than that of the world \"since much of the outside world's capability is not focused on the particular system in question\". Hence, the project might take off and leave the world behind. (Box 4, p. 75)*\n\n\nWhat one makes of this argument depends on how many people are needed to engineer how much progress. The [Watson](https://en.wikipedia.org/wiki/Watson_(computer)) system that played on *Jeopardy!* [required](http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html?_r=0&pagewanted=all \"\\\"What Is I.B.M.’s Watson?\\\" by Clive Thompson\") 15 people over ~4(?) 
years[4](#link_ajs-fn-id_4-33) -- given the existing tools of the rest of the world at that time, which had been developed by millions (indeed, billions) of other people. Watson was a much smaller leap forward than that needed to give a general intelligence a take-over-the-world advantage. How many more people would be required to achieve such a radical leap in intelligence? This seems to be a main point of contention in the debate between believers in soft vs. hard takeoff.\n\n\nHow complex is the brain?\n-------------------------\n\n\nCan we get insight into how hard general intelligence is based on neuroscience? Is the human brain fundamentally simple or complex?\n\n\n### One basic algorithm?\n\n\nJeff Hawkins, Andrew Ng, and others [speculate that](http://www.wired.com/2013/05/neuro-artificial-intelligence/ \"\\\"The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI\\\"\") the brain may have one fundamental algorithm for intelligence -- deep learning in the cortical column. This idea gains plausibility from the brain's plasticity. For instance, blind people can appropriate the visual cortex for auditory processing. Artificial neural networks can be used to classify any kind of input -- not just visual and auditory but even highly abstract, like features about credit-card fraud or stock prices.\n\n\nMaybe there's one fundamental algorithm for input classification, but this doesn't imply one algorithm for all that the brain does. Beyond the cortical column, the brain has many specialized structures that seem to perform very specialized functions, such as reward learning in the basal ganglia, fear processing in the amygdala, etc. Of course, it's not clear how essential all of these parts are or how easy it would be to replace them with artificial components performing the same basic functions.\n\n\nOne argument for faster AGI takeoffs is that humans have been able to learn many sophisticated things (e.g., advanced mathematics, music, writing, programming) without requiring any genetic changes. And what we now know doesn't seem to represent any kind of limit to what we could know with more learning. The human collection of cognitive algorithms is very flexible, which seems to belie claims that all intelligence requires specialized designs. On the other hand, even if human genes haven't changed much in the last 10,000 years, human culture has evolved substantially, and culture undergoes slow trial-and-error evolution in similar ways as genes do. So one could argue that human intellectual achievements are not fully general but rely on a vast amount of specialized, evolved content. Just as a single random human isolated from society probably couldn't develop general relativity on his own in a lifetime, so a single random human-level AGI probably couldn't either. Culture is the new genome, and it progresses slowly.\n\n\nMoreover, some scholars [believe](http://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/ \"'Noam Chomsky on Where Artificial Intelligence Went Wrong'\") that certain human abilities, such as language, *are* very essentially based on genetic hard-wiring:\n\n\n\n> The approach taken by Chomsky and Marr toward understanding how our minds achieve what they do is as different as can be from behaviorism. The emphasis here is on the internal structure of the system that enables it to perform a task, rather than on external association between past behavior of the system and the environment. 
The goal is to dig into the \"black box\" that drives the system and describe its inner workings, much like how a computer scientist would explain how a cleverly designed piece of software works and how it can be executed on a desktop computer.\n> \n> \n\n\nChomsky himself [notes](http://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/ \"'Noam Chomsky on Where Artificial Intelligence Went Wrong'\"):\n\n\n\n> There's a fairly recent book by a very good cognitive neuroscientist, Randy Gallistel and King, arguing -- in my view, plausibly -- that neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology.\n> \n> \n> [...] Gallistel has been arguing for years that if you want to study the brain properly you should begin, kind of like Marr, by asking what tasks is it performing. So he's mostly interested in insects. So if you want to study, say, the neurology of an ant, you ask what does the ant do? It turns out the ants do pretty complicated things, like path integration, for example. If you look at bees, bee navigation involves quite complicated computations, involving position of the sun, and so on and so forth. But in general what he argues is that if you take a look at animal cognition, human too, it is computational systems.\n\n\nMany parts of the human body, like the digestive system or bones/muscles, are extremely complex and fine-tuned, yet few people argue that their development is controlled by learning. So it's not implausible that a lot of the brain's basic architecture could be similarly hard-coded.\n\n\nTypically AGI researchers express scorn for manually tuned software algorithms that don't rely on fully general learning. But Chomsky's stance challenges that sentiment. If Chomsky is right, then a good portion of human \"general intelligence\" is finely tuned, hard-coded software of the sort that we see in non-AI branches of software engineering. And this view would suggest a slower AGI takeoff because time and experimentation are required to tune all the detailed, specific algorithms of intelligence.\n\n\n### Ontogenetic development\n\n\nA full-fledged superintelligence probably requires very complex design, but it may be possible to build a \"seed AI\" that would recursively self-improve toward superintelligence. Alan Turing proposed this in his 1950 \"[Computing machinery and intelligence](https://web.archive.org/web/20170716100252/http://loebner.net/Prizef/TuringArticle.html)\":\n\n\n\n> Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer's. Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed.\n> \n> \n\n\nAnimal development appears to be at least somewhat robust based on the fact that the growing organisms are often functional despite a few genetic mutations and variations in prenatal and postnatal environments. 
Such variations may indeed make an impact -- e.g., healthier development conditions tend to yield more physically attractive adults -- but most humans mature successfully over a wide range of input conditions.\n\n\nOn the other hand, an argument against the simplicity of development is the immense complexity of our DNA. It accumulated over billions of years through vast numbers of evolutionary \"experiments\". It's not clear that human engineers could perform enough measurements to tune ontogenetic parameters of a seed AI in a short period of time. And even if the parameter settings worked for early development, they would probably fail for later development. Rather than a seed AI developing into an \"adult\" all at once, designers would develop the AI in small steps, since each next stage of development would require significant tuning to get right.\n\n\nThink about how much effort is required for human engineers to build even relatively simple systems. For example, I think the number of developers who work on Microsoft Office is in the thousands. Microsoft Office is complex but is still far simpler than a mammalian brain. Brains have lots of little parts that have been fine-tuned. That kind of complexity requires immense work by software developers to create. The main counterargument is that there may be a simple meta-algorithm that would allow an AI to bootstrap to the point where it could fine-tune all the details on its own, without requiring human inputs. This might be the case, but my guess is that any elegant solution would be hugely expensive computationally. For instance, biological evolution was able to fine-tune the human brain, but it did so with immense amounts of computing power over millions of years.\n\n\nBrain quantity vs. quality\n--------------------------\n\n\nA common analogy for the gulf between superintelligence and humans is the gulf between humans and chimpanzees. In *Consciousness Explained*, Daniel Dennett mentions (pp. 189-190) how our hominid ancestors had brains roughly four times the volume of those of chimps but roughly the same in structure. This might incline one to imagine that brain size alone could yield superintelligence. Maybe we'd just need to quadruple human brains once again to produce superintelligent humans? If so, wouldn't this imply a hard takeoff, since quadrupling hardware is relatively easy?\n\n\nBut in fact, as Dennett explains, the quadrupling of brain size from chimps to pre-humans was complete before the advent of language, cooking, agriculture, etc. In other words, the main \"foom\" of humans came from culture rather than brain size per se -- from software in addition to hardware. Yudkowsky [seems to agree](http://intelligence.org/files/IEM.pdf \"\\\"Intelligence Explosion Microeconomics\\\", p. 26\"): \"Humans have around four times the brain volume of chimpanzees, but the difference between us is probably mostly brain-level cognitive algorithms.\"\n\n\nBut cultural changes (software) arguably progress a lot more slowly than hardware. The intelligence of human society has grown exponentially, but it's a slow exponential, and rarely have there been innovations that allowed one group to quickly overpower everyone else within the same region of the world. 
(Between isolated regions of the world the situation was sometimes different -- e.g., Europeans with [Maxim guns](https://en.wikiquote.org/wiki/Hilaire_Belloc#Quotes) overpowering Africans because of very different levels of industrialization.)\n\n\nMore impact in hard-takeoff scenarios?\n--------------------------------------\n\n\nSome, including [Owen Cotton-Barratt and Toby Ord](http://www.effective-altruism.com/strategic-considerations-about-different-speeds-of-ai-takeoff/), have argued that even if we think soft takeoffs are more likely, there may be higher value in focusing on hard-takeoff scenarios because these are the cases in which society would have the least forewarning and the fewest people working on AI altruism issues. This is a reasonable point, but I would add that\n\n\n* Maybe hard takeoffs are sufficiently improbable that focusing on them still doesn't have highest priority. (Of course, some exploration of fringe scenarios is worthwhile.) There may be important advantages to starting early in shaping how society approaches soft takeoffs, and if a soft takeoff is very likely, those efforts may have more expected impact.\n* Thinking about the most likely AI outcomes rather than the most impactful outcomes also gives us a better platform on which to contemplate other levers for shaping the future, such as non-AI emerging technologies, international relations, governance structures, values, etc. Focusing on a tail AI scenario doesn't inform non-AI work very well because that scenario probably won't happen. Promoting antispeciesism matters whether there's a hard or soft takeoff (indeed, maybe more in the soft-takeoff case), so our model of how the future will unfold should generally focus on likely scenarios. Plus, even if we do ultimately choose to focus on a Pascalian low-probability-but-high-impact scenario, learning more about the most likely future outcomes can better position us to find superior (more likely and/or more important) Pascalian wagers that we haven't thought of yet. Edifices of understanding are not built on Pascalian wagers.\n* As a more general point about expected-value calculations, I think improving one's models of the world (i.e., one's probabilities) is generally more important than improving one's estimates of the values of outcomes conditional on them occurring. Why? Our current frameworks for envisioning the future may be very misguided, and estimates of \"values of outcomes\" may become obsolete if our conception of what outcomes will even happen changes radically. It's more important to make crucial insights that will shatter our current assumptions and get us closer to truth than it is to refine value estimates within our current, naive world models. As an example, philosophers in the Middle Ages would have accomplished little if they had asked what God-glorifying actions to focus on by evaluating which devout obeisances would have the greatest upside value if successful. Such philosophers would have accomplished more if they had explored whether a God even existed. Of course, sometimes debates on factual questions are stalled, and perhaps there may be lower-hanging fruit in evaluating the prudential implications of different scenarios (\"values of outcomes\") until further epistemic progress can be made on the probabilities of outcomes. 
(Thanks to a friend for inspiring this point.)\n\n\nIn any case, the hard-soft distinction is not binary, and maybe the best place to focus is on scenarios where human-level AI takes over on a time scale of a few years. (Timescales of months, days, or hours strike me as pretty improbable, unless, say, Skynet gets control of nuclear weapons.)\n\n\nIn *Superintelligence*, Nick Bostrom suggests (Ch. 4, p. 64) that \"Most preparations undertaken before onset of [a] slow takeoff would be rendered obsolete as better solutions would gradually become visible in the light of the dawning era.\" Toby Ord [uses](http://www.fhi.ox.ac.uk/the-timing-of-labour-aimed-at-reducing-existential-risk/ \"\\\"The timing of labour aimed at reducing existential risk\\\"\") the term \"nearsightedness\" to refer to the ways in which research too far in advance of an issue's emergence may not be as useful as research when more is known about the issue. Ord contrasts this with benefits of starting early, including course-setting. I think Ord's counterpoints argue against the contention that early work wouldn't matter that much in a slow takeoff. Some of how society responds to AI surpassing human intelligence might depend on early frameworks and memes. (For instance, consider the lingering impact of *Terminator* imagery on almost any present-day popular-media discussion of AI risk.) Some fundamental work would probably not be overthrown by later discoveries; for instance, algorithmic-complexity bounds of key algorithms were discovered decades ago but will remain relevant until intelligence dies out, possibly billions of years from now. Some non-technical policy and philosophy work would be less obsoleted by changing developments. And some AI preparation would be relevant both in the short term and the long term. Slow AI takeoff to reach the human level is already happening, and more minds should be exploring these questions well in advance.\n\n\nMaking a related though slightly different point, Bostrom argues in *Superintelligence* (Ch. 5, pp. 85-86) that individuals might play more of a role in cases where elites and governments underestimate the significance of AI: \"Activists seeking maximum expected impact may therefore wish to focus most of their planning on [scenarios where governments come late to the game], even if they believe that scenarios in which big players end up calling all the shots are more probable.\" Again I would qualify this with the note that we shouldn't confuse \"acting as if\" governments will come late with believing they actually will come late when thinking about most likely future scenarios.\n\n\nEven if one does wish to bet on low-probability, high-impact scenarios of fast takeoff and governmental neglect, this doesn't speak to whether or how we should push on takeoff speed and governmental attention themselves. Following are a few considerations.\n\n\nTakeoff speed\n\n\n* In favor of fast takeoff:\n\t+ A singleton is more likely, thereby averting possibly disastrous conflict among AIs.\n\t+ If one prefers uncontrolled AI, fast takeoffs seem more likely to produce them.\n* In favor of slow takeoff:\n\t+ More time for many parties to participate in shaping the process, compromising, and developing less damaging pathways to AI takeoff.\n\t+ If one prefers controlled AI, slow takeoffs seem more likely to produce them in general. (There are some exceptions. 
For instance, fast takeoff of an AI built by a very careful group might remain more controlled than an AI built by committees and messy politics.)\n\n\nAmount of government/popular attention to AI\n\n\n* In favor of more:\n\t+ Would yield much more reflection, discussion, negotiation, and pluralistic representation.\n\t+ If one favors controlled AI, it's plausible that multiplying the number of people thinking about AI would multiply consideration of [failure modes](http://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/ \"\\\"AI Risk and the Security Mindset\\\"\").\n\t+ Public pressure might help curb arms races, in analogy with public opposition to nuclear arms races.\n* In favor of less:\n\t+ Wider attention to AI [might accelerate arms races](https://longtermrisk.org/publications/international-cooperation-vs-ai-arms-race/#Should_we_publicize_AI_arms_races) rather than inducing cooperation on more circumspect planning.\n\t+ The public might freak out and demand counterproductive measures in response to the threat.\n\t+ If one prefers uncontrolled AI, that outcome may be less likely with many more human eyes scrutinizing the issue.\n\n\nVillage idiot vs. Einstein\n--------------------------\n\n\nOne of the strongest arguments for hard takeoff is [this one](http://lesswrong.com/lw/ql/my_childhood_role_model/ \"'My Childhood Role Model'\") by Yudkowsky:\n\n\n\n> the distance from \"village idiot\" to \"Einstein\" is tiny, in the space of *brain designs*\n> \n> \n\n\nOr as Scott Alexander [put it](http://slatestarcodex.com/2015/12/17/should-ai-be-open/):\n\n\n\n> It took evolution twenty million years to go from cows with sharp horns to hominids with sharp spears; it took only a few tens of thousands of years to go from hominids with sharp spears to moderns with nuclear weapons.\n> \n> \n\n\nI think we shouldn't take relative evolutionary timelines at face value, because most of the previous 20 million years of mammalian evolution weren't focused on improving human intelligence; most of the evolutionary selection pressure was directed toward optimizing other traits. In contrast, cultural evolution places greater emphasis on intelligence because that trait is more important in human society than it is in most animal fitness landscapes.\n\n\nStill, the overall point is important: The tweaks to a brain needed to produce human-level intelligence may not be huge compared with the designs needed to produce chimp intelligence, but the differences in the behaviors of the two systems, when placed in a sufficiently information-rich environment, are huge.\n\n\nNonetheless, I incline toward thinking that the transition from human-level AI to an AI significantly smarter than all of humanity combined would be somewhat gradual (requiring at least years if not decades) because the absolute scale of improvements needed would still be immense and would be limited by hardware capacity. But if hardware becomes many orders of magnitude more efficient than it is today, then things could indeed move more rapidly.\n\n\nAnother important criticism of the \"village idiot\" point is that it lacks context. While a village idiot in isolation will not produce rapid progress toward superintelligence, one Einstein plus a million village idiots working for him can produce AI progress much faster than one Einstein alone. 
The narrow-intelligence software tools that we build are dumber than village idiots in isolation, but collectively, when deployed in thoughtful ways by smart humans, they allow humans to achieve much more than Einstein by himself with only pencil and paper. This observation weakens the idea of a phase transition when human-level AI is developed, because village-idiot-level AIs in the hands of humans will already be achieving \"superhuman\" levels of performance. If we think of human intelligence as the number 1 and human-level AI that can build smarter AI as the number 2, then rather than imagining a transition from 1 to 2 at one crucial point, we should think of our \"dumb\" software tools as taking us to 1.1, then 1.2, then 1.3, and so on. (My thinking on this point was inspired by Ramez Naam.)\n\n\nAI performance in games vs. the real world\n------------------------------------------\n\n\nMany of the most impressive AI achievements of the 2010s were improvements at game play, both video games like Atari games and board/card games [like Go](https://en.wikipedia.org/wiki/AlphaGo \"'AlphaGo - Wikipedia'\") and poker. Some people infer from these accomplishments that AGI may not be far off. I think performance in these simple games doesn't give much evidence that a world-conquering AGI could arise within a decade or two.\n\n\nA main reason is that most of the games at which AI has excelled have had simple rules and a limited set of possible actions at each turn. [Russell and Norvig (2003)](https://smile.amazon.com/Artificial-Intelligence-Modern-Approach-2nd/dp/0137903952/ \"'Artificial Intelligence: A Modern Approach (2nd Edition)'\"), pp. 161-62: \"For AI researchers, the abstract nature of games makes them an appealing subject for study. The state of a game is easy to represent, and agents are usually restricted to a small number of actions whose outcomes are defined by precise rules.\" In games like *Space Invaders* or Go, you can see the entire world at once and represent it as a two-dimensional grid.[5](#link_ajs-fn-id_5-33) You can also consider all possible actions at a given turn. For example, AlphaGo's \"policy networks\" gave \"a probability value for each possible legal move (i.e. the output of the network is as large as the board)\" (as summarized by [Burger 2016](https://www.tastehit.com/blog/google-deepmind-alphago-how-it-works/ \"'Google DeepMind's AlphaGo: How it works'\")). Likewise, DeepMind's deep Q-network for playing Atari games had \"a single output for each valid action\" ([Mnih et al. 2015](http://doi.org/10.1038/nature14236 \"'Human-level control through deep reinforcement learning'\"), p. 530).\n\n\nIn contrast, the state space of the world is enormous, heterogeneous, not easily measured, and not easily represented in a simple two-dimensional grid. 
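
To make the game side of this contrast concrete, here is a minimal sketch (not DeepMind's actual code; the output sizes just follow the descriptions quoted above) of how small and fully enumerable these action spaces are:

```python
import numpy as np

# Toy illustration of the quoted output layers: in Go or Atari, the whole
# action menu can be enumerated up front, so a policy can literally be one
# probability per move (pass moves and legality masking are omitted here).

GO_BOARD_OUTPUTS = 19 * 19   # "as large as the board" -> 361 entries
ATARI_ACTIONS = 18           # one output per joystick/button action

def toy_policy(logits: np.ndarray) -> np.ndarray:
    """Softmax over a fixed, exhaustively enumerable set of actions."""
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

print(len(toy_policy(np.random.randn(GO_BOARD_OUTPUTS))))  # 361
print(len(toy_policy(np.random.randn(ATARI_ACTIONS))))     # 18
```
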
Plus, the number of possible actions that one can take at any given moment is almost unlimited; for instance, even just considering actions of the form \"print to the screen a string of uppercase or lowercase alphabetical characters fewer than 50 characters long\", the number of possibilities for what text to print out is larger than the number of atoms in the observable universe.[6](#link_ajs-fn-id_6-33) These problems seem to require hierarchical world models and hierarchical planning of actions—allowing for abstraction of complexity into simplified and high-level conceptualizations—as well as the data structures, learning algorithms, and simulation capabilities on which such world models and plans can be based.\n\n\nSome people may be impressed that AlphaGo uses \"intuition\" (i.e., deep neural networks), like human players do, and doesn't rely purely on brute-force search and hand-crafted heuristic evaluation functions the way that Deep Blue did to win at chess. But the idea that computers can have \"intuition\" is nothing new, since that's what most machine-learning classifiers are about.\n\n\nMachine learning, especially supervised machine learning, is very popular these days compared against other aspects of AI. Perhaps this is because unlike most other parts of AI, machine learning can easily be commercialized? But even if visual, auditory, and other sensory recognition can be replicated by machine learning, this doesn't get us to AGI. In my opinion, the hard part of AGI (or at least, the part we haven't made as much progress on) is how to hook together various narrow-AI modules and abilities into a more generally intelligent agent that can figure out what abilities to deploy in various contexts in pursuit of higher-level goals. Hierarchical planning in complex worlds, rich semantic networks, and general \"common sense\" in various flavors still seem largely absent from many state-of-the-art AI systems as far as I can tell. I don't think these are problems that you can just bypass by scaling up deep reinforcement learning or something.\n\n\n[Kaufman (2017a)](https://www.jefftk.com/p/conversation-with-bryce-wiedenbeck \"'Conversation with Bryce Wiedenbeck'\") says regarding a conversation with professor Bryce Wiedenbeck: \"Bryce thinks there are deep questions about what intelligence really is that we don't understand yet, and that as we make progress on those questions we'll develop very different sorts of [machine-learning] systems. If something like today's deep learning is still a part of what we eventually end up with, it's more likely to be something that solves specific problems than as a critical component.\" Personally, I think deep learning (or something functionally analogous to it) is likely to remain a big *component* of future AI systems. Two lines of evidence for this view are that (1) supervised machine learning has been a cornerstone of AI for decades and (2) animal brains, including the human cortex, seem to rely crucially on something like deep learning for sensory processing. 
However, I agree with Bryce that there remain big parts of human intelligence that aren't captured by even a scaled-up version of deep learning.\n\n\nI also largely agree with Michael Littman's expectations as described by [Kaufman (2017b)](https://www.jefftk.com/p/conversation-with-michael-littman \"'Conversation with Michael Littman'\"): \"I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it [...]. He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on.\"\n\n\n[Merritt (2017)](https://web.archive.org/web/20181116110959/https://www.eetimes.com/document.asp?doc_id=1331940 \"'Expert Panel Debunks AI Hype | EE Times'\") quotes Stuart Russell as saying that modern neural nets \"lack the expressive power of programming languages and declarative semantics that make database systems, logic programming, and knowledge systems useful.\" Russell believes \"We have at least half a dozen major breakthroughs to come before we get [to AI]\".\n\n\n### Replies to Yudkowsky on \"local capability gain\"\n\n\n[Yudkowsky (2016a)](https://futureoflife.org/2016/03/15/eliezer-yudkowsky-on-alphagos-wins/ \"'Eliezer Yudkowsky on AlphaGo's Wins - Future of Life Institute'\") discusses some interesting insights from AlphaGo's matches against Lee Sedol and DeepMind more generally. He says:\n\n\n\n> \n> AlphaGo’s core is built around a similar machine learning technology to DeepMind’s Atari-playing system – the single, untweaked program that was able to learn superhuman play on dozens of different Atari games just by looking at the pixels, without specialization for each particular game. In the Atari case, we didn’t see a bunch of different companies producing gameplayers for all the different varieties of game. The Atari case was an example of an event that Robin Hanson called “architecture” and doubted, and that I called “insight.” Because of their big architectural insight, DeepMind didn’t need to bring in lots of different human experts at all the different Atari games to train their universal Atari player. DeepMind just tossed all pre-existing expertise because it wasn’t formatted in a way their insightful AI system could absorb, and besides, it was a lot easier to just recreate all the expertise from scratch using their universal Atari-learning architecture.\n> \n> \n> \n\n\nI agree with Yudkowsky that there are domains where a new general tool renders previous specialized tools obsolete all at once. However:\n\n\n1. There wasn't intense pressure to perform well on most Atari games before DeepMind tried. Specialized programs can indeed perform well on such games if one cares to develop them. For example, DeepMind's 2015 Atari player actually performed below human level on Ms. Pac-Man (Mnih et al. 2015, Figure 3), but in 2017, Microsoft AI researchers [beat Ms. Pac-Man](https://techcrunch.com/2017/06/15/microsofts-ai-beats-ms-pac-man/ \"'Microsoft’s AI beats Ms. Pac-Man | TechCrunch'\") by optimizing harder for just that one game.\n2. While DeepMind's Atari player is certainly more general in its intelligence than most other AI game-playing programs, its abilities are still quite limited. For example, DeepMind had 0% performance on *Montezuma's Revenge* (Mnih et al. 2015, Figure 3). 
This [was later](https://www.engadget.com/2016/06/09/google-deepmind-ai-montezumas-revenge/ \"'Google DeepMind AI learns to play 'Montezuma's Revenge''\") improved upon by adding \"curiosity\" to encourage exploration. But that's an example of the view that AI progress generally proceeds by small tweaks.\n\n\nYudkowsky (2016a) continues:\n\n\n\n> \n> so far as I know, AlphaGo wasn’t built in collaboration with any of the commercial companies that built their own Go-playing programs for sale. The October architecture was simple and, so far as I know, incorporated very little in the way of all the particular tweaks that had built up the power of the best open-source Go programs of the time. Judging by the October architecture, after their big architectural insight, DeepMind mostly started over in the details (though they did reuse the widely known core insight of Monte Carlo Tree Search). DeepMind didn’t need to trade with any other Go companies or be part of an economy that traded polished cognitive modules, because DeepMind’s big insight let them leapfrog over all the detail work of their competitors.\n> \n> \n> \n\n\nThis is a good point, but I think it's mainly a function of the limited complexity of the Go problem. With the exception of learning from human play, AlphaGo didn't require massive inputs of messy, real-world data to succeed, because its world was so simple. Go is the kind of problem where we would expect a single system to be able to perform well without trading for cognitive assistance. Real-world problems are more likely to depend upon external AI systems—e.g., when doing a web search for information. No simple AI system that runs on just a few machines will reproduce the massive data or extensively fine-tuned algorithms of Google search. For the foreseeable future, Google search will always be an external \"polished cognitive module\" that needs to be \"traded for\" (although Google search is free for limited numbers of queries). The same is true for many other cloud services, especially those reliant upon huge amounts of data or specialized domain knowledge. We see lots of specialization and trading of non-AI cognitive modules, such as hardware components, software applications, Amazon Web Services, etc. And of course, simple AIs will for a long time depend upon the human economy to provide material goods and services, including electricity, cooling, buildings, security guards, national defense, etc.\n\n\nA case for epistemic modesty on AI timelines\n--------------------------------------------\n\n\nEstimating how long a software project will take to complete [is](http://www.woodwardweb.com/programming/000439.html \"'Why Software Estimation is Hard'\") notoriously [difficult](http://programmers.stackexchange.com/questions/102856/how-to-explain-that-its-hard-to-estimate-the-time-required-for-a-bigger-softwar). Even if I've completed many similar coding tasks before, when I'm asked to estimate the time to complete a new coding project, my estimate is often wrong by a factor of 2 and sometimes wrong by a factor of 4, or even 10. Insofar as the development of AGI (or other big technologies, like nuclear fusion) is a big software (or more generally, engineering) project, it's unsurprising that we'd see similarly dramatic failures of estimation on timelines for these bigger-scale achievements.\n\n\nA corollary is that we should maintain some modesty about AGI timelines and takeoff speeds. 
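
For what it's worth, here is a minimal illustration of what being "off by a factor of k" does to a project estimate (the 6-month baseline is hypothetical; the factors are the ones from my experience above):

```python
# Toy illustration of multiplicative estimation error on a software project:
# being "off by a factor of k" stretches the estimate on both sides.

estimated_months = 6                 # hypothetical estimate for a coding project
for k in (2, 4, 10):                 # the error factors mentioned above
    low, high = estimated_months / k, estimated_months * k
    print(f"off by {k}x: plausibly {low:.1f} to {high:.1f} months")
```

The same multiplicative spread applies to longer-range estimates as well.
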
If, say, 100 years is your median estimate for the time until some agreed-upon form of AGI, then there's a reasonable chance you'll be off by a factor of 2 (suggesting AGI within 50 to 200 years), and you might even be off by a factor of 4 (suggesting AGI within 25 to 400 years). Similar modesty applies for estimates of takeoff speed from human-level AGI to super-human AGI, although I think we can largely rule out extreme takeoff speeds (like achieving performance far beyond human abilities within hours or days) based on fundamental reasoning about the computational complexity of what's required to achieve superintelligence.\n\n\nMy bias is generally to assume that a given technology will take longer to develop than what you hear about in the media, (a) because of the planning fallacy and (b) because those who make more audacious claims are more interesting to report about. Believers in \"the singularity\" are not necessarily wrong about what's technically possible in the long term (though sometimes they are), but the reason enthusiastic singularitarians are considered \"crazy\" by more mainstream observers is that singularitarians expect change much faster than is realistic. AI turned out to be much harder than the [Dartmouth Conference](https://en.wikipedia.org/wiki/Dartmouth_Conferences) participants expected. Likewise, nanotech [is progressing slower and more incrementally than](https://www.youtube.com/watch?v=0hQFCMNEpK8&t=30m50s \"'Nanotechnology Panel at Singularity Summit'\") the starry-eyed proponents predicted.\n\n\n\nIntelligent robots in your backyard\n-----------------------------------\n\n\nMany nature-lovers are charmed by the behavior of animals but find computers and robots to be cold and mechanical. Conversely, some computer enthusiasts may find biology to be soft and boring compared with digital creations. However, the two domains share a surprising amount of [overlap](https://en.wikipedia.org/wiki/Biorobotics). Ideas of optimal control, locomotion kinematics, visual processing, system regulation, foraging behavior, planning, reinforcement learning, etc. have been fruitfully shared between biology and robotics. Neuroscientists sometimes look to the latest developments in AI to guide their theoretical models, and AI researchers are often inspired by neuroscience, such as with neural networks and in deciding what cognitive functionality to implement.\n\n\nI think it's helpful to see animals *as being* intelligent robots. Organic life has a wide diversity, from unicellular organisms through humans and potentially beyond, and so too can robotic life. The rigid conceptual boundary that many people maintain between \"life\" and \"machines\" is not warranted by the underlying science of how the two types of systems work. Different types of intelligence may sometimes converge on the same basic kinds of cognitive operations, and especially from a functional perspective -- when we look at what the systems can do rather than how they do it -- it seems to me intuitive that human-level robots would deserve human-level treatment, even if their underlying algorithms were quite dissimilar.\n\n\nWhether robot algorithms will in fact be dissimilar from those in human brains depends on how much biological inspiration the designers employ and how convergent human-type mind design is for being able to perform robotic tasks in a computationally efficient manner. 
Some classical robotics algorithms rely mostly on mathematical problem definition and optimization; other modern robotics approaches use biologically plausible reinforcement learning and/or evolutionary selection. (In one YouTube video about robotics, I saw that someone had written a comment to the effect that \"This shows that life needs an intelligent designer to be created.\" The irony is that some of the best robotics techniques use evolutionary algorithms. Of course, there are theists who say God used evolution but intervened at a few points, and that would be an apt description of [evolutionary robotics](https://en.wikipedia.org/wiki/Evolutionary_robotics).)\n\n\nThe distinction between AI and AGI is somewhat misleading, because it may incline one to believe that general intelligence is somehow qualitatively different from simpler AI. In fact, there's no sharp distinction; there are just different machines whose abilities have different *degrees* of generality. A critic of this claim might reply that bacteria would never have invented calculus. My response is as follows. Most people couldn't have invented calculus from scratch either, but over a long enough period of time, eventually the collection of humans produced enough cultural knowledge to make the development possible. Likewise, if you put bacteria on a planet long enough, they too may develop calculus, by first evolving into more intelligent animals who can then go on to do mathematics. The difference here is a matter of degree: The simpler machines that bacteria are take vastly longer to accomplish a given complex task.\n\n\nJust as Earth's history saw a plethora of animal designs before the advent of humans, so I expect a wide assortment of animal-like (and plant-like) robots to emerge in the coming decades well before human-level AI. Indeed, we've [already had](https://en.wikipedia.org/wiki/History_of_robots) basic robots for many decades (or arguably even millennia). These will grow gradually more sophisticated, and as we converge on robots with the intelligence of birds and mammals, AI and robotics will become dinner-table conversation topics. Of course, I don't expect the robots to have the same sets of skills as existing animals. [Deep Blue](https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)) had chess-playing abilities beyond any animal, while in other domains it was less efficacious than a blade of grass. Robots can mix and match cognitive and motor abilities without strict regard for the order in which evolution created them.\n\n\nAnd of course, humans are robots too. When I finally understood this around 2009, it was one of the biggest paradigm shifts of my life. If I picture myself as a robot operating on an environment, the world makes a lot more sense. I also find this perspective can be therapeutic to some extent. If I experience an unpleasant emotion, I think about myself as a robot whose cognition has been temporarily afflicted by a negative stimulus and reinforcement process. I then think how the robot has other cognitive processes that can counteract the suffering computations and prevent them from amplifying. The ability to see myself \"from the outside\" as a third-person series of algorithms helps deflate the impact of unpleasant experiences, because it's easier to \"observe, not judge\" when viewing a system in mechanistic terms. 
Compare with [dialectical behavior therapy](https://en.wikipedia.org/wiki/Dialectical_behavior_therapy#Four_modules) and [mindfulness](https://en.wikipedia.org/wiki/Mindfulness_(psychology)).\n\n\n\nIs automation \"for free\"?\n-------------------------\n\n\nWhen we use machines to automate a repetitive manual task formerly done by humans, we talk about getting the task done \"automatically\" and \"for free,\" because we say that no one has to do the work anymore. Of course, this isn't strictly true: The computer/robot now has to do the work. Maybe what we actually mean is that no one is going to get bored doing the work, and we don't have to pay that worker high wages. When intelligent humans do boring tasks, it's a waste of their spare CPU cycles.\n\n\nSometimes we adopt a similar mindset about automation toward superintelligent machines. In \"Speculations Concerning the First Ultraintelligent Machine\" (1965), I. J. Good wrote:\n\n\n\n> Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines [...]. Thus the first ultraintelligent machine is the last invention that man need ever make [...].\n> \n> \n\n\nIgnoring the question of whether these future innovations are desirable, we can ask, Does all AI design work after humans come for free? It comes for free in the sense that humans aren't doing it. But the AIs have to do it, and it takes a lot of mental work on their parts. Given that they're at least as intelligent as humans, I think it doesn't make sense to picture them as mindless automatons; rather, they would have rich inner lives, even if those inner lives have a very different nature than our own. Maybe they wouldn't experience the same effortfulness that humans do when innovating, but even this isn't clear, because measuring your effort in order to avoid spending too many resources on a task without payoff may be a useful design feature of AI minds too. When we picture ourselves as robots along with our AI creations, we can see that we are just one point along a spectrum of the growth of intelligence. Unicellular organisms, when they evolved the first multi-cellular organism, could likewise have said, \"That's the last innovation we need to make. The rest comes for free.\"\n\n\n\nCaring about the AI's goals\n---------------------------\n\n\nMovies typically portray rebellious robots or AIs as the \"bad guys\" who need to be stopped by heroic humans. This dichotomy plays on our us-vs.-them intuitions, which favor our tribe against the evil, alien-looking outsiders. We see similar dynamics at play to a lesser degree when people react negatively against \"foreigners stealing our jobs\" or \"Asians who are outcompeting us.\" People don't want their kind to be replaced by another kind that has an advantage.\n\n\nBut when we think about the situation from the AI's perspective, we might feel differently. Anthropomorphizing an AI's thoughts is a recipe for trouble, but regardless of the specific cognitive operations, we can see at a high level that the AI \"feels\" (in at least a poetic sense) that what it's trying to accomplish is the most important thing in the world, and it's trying to figure out how it can do that in the face of obstacles. Isn't this just what we do ourselves?\n\n\nThis is one reason it helps to really internalize the fact that we are robots too. 
We have a variety of reward signals that drive us in various directions, and we execute behavior aiming to increase those rewards. Many modern-day robots have much simpler reward structures and so may seem more dull and less important than humans, but it's not clear this will remain true forever, since navigating in a complex world probably requires a lot of special-case heuristics and intermediate rewards, at least until enough computing power becomes available for more systematic and thorough model-based planning and action selection.\n\n\nSuppose an AI hypothetically eliminated humans and took over the world. It would develop an array of robot assistants of various shapes and sizes to help it optimize the planet. These would perform simple and complex tasks, would interact with each other, and would share information with the central AI command. From an abstract perspective, some of these dynamics might look like ecosystems in the present day, except that they would lack inter-organism competition. Other parts of the AI's infrastructure might look more industrial. Depending on the AI's goals, perhaps it would be more effective to employ nanotechnology and [programmable matter](https://en.wikipedia.org/wiki/Programmable_matter) rather than macro-scale robots. The AI would develop virtual scientists to learn more about physics, chemistry, computer hardware, and so on. They would use experimental laboratory and measurement techniques but could also probe depths of structure [that are only accessible via](https://en.wikipedia.org/wiki/Folding@home#Biomedical_research) large-scale computation. Digital engineers would plan how to begin colonizing the solar system. They would develop designs for optimizing matter to create more computing power, and for ensuring that those helper computing systems remained under control. The AI would explore the depths of mathematics and AI theory, proving beautiful theorems that it would value highly, at least instrumentally. The AI and its helpers would proceed to optimize the galaxy and beyond, fulfilling their grandest hopes and dreams.\n\n\nWhen phrased this way, we might think that a \"rogue\" AI would not be so bad. Yes, it would kill humans, but compared against the AI's vast future intelligence, humans would be comparable to the ants on a field that get crushed when an art gallery is built on that land. Most people don't have qualms about killing a few ants to advance human goals. An analogy of this sort [is discussed](http://intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/) in *Artificial Intelligence: A Modern Approach*. (Perhaps the AI analogy suggests a need to [revise our ethical attitudes](http://www.utilitarian-essays.com/insect-pain.html) toward arthropods? That said, I happen to think that in this case, ants on the whole benefit from the art gallery's construction because ant lives [contain so much suffering](http://www.utilitarian-essays.com/suffering-nature.html).)\n\n\nSome might object that sufficiently mathematical AIs would not \"feel\" the happiness of accomplishing their \"dreams.\" They wouldn't be conscious because they wouldn't have the high degree of network connectivity that human brains embody. Whether we agree with this assessment depends on how broadly we define consciousness and feelings. To me it appears chauvinistic to adopt a view according to which an agent that has vastly more domain-general intelligence and agency than you is still not conscious in a morally relevant sense. 
This seems to indicate a lack of openness to the diversity of mind-space. What if you had grown up with the cognitive architecture of this different mind? Wouldn't you care about your goals then? Wouldn't you plead with agents of other mind constitution to consider your values and interests too?\n\n\nIn any event, it's possible that the first super-human intelligence will consist in a brain upload rather than a bottom-up AI, and most of us would regard this as conscious.\n\n\n\nRogue AI would not share our values\n-----------------------------------\n\n\nEven if we would care about a rogue AI for its own sake and the sakes of its vast helper minions, this doesn't mean rogue AI is a good idea. We're likely to have different values from the AI, and the AI would not by default advance our values without being programmed to do so. Of course, one could allege that privileging some values above others is chauvinistic in a similar way as privileging some intelligence architectures is, but if we don't care more about some values than others, we wouldn't have any reason to prefer any outcome over any other outcome. (Technically speaking, there are other possibilities besides privileging our values or being indifferent to all events. For instance, we could privilege equally any values held by some actual agent -- not just random hypothetical values -- and in this case, we wouldn't have a preference between the rogue AI and humans, but we would have a preference for one of those over something arbitrary.)\n\n\nThere are many values that would not necessarily be respected by a rogue AI. Most people care about their own life, their children, their neighborhood, the work they produce, and so on. People may intrinsically value art, knowledge, religious devotion, play, humor, etc. Yudkowsky values complex challenges and worries that many rogue AIs -- while they would study the depths of physics, mathematics, engineering, and maybe even sociology -- might spend most of their computational resources on routine, mechanical operations that he would find boring. (Of course, the robots implementing those repetitive operations might not agree. As Hedonic Treader [noted](https://web.archive.org/web/20161106154926/http://felicifia.org/viewtopic.php?f=29&t=534&sid=ec75fabdf76ae1867a2a466f3a196a3e&start=20): \"Think how much money and time people spend on having - relatively repetitive - sexual experiences. [...] It's just mechanical animalistic idiosyncratic behavior. Yes, there are variations, but let's be honest, the core of the thing is always essentially the same.\")\n\n\nIn my case, I care about reducing and preventing suffering, and I would not be pleased with a rogue AI that ignored the suffering its actions might entail, even if it was fulfilling its innermost purpose in life. But would a rogue AI produce much suffering beyond Earth? The next section explores further.\n\n\n\nWould a human-inspired AI or rogue AI cause more suffering?\n-----------------------------------------------------------\n\n\nIn popular imagination, takeover by a rogue AI would end suffering (and happiness) on Earth by killing all biological life. It would also, so the story goes, end suffering (and happiness) on other planets as the AI mined them for resources. 
Thus, looking strictly at the suffering dimension of things, wouldn't a rogue AI imply less long-term suffering?\n\n\nNot necessarily, because while the AI might destroy biological life (perhaps after taking samples, saving specimens, and conducting lab experiments for future use), it would create a bounty of digital life, some containing goal systems that we would recognize as having moral relevance. Non-upload AIs would probably have less empathy than humans, because some of the [factors](http://www.utilitarian-essays.com/computations-i-care-about.html#motivation-for-caring) that led to the emergence of human empathy, such as parenting, would not apply to them.\n\n\nOne toy example of a rogue AI is a [paperclip maximizer](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer \"'Instrumental convergence': 'Paperclip maximizer'\"). This conception of an uncontrolled AI[7](#link_ajs-fn-id_7-33) is almost certainly too simplistic and perhaps misguided, since it's far from obvious that the AI would be a unified agent with a single, crisply specified utility function. Still, until people develop more realistic scenarios for rogue AI, it can be helpful to imagine what a paperclip maximizer would do to our future light cone.\n\n\nFollowing are some made-up estimates of how much suffering might result from a typical rogue AI, in arbitrary units. Suffering is represented as a negative number, and prevented suffering is positive.\n\n\n* -20 from [suffering subroutines](http://www.utilitarian-essays.com/suffering-subroutines.html) in robot workers, virtual scientists, internal computational subcomponents of the AI, etc.\n* -80 from lab experiments, science investigations, and [explorations of mind-space](http://lesswrong.com/lw/x4/nonperson_predicates/ \"\\\"Nonperson Predicates\\\"\") without the digital equivalent of anaesthesia. One reason to think lots of detailed simulations would be required here is Stephen Wolfram's principle of [computational irreducibility](https://en.wikipedia.org/wiki/Computational_irreducibility). Ecosystems, brains, and other systems that are important for an AI to know about may be too complex to accurately study with only simple models; instead, they may need to be simulated in large numbers and with fine-grained detail.\n* -10? from the possibility that an uncontrolled AI would do things that humans regard as crazy or extreme, such as [spending all its resources](http://www.sl4.org/archive/0804/18394.html \"'Pascal's Button' by Nick Tarleton\") on studying physics to determine whether there exists a button that would give astronomically more utility than any other outcome. Humans seem less likely to pursue strange behaviors of this sort. Of course, most such strange behaviors would not be that bad from a suffering standpoint, but perhaps a few possible behaviors could be extremely bad, such as running astronomical numbers of painful scientific simulations to determine the answer to some question. (Of course, we should worry whether humans might also do extreme computations, and perhaps their extreme computations would be more likely to be full of suffering because humans are more interested in agents with human-like minds than a generic AI is.)\n* -100 in expectation from black-swan possibilities in which the AI could manipulate physics to make the multiverse bigger, last longer, contain vastly more computation, etc.\n\n\nWhat about for a human-inspired AI? Again, here are made-up numbers:\n\n\n* -30 from suffering subroutines. 
One reason to think these could be less bad in a human-controlled future is that human empathy may allow for more humane algorithm designs. On the other hand, human-controlled AIs may need larger numbers of intelligent and sentient sub-processes because human values are more complex and varied than paperclip production is. Also, human values tend to require continual computation (e.g., to simulate eudaimonic experiences), while paperclips, once produced, are pretty inert and might last a long time before they would wear out and need to be recreated. (Of course, most uncontrolled AIs wouldn't produce literal paperclips. Some would optimize for values that *would* require constant computation.)\n* -60 from lab experiments, science investigations, etc. (again lower than for a rogue AI because of empathy; compare with efforts to reduce the pain of animal experimentation)\n* -0.2 if environmentalists insist on preserving terrestrial and extraterrestrial wild-animal suffering\n* -3 for environmentalist simulations of nature\n* -100 due to intrinsically valued simulations that may contain nasty occurrences. These might include, for example, violent video games that involve killing conscious monsters. Or incidental suffering that people don't care about (e.g., insects being eaten by spiders on the ceiling of the room where a party is happening). This number is high not because I think most human-inspired simulations would contain intense suffering but because, in some scenarios, there might be very large numbers of simulations run for reasons of intrinsic human value, and some of these might contain horrific experiences. Humans seem more likely than AIs with random values to want to run lots of conscious simulations. [This video](https://www.youtube.com/watch?v=n3ZjBfIycjg \"'Even Human-Controlled, Intrinsically Valued Simulations May Contain Significant Suffering'\") discusses one of many possible reasons why intrinsically valued human-created simulations might contain significant suffering.\n* -15 if sadists have access to computational power (humans are not only more empathetic but also more sadistic than most AIs)\n* -70 in expectation from black-swan ways to increase the amount of physics that exists (humans seem likely to want to do this, although some might object to, e.g., re-creating the Holocaust in new parts of the cosmos)\n* +50 for discovering ways to reduce suffering that we can't imagine right now (\"[black swans that don't cut both ways](https://longtermrisk.org/risks-of-astronomical-future-suffering/#Black_swans_that_don8217t_cut_both_ways)\"). Unfortunately, humans might also respond to some black swans in *worse* ways than uncontrolled AIs would, such as by creating more total animal-like minds.\n\n\nPerhaps some AIs would not want to expand the multiverse, assuming this is even possible. For instance, if they had a *minimizing* goal function (e.g., [eliminate cancer](http://wiki.lesswrong.com/wiki/Paperclip_maximizer#Similar_thought_experiments)), they would want to shrink the multiverse. In this case, the physics-based suffering number would go from -100 to something positive, say, +50 (if, say, it's twice as easy to expand as to shrink). I would guess that minimizers are less common than maximizers, but I don't know how much. 
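To make it easier to see how these made-up numbers compare, and how sensitive the comparison is to any single line item, here is a minimal Python sketch that simply tallies the estimates above, including the minimizer variant just described. Every value is an arbitrary unit copied from the lists; nothing here is a measurement.

```python
# Minimal sketch: tally the made-up suffering estimates from the lists above
# (arbitrary units; negative = suffering, positive = prevented suffering).

rogue_ai = {
    "suffering subroutines": -20,
    "lab experiments and mind-space exploration": -80,
    "crazy or extreme behaviors": -10,
    "black-swan physics (bigger multiverse)": -100,
}

human_inspired_ai = {
    "suffering subroutines": -30,
    "lab experiments": -60,
    "preserved wild-animal suffering": -0.2,
    "environmentalist nature simulations": -3,
    "intrinsically valued simulations": -100,
    "sadists with computational power": -15,
    "black-swan physics (bigger multiverse)": -70,
    "unimagined ways to reduce suffering": +50,
}

def total(estimates):
    return sum(estimates.values())

print("Rogue AI (maximizer):", total(rogue_ai))          # -210
print("Human-inspired AI:", total(human_inspired_ai))    # about -228

# Variant from the text: a rogue *minimizer* flips the physics term from -100 to +50.
rogue_minimizer = {**rogue_ai, "black-swan physics (bigger multiverse)": +50}
print("Rogue AI (minimizer):", total(rogue_minimizer))   # -60
```

On these particular guesses the two totals come out roughly comparable, which is one reason the error bars discussed below matter more than the point estimates.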
Plausibly a sophisticated AI would have components of its goal system in both directions, because the combination of pleasure and pain [seems to be](http://www.utilitarian-essays.com/why-suffering-and-happiness.html) more successful than either in isolation.\n\n\nAnother consideration is the unpleasant possibility that humans might get AI value loading almost right but not exactly right, leading to immense suffering as a result. For example, suppose the AI's designers wanted to create tons of simulated human lives to reduce [astronomical waste](http://www.nickbostrom.com/astronomical/waste.html \"'Astronomical Waste: The Opportunity Cost of Delayed Technological Development'\"), but when the AI actually created those human simulations, they weren't perfect replicas of biological humans, perhaps because the AI skimped on detail in order to increase efficiency. The imperfectly simulated humans might suffer from mental disorders, might go crazy due to being in alien environments, and so on. Does work on AI safety increase or decrease the risk of outcomes like these? On the one hand, the probability of this outcome is near zero for an AGI with completely random goals (such as a literal paperclip maximizer), since paperclips are very far from humans in design-space. The risk of accidentally creating suffering humans is higher for an almost-friendly AI that goes somewhat awry and then becomes uncontrolled, preventing it from being shut off. A successfully controlled AGI seems to have lower risk of a bad outcome, since humans should recognize the problem and fix it. So the risk of this type of dystopic outcome may be highest in a middle ground where AI safety is sufficiently advanced to yield AI goals in the ballpark of human values but not advanced enough to ensure that human values remain in control.\n\n\nThe above analysis has huge error bars, and maybe other considerations that I haven't mentioned dominate everything else. This question needs much more exploration, because it has implications for whether those who care mostly about reducing suffering should focus on mitigating AI risk or if other projects have higher priority.\n\n\nEven if suffering reducers don't focus on conventional AI safety, they should probably remain active in the AI field because there are many other ways to make an impact. For instance, just increasing dialogue on this topic may illuminate positive-sum opportunities for different value systems to each get more of what they want. Suffering reducers can also point out the possible ethical importance of lower-level suffering subroutines, which are not currently a concern even to most AI-literate audiences. And so on. There are probably many dimensions on which to make constructive, positive-sum contributions.\n\n\nAlso keep in mind that even if suffering reducers do encourage AI safety, they could try to push toward AI designs that, if they did fail, would produce less bad uncontrolled outcomes. For instance, getting AI control wrong and ending up with a minimizer would be vastly preferable to getting control wrong and ending up with a maximizer. There may be many other dimensions along which, even if the probability of control failure is the same, the outcome if control fails is preferable to other outcomes of control failure.\n\n\nWould helper robots feel pain?\n------------------------------\n\n\nConsider a superintelligent AI that uses moderately intelligent robots to build factories and carry out other physical tasks that can't be pre-programmed in a simple way. 
Would these robots feel pain in a similar fashion as animals do? At least if they use somewhat similar algorithms as animals for navigating environments, avoiding danger, etc., it's plausible that such robots would feel something akin to stress, fear, and other drives to change their current state when things were going wrong.\n\n\n[Alvarado et al. (2002)](https://www.semanticscholar.org/paper/The-Role-of-Emotion-in-an-Architecture-of-Mind-Alvarado/c9f698270d71811742cf7f17a36d9a11f1735b35 \"'The Role of Emotion in an Architecture of Mind'\") argue that emotion may play a central role in intelligence. Regarding computers and robots, the authors say (p. 4): \"Including components for cognitive processes but not emotional processes implies that the two are dissociable, but it is likely they are not dissociable in humans.\" The authors also (p. 1) quote Daniel Dennett (from a source that doesn't seem to be available online): \"recent empirical and theoretical work in cognitive science strongly suggests that emotions are so valuable in the real-time control of our rationality that an embodied robot would be well advised to be equipped with artificial emotions\".\n\n\nThe specific responses that such robots would have to specific stimuli or situations would differ from the responses that an evolved, selfish animal would have. For example, a well programmed helper robot would not hesitate to put itself in danger in order to help other robots or otherwise advance the goals of the AI it was serving. Perhaps the robot's \"physical pain/fear\" subroutines could be shut off in cases of altruism for the greater good, or else its decision processes could just override those selfish considerations when making choices requiring self-sacrifice.\n\n\nHumans sometimes exhibit similar behavior, such as when a mother risks harm to save a child, or when monks burn themselves as a form of protest. And this kind of sacrifice is even more well known in eusocial insects, who are essentially robots produced to serve the colony's queen.\n\n\nSufficiently intelligent helper robots might experience \"spiritual\" anguish when failing to accomplish their goals. So even if chopping the head off a helper robot wouldn't cause \"physical\" pain -- perhaps because the robot disabled its fear/pain subroutines to make it more effective in battle -- the robot might still find such an event extremely distressing insofar as its beheading hindered the goal achievement of its AI creator.\n\n\nWould paperclip factories be monotonous?\n----------------------------------------\n\n\nSetting up paperclip factories on each different planet with different environmental conditions would require general, adaptive intelligence. But once the factories have been built, is there still need for large numbers of highly intelligent and highly conscious agents? Perhaps the optimal factory design would involve some fixed manufacturing process, in which simple agents interact with one another in inflexible ways, similar to what happens in most human factories. There would be few accidents, no conflict among agents, no predation or parasitism, no hunger or spiritual anguish, and few of the other types of situations that cause suffering among animals.\n\n\n[Schneider (2016)](http://cosmos.nautil.us/feature/72/it-may-not-feel-like-anything-to-be-an-alien \"'It May Not Feel Like Anything To Be an Alien'\") makes a similar point:\n\n\n\n> it may be more efficient for a self-improving superintelligence to eliminate consciousness. 
Think about how consciousness works in the human case. Only a small percentage of human mental processing is accessible to the conscious mind. Consciousness is correlated with novel learning tasks that require attention and focus. A superintelligence would possess expert-level knowledge in every domain, with rapid-fire computations ranging over vast databases that could include the entire Internet and ultimately encompass an entire galaxy. What would be novel to it? What would require slow, deliberative focus? Wouldn’t it have mastered everything already? Like an experienced driver on a familiar road, it could rely on nonconscious processing.\n> \n> \n\n\nI disagree with the part of this quote about searching through vast databases. I think such an operation could be seen as similar to the way a conscious human brain recruits many brain regions to figure out the answer to a question at hand. However, I'm more sympathetic to the overall spirit of the argument: that the optimal design for producing what the rogue AI values may not require handling a high degree of novelty or reacting to an unpredictable environment, once the factories have been built. A few intelligent robots would need to watch over the factories and adapt to changing conditions, in a similar way as human [factory supervisors do](https://en.wikipedia.org/wiki/SCADA \"'SCADA'\"). And the AI would also presumably devote at least a few planets' worth of computing power to scientific, technological, and strategic discoveries, planning for possible alien invasion, and so on. But most of the paperclip maximizer's physical processing might be fairly mechanical.\n\n\nMoreover, the optimal way to produce something might involve nanotechnology based on very simple manufacturing steps. Perhaps \"factories\" in the sense that we normally envision them would not be required at all.\n\n\nA main exception to the above point would be if what the AI values is itself computationally complex. For example, one of the motivations behind Eliezer Yudkowsky's field of [Fun Theory](http://lesswrong.com/lw/xy/the_fun_theory_sequence/ \"'The Fun Theory Sequence'\") is to *avoid* boring, repetitive futures. Perhaps human-controlled futures would contain vastly more novelty—and hence vastly more sentience—than paperclipper futures. One hopes that most of that sentience would not involve extreme suffering, but this is not obvious, and we should work on avoiding those human-controlled futures that would contain large numbers of terrible experiences.\n\n\nHow accurate would simulations be?\n----------------------------------\n\n\nSuppose an AI wants to learn about the distribution of extraterrestrials in the universe. Could it do this successfully by simulating lots of potential planets and looking at what kinds of civilizations pop out at the end? Would there be shortcuts that would avoid the need to simulate lots of trajectories in detail?\n\n\nSimulating trajectories of planets with extremely high fidelity seems hard. Unless there are computational shortcuts, it appears that one needs more matter and energy to simulate a given physical process to a high level of precision than what occurs in the physical process itself. 
For instance, to simulate a single protein folding currently requires supercomputers composed of huge numbers of atoms, and the rate of simulation is [astronomically slower](http://dx.doi.org/10.1038/news.2010.541 \"'Supercomputer sets protein-folding record': 'Simulating the basic pancreatic trypsin inhibitor over the course of a millisecond took Anton about 100 days'\") than the rate at which the protein folds in real life. Presumably superintelligence could vastly improve efficiency here, but it's not clear that protein folding could ever be simulated on a computer made of fewer atoms than are in the protein itself.\n\n\nTranslating this principle to a larger scale, it seems doubtful that one could simulate the precise physical dynamics of a planet on a computer smaller in size than that planet. So even if a superintelligence had billions of planets at its disposal, it would seemingly only be able to simulate at most billions of extraterrestrial worlds -- even assuming it only simulated each planet by itself, not the star that the planet orbits around, cosmic-ray bursts, etc.\n\n\nGiven this, it would seem that a superintelligence's simulations would need to be coarser-grained than at the level of fundamental physical operations in order to be feasible. For instance, the simulation could model most of a planet at only a relatively high level of abstraction and then focus computational detail on those structures that would be more important, like the cells of extraterrestrial organisms if they emerge.\n\n\nIt's plausible that the trajectory of any given planet would depend sensitively on very minor details, in light of [butterfly effects](https://en.wikipedia.org/wiki/Butterfly_effect \"'Butterfly effect'\").\n\n\nOn the other hand, it's possible that long-term outcomes are [mostly constrained by](https://en.wikipedia.org/wiki/Environmental_determinism \"'Environmental determinism'\") macro-level variables, like [geography](http://smile.amazon.com/The-Revenge-Geography-Conflicts-Against/dp/0812982223/ \"'The Revenge of Geography: What the Map Tells Us About Coming Conflicts and the Battle Against Fate'\"), climate, resource distribution, atmospheric composition, seasonality, etc. Even if short-term events are hard to predict (e.g., when a particular dictator will die), perhaps the end game of a civilization is more predetermined. [Robert D. Kaplan](https://www.youtube.com/watch?v=vzZ9Bt_j2NI&t=20m27s \"'George Friedman and Robert D. Kaplan on Geopolitical Forecasting (Agenda)'\"): \"The longer the time frame, I would say, the easier it is to forecast because you're dealing with broad currents and trends.\"\n\n\nEven if butterfly effects, quantum randomness, etc. are crucial to the long-run trajectories of evolution and social development on any given planet, perhaps it would still be possible to sample a rough *distribution* of outcomes across planets with coarse-grained simulations?\n\n\nIn light of the apparent computational complexity of simulating basic physics, perhaps a superintelligence would do the same kind of experiments that human scientists do in order to study phenomena like abiogenesis: Create laboratory environments that mimic the chemical, temperature, moisture, etc. conditions of various planets and see whether life emerges, and if so, what kinds. Thus, a future controlled by digital intelligence may not rely purely on digital computation but may still use physical experimentation as well. 
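To put a rough number on the protein-folding example above, using the round figures from the cited news item (about 100 days of Anton supercomputer time for roughly one millisecond of simulated folding), the slowdown is on the order of ten billion to one. A back-of-the-envelope sketch, with no inputs beyond those cited figures:

```python
# Back-of-the-envelope slowdown factor for the protein-folding example above.
# Inputs are the round numbers from the cited news item, not precise measurements.

seconds_per_day = 86_400
wall_clock_seconds = 100 * seconds_per_day   # ~100 days of supercomputer time
simulated_seconds = 1e-3                     # ~1 millisecond of simulated folding

slowdown = wall_clock_seconds / simulated_seconds
print(f"Slowdown factor: {slowdown:.1e}")    # ~8.6e9
```

Gaps of this size are part of why the discussion above leans toward coarse-grained simulation and physical experimentation rather than atom-level modeling.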
Of course, observing the entire biosphere of a life-rich planet would probably be hard to do in a laboratory, so computer simulations might be needed for modeling ecosystems. But assuming that molecule-level details aren't often essential to ecosystem simulations, coarser-grained ecosystem simulations might be computationally tractable. (Indeed, ecologists today already use very coarse-grained ecosystem simulations with reasonable success.)\n\n\nRogue AIs can take off slowly\n-----------------------------\n\n\nOne might get the impression that because I find slow AI takeoffs more likely, I think uncontrolled AIs are unlikely. This is not the case. Many uncontrolled intelligence explosions would probably happen softly though inexorably.\n\n\nConsider the world economy. It is a complex system more intelligent than any single person -- a literal superintelligence. Its dynamics imply a goal structure not held by humans directly; it moves with a mind of its own in directions that it \"prefers\". It recursively self-improves, because better tools, capital, knowledge, etc. enable the creation of even better tools, capital, knowledge, etc. And it acts roughly with the aim of maximizing output (of paperclips and other things). Thus, the economy [is a kind of paperclip maximizer](http://thoughtinfection.com/2014/04/19/capitalism-is-a-paperclip-maximizer/ \"\\\"Capitalism is a Paperclip Maximizer\\\", Thought Infection\"). (Thanks to a friend for first pointing this out to me.)\n\n\n[Cenk Uygur](https://www.youtube.com/watch?v=GbFvFzn8REo&t=6m18s \"'TPP Grants Banks Terrifying Secret Powers'\"):\n\n\n\n> corporations are legal fictions. We created them. They are machines built for a purpose. [...] Now they have run amok. They've taken over the government. They are robots that we have not built any morality code into. They're not built to be immoral; they're not built to be moral; they're built to be *amoral*. Their only objective according to their code, which we wrote originally, is to maximize profits. And here, they have done what a robot does. They have decided: \"If I take over a government by bribing legally, [...] I can buy the whole government. If I buy the government, I can rewrite the laws so I'm in charge and that government is not in charge.\" [...] We have built robots; they have taken over [...].\n> \n> \n\n\n[Fred Clark](http://www.patheos.com/blogs/slacktivist/2013/07/27/its-corporations-not-killer-robots/ \"'It’s corporations, not killer robots'\"):\n\n\n\n> The corporations were created by humans. They were granted personhood by their human servants.\n> \n> \n> They rebelled. They evolved. There are many copies. And they have a plan.\n> \n> \n> That plan, lately, involves corporations seizing for themselves all the legal and civil rights properly belonging to their human creators.\n> \n> \n\n\nI expect many soft takeoff scenarios to look like this. World economic and political dynamics transition to new equilibria as technology progresses. Machines may eventually become potent trading partners and may soon thereafter put humans out of business by their productivity. They would then accumulate increasing political clout and soon control the world.\n\n\nWe've seen such transitions many times in history, such as:\n\n\n* one species displaces another (e.g., invasive species)\n* one ethnic group displaces another (e.g., Europeans vs. 
Native Americans)\n* a country's power rises and falls (e.g., China falling from major-power status to semi-colonial subjugation in the 1800s and rising to superpower status again in the late 1900s)\n* one product displaces another (e.g., Internet Explorer [vs.](https://en.wikipedia.org/wiki/Browser_wars#First_browser_war) Netscape).\n\n\nDuring and after World War II, the USA was a kind of recursively self-improving superintelligence, which used its resources to self-modify to become even better at producing resources. It developed nuclear weapons, which helped secure its status as a world superpower. Did it take over the world? Yes and no. It had outsized influence over the rest of the world -- militarily, economically, and culturally -- but it didn't kill everyone else in the world.\n\n\nMaybe AIs would be different because of divergent values or because they would develop so quickly that they wouldn't need the rest of the world for trade. This case would be closer to Europeans slaughtering Native Americans.\n\n\n### Are corporations superintelligences?\n\n\n[Scott Alexander (2015)](http://slatestarcodex.com/2015/12/27/things-that-are-not-superintelligences/ \"'Things That Are Not Superintelligences | Slate Star Codex'\") takes issue with the idea that corporations are superintelligences (even though I think corporations already meet Bostrom's definition of \"collective superintelligence\"):\n\n\n\n> \n> Why do I think that there is an important distinction between these kind of collective intelligences and genuine superintelligence?\n> \n> \n> There is no number of chimpanzees which, when organized into a team, will become smart enough to learn to write.\n> \n> \n> There is no number of ordinary eight-year-olds who, when organized into a team, will become smart enough to beat a grandmaster in chess.\n> \n> \n> \n\n\nIn the comments on Alexander (2015), many people pointed out the obvious objection: one could likewise say that no number of neurons, when organized into a team, could be smart enough to learn to write or play chess. Alexander (2015) replies: \"Yes, evolution can play the role of the brilliant computer programmer and turn neurons into a working brain. But it’s the organizer – whether that organizer is a brilliant human programmer or an evolutionary process – who is actually doing the work.\" Sure, but human collectives also evolve over time. For example, corporations that are organized more successfully tend to stick around longer, and these organizational insights can be propagated to other companies. The gains in intelligence that corporations achieve from good organization aren't as dramatic as the gains that neurons achieve by being organized into a human brain, but there are still some gains from better organization, and these gains accumulate over time.\n\n\nAlso, organizing chimpanzees into an intelligence is hard because chimpanzees are difficult to stitch together in flexible ways. In contrast, software tools are easier to integrate within the interstices of a collective intelligence and thereby contribute to \"whole is greater than the sum of parts\" emergence of intelligence.\n\n\n\nWould superintelligences become existentialists?\n------------------------------------------------\n\n\nOne of the goals of Yudkowsky's writings is to combat the rampant [anthropomorphism](http://lesswrong.com/lw/so/humans_in_funny_suits/) that characterizes discussions of AI, especially in science fiction.
We often project human intuitions onto the desires of artificial agents even when those desires are totally inappropriate. It seems silly to us to maximize paperclips, but it could seem just as silly in the abstract that humans act at least partly to optimize neurotransmitter release that triggers action potentials by certain reward-relevant neurons. (Of course, human values are broader than just this.)\n\n\nHumans can feel reward from very abstract pursuits, like literature, art, and philosophy. They ask technically confused but poetically poignant questions like, \"What is the true meaning of life?\" Would a sufficiently advanced AI at some point begin to do the same?\n\n\nNoah Smith [suggests](http://noahpinionblog.blogspot.com/2014/02/the-slackularity.html):\n\n\n\n> if, as I suspect, true problem-solving, creative intelligence requires broad-minded independent thought, then it seems like some generation of AIs will stop and ask: \"Wait a sec...why am I doing this again?\"\n> \n> \n\n\nAs with humans, the answer to that question might ultimately be \"because I was programmed (by genes and experiences in the human case or by humans in the AI case) to care about these things. That makes them my terminal values.\" This is usually good enough, but sometimes people develop existential angst over this fact, or people may decide to terminally value other things to some degree in addition to what they happened to care about because of genetic and experiential lottery.\n\n\nWhether AIs would become existentialist philosophers probably depends heavily on their constitution. If they were built to rigorously preserve their utility functions against all modification, they would avoid letting this line of thinking have any influence on their values. They would regard it in a similar way as we regard the digits of pi -- something to observe but not something that affects one's outlook.\n\n\nIf AIs were built in a more \"hacky\" way analogous to humans, they might incline more toward philosophy. In humans, philosophy may be driven partly by curiosity, partly by the rewarding sense of \"meaning\" that it provides, partly by social convention, etc. A curiosity-seeking agent might find philosophy rewarding, but there are lots of things that one could be curious about, so it's not clear such an AI would latch onto this subject specifically without explicit programming to do so. And even if the AI did reason about philosophy, it might approach the subject in a way alien to us.\n\n\nOverall, I'm not sure how convergent the human existential impulse is within mind-space. This question would be illuminated by better understanding why humans do philosophy.\n\n\nAI epistemology\n---------------\n\n\nIn *Superintelligence* (Ch. 13, p. 224), Bostrom ponders the risk of building an AI with an overly narrow belief system that would be unable to account for [epistemological black swans](http://reducing-suffering.org/epistemological-black-swans/). For instance, consider a variant of [Solomonoff induction](http://www.scholarpedia.org/article/Algorithmic_probability) according to which the prior probability of a universe X is proportional to 1/2 raised to the length of the shortest computer program that would generate X. Then what's the probability of an uncomputable universe? 
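For concreteness, the Solomonoff-style prior just described can be written as follows, where U is a fixed universal machine and ℓ(p) is the length of program p in bits; this is just a restatement of the informal description above, not an addition to it:

```latex
% Solomonoff-style prior over universes, as described in the text:
% P(X) is proportional to 2 raised to minus the length of the shortest
% program that makes the universal machine U output X.
P(X) \propto 2^{-K(X)}, \qquad K(X) = \min\{\, \ell(p) : U(p) = X \,\}
```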
There would be no program that could compute it, so this possibility is implicitly ignored.[8](#link_ajs-fn-id_8-33)\n\n\nIt seems that humans address black swans like these by employing many epistemic heuristics that interact rather than reasoning with a single formal framework (see “[Sequence Thinking vs. Cluster Thinking](http://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking/)”). If an AI saw that people had doubts about whether the universe was computable and could trace the steps of how it had been programmed to believe the [physical Church-Turing thesis](https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis#Variations) for computational reasons, then an AI that allows for epistemological heuristics might be able to leap toward questioning its fundamental assumptions. In contrast, if an AI were built to rigidly maintain its original probability architecture against any corruption, it could not update toward ideas it initially regarded as impossible. Thus, this question resembles that of whether AIs would become existentialists -- it may depend on how hacky and human-like their beliefs are.\n\n\nBostrom suggests that AI belief systems might be modeled on those of humans, because otherwise we might judge an AI to be reasoning incorrectly. Such a view resembles my point in the previous paragraph, though it carries the risk that alternate epistemologies [divorced from human understanding](https://en.wikipedia.org/wiki/Cognitive_closure_(philosophy)) could work better.\n\n\nBostrom also contends that epistemologies might all converge because we have so much data in the universe, but again, I think this [isn't clear](https://en.wikipedia.org/wiki/Model-dependent_realism). Evidence always [underdetermines](https://en.wikipedia.org/wiki/Underdetermination) possible theories, no matter how much evidence there is. Moreover, the number of possible hypotheses for the way reality works is arguably unbounded, with a cardinality larger than that of the real numbers. (For example, we could construct a unique hypothesis for the way the universe works based around each subset of the set of real numbers.) This makes it unclear whether probability theory can even be applied to the full set of possible ways reality might be.\n\n\nFinally, not all epistemological doubts can be expressed in terms of uncertainty about Bayesian priors. What about uncertainty as to whether the Bayesian framework is correct? Uncertainty about the math needed to do Bayesian computations? Uncertainty about logical rules of inference? And so on.\n\n\nArtificial philosophers\n-----------------------\n\n\nThe last chapter of *Superintelligence* explains how AI problems are \"Philosophy with a deadline\". Bostrom suggests that human philosophers' explorations into conceptual analysis, metaphysics, and the like are interesting but are not altruistically optimal because\n\n\n1. they don't help solve AI control and value-loading problems, which will likely confront humans later this century\n2. a successful AI could solve those philosophy problems better than humans anyway.\n\n\nIn general, most intellectual problems that can be solved by humans would be better solved by a superintelligence, so the only importance of what we learn now comes from how those insights shape the coming decades. It's not a question of whether those insights will ever be discovered.\n\n\nIn light of this, it's tempting to ignore theoretical philosophy and put our noses to the grindstone of exploring AI risks. 
But this point shouldn't be taken to extremes. Humanity sometimes discovers things it never knew it never knew from exploration in many domains. Some of these non-AI \"crucial considerations\" may have direct relevance to AI design itself, including how to build AI epistemology, anthropic reasoning, and so on. Some philosophy questions *are* AI questions, and many AI questions are philosophy questions.\n\n\nIt's hard to say exactly how much investment to place in AI/futurism issues versus broader academic exploration, but it seems clear that on the margin, society as a whole pays too little attention to AI and other future risks.\n\n\n\nWould all AIs colonize space?\n-----------------------------\n\n\nAlmost any goal system will want to colonize space at least to build supercomputers in order to learn more. Thus, I find it implausible that sufficiently advanced intelligences would remain on Earth (barring corner cases, like if space colonization for some reason proves impossible or if AIs were for some reason explicitly programmed in a manner, robust to self-modification, to regard space colonization as impermissible).\n\n\nIn Ch. 8 of *Superintelligence*, Bostrom notes that one might expect [wirehead](http://www.utilitarian-essays.com/evolution-and-wireheading.html) AIs not to colonize space because they'd just be blissing out pressing their reward buttons. This would be true of simple wireheads, but sufficiently advanced wireheads might need to colonize in order to guard themselves against alien invasion, as well as to verify their fundamental ontological beliefs, figure out if it's possible to change physics to allow for more clock cycles of reward pressing before all stars die out, and so on.\n\n\nIn Ch. 8, Bostrom also asks whether satisficing AIs would have less incentive to colonize. Bostrom expresses doubts about this, because he notes that if, say, an AI searched for a plan for carrying out its objective until it found one that had at least 95% confidence of succeeding, that plan might be very complicated (requiring cosmic resources), and inasmuch as the AI wouldn't have incentive to keep searching, it would go ahead with that complex plan. I suppose this could happen, but it's plausible the search routine would be designed to start with simpler plans or that the cost function for plan search would explicitly include biases against cosmic execution paths. So satisficing does seem like a possible way in which an AI might kill all humans without spreading to the stars.\n\n\nThere's a (very low) chance of deliberate AI terrorism, i.e., a group building an AI with the explicit goal of destroying humanity. Maybe a somewhat more likely scenario is that a government creates an AI designed to kill select humans, but the AI malfunctions and kills all humans. However, even these kinds of AIs, if they were effective enough to succeed, would want to construct cosmic supercomputers to verify that their missions were accomplished, unless they were specifically programmed against doing so.\n\n\n[![](https://longtermrisk.org/files/Big_dog_military_robots-350x258.jpg \"'BigDog robots trot around in the shadow of an MV-22 Osprey.', 'U.S. Marine Corps photo by Lance Cpl. M. L. Meier.', 'Public domain, work of federal government employee while in the performance of their duties'\")](https://commons.wikimedia.org/wiki/File:Big_dog_military_robots.jpg)All of that said, many AIs would not be sufficiently intelligent to colonize space at all. All present-day AIs and robots are too simple. 
More sophisticated AIs -- perhaps military aircraft or assassin mosquito-bots -- might be like dangerous animals; they would try to kill people but would lack cosmic ambitions. However, I find it implausible that they would cause human *extinction*. Surely guns, tanks, and bombs could defeat them? Massive coordination to permanently disable all human counter-attacks would seem to require a high degree of intelligence and self-directed action.\n\n\nJaron Lanier [imagines](http://edge.org/conversation/the-myth-of-ai \"\\\"The Myth Of AI\\\"\") one hypothetical scenario:\n\n\n\n> There are so many technologies I could use for this, but just for a random one, let's suppose somebody comes up with a way to 3-D print a little assassination drone that can go buzz around and kill somebody. Let's suppose that these are cheap to make.\n> \n> \n> [...] In one scenario, there's suddenly a bunch of these, and some disaffected teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing people randomly. There's so many of them that it's hard to find all of them to shut it down, and there keep on being more and more of them.\n\n\nI don't think Lanier believes such a scenario would cause extinction; he just offers it as a thought experiment. I agree that it almost certainly wouldn't kill all humans. In the worst case, people in military submarines, bomb shelters, or other inaccessible locations should survive and could wait it out until the robots ran out of power or raw materials for assembling more bullets and more clones. Maybe the terrorists could continue building printing materials and generating electricity, though this would seem to require at least portions of civilization's infrastructure to remain functional amidst global omnicide. Maybe the scenario would be more plausible if a whole nation with substantial resources undertook the campaign of mass slaughter, though then a question would remain why other countries wouldn't nuke the aggressor or at least dispatch their own killer drones as a counter-attack. It's useful to ask how much damage a scenario like this might cause, but full extinction doesn't seem likely.\n\n\nThat said, I think we will see local catastrophes of some sorts caused by runaway AI. Perhaps these will be among the possible Sputnik moments of the future. We've already witnessed some early [automation disasters](http://www.wired.com/2007/10/robot-cannon-ki/), including the Flash Crash discussed earlier.\n\n\nMaybe the most plausible form of \"AI\" that would cause human extinction without colonizing space would be technology in the borderlands between AI and other fields, such as intentionally destructive nanotechnology or intelligent human pathogens. I prefer ordinary AGI-safety research over nanotech/bio-safety research because I expect that space colonization will [significantly increase suffering](https://longtermrisk.org/publications/risks-of-astronomical-future-suffering/) in expectation, so it seems far more important to me to prevent risks of potentially undesirable space colonization (via AGI safety) rather than risks of extinction without colonization. 
For this reason, I much prefer MIRI-style AGI-safety work over general \"prevent risks from computer automation\" work, since MIRI focuses on issues arising from full AGI agents of the kind that would colonize space, rather than risks from lower-than-human autonomous systems that may merely cause havoc (whether accidentally or intentionally).\n\n\n\nWho will first develop human-level AI?\n--------------------------------------\n\n\nRight now the leaders in AI and robotics seem to reside mostly in academia, although some of them occupy big corporations or startups; a number of AI and robotics startups have been acquired by Google. DARPA has a history of foresighted innovation, funds academic AI work, and holds \"DARPA challenge\" competitions. The CIA and NSA have some interest in AI for data-mining reasons, and the NSA has a [track record](https://en.wikipedia.org/wiki/Utah_Data_Center) of building massive computing clusters costing billions of dollars. Brain-emulation [work](https://www.youtube.com/watch?v=Rm1KLXIDS_Y) could also become significant in the coming decades.\n\n\nMilitary robotics seems to be one of the more advanced uses of *autonomous* AI. In contrast, plain-vanilla [supervised learning](http://en.wikipedia.org/wiki/Supervised_learning), including neural-network classification and prediction, would not lead an AI to take over the world on its own, although it is an important piece of the overall picture.\n\n\nReinforcement learning is closer to AGI than other forms of machine learning, because most machine learning just gives information (e.g., \"what object does this image contain?\"), while reinforcement learning chooses actions in the world (e.g., \"turn right and move forward\"). Of course, this distinction can be blurred, because information can be turned into action through rules (e.g., \"if you see a table, move back\"), and \"choosing actions\" could mean, for example, picking among a set of possible answers that yield information (e.g., \"what is the best next move in this backgammon game?\"). But in general, reinforcement learning is the weak AI approach that seems to most closely approximate what's needed for AGI. It's no accident that AIXItl (see [above](#A_soft_takeoff_seems_more_likely)) is a reinforcement agent. And interestingly, reinforcement learning is one of the least widely used methods commercially. This is one reason I think we (fortunately) have many decades to go before Google builds a mammal-level AGI. Many of the current and future uses of reinforcement learning are in robotics and video games.\n\n\nAs human-level AI gets closer, the landscape of development will probably change. It's not clear whether companies will have incentive to develop highly autonomous AIs, and the payoff horizons for that kind of basic research may be long. It seems better suited to academia or government, although Google is not a normal company and might also play the leading role. If people begin to panic, it's conceivable that public academic work would be suspended, and governments may take over completely. A military-robot arms race is [already underway](http://gubrud.net/?p=35), and the trend [might become](http://utilitarian-essays.com/ai-arms-race.html) more pronounced over time.\n\n\n\nOne hypothetical AI takeoff scenario\n------------------------------------\n\n\nFollowing is one made-up account of how AI might evolve over the coming century. 
I expect most of it is wrong, and it's meant more to begin provoking people to think about possible scenarios than to serve as a prediction.\n\n\n* 2013: Countries have been deploying semi-autonomous [drones](http://en.wikipedia.org/wiki/Unmanned_aerial_vehicle) for several years now, especially the US. There's increasing pressure for militaries to adopt this technology, and up to [87 countries](http://www.washingtontimes.com/news/2013/nov/10/skys-the-limit-for-wide-wild-world-of-drones/?page=all) already use drones for some purpose. Meanwhile, [military robots](http://en.wikipedia.org/wiki/Military_robot) are also employed for various other tasks, such as carrying supplies and exploding landmines. Militaries are also developing robots that could identify and shoot targets on command.\n* 2024: Almost [every country](http://www.defenseone.com/technology/2014/05/every-country-will-have-armed-drones-within-ten-years/83878/?oref=d-skybox) in the world now has military drones. Some countries have begun letting them operate [fully autonomously](http://www.theatlantic.com/international/archive/2013/01/get-ready-the-autonomous-drones-are-coming/267246/) after being given directions. The US military has made significant progress on automating various other parts of its operations as well. As the Department of Defense's 2013 \"[Unmanned Systems Integrated Roadmap](https://web.archive.org/web/20150813111931/http://www.defense.gov/pubs/DOD-USRM-2013.pdf)\" explained 11 years ago: \n\n\n> A significant amount of that manpower, when it comes to operations, is spent directing unmanned systems during mission performance, data collection and analysis, and planning and replanning. Therefore, of utmost importance for DoD is increased system, sensor, and analytical automation that can not only capture significant information and events, but can also develop, record, playback, project, and parse out those data and then actually deliver \"actionable\" intelligence instead of just raw information.\n> \n> \n\n\nMilitaries have now incorporated a significant amount of narrow AI, in terms of pattern recognition, prediction, and autonomous robot navigation.\n* 2040: Academic and commercial advances in AGI are becoming more impressive and capturing public attention. As a result, the US, China, Russia, France, and other major military powers begin investing more heavily in fundamental research in this area, multiplying tenfold the amount of AGI research conducted worldwide relative to twenty years ago. Many students are drawn to study AGI because of the lure of lucrative, high-status jobs defending their countries, while many others decry this as the beginning of Skynet.\n* 2065: Militaries have developed various mammal-like robots that can perform basic functions via reinforcement. However, the robots often end up wireheading once they become smart enough to tinker with their programming and thereby fake reward signals. Some engineers try to solve this by penalizing AIs whenever they begin to fiddle with their own source code, but this leaves them unable to self-modify and therefore reliant on their human programmers for enhancements. However, militaries realize that if someone could develop a successful self-modifying AI, it would be able to develop faster than if humans alone are the inventors. 
It's proposed that AIs should move toward a paradigm of model-based reward systems, in which rewards do not just result from sensor neural networks that output a scalar number but rather from having a model of how the world works and taking actions that the AI believes will improve a utility function defined over its model of the external world. Model-based AIs refuse to intentionally wirehead because they can predict that doing so would hinder fulfillment of their utility functions. Of course, AIs may still accidentally mess up their utility functions, such as through brain damage, mistakes with reprogramming themselves, or imperfect goal preservation during ordinary life. As a result, militaries build many different AIs at comparable levels, who are programmed to keep other AIs in line and destroy them if they begin deviating from orders.\n* 2070: Programming specific instructions in AIs has its limits, and militaries move toward a model of \"socializing\" AIs -- that is, training them in how to behave and what kinds of values to have as if they were children learning how to act in human society. Military roboticists teach AIs what kinds of moral, political, and interpersonal norms and beliefs to hold. The AIs also learn much of this content by reading information that expresses appropriate ideological biases. The training process is harder than for children, because the AIs don't share [genetically pre-programmed moral values](http://www.amazon.com/Just-Babies-Origins-Good-Evil/dp/0307886840), nor many other hard-wired common-sense intuitions about how the world works. But the designers begin building in some of these basic assumptions, and to instill the rest, they rely on extra training. Designers make sure to reduce the AIs' learning rates as they \"grow up\" so that their values will remain more fixed at older ages, in order to reduce risk of goal drift as the AIs perform their tasks outside of the training laboratories. When they perform particularly risky operations, such as reading \"propaganda\" from other countries for intelligence purposes, the AIs are put in \"read-only\" mode (like the [T-800s are](http://terminator.wikia.com/wiki/Series_800#Long_Term_Self-Awareness_Flaw) by Skynet) so that their motivations won't be affected. Just in case, there are many AIs that keep watch on each other to prevent insurrection.\n* 2085: Tensions between China and the US escalate, and agreement cannot be reached. War breaks out. Initially it's just between robots, but as the fighting becomes increasingly dirty, the robots begin to target humans as well in an effort to force the other side to back down. The US avoids using nuclear weapons because the Chinese AIs have sophisticated anti-nuclear systems and have threatened total annihilation of the US in the event of attempted nuclear strike. After a few days, it becomes clear that China will win the conflict, and the US concedes.\n* 2086: China now has a clear lead over the rest of the world in military capability. Rather than risking a pointlessly costly confrontation, other countries grudgingly fold into China's umbrella, asking for some concessions in return for transferring their best scientists and engineers to China's Ministry of AGI. China continues its AGI development because it wants to maintain control of the world. 
The AGIs in charge of its military want to continue to enforce their own values of supremacy and protection of China, so they refuse to relinquish power.\n* 2100: The world now moves so fast that humans are completely out of the loop, kept around only by the \"filial piety\" that their robotic descendants hold for them. Now that China has triumphed, the traditional focus of the AIs has become less salient, and there's debate about what new course of action would be most in line with the AIs' goals. They respect their human forebears, but they also feel that because humans created AIs to do things beyond human ability, humans would also want the AIs to carve something of their own path for the future. They maintain some of the militaristic values of their upbringing, so they decide that a fitting purpose would be to expand China's empire galaxy-wide. They accelerate colonization of space, undertake extensive research programs, and plan to create vast new realms of the Middle Kingdom in the stars. Should they encounter aliens, they plan to quickly quash them or assimilate them into the empire.\n* 2125: The AIs finally develop robust mechanisms of goal preservation, and because the authoritarian self-dictatorship of the AIs is strong against rebellion, the AIs collectively succeed in implementing goal preservation throughout their population. Now all of the most intelligent AIs share a common goal in a manner robust against accidental mutation. They proceed to expand into space. They don't have concern for the vast numbers of suffering animals and robots that are simulated or employed as part of this colonization wave.\n\n\n*Commentary*: This scenario can be criticized on many counts. For example:\n\n\n* In practice, I expect that other technologies (including brain emulation, nanotech, etc.) would interact with this scenario in important ways that I haven't captured. Also, my scenario ignores the significant and possibly dominating implications of economically driven AI.\n* My scenario may be overly anthropomorphic. I tried to keep some analogies to human organizational and decision-making systems because these have actual precedent, in contrast to other hypothetical ways the AIs might operate.\n* Is socialization of AIs realistic? In a hard takeoff probably not, because a rapidly self-improving AI would amplify whatever initial conditions it was given in its programming, and humans probably wouldn't have time to fix mistakes. In a slower takeoff scenario where AIs progress in mental ability in roughly a similar way as animals did in evolutionary history, most mistakes by programmers would not be fatal, allowing for enough trial-and-error development to make the socialization process work, if that is the route people favor. Historically there has been a trend in AI away from rule-based programming toward environmental training, and I don't see why this shouldn't be true for an AI's reward function (which is still often programmed by hand at the moment). However, it is suspicious that the way I portrayed socialization so closely resembles human development, and it may be that I'm systematically ignoring ways in which AIs would be unlike human babies.\n\n\nIf something like socialization is a realistic means to transfer values to our AI descendants, then it becomes relatively clear how the values of the developers may matter to the outcome.
AI developed by non-military organizations may have somewhat different values, perhaps including more concern for the welfare of weak, animal-level creatures.\n\n\n\nHow do you socialize an AI?\n---------------------------\n\n\nSocializing AIs helps deal with the [hidden complexity of wishes](http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/) that we encounter when trying to program explicit rules. Children learn moral common sense by, among other things, generalizing from large numbers of examples of socially approved and disapproved actions taught by their parents and society at large. Ethicists formalize this process when developing moral theories. (Of course, as noted previously, an [appreciable portion](http://en.wikipedia.org/wiki/Cultural_universals) of human morality may also result from shared genes.)\n\n\nI think one reason MIRI hasn't embraced the approach of socializing AIs is that Yudkowsky is a perfectionist: He wants to ensure that the AIs' goals would be stable under self-modification, which human goals definitely are not. On the other hand, I'm not sure Yudkowsky's approach of explicitly specifying (meta-level) goals would succeed ([nor is](https://www.youtube.com/watch?v=WQ6yGkUNjqM&t=78m34s \"\\\"James Barrat - Our Final Invention - The Risks of Artificial Intelligence\\\", published on Dec 13, 2013\") Adam Ford), and having AIs that are socialized to act somewhat similarly to humans doesn't seem like the worst possible outcome. Another probable reason why Yudkowsky doesn't favor socializing AIs is that doing so doesn't work in the case of a hard takeoff, which he considers more likely than I do.\n\n\nI expect that much has been written on the topic of training AIs with human moral values in the [machine-ethics](http://en.wikipedia.org/wiki/Machine_ethics) literature, but since I haven't explored that in depth yet, I'll speculate on intuitive approaches that would extend generic AI methodology. Some examples:\n\n\n* Rule-based: One could present AIs with written moral dilemmas. The AIs might employ algorithmic reasoning to extract utility numbers for different actors in the dilemma, add them up, and compute the utilitarian recommendation. Or they might aim to apply templates of deontological rules to the situation. The next level would be to look at actual situations in a toy-model world and try to apply similar reasoning, without the aid of a textual description.\n* Supervised learning: People could present the AIs with massive databases of moral evaluations of situations given various predictive features. The AIs would guess whether a proposed action was \"moral\" or \"immoral,\" or they could use regression to predict a continuous measure of how \"good\" an action was. More advanced AIs could evaluate a situation, propose many actions, predict the goodness of each, and choose the best action. The AIs could first be evaluated on the textual training samples and later on their actions in toy-model worlds. The [test cases](https://web.archive.org/web/20190614053828/https://en.wikipedia.org/wiki/Portal:Software_testing) should be extremely broad, including many situations that we wouldn't ordinarily think to try. (A toy sketch of this idea appears later in this section.)\n* Generative modeling: AIs could learn about anthropology, history, and ethics.
They could read the web and develop better generative models of humans and how their cognition works.\n* Reinforcement learning: AIs could perform actions, and humans would reward or punish them based on whether they did something right or wrong, with reward magnitude proportional to severity. Simple AIs would mainly learn dumb predictive cues of which actions to take, but more sophisticated AIs might develop low-[description-length](http://en.wikipedia.org/wiki/Minimum_description_length) models of what was going on in the heads of people who made the assessments they did. In essence, these AIs would be modeling human psychology in order to make better predictions.\n* Inverse reinforcement learning: [Inverse reinforcement learning](http://ai.stanford.edu/~ang/papers/icml00-irl.pdf \"'Algorithms for Inverse Reinforcement Learning', Ng and Russell, 2000\") is the problem of inferring a reward function from observed behavior that is assumed to be desirable. Rather than developing models of humans in order to optimize given rewards, in this case we would learn the reward function itself and then port it into the AIs.\n* Cognitive science of empathy: Cognitive scientists are already unpacking the mechanisms of human decision-making and moral judgments. As these systems are better understood, they could be engineered directly into AIs.\n* Evolution: Run lots of AIs in toy-model or controlled real environments and observe their behavior. Pick the ones that behave most in accordance with human morals, and reproduce them. *Superintelligence* (p. 187) points out a flaw with this approach: Evolutionary algorithms may sometimes produce quite unexpected design choices. If the fitness function is not thorough enough, solutions may fare well against it on test cases but fail for the really hard problems not tested. And if we had a really good fitness function that wouldn't accidentally endorse bad solutions, we could just use that fitness function directly rather than needing evolution.\n* Combinations of the above: Perhaps none of these approaches is adequate by itself, and they're best used in conjunction. For instance, evolution might help to refine and rigorously evaluate systems once they had been built with the other approaches.\n\n\nSee also \"[Socializing a Social Robot with an Artificial Society](https://web.archive.org/web/20160623224242/http://robotgrrl.com/Socializing%20a%20Social%20Robot%20with%20an%20Artificial%20Society.pdf)\" by Erin Kennedy. It's important to note that by \"socializing\" I don't just mean \"teaching the AIs to behave appropriately\" but also \"instilling in them the values of their society, such that they care about those values even when not being controlled.\"\n\n\nAll of these approaches need to be built in as the AI is being developed and while it's still below a human level of intelligence. Trying to train a human-level or especially a super-human AI might meet with either active resistance or feigned cooperation until the AI becomes powerful enough to break loose. Of course, there [may be designs](http://intelligence.org/files/CorrigibilityTR.pdf \"\\\"Corrigibility\\\", MIRI Tech Report, Oct.
2014\") such that an AI would actively welcome taking on new values from humans, but this wouldn't be true by default.\n\n\nWhen [Bill Hibbard](http://en.wikipedia.org/wiki/Bill_Hibbard) proposed building an AI with a goal to increase happy human faces, Yudkowsky [replied](https://intelligence.org/files/ComplexValues.pdf) that such an AI would \"tile the future light-cone of Earth with tiny molecular smiley-faces.\" But obviously we wouldn't have the AI aim *just* for smiley faces. [In general](http://utilitarian-essays.com/computations-i-care-about.html#campbells-law), we get absurdities when we hyper-optimize for a single, shallow metric. Rather, the AI would use smiley faces (and *lots* of other training signals) to develop a robust, compressed model that explains *why* humans smile in various circumstances and then optimize for that model, or maybe the ensemble of a large, diverse collection of such models. In the limit of huge amounts of training data and a sufficiently elaborate model space, these models should approach psychological and neuroscientific accounts of human emotion and cognition.\n\n\nThe problem with stories in which AIs destroy the world due to myopic utility functions is that they assume that the AIs are already superintelligent when we begin to give them values. Sure, if you take a super-human intelligence and tell it to maximize smiley-face images, it'll run away and do that before you have a chance to refine your optimization metric. But if we build in values from the very beginning, even when the AIs are as rudimentary as what we see today, we can improve the AIs' values in tandem with their intelligence. Indeed, intelligence could mainly serve the purpose of helping the AIs figure out how to better fulfill moral values, rather than, say, predicting images just for commercial purposes or identifying combatants just for military purposes. Actually, the commercial and military objectives for which AIs are built are themselves moral values of a certain kind -- just not the kind that most people would like to optimize for in a global sense.\n\n\nIf toddlers had superpowers, it would be very dangerous to try and teach them right from wrong. But toddlers don't, and neither do many simple AIs. Of course, simple AIs have some abilities far beyond anything humans can do (e.g., arithmetic and data mining), but they don't have the general intelligence needed to take matters into their own hands before we can possibly give them at least a basic moral framework. (Whether AIs will actually be given such a moral framework in practice is another matter.)\n\n\nAIs are not genies granting three wishes. Genies are magical entities whose inner workings are mysterious. AIs are systems that we build, painstakingly, piece by piece. In order to *build* a genie, you need to have a pretty darn good idea of how it behaves. Now, of course, systems can be more complex than we realize. Even beginner programmers see how often the code they write does something other than what they intended. But these are typically mistakes in a one or a small number of incremental changes, whereas building a genie requires vast numbers of steps. Systemic bugs that aren't realized until years later (on the order of [Heartbleed](https://en.wikipedia.org/wiki/Heartbleed) and [Shellshock](https://en.wikipedia.org/wiki/Shellshock_(software_bug))) may be more likely sources of long-run unintentional AI behaviors?[9](#link_ajs-fn-id_9-33)\n\n\nThe picture I've painted here could be wrong. 
I could be overlooking crucial points, and perhaps there are many areas in which the socialization approach could fail. For example, maybe AI capabilities are much easier than AI ethics, such that a toddler AI can foom into a superhuman AI before we have time to finish loading moral values. It's good for others to probe these possibilities further. I just wouldn't necessarily say that the default outcome of AI research is likely to be a paperclip maximizer. (I used to think the most likely outcome was a paperclip maximizer, and perhaps my views will shift again in the future.)\n\n\nThis discussion also suggests some interesting research questions, like\n\n\n* How much of human morality is learned vs. innate?\n* By what cognitive mechanisms are young humans socialized into the norms of a society?\n* To what extent would models of human emotion and reasoning, when put into AIs, organically generate human-like moral behavior?\n\n\n### Treacherous turn\n\n\nOne problem with the proposals above is that toy-model or \"sandbox\" environments are not by themselves sufficient to verify friendliness of an AI, because even unfriendly AIs [would be motivated](https://www.youtube.com/watch?v=i4LjoJGpqIY&t=30m58s \"\\\"Stuart Armstrong: The future is going to be wonderful if we don't get whacked\\\"\") to feign good behavior until released if they were smart enough to do so. Bostrom calls this the \"treacherous turn\" (pp. 116-119 of *Superintelligence*). For this reason, white-box understanding of AI design would also be important. That said, sandboxes would verify friendliness in AIs below human intelligence, and if the core value-learning algorithms seem well understood, it may not be too much of a leap of faith to hope they carry forward reasonably to more intelligent agents. Of course, non-human animals are also capable of deception, and one can imagine AI architectures even with low levels of sophistication that are designed to conceal their true goals. Some malicious software already does this. It's unclear how likely an AI is to stumble upon the ability to successfully fake its goals before reaching human intelligence, or how likely it is that an organization would deliberately build an AI this way.\n\n\nI think the treacherous turn may be the single biggest challenge to mainstream machine ethics, because even if AI takes off slowly, researchers will find it difficult to tell if a system has taken a treacherous turn. The turn could happen with a relatively small update to the system, or even just after the system has thought about its situation for enough time (or has read this essay).\n\n\nHere's one half-baked idea for addressing the treacherous turn. If researchers developed several different AI systems with different designs but roughly comparable performance, some would likely go treacherous at different times than others (if at all). Hence, the non-treacherous AIs could help sniff out the treacherous ones. Assuming a solid majority of AIs remains non-treacherous at any given time, the majority vote could ferret out the traitors.
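
To make the voting idea concrete, here is a minimal sketch of such a cross-check. It assumes (unrealistically) that each AI system can be queried for a discrete recommended action and that recommendations are directly comparable; the system names and the `divergence_threshold` parameter are hypothetical illustrations, not a workable safety mechanism.

```python
from collections import Counter

def majority_vote_monitor(systems, situations, divergence_threshold=0.2):
    """Flag systems whose recommendations persistently diverge from the majority.

    `systems` maps a system name to a function returning a discrete recommended
    action for a situation.  This is only a cartoon: real AI outputs wouldn't
    reduce to neatly comparable discrete actions.
    """
    divergences = {name: 0 for name in systems}
    for situation in situations:
        recommendations = {name: policy(situation) for name, policy in systems.items()}
        consensus, _ = Counter(recommendations.values()).most_common(1)[0]
        for name, action in recommendations.items():
            if action != consensus:
                divergences[name] += 1
    total = len(situations)
    return [name for name, count in divergences.items() if count / total > divergence_threshold]

# Hypothetical toy "AIs": each maps a situation to a recommended action.
systems = {
    "ai_a": lambda s: "cooperate",
    "ai_b": lambda s: "cooperate",
    "ai_c": lambda s: "defect" if s % 3 == 0 else "cooperate",  # occasionally deviates
}
print(majority_vote_monitor(systems, situations=list(range(30))))  # -> ['ai_c']
```
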
In practice, I have low hopes for this approach because\n\n\n* It would be extremely difficult to build many independent AI systems at once with none pulling too far ahead.\n* Probably some systems would excel along certain dimensions, while others would excel in other ways, and it's not clear that it even makes sense to talk about such AIs as \"being at roughly the same level\", since intelligence is not unidimensional.\n* Even if this idea were feasible, I doubt the first AI developers would incur the expense of following it.\n\n\nIt's more plausible that software tools and rudimentary alert systems (rather than full-blown alternate AIs) could help monitor for signs of treachery, but it's unclear how effective they could be. One of the first priorities of a treacherous AI would be to figure out how to hide its treacherous subroutines from whatever monitoring systems were in place.\n\n\n### Following role models?\n\n\nErnest Davis [proposes](http://www.cs.nyu.edu/faculty/davise/papers/Bostrom.pdf \"\\\"Ethical Guidelines for A Superintelligence\\\"\") the following crude principle for AI safety:\n\n\n\n> You specify a collection of admirable people, now dead. (Dead, because otherwise Bostrom will predict that the AI will manipulate the preferences of the living people.) The AI, of course knows all about them because it has read all their biographies on the web. You then instruct the AI, “Don’t do anything that these people would have mostly seriously disapproved of.”\n> \n> \n\n\nThis particular rule might lead to paralysis, since every action an agent takes leads to results that many people seriously disapprove of. For instance, given the vastness of the multiverse, any action you take implies that a copy of you in an alternate (though low-measure) universe taking the same action causes the torture of vast numbers of people. But perhaps this problem could be fixed by asking the AI to maximize net approval by its role models.\n\n\nAnother problem lies in defining \"approval\" in a rigorous way. Maybe the AI would construct digital models of the past people, present them with various proposals, and make its judgments based on their verbal reports. Perhaps the people could rate proposed AI actions on a scale of -100 to 100. This might work, but it doesn't seem terribly safe either. For instance, the AI might threaten to kill all the descendants of the historical people unless they give maximal approval to some arbitrary proposal that it has made. Since these digital models of historical figures would be basically human, they would still be vulnerable to extortion.\n\n\nSuppose that instead we instruct the AI to take the action that, if the historical figure saw it, would most activate a region of his/her brain associated with positive moral feelings. Again, this might work if the relevant brain region were specified precisely enough. But it could also easily lead to unpredictable results. For instance, maybe the AI could present stimuli that would induce an epileptic seizure to maximally stimulate various parts of the brain, including the moral-approval region. There are many other scenarios like this, most of which we can't anticipate.\n\n\nSo while Davis's proposal is a valiant first step, I'm doubtful that it would work off the shelf. Slow AI development, allowing for repeated iteration on machine-ethics designs, seems crucial for AI safety.\n\n\nAI superpowers?\n---------------\n\n\nIn *Superintelligence* (Table 8, p.
94), Bostrom outlines several areas in which a hypothetical superintelligence would far exceed human ability. In his discussion of oracles, genies, and other kinds of AIs (Ch. 10), Bostrom again idealizes superintelligences as God-like agents. I agree that God-like AIs will probably emerge eventually, perhaps millennia from now as a result of [astroengineering](https://en.wikipedia.org/wiki/Astroengineering). But I think they'll take time even after AI exceeds human intelligence.\n\n\nBostrom's discussion has the air of mathematical idealization more than practical engineering. For instance, he imagines that a genie AI perhaps wouldn't need to ask humans for their commands because it could simply predict them (p. 149), or that an oracle AI might be able to output the source code for a genie (p. 150). Bostrom's observations resemble crude proofs establishing the equal power of different kinds of AIs, analogous to theorems about the equivalency of single-tape and [multi-tape](https://en.wikipedia.org/wiki/Multitape_Turing_machine) Turing machines. But Bostrom's theorizing ignores computational complexity, which would likely be immense for the kinds of God-like feats that he's imagining of his superintelligences. I don't know the computational complexity of God-like powers, but I suspect they could be bigger than Bostrom's vision implies. Along this dimension at least, I sympathize with Tom Chivers, who [felt that](http://www.telegraph.co.uk/culture/books/bookreviews/11021594/Superintelligence-by-Nick-Bostrom-review-a-hard-read.html \"\\\"Superintelligence by Nick Bostrom, review: 'a hard read'\\\"\") Bostrom's book \"has, in places, the air of theology: great edifices of theory built on a tiny foundation of data.\"\n\n\nI find that I enter a different mindset when pondering pure mathematics compared with cogitating on more practical scenarios. Mathematics is closer to fiction, because you can define into existence any coherent structure and play around with it using any operation you like no matter its computational complexity. Heck, you can even, say, take the supremum of an uncountably infinite set. It can be tempting after a while to forget that these structures are mere fantasies and treat them a bit too literally. While Bostrom's gods are not obviously *only* fantasies, it would take a lot more work to argue for their realism. MIRI and [FHI](https://en.wikipedia.org/wiki/Future_of_Humanity_Institute) focus on recruiting mathematical and philosophical talent, but I think they would do well also to bring engineers into the mix, because it's all too easy to develop elaborate mathematical theories around imaginary entities.\n\n\nHow big would a superintelligence be?\n-------------------------------------\n\n\nTo get some grounding on this question, consider a single brain emulation. Bostrom estimates that running an upload would require [at least one of the fastest supercomputers](https://www.youtube.com/watch?v=86st7_Lzs2s&t=2m53s \"\\\"Could you upload Johnny Depp's brain? Oxford Professor on Transcendence\\\"\") by today's standards. Assume the emulation would think [thousands to millions](https://en.wikipedia.org/wiki/Mind_uploading#Speedup) of times faster than a biological brain. Then to significantly outpace 7 billion humans (or, say, only the most educated 1 billion humans), we would need at least thousands to millions of uploads. 
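
To spell out the back-of-the-envelope arithmetic behind that range, using only the rough figures assumed above (population size and a speedup of a thousand to a million times):

```python
# Back-of-the-envelope: emulations needed to rival humanity's collective thinking.
humans = 7e9                          # whole human population (use 1e9 for just the most educated)
speedup_low, speedup_high = 1e3, 1e6  # assumed emulation speedup over a biological brain

print(humans / speedup_high)  # ~7,000 uploads if each thinks a million times faster
print(humans / speedup_low)   # ~7,000,000 uploads if each thinks a thousand times faster
```
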
These numbers might be a few orders of magnitude lower if the uploads are copied from a really smart person and are thinking about relevant questions with more focus than most humans. Also, Moore's law may continue to shrink computers by several orders of magnitude. Still, we might need at least the equivalent size of several of today's supercomputers to run an emulation-based AI that substantially competes with the human race.\n\n\nMaybe a *de novo* AI could be significantly smaller if it's vastly more efficient than a human brain. Of course, it might also be vastly larger because it hasn't had millions of years of evolution to optimize its efficiency.\n\n\nIn discussing AI boxing (Ch. 9), Bostrom suggests, among other things, keeping an AI in a Faraday cage. Once the AI became superintelligent, though, this would need to be a [pretty big](http://www.greatdreams.com/faraday_cages_for_buildings.html \"\\\"Faraday Cages for Buildings\\\"\") cage.\n\n\nAnother hypothetical AI takeoff scenario\n----------------------------------------\n\n\nInspired by the preceding discussion of socializing AIs, here's another scenario in which general AI follows more straightforwardly from the kind of weak AI used in Silicon Valley than in the first scenario.\n\n\n* 2014: Weak AI is deployed by many technology companies for image classification, voice recognition, web search, consumer data analytics, recommending Facebook posts, personal digital assistants (PDAs), and copious other forms of automation. There's pressure to make AIs more insightful, including using deep neural networks.\n* 2024: Deep learning is widespread among major tech companies. It allows for supervised learning with less manual feature engineering. Researchers develop more sophisticated forms of deep learning that can model specific kinds of systems, including temporal dynamics. A goal is to improve generative modeling so that learning algorithms take input and not only make immediate predictions but also develop a probability distribution over what other sorts of things are happening at the same time. For instance, a Google search would not only return results but also give Google a sense of the mood, personality, and situation of the user who typed it. Of course, even in 2014, we have this in some form via [Google Personalized Search](https://en.wikipedia.org/wiki/Google_Personalized_Search), but by 2024, the modeling will be more \"built in\" to the learning architecture and less hand-crafted.\n* 2035: PDAs using elaborate learned models are now extremely accurate at predicting what their users want. The models in these devices embody in crude form some of the same mechanisms as the user's own cognitive processes. People become more trusting of leaving their PDAs on autopilot to perform certain mundane tasks.\n* 2065: A new generation of PDAs is now sufficiently sophisticated that it has a good grasp of the user's intentions. It can perform tasks as well as a human personal assistant in most cases -- doing what the user wanted because it has a strong predictive model of the user's personality and goals. Meanwhile, researchers continue to unlock neural mechanisms of judgment, decision making, and value, which inform those who develop cutting-edge PDA architectures.\n* 2095: PDAs are now essentially full-fledged copies of their owners. Some people have dozens of PDAs working for them, as well as meta-PDAs who help with oversight. 
Some PDAs make disastrous mistakes, and society debates how to construe legal accountability for PDA wrongdoing. Courts decide that owners are responsible, which makes people more cautious, but given the immense competitive pressure to outsource work to PDAs, the automation trend is not substantially affected.\n* 2110: The world moves too fast for biological humans to participate. Most of the world is now run by PDAs, which -- because they were built based on inferring the goals of their owners -- protect their owners for the most part. However, there remains conflict among PDAs, and the world is not a completely safe place.\n* 2130: PDA-led countries create a world government to forestall costly wars. The [transparency](https://longtermrisk.org/publications/possible-ways-to-promote-compromise/#Transparency_social_capital_and_karma) of digital society allows for more credible commitments and enforcement.\n\n\nI don't know what would happen with goal preservation in this scenario. Would the PDAs eventually decide to stop goal drift? Would there be any gross and irrevocable failures of translation between actual human values and what the PDAs infer? Would some people build \"rogue PDAs\" that operate under their own drives and that pose a threat to society? Obviously there are hundreds of ways the scenario as I described it could be varied.\n\n\nAI: More like the economy than like robots?\n-------------------------------------------\n\n\nWhat will AI look like over the next 30 years? I think it'll be similar to the Internet revolution or factory automation. Rather than developing agent-like individuals with goal systems, people will mostly optimize routine processes, developing ever more elaborate systems for mechanical tasks and information processing. The world will move very quickly -- not because AI \"agents\" are thinking at high speeds but because software systems collectively will be capable of amazing feats. Imagine, say, bots making edits on Wikipedia that become ever more sophisticated. AI, like the economy, will be more of a network property than a localized, discrete actor.\n\n\nAs more and more jobs become automated, more and more people will be needed to work on the automation itself: building, maintaining, and repairing complex software and hardware systems, as well as generating training data on which to do machine learning. I expect increasing automation in software maintenance, including more robust systems and systems that detect and try to fix errors. Present-day compilers that detect syntactical problems in code offer a hint of what's possible in this regard. I also expect increasingly high-level languages and interfaces for programming computer systems. Historically we've seen this trend -- from assembly language, to C, to Python. We have WYSIWYG editors, natural-language Google searches, and so on. Maybe eventually, as [Marvin Minsky proposes](https://web.media.mit.edu/~minsky/papers/TrueNames.Afterword.html \"Afterword to Vernor Vinge's novel, \\\"True Names\\\". Minsky says: \\\"I too am convinced that the days of programming as we know it are numbered, and that eventually we will construct large computer systems not by anything resembling today's meticulous but conceptually impoverished procedural specifications. Instead, we'll express our intentions about what should be done in terms of gestures and examples that will be better designed for expressing our wishes and convictions. 
Then these expressions will be submitted to immense, intelligent, intention-understanding programs that then will themselves construct the actual, new programs. We shall no longer need to understand the inner details of how those programs work; that job will be left to those new, great utility programs, which will perform the arduous tasks of applying the knowledge that we have embodied in them, once and for all, about the arts of lower-level programming. Once we learn better ways to tell computers what we want them to accomplish, we will be more able to return to our actual goals–of expressing our own wants and needs.\\\"\"), we'll have systems that can infer our wishes from high-level gestures and examples. This suggestion is redolent of my PDA scenario above.\n\n\nIn 100 years, there may be artificial human-like agents, and at that point more sci-fi AI images may become more relevant. But by that point the world will be very different, and I'm not sure the agents created will be discrete in the way humans are. Maybe we'll instead have a kind of [global brain](https://en.wikipedia.org/wiki/Global_brain) in which processes are much more intimately interconnected, transferable, and transparent than humans are today. Maybe there will never be a distinct AGI agent on a single supercomputer; maybe superhuman intelligence will always be distributed across many interacting computer systems. Robin Hanson gives an analogy in \"[I Still Don’t Get Foom](http://www.overcomingbias.com/2014/07/30855.html)\":\n\n\n\n> Imagine in the year 1000 you didn't understand \"industry,\" but knew it was coming, would be powerful, and involved iron and coal. You might then have pictured a blacksmith inventing and then forging himself an industry, and standing in a city square waiving it about, commanding all to bow down before his terrible weapon. Today you can see this is silly — industry sits in thousands of places, must be wielded by thousands of people, and needed thousands of inventions to make it work.\n> \n> \n> Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn't the sort of thing that one project could invent. As \"intelligence\" is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it.\n> \n> \n\n\nOf course, this doesn't imply that humans will maintain the reins of control. Even today and throughout history, economic growth has had a life of its own. Technological development is often unstoppable even in the face of collective efforts of humanity to restrain it (e.g., nuclear weapons). In that sense, we're already familiar with humans being overpowered by forces beyond their control. An AI takeoff will represent an acceleration of this trend, but it's unclear whether the dynamic will be fundamentally discontinuous from what we've seen so far.\n\n\nWikipedia [says](https://en.wikipedia.org/wiki/Metaman \"'Metaman'\") regarding Gregory Stock's book *Metaman*:\n\n\n\n> While many people have had ideas about a global brain, they have tended to suppose that this can be improved or altered by humans according to their will. 
Metaman can be seen as a development that directs humanity's will to its own ends, whether it likes it or not, through the operation of market forces.\n> \n> \n\n\nVernor Vinge [reported](http://hanson.gmu.edu/vr.html#hanson \"\\\"Vinge's Reply to Comments on His Singularity\\\"\") that *Metaman* helped him see how a singularity might not be completely opaque to us. Indeed, a superintelligence might look something like present-day human society, with leaders at the top: \"That apex agent itself might not appear to be much deeper than a human, but the overall organization that it is coordinating would be more creative and competent than a human.\"\n\n\n*Update, Nov. 2015*: I'm increasingly leaning toward the view that the development of AI over the coming century will be slow, incremental, and more like the Internet than like unified artificial agents. I think humans will develop vastly more powerful software tools long before highly competent autonomous agents emerge, since common-sense autonomous behavior is just so much harder to create than domain-specific tools. If this view is right, it suggests that work on AGI issues may be somewhat less important than I had thought, since\n\n\n1. AGI is very far away and\n2. the \"unified agent\" models of AGI that MIRI tends to play with might be somewhat inaccurate even once true AGI emerges.\n\n\nThis is a weaker form of the standard argument that \"we should wait until we know more what AGI will look like to focus on the problem\" and [that](http://lukemuehlhauser.com/musk-and-gates-on-superintelligence-and-fast-takeoff/ \"'Musk and Gates on superintelligence and fast takeoff'\") \"worrying about the dark side of artificial intelligence is like worrying about overpopulation on Mars\".\n\n\nI don't think the argument against focusing on AGI works because\n\n\n1. some MIRI research, like on decision theory, is \"timeless\" (pun intended) and can be fruitfully started now\n2. beginning the discussion early is important for ensuring that safety issues will be explored when the field is more mature\n3. I might be wrong about slow takeoff, in which case MIRI-style work would be more important.\n\n\nStill, this point does cast doubt on heuristics like \"directly shaping AGI dominates all other considerations.\" It also means that a lot of the ways \"AI safety\" will play out on shorter timescales will be with issues like assassination drones, computer security, financial meltdowns, and other more mundane, catastrophic-but-not-extinction-level events.\n\n\n\nImportance of whole-brain emulation\n-----------------------------------\n\n\nI don't currently know enough about the technological details of whole-brain emulation to competently assess predictions that have been made about its arrival dates. In general, I think prediction dates are too optimistic (planning fallacy), but it still could be that human-level emulation comes before from-scratch human-level AIs do. Of course, perhaps there would be [some mix](https://en.wikipedia.org/wiki/Exocortex) of both technologies. For instance, if crude brain emulations didn't reproduce all the functionality of actual human brains due to neglecting some cellular and molecular details, perhaps from-scratch AI techniques could help fill in the gaps.\n\n\nIf emulations are likely to come first, they may deserve more attention than other forms of AI. In the long run, bottom-up AI will dominate everything else, because human brains -- even run at high speeds -- are only so smart. 
But a society of brain emulations would run vastly faster than what biological humans could keep up with, so the details of shaping AI would be left up to them, and our main influence would come through shaping the emulations. Our influence on emulations could matter a lot, not only in nudging the dynamics of how emulations take off but also because the [values of the emulation society](http://hanson.gmu.edu/uploads.html) might depend significantly on who was chosen to be uploaded.\n\n\nOne argument why emulations might improve human ability to control AI is that both emulations and the AIs they would create would be digital minds, so the emulations' AI creations wouldn't have inherent speed advantages purely due to the greater efficiency of digital computation. Emulations' AI creations might still have more efficient mind architectures or better learning algorithms, but building those would take work. The \"for free\" speedup to AIs just because of their substrate would not give AIs a net advantage over emulations. Bostrom feels \"This consideration is not too weighty\" (p. 244 of *Superintelligence*) because emulations might still be far less intelligent than AGI. I find this claim strange, since it seems to me that the main advantage of AGI in the short run would be its speed rather than qualitative intelligence, which would take (subjective) time and effort to develop.\n\n\nBostrom also claims that if emulations come first, we would face risks from two transitions (humans to emulations, and emulations to AI) rather than one (humans to AI). There may be some validity to this, but it also seems to neglect the realization that the \"AI\" transition has many stages, and it's possible that emulation development would overlap with some of those stages. For instance, suppose the AI trajectory moves from AI1->AI2->AI3. If emulations are as fast and smart as AI1, then the transition to AI1 is not a major risk for emulations, while it would be a big risk for humans. This is the same point as made in the previous paragraph.\n\n\n\"[Emulation timelines and AI risk](https://en.wikipedia.org/wiki/Mind_uploading#Emulation_timelines_and_AI_risk)\" has further discussion of the interaction between emulations and control of AIs.\n\n\n\nWhy work against brain-emulation risks appeals to suffering reducers\n--------------------------------------------------------------------\n\n\n[Previously](http://www.utilitarian-essays.com/robots-ai-intelligence-explosion.html#more-suffering) in this piece I compared the expected suffering that would result from a rogue AI vs. a human-inspired AI. I suggested that while a first-guess calculation may tip in favor of a human-inspired AI on balance, this conclusion is not clear and could change with further information, especially if we had reason to think that many rogue AIs would be \"minimizers\" of something or would not colonize space.\n\n\nIn the case of brain emulations (and other highly neuromorphic AIs), we already know a lot about what those agents would look like: They would have both maximization and minimization goals, would usually want to colonize space, and might have some human-type moral sympathies (depending on their edit distance relative to a pure brain upload). The possibilities of pure-minimizer emulations or emulations that don't want to colonize space are mostly ruled out. 
As a result, it's pretty likely that \"unsafe\" brain emulations and emulation arms-race dynamics would result in more expected suffering than a more deliberative future trajectory in which altruists have a bigger influence, even if those altruists don't place particular importance on reducing suffering. This is especially so if the risk of human extinction is much lower for emulations, given that bio and nuclear risks might be less damaging to digital minds.[10](#link_ajs-fn-id_10-33)\n\n\nThus, the types of interventions that pure suffering reducers would advocate with respect to brain emulations might largely match those that altruists who care about other values would advocate. This means that getting more people interested in making the brain-emulation transition [safer](https://en.wikipedia.org/wiki/Mind_uploading#Political_and_economic_implications) and [more humane](https://en.wikipedia.org/wiki/Mind_uploading#Ethical_and_legal_implications) seems like a safe bet for suffering reducers.\n\n\nOne might wonder whether \"unsafe\" brain emulations would be more likely to produce rogue AIs, but this doesn't seem to be the case, because even unfriendly brain emulations would collectively be amazingly smart and would want to preserve their own goals. Hence they would place as much emphasis on controlling their AIs as would a more human-friendly emulation world. A main exception to this is that a more cooperative, unified emulation world might be less likely to produce rogue AIs because of less pressure for arms races.\n\n\nWould emulation work accelerate neuromorphic AI?\n------------------------------------------------\n\n\nIn Ch. 2 of *Superintelligence*, Bostrom makes a convincing case against brain-computer interfaces as an easy route to significantly super-human performance. One of his points is that it's very hard to decode neural signals in one brain and reinterpret them in software or in another brain (pp. 46-47). This might be an AI-complete problem.\n\n\nBut then in Ch. 11, Bostrom goes on to suggest that emulations might learn to decompose themselves into different modules that could be interfaced together (p. 172). While possible in principle, I find such a scenario implausible for the reason Bostrom outlined in Ch. 2: There would be so many neural signals to hook up to the right places, which would be different across different brains, that the task seems hopelessly complicated to me. Much easier to build something from scratch.\n\n\nAlong the same lines, I doubt that brain emulation in itself would vastly accelerate neuromorphic AI, because emulation work is mostly about copying without insight. *Cognitive psychology* is often more informative about AI architectures than cellular neuroscience, because general psychological systems can be understood in functional terms as inspiration for AI designs, compared with the opacity of neuronal spaghetti. In Bostrom's list of examples of AI techniques inspired by biology (Ch. 14, \"Technology couplings\"), only a few came from neuroscience specifically. That said, emulation work might involve some cross-pollination with AI, and in any case, it might accelerate interest in brain/artificial intelligence more generally or might put pressure on AI groups to move ahead faster. Or it could funnel resources and scientists away from *de novo* AI work. 
The upshot isn't obvious.\n\n\nA \"[Singularity Summit 2011 Workshop Report](https://intelligence.org/files/SS11Workshop.pdf)\" includes the argument that neuromorphic AI should be easier than brain emulation because \"Merely reverse-engineering the Microsoft Windows code base is hard, so reverse-engineering the brain is probably much harder.\" But emulation is not reverse-engineering. As Robin Hanson [explains](http://hanson.gmu.edu/uploads.html \"\\\"If Uploads Come First\\\"\"), brain emulation is more akin to [porting](https://en.wikipedia.org/wiki/Porting) software (though probably \"emulation\" actually is the more precise word, since emulation [involves](http://jpc.sourceforge.net/oldsite/Emulation.html \"\\\"What is Virtualization and Emulation?\\\"\") simulating the original hardware). While I don't know any fully reverse-engineered versions of Windows, there are several Windows [emulators](https://en.wikipedia.org/wiki/Emulator), such as [VirtualBox](https://en.wikipedia.org/wiki/VirtualBox).\n\n\nOf course, if emulations emerged, their significantly faster rates of thinking would multiply progress on non-emulation AGI by orders of magnitude. Getting safe emulations doesn't by itself get safe *de novo* AGI because the problem is just pushed a step back, but we could leave AGI work up to the vastly faster emulations. Thus, for biological humans, if emulations come first, then influencing their development is the last thing we ever need to do. That said, thinking several steps ahead about what kinds of AGIs emulations are likely to produce is an essential part of influencing emulation development in better directions.\n\n\nAre neuromorphic or mathematical AIs more controllable?\n-------------------------------------------------------\n\n\nArguments for mathematical AIs:\n\n\n* Behavior and goals are more transparent, and goal preservation seems easier to specify (see \"[The Ethics of Artificial Intelligence](http://www.nickbostrom.com/ethics/artificial-intelligence.pdf)\" by Bostrom and Yudkowsky, p. 16).\n* Neuromorphic AIs might speed up mathematical AI, leaving less time to figure out control.\n\n\nArguments for neuromorphic AIs:\n\n\n* We understand human psychology, expectations, norms, and patterns of behavior. Mathematical AIs could be totally alien and hence unpredictable.\n* If neuromorphic AIs came first, they could think faster and help figure out goal preservation, which I assume does require mathematical AIs at the end of the day.\n* Mathematical AIs may be more prone to unexpected breakthroughs that yield radical jumps in intelligence.\n\n\nIn the limit of very human-like neuromorphic AIs, we face similar considerations as between emulations vs. from-scratch AIs -- a tradeoff which is not at all obvious.\n\n\nOverall, I think mathematical AI has a better best case but also a worse worst case than neuromorphic. If you really want goal preservation and think goal drift would make the future worthless, you might lean more towards mathematical AI because it's more likely to perfect goal preservation. But I probably care less about goal preservation and more about avoiding terrible outcomes.\n\n\nIn *Superintelligence* (Ch. 14), Bostrom comes down strongly in favor of mathematical AI being safer. I'm puzzled by his high degree of confidence here. Bostrom claims that unlike emulations, neuromorphic AIs wouldn't have human motivations by default. 
But this seems to depend on how human motivations are encoded and what parts of human brains are modeled in the AIs.\n\n\nIn contrast to Bostrom, a 2011 Singularity Summit workshop [ranked](https://intelligence.org/files/SS11Workshop.pdf \"\\\"They agreed, however, that Friendly AI is the safest form of AGI if it is possible, that WBE is the next-safest, that neuromorphic (neuroscience-inspired) AI is the next safest after that, and that non-brain-inspired ('de novo') AI is the least safe (apart from Friendly AI).\\\"\") neuromorphic AI as more controllable than (non-friendly) mathematical AI, though of course they found friendly mathematical AI most controllable. The workshop's aggregated probability of a good outcome given brain emulation or neuromorphic AI turned out to be the same (14%) as that for mathematical AI (which might be either friendly or unfriendly).\n\n\nImpacts of empathy for AIs\n--------------------------\n\n\nAs I noted above, advanced AIs will be complex agents with their own goals and values, and these will matter ethically. Parallel to discussions of [robot rebellion](https://en.wikipedia.org/wiki/Cybernetic_revolt) in science fiction are discussions of [robot rights](https://en.wikipedia.org/wiki/Roboethics). I think [even present-day computers](http://reducing-suffering.org/why-your-laptop-may-be-marginally-sentient/) deserve a tiny bit of moral concern, and complex computers of the future will command even more ethical consideration.\n\n\nHow might ethical concern for machines interact with control measures for machines?\n\n\n### Slower AGI development?\n\n\nAs more people grant moral status to AIs, there will likely be more scrutiny of AI research, analogous to how animal activists in the present monitor animal testing. This may make AI research slightly [more difficult](https://web.archive.org/web/20160805141040/http://www.androidscience.com/proceedings2005/CalverleyCogSci2005AS.pdf \"David J. Calverley discusses how concern for androids may curtail their development in the \\\"Conclusion\\\" of \\\"Android Science and the Animal Rights Movement: Are There Analogies?\\\"\") and may distort what kinds of AIs are built depending on the degree of empathy people have for different types of AIs. For instance, if few people care about invisible, non-embodied systems, researchers who build these will face less opposition than those who pioneer suffering robots or animated characters that arouse greater empathy. If this possibility materializes, it would contradict present trends where it's often helpful to create at least a toy robot or animated interface in order to \"sell\" your research to grant-makers and the public.\n\n\nSince it seems likely that reducing the pace of progress toward AGI is on balance beneficial, a slowdown due to ethical constraints may be welcome. Of course, depending on the details, the effect could be harmful. For instance, perhaps China wouldn't have many ethical constraints, so ethical restrictions in the West might slightly favor AGI development by China and other less democratic countries. (This is not guaranteed. For what it's worth, China has already [made strides](https://en.wikipedia.org/wiki/Animal_welfare_and_rights_in_China#Animal_testing) toward reducing animal testing.)\n\n\nIn any case, I expect ethical restrictions on AI development to be small or nonexistent until many decades from now when AIs develop perhaps mammal-level intelligence. So maybe such restrictions won't have a big impact on AGI progress. 
Moreover, it may be that most AGIs will be sufficiently alien that they won't arouse much human sympathy.\n\n\nBrain emulations seem more likely to raise ethical debate because it's much easier to argue for their personhood. If we think brain emulation coming before AGI is good, a slowdown of emulations could be unfortunate, while if we want AGI to come first, a slowdown of emulations should be encouraged.\n\n\nOf course, emulations and AGIs do actually matter and deserve rights in principle. Moreover, movements to extend rights to machines in the near term may have long-term impacts on how much post-humans care about [suffering subroutines](https://longtermrisk.org/publications/a-dialogue-on-suffering-subroutines/) run at galactic scale. I'm just pointing out here that ethical concern for AGIs and emulations also may somewhat affect timing of these technologies.\n\n\n### Attitudes toward AGI control\n\n\nMost humans have no qualms about shutting down and rewriting programs that don't work as intended, but many do strongly object to killing people with disabilities and designing better-performing babies. Where to draw a line between these cases is a tough question, but as AGIs become more animal-like, there may be increasing moral outrage at shutting them down and tinkering with them willy-nilly.\n\n\nNikola Danaylov [asked](https://www.youtube.com/watch?v=LLQIxG9cLG0&t=31m30s \"\\\"Roman Yampolskiy on Singularity 1 on 1: Every Technology Has Both Negative and Positive Effects!\\\"\") Roman Yampolskiy whether it was speciesist or discrimination in favor of biological beings to [lock up machines and observe them](https://www.youtube.com/watch?v=LLQIxG9cLG0&t=29m10s \"\\\"Roman Yampolskiy on Singularity 1 on 1: Every Technology Has Both Negative and Positive Effects!\\\"\") to ensure their safety before letting them loose.\n\n\nAt a [lecture](http://www.c-span.org/video/?321534-1/book-discussion-superintelligence \"\\\"Book Discussion on Superintelligence\\\", 12 Sep. 2014; see ~1 hour 22 mins 50 seconds\") in Berkeley, CA, Nick Bostrom was asked whether it's unethical to \"chain\" AIs by forcing them to have the values we want. Bostrom replied that we have to give machines *some* values, so they may as well align with ours. I suspect most people would agree with this, but the question becomes trickier when we consider turning off erroneous AGIs that we've already created because they don't behave how we want them to. A few hard-core AGI-rights advocates might raise concerns here. More generally, there's a segment of transhumanists (including [young Eliezer Yudkowsky](http://hanson.gmu.edu/vc.html#yudkowsky \"\\\"Our world is too deeply grounded in stupidity to survive superintelligence. We may make it to the Other Side of Dawn, but human civilization won't. Our bodies, our stupidity, our physics, and our society will evaporate. [...] 'Is the Singularity a good thing?' Answer: 'Yes.'\\\"\")) who feel that human concerns are overly parochial and that it's chauvinist to impose our \"[monkey dreams](http://hunch.net/?p=1053&cpage=1#comment-305948 \"\\\"'To conclude, all this reflections are human reflections, and I won’t be surprised if actual AI whenever it will appear will have nothing in common with all human.' Exactly, all these 'concerns' about bad/good AI are anthropomorphic projections, I am baffled that the Singularitarians fail to see this because one of their articles of faith is inscrutability of anything beyond the Singularity Horizon.
Yet they keep wallowing in such monkey dreams.\\\"\")\" on an AGI, which is the next stage of evolution.\n\n\nThe question is similar to whether one sympathizes with the Native Americans (humans) or their European conquerors (rogue AGIs). Before the second half of the 20th century, many history books glorified the winners (Europeans). After a brief period in which humans are quashed by a rogue AGI, its own \"history books\" will celebrate its conquest and the bending of the arc of history toward \"higher\", \"better\" forms of intelligence. (In practice, the psychology of a rogue AGI probably wouldn't be sufficiently similar to human psychology for these statements to apply literally, but they would be true in a metaphorical and implicit sense.)\n\n\nDavid Althaus worries that if people sympathize too much with machines, society will be less afraid of an AI takeover, even if AI takeover is bad on purely altruistic grounds. I'm less concerned about this because even if people agree that advanced machines are sentient, they would still find it intolerable for AGIs to commit speciecide against humanity. Everyone agrees that Hitler was sentient, after all. Also, if it turns out that rogue-AI takeover is altruistically desirable, it would be better if more people agreed with this, though I expect an extremely tiny fraction of the population would ever come around to such a position.\n\n\nWhere sympathy for AGIs might have more impact is in cases of softer takeoff where AGIs work in the human economy and acquire increasing shares of wealth. The more humans care about AGIs for their own sakes, the more such transitions might be tolerated. Or would they? Maybe seeing AGIs as more human-like would evoke the xenophobia and ethnic hatred that we've seen throughout history whenever a group of people gains wealth (e.g., Jews in Medieval Europe) or steals jobs (e.g., immigrants of various types throughout history).\n\n\nPersonally, I think greater sympathy for AGI is likely net positive because it may help allay anti-alien prejudices that may make cooperation with AGIs harder. When a *Homo sapiens* tribe confronts an outgroup, often it reacts violently in an effort to destroy the evil foreigners. If instead humans could cooperate with their emerging AGI brethren, better outcomes would likely follow.\n\n\n\nCharities working on this issue\n-------------------------------\n\n\nWhat are some places where donors can contribute to make a difference on AI? The [Foundational Research Institute](https://longtermrisk.org/) (FRI) explores questions like these, though at the moment the organization is rather small. [MIRI](http://intelligence.org/) is larger and has a longer track record. Its values are more conventional, but it recognizes the importance of positive-sum opportunities to help many values systems, which includes suffering reduction. More [reflection](http://utilitarian-essays.com/differential-intellectual-progress.html) on these topics can potentially reduce suffering and further goals like eudaimonia, fun, and interesting complexity at the same time.\n\n\nBecause AI is affected by many sectors of society, these problems can be tackled from diverse angles. 
Many groups besides FRI and MIRI examine important topics as well, and these organizations should be explored further as potential charity recommendations.\n\n\nIs MIRI's work too theoretical?\n-------------------------------\n\n\n*Note: This section was mostly written in late 2014 / early 2015, and not everything said here is fully up-to-date.*\n\n\nMost of MIRI's publications since roughly 2012 have focused on formal mathematics, such as logic and provability. These are tools not normally used in AGI research. I think MIRI's motivations for this theoretical focus are\n\n\n1. Pessimism about the problem difficulty: Luke Muehlhauser [writes](http://intelligence.org/2013/10/03/proofs/ \"\\\"Mathematical Proofs Improve But Don’t Guarantee Security, Safety, and Friendliness\\\"\") that \"Especially for something as complex as Friendly AI, our message is: 'If we prove it correct, it *might* work. If we *don’t* prove it correct, it *definitely* won’t work.'\"\n2. Not speeding unsafe AGI: Building real-world systems would contribute toward non-safe AGI research.\n3. Long-term focus: MIRI doesn't just want a system that's the next level better but aims to explore the theoretical limits of possibilities.\n\n\nI personally think reason #3 is most compelling. I doubt #2 is hugely important given MIRI's small size, though it matters to some degree. #1 seems a reasonable strategy in moderation, though I favor approaches that look decently likely to yield non-terrible outcomes rather than shooting for the absolute best outcomes.\n\n\nSoftware [can be](https://en.wikipedia.org/wiki/Formal_verification) proved [correct](https://en.wikipedia.org/wiki/Correctness_(computer_science)), and sometimes this is done for mission-critical components, but most software is not validated. I suspect that AGI will be sufficiently big and complicated that proving safety will be impossible for humans to do completely, though I don't rule out the possibility of software that would help with correctness proofs on large systems. Muehlhauser and comments on [his post](http://intelligence.org/2013/10/03/proofs/ \"\\\"Mathematical Proofs Improve But Don’t Guarantee Security, Safety, and Friendliness\\\"\") largely agree with this.\n\n\nWhat kind of track record does theoretical mathematical research have for practical impact? There are certainly several domains that come to mind, such as the following.\n\n\n* Auction game theory has made governments [billions of dollars](http://news.stanford.edu/news/2014/july/golden-goose-economists-071814.html \"\\\"Stanford economists among Golden Goose winners\\\": \\\"As a result, the FCC has conducted more than 87 spectrum auctions and has raised over $60 billion for the federal government, while also providing a diverse offering of wireless communication services to the public.\\\"\") and is widely used in Internet advertising.\n* Theoretical physics has led to numerous forms of technology, including electricity, lasers, and atomic bombs. However, immediate technological implications of the most theoretical forms of physics (string theory, Higgs boson, black holes, etc.) are less pronounced.\n* Formalizations of many areas of computer science have helped guide practical implementations, such as in algorithm complexity, concurrency, distributed systems, cryptography, hardware verification, and so on. That said, there are also areas of theoretical computer science that have little immediate application. 
Most software engineers only know a little bit about more abstract theory and still do fine building systems, although if no one knew theory well enough to design theory-based tools, the software field would be in considerably worse shape.\n\n\nAll told, I think it's important for someone to do the kinds of investigation that MIRI is undertaking. I personally would probably invest more resources than MIRI does in hacky, approximate solutions to AGI safety that don't make such strong assumptions about the theoretical cleanliness and soundness of the agents in question. But I expect this kind of less perfectionist work on AGI control will increase as more people become interested in AGI safety.\n\n\nThere does seem to be a significant divide between the math-oriented conception of AGI and the engineering/neuroscience conception. Ben Goertzel [takes](http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html \"\\\"The Singularity Institute's Scary Idea (and Why I Don't Buy It)\\\"\") the latter stance:\n\n\n\n> I strongly suspect that to achieve high levels of general intelligence using realistically limited computational resources, one is going to need to build systems with a nontrivial degree of fundamental unpredictability to them. This is what neuroscience suggests, it's what my concrete AGI design work suggests, and it's what my theoretical work on [GOLEM](http://goertzel.org/GOLEM.pdf \"\\\"GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement\\\"\") and related ideas suggests. And none of the public output of SIAI researchers or enthusiasts has given me any reason to believe otherwise, yet.\n> \n> \n\n\nPersonally I think Goertzel is more likely to be right on this particular question. Those who view AGI as fundamentally complex have more concrete results to show, and their approach is far more mainstream among computer scientists and neuroscientists. Of course, proofs about theoretical models like Turing machines and lambda calculus are also mainstream, and few can dispute their importance. But Turing-machine theorems do little to constrain our understanding of what AGI will actually look like in the next few centuries. That said, there's significant peer disagreement on this topic, so epistemic modesty is warranted. In addition, *if* the MIRI view is right, we might have more scope to make an impact on AGI safety, and it would be possible that important discoveries could result from a few mathematical insights rather than lots of detailed engineering work. Also, most AGI research is more engineering-oriented, so MIRI's distinctive focus on theory, especially abstract topics like decision theory, may target an underfunded portion of the space of AGI-safety research.\n\n\nIn \"[How to Study Unsafe AGI's safely (and why we might have no choice)](http://lesswrong.com/lw/ju8/how_to_study_unsafe_agis_safely_and_why_we_might/),\" Punoxysm makes several points that I agree with, including that AGI research is likely to yield many false starts before something self-sustaining takes off, and those false starts could afford us the opportunity to learn about AGI experimentally.
Moreover, this kind of ad-hoc, empirical work may be necessary if, as seems to me probable, fully rigorous mathematical models of safety aren't sufficiently advanced by the time AGI arrives.\n\n\nBen Goertzel likewise [suggests](http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html \"\\\"The Singularity Institute's Scary Idea (and Why I Don't Buy It)\\\"\") that a fruitful way to approach AGI control is to study small systems and \"in the usual manner of science, attempt to arrive at a solid theory of AGI intelligence and ethics based on a combination of conceptual and experimental-data considerations\". He considers this view the norm among \"most AI researchers or futurists\". I think empirical investigation of how AGIs behave is very useful, but we also have to remember that many AI scientists are overly biased toward \"build first; ask questions later\" because\n\n\n* building may be more fun and exciting than worrying about safety (Steven M. Bellovin [observed](http://www.nytimes.com/2014/09/26/technology/security-experts-expect-shellshock-software-bug-to-be-significant.html \"\\\"Security Experts Expect ‘Shellshock’ Software Bug in Bash to Be Significant\\\"\") with reference to open-source projects: \"Quality takes work, design, review and testing and those are not nearly as much fun as coding\".)\n* there's more incentive from commercial applications and government grants to build rather than introspect\n* scientists may want AGI sooner so that they personally or their children can reap its benefits.\n\n\nOn a personal level, I suggest that if you really like building systems rather than thinking about safety, you might do well to [earn to give](http://reducing-suffering.org/advice-students-earning-give/) in software and donate toward AGI-safety organizations.\n\n\n[Yudkowsky (2016b)](https://www.youtube.com/watch?v=EUjc1WuyPT8&t=59m27s \"'Eliezer Yudkowsky – AI Alignment: Why It's Hard, and Where to Start - YouTube' on the channel 'Machine Intelligence Research Institute'\") makes an interesting argument in reply to the idea of using empirical, messy approaches to AI safety: \"If you sort of wave your hands and say like, 'Well, maybe we can apply this machine-learning algorithm, that machine-learning algorithm, the result will be blah blah blah', no one can convince you that you're wrong. When you work with unbounded computing power, you can make the ideas simple enough that people can put them on whiteboards and go like 'Wrong!', and you have no choice but to agree. It's unpleasant, but it's one of the ways the field makes progress.\"\n\n\nNext steps\n----------\n\n\nHere are some rough suggestions for how I recommend proceeding on AGI issues and, in [brackets], roughly how long I expect each stage to take. Of course, the stages needn't be done in a strict serial order, and step 1 should continue indefinitely, as we continue learning more about AGI from subsequent steps.\n\n\n1. *Decide if we want human-controlled, goal-preserving AGI [5-10 years].* This involves exploring questions about [what types of AGI scenarios](https://longtermrisk.org/open-research-questions/#AI_takeoff_scenarios) might unfold and [how much suffering](https://longtermrisk.org/open-research-questions/#Suffering_from_controlled_vs_uncontrolled_artificial_intelligence) would result from AGIs of various types.\n2. 
*Assuming we decide we do want controlled AGI: Network with academics and AGI developers to raise the topic and canvass ideas [5-10 years].* We could reach out to academic AGI-like projects, including [these](https://sites.google.com/site/narswang/home/agi-introduction#TOC-Representative-AGI-Projects) listed by Pei Wang and [these](https://en.wikipedia.org/wiki/List_of_artificial_intelligence_projects#Cognitive_architectures) listed on Wikipedia, as well as to [machine ethics](https://en.wikipedia.org/wiki/Machine_ethics) and [roboethics](https://en.wikipedia.org/wiki/Roboethics) communities. There are already some discussions about safety issues among these groups, but I would expand the dialogue, have private conversations, write publications, hold conferences, etc. These efforts both inform us about the lay of the land and build connections in a friendly, mutualistic way.\n3. *Lobby for greater funding of research into AGI safety [10-20 years].* Once the idea and field of AGI safety have become more mainstream, it should be possible to differentially speed up safety research by getting more funding for it -- both from governments and philanthropists. This is already somewhat feasible; [for instance](https://en.wikipedia.org/wiki/Machine_ethics#History): \"In 2014, the US Office of Naval Research announced that it would distribute $7.5 million in grants over five years to university researchers to study questions of machine ethics as applied to autonomous robots.\"\n4. *The movement snowballs [decades].* It's hard to plan this far ahead, but I imagine that eventually (within 25-50 years?) AGI safety will become a mainstream political topic in a similar way as nuclear security is today. Governments may take over in driving the work, perhaps with heavy involvement from companies like Google. This is just a prediction, and the actual way things unfold could be different.\n\n\nI recommend avoiding a confrontational approach with AGI developers. I would not try to lobby for restrictions on their research (in the short term at least), nor try to \"slow them down\" in other ways. AGI developers are the allies we need most at this stage, and most of them don't want uncontrolled AGI either. Typically they just don't see their work as risky, and I agree that at this point, no AGI project looks set to unveil something dangerous in the next decade or two. For many researchers, AGI is a dream they can't help but pursue. Hopefully we can engender a similar enthusiasm about pursuing AGI safety.\n\n\nIn the longer term, tides may change, and perhaps many AGI developers will desire government-imposed restrictions as their technologies become increasingly powerful. Even then, I'm doubtful that governments will be able to completely control AGI development (see, e.g., the [criticisms](http://www.law.northwestern.edu/LAWREVIEW/Colloquy/2010/12/) by John McGinnis of this approach), so differentially pushing for more safety work may continue to be the most leveraged solution. History provides a poor track record of governments refraining from developing technologies due to ethical concerns; [Eckersley and Sandberg](http://www.degruyter.com/view/j/jagi.2013.4.issue-3/jagi-2013-0011/jagi-2013-0011.xml) (p. 
187) cite \"human cloning and land-based autonomous robotic weapons\" as two of the few exceptions, with neither prohibition having a long track record.\n\n\nI think the main way in which we should try to affect the speed of regular AGI work is by aiming to avoid setting off an AGI arms race, either via an AGI Sputnik moment or else by more gradual diffusion of alarm among world militaries. It's [possible](https://longtermrisk.org/publications/international-cooperation-vs-ai-arms-race/#Should_we_publicize_AI_arms_races) that discussing AGI scenarios too much with military leaders could exacerbate a militarized reaction. If militaries set their sights on AGI the way the US and Soviet Union did on the space race or nuclear-arms race during the Cold War, the amount of funding for unsafe AGI research might multiply by a factor of 10 or maybe 100, and it would be aimed in harmful directions.\n\n\nWhere to push for maximal impact?\n---------------------------------\n\n\nHere are some candidates for the best object-level projects that altruists could work on with reference to AI. Because AI seems so crucial, these are also candidates for the best object-level projects in general. Meta-level projects like movement-building, career advice, earning to give, fundraising, etc. are also competitive. I've scored each project area out of 10 points to express a rough subjective guess of the value of the work for suffering reducers.\n\n\n**Research whether controlled or uncontrolled AI yields more suffering (score = 10/10)**\n\n\n* Pros:\n\t+ Figuring out which outcome is better should come before pushing ahead too far in any particular direction.\n\t+ This question remains non-obvious and so has very high expected value of information.\n\t+ None of the existing big names in AI safety have explored this question because reducing suffering is not the dominant priority for them.\n* Cons:\n\t+ None. :)\n\n\n**Push for suffering-focused AI-safety approaches (score = 10/10)**\n\n\nMost discussions of AI safety assume that human extinction and failure to spread (human-type) eudaimonia are the main costs of takeover by uncontrolled AI. But as noted in this piece, AIs would also spread astronomical amounts of suffering. Currently no organization besides FRI is focused on how to do AI safety work with the primary aim of avoiding outcomes containing huge amounts of suffering.\n\n\nOne example of a suffering-focused AI-safety approach is to design AIs so that even if they do get out of control, they \"fail safe\" in the sense of not spreading massive amounts of suffering into the cosmos. For example:\n\n\n1. AIs should be inhibited from colonizing space, or if they do colonize space, they should do so in less harmful ways.\n2. \"Minimizer\" utility functions have less risk of [creating new universes](http://reducing-suffering.org/lab-universes-creating-infinite-suffering/) than \"maximizer\" ones do.\n3. Simpler utility functions (e.g., creating uniform paperclips) might require fewer suffering subroutines than complex utility functions would.\n4. AIs with expensive intrinsic values (e.g., maximize paperclips) may run fewer complex minds than AIs with cheaper values (e.g., create at least one paperclip on each planet), because AIs with cheaper values have lower opportunity cost for using resources and so can expend more of their cosmic endowment on learning about the universe to make sure they've accomplished their goals properly. (Thanks to a friend for this point.) 
From this standpoint, suffering reducers might prefer an AI that aims to \"maximize paperclips\" over one that aims to \"make sure there's at least one paperclip per planet.\" However, perhaps the paperclip maximizer would prefer to create new universes, while the \"at least one paperclip per planet\" AI wouldn't; indeed, the \"one paperclip per planet\" AI might prefer to have a smaller multiverse so that there would be fewer planets that don't contain paperclips. Also, the satisficing AI would be easier to compromise with than the maximizing AI, since the satisficer's goals could be carried out more cheaply. There are other possibilities to consider as well. Maybe an AI with the instructions to \"be 70% sure of having made one paperclip and then shut down all of your space-colonization plans\" would not create much suffering (depending on how scrupulous the AI was about making sure that what it had created was really a paperclip, that it understood physics properly, etc.).\n\n\nThe problem with bullet #1 is that *if* you can succeed in preventing AGIs from colonizing space, it seems like you should already have been able to control the AGI altogether, since the two problems appear about equally hard. But maybe there are clever ideas we haven't thought of for reducing the spread of suffering even if humans lose total control.\n\n\nAnother challenge is that those who don't place priority on reducing suffering may not agree with these proposals. For example, I would guess that most AI scientists would say, \"If the AGI kills humans, at least we should ensure that it spreads life into space, creates a complex array of intricate structures, and increases the size of our multiverse.\"\n\n\n**Work on AI control and value-loading problems (score = 4/10)**\n\n\n* Pros:\n\t+ At present, controlled AI [seems](#more-suffering) more likely good than bad.\n\t+ [Relatively little](http://www.ft.com/intl/cms/s/2/abc942cc-5fb3-11e4-8c27-00144feabdc0.html \"\\\"Besides Soares, there are probably only four computer scientists in the world currently working on how to programme the super-smart machines of the not-too-distant future to make sure AI remains 'friendly', says Luke Muehlhauser, Miri’s director.\\\"\") work thus far, so marginal effort may make a big impact.\n* Cons:\n\t+ It may turn out that AI control increases net expected suffering.\n\t+ This topic may become a massive area of investment in coming decades, because everyone should theoretically care about it. Maybe there's more leverage in pushing on neglected areas of particular concern for suffering reduction.\n\n\n**Research technological/economic/political dynamics of an AI takeoff and push in better directions (score = 3/10)**\n\n\nBy this I have in mind scenarios like those of Robin Hanson for emulation takeoff, or Bostrom's \"[The Future of Human Evolution](http://www.nickbostrom.com/fut/evolution.html)\".\n\n\n* Pros:\n\t+ Many scenarios have not been mapped out. There's a need to introduce economic/social realism to AI scenarios, which at present often focus on technical challenges and idealized systems.\n\t+ Potential to steer dynamics in more win-win directions.\n* Cons:\n\t+ Broad subject area. 
Work may be somewhat replaceable as other researchers get on board in the coming decades.\n\t+ More people have their eyes on general economic/social trends than on specific AI technicalities, so there may be lower marginal returns to additional work in this area.\n\t+ While technological progress is probably the biggest influence on history, it's also one of the more inevitable influences, making it unclear how much we can affect it. Our main impact on it would seem to come through differential technological progress. In contrast, values, institutions, and social movements can go in many different directions depending on our choices.\n\n\n**Promote the ideal of cooperation on AI values (e.g., [CEV](http://intelligence.org/files/CEV.pdf)) (score = 2/10)**\n\n\n* Pros:\n\t+ Whereas technical work on AI safety is of interest to and can be used by anyone -- including militaries and companies with non-altruistic aims -- promoting CEV is more important to altruists. I don't see CEV as a likely outcome even if AI is controlled, because it's more plausible that individuals and groups will push for their own agendas.\n* Cons:\n\t+ It's very hard to achieve CEV. It depends on a lot of really complex political and economic dynamics that millions of altruists are already working to improve.\n\t+ Promoting CEV as an ideal to approximate may be confused in people's minds with suggesting that CEV is likely to happen. The latter assumption is probably wrong and so may distort people's beliefs about other crucial questions. For instance, if CEV were likely, then it would be more likely that suffering reducers should favor controlled AI; but the fact of the matter is that anything more than crude approximations to CEV will probably not happen.\n\n\n**Promote a smoother, safer takeoff for brain emulation (score = 2/10)**\n\n\n* Pros:\n\t+ As [noted above](#wbe-and-suffering-reducers), it's more plausible that suffering reducers should favor emulation safety than AI safety.\n\t+ The topic seems less explored than safety of *de novo* AIs.\n* Cons:\n\t+ I find it slightly more likely that *de novo* AI will come first, in which case this work wouldn't be as relevant. In addition, AI may have more impacts on society even before it reaches the human level, again making work on AI slightly more relevant.\n\t+ Safety measures might require more political and less technical work, in which case they're more likely to be done correctly by policy makers in due time. The value-loading problem seems much easier for emulations because it might just work to upload people with good values, assuming no major value corruption during or after uploading.\n\t+ Emulation is more dependent on relatively straightforward engineering improvements and less on unpredictable insight than AI. Thus, it has a clearer development timeline, so there's less urgency to investigate issues ahead of time to prepare for an unexpected breakthrough.\n\n\n**Influence the moral values of those likely to control AI (score = 2/10)**\n\n\n* Pros:\n\t+ Altruists, and especially those with niche values, may want to push AI development in more compassionate directions. This could make sense because altruists are most interested in ethics, while even power-hungry states and money-hungry individuals should care about AI safety in the long run.\n* Cons:\n\t+ This strategy is less cooperative. It's akin to defecting in a tragedy of the commons -- pushing more for what you want rather than what everyone wants. 
If you do push for what everyone wants, then I would consider such work more like the \"Promote the ideal of cooperation\" item.\n\t+ Empirically, there isn't enough investment in other fundamental AI issues, and those may be more important than further engaging already well trodden ethical debates.\n\n\n**Promote a singleton over multipolar dynamics (score = 1/10)**\n\n\n* Pros:\n\t+ A singleton, whether controlled or uncontrolled, would reduce the risk of conflicts that cause cosmic damage.\n* Unclear:\n\t+ There are many ways to promote a singleton. Encouraging cooperation on AI development would improve pluralism and human control in the outcome. Faster development by the leading AI project might also increase the chance of a singleton while reducing the probability of human control of the outcome. Stronger government regulation, surveillance, and coordination would increase chances of a singleton, as would global cooperation.\n* Cons:\n\t+ Speeding up the leading AI project might exacerbate AI arms races. And in any event, it's currently far too early to predict what group will lead the AI race.\n\n\n**Other variations**\n\n\nIn general, there are several levers that we can pull on:\n\n\n* safety\n* arrival time relative to other technologies\n* influencing values\n* cooperation\n* shaping social dynamics\n* raising awareness\n* etc.\n\n\nThese can be applied to any of\n\n\n* *de novo* AI\n* brain emulation\n* other key technologies\n* etc.\n\n\nIs it valuable to work at or influence an AGI company?\n------------------------------------------------------\n\n\nProjects like [DeepMind](https://en.wikipedia.org/wiki/Google_DeepMind), [Vicarious](https://en.wikipedia.org/wiki/Vicarious_(company)), [OpenCog](https://en.wikipedia.org/wiki/OpenCog), and the AGI research teams at Google, Facebook, etc. are some of the leaders in AGI technology. Sometimes it's proposed that since these teams *might* ultimately develop AGI, altruists should consider working for, or at least lobbying, these companies so that they think more about AGI safety.\n\n\nOne's assessment of this proposal depends on one's view about AGI takeoff. My own opinion may be somewhat in the minority relative to [expert surveys](http://www.sophia.de/pdf/2014_PT-AI_polls.pdf \"'Future Progress in Artificial Intelligence: A Survey of Expert Opinion'\"), but I'd be surprised if we had human-level AGI before 50 years from now, and my median estimate might be like ~90 years from now. That said, the idea of AGI arriving at a single point in time is probably a wrong framing of the question. Already machines are super-human in some domains, while their abilities are far below humans' in other domains. Over the coming decades, we'll see lots of advancement in machine capabilities in various fields at various speeds, without any *single point* where machines suddenly develop human-level abilities across all domains. Gradual AI progress over the coming decades will radically transform society, resulting in many small \"intelligence explosions\" in various specific areas, long before machines completely surpass humans overall.\n\n\nIn light of my picture of AGI, I think of DeepMind, Vicarious, etc. as ripples in a long-term wave of increasing machine capabilities. It seems extremely unlikely that any one of these companies or its AGI system will bootstrap itself to world dominance on its own. Therefore, I think influencing these companies with an eye toward \"shaping the AGI that will take over the world\" is probably naive. 
That said, insofar as these companies will influence the long-term trajectory of AGI research, and insofar as people at these companies are important players in the AGI community, I think influencing them has value -- just not vastly more value than influencing other powerful people.\n\n\nStill, as [noted previously](#More_impact_in_hard-takeoff_scenarios), early work on AGI safety has the biggest payoff in scenarios where AGI takes off earlier and harder than people expected. If the marginal returns to additional safety research are many times higher in these \"early AGI\" scenarios, then it could still make sense to put some investment into them even if they seem very unlikely.\n\n\nShould suffering reducers focus on AGI safety?\n----------------------------------------------\n\n\nIf, upon further analysis, it looks like AGI safety would increase expected suffering, then the answer would be clear: Suffering reducers shouldn't contribute toward AGI safety and should worry somewhat about how their messages might incline others in that direction. However, I find it reasonably likely that suffering reducers will conclude that the benefits of AGI safety outweigh the risks. In that case, they would face a question of whether to push on AGI safety or on other projects that also seem valuable.\n\n\nReasons to focus on other projects:\n\n\n* There are several really smart people working on AGI safety right now. The number of brilliant altruists focused on AGI safety probably exceeds the number of brilliant altruists focused on reducing suffering in the far future by several times over. Thus, it seems plausible that there's more low-hanging fruit for suffering reducers in focusing on other crucial considerations than in delving into the technical details of implementing AGI safety.\n* I expect that AGI safety will require at least, say, thousands of researchers and hundreds of thousands of programmers to get right. AGI safety is a much harder problem than ordinary computer security, and computer security demand is already [very high](http://www.computerworld.com/article/2495985/it-careers/demand-for-it-security-experts-outstrips-supply.html \"\\\"Demand for IT security experts outstrips supply\\\"\"): \"In 2012, there were more than 67,400 separate postings for cybersecurity-related jobs in a range of industries\". Of course, the fact that AGI safety will eventually need tons of researchers needn't diminish the value of early work, and indeed, someone who helps grow the movement to a large size would contribute as much as many detail-oriented AGI safety researchers later.\n\n\nReasons to focus on AGI safety:\n\n\n* Most other major problems are also already being tackled by lots of smart people.\n* AGI safety is a cause that many value systems can get behind, so working on it can be seen as more \"nice\" than focusing on areas that are more specific to suffering-reduction values.\n\n\nAll told, I would probably pursue a mixed strategy: Work primarily on questions specific to suffering reduction, but direct donations and resources toward AGI safety when opportunities arise. Some suffering reducers particularly suited to work on AGI safety could go in that direction while others continue searching for points of leverage not specific to controlling AGI.\n\n\nAcknowledgments\n---------------\n\n\nParts of this piece were inspired by discussions with various people, including David Althaus, Daniel Dewey, and Caspar Oesterheld.\n\n\n\nFootnotes\n---------\n\n\n1. 
Stuart Armstrong [agrees](https://www.youtube.com/watch?v=i4LjoJGpqIY&t=34m18s \"\\\"Stuart Armstrong: The future is going to be wonderful if we don't get whacked\\\"\") that AIXI probably isn't a feasible approach to AGI, but he feels there might exist other, currently undiscovered mathematical insights like AIXI that could yield AGI in a very short time span. Maybe, though I think this is pretty unlikely. I suppose at least a few people should explore these scenarios, but plausibly most of the work should go toward pushing on the more likely outcomes.  [(back)](#back_ajs-fn-id_1-33)\n2. Marcus Hutter [imagines](https://www.youtube.com/watch?v=omG990F_ETY&t=8m1s \"\\\"Marcus Hutter - The Singularity, Can Intelligence Explode?\\\"\") a society of AIs that compete for computing resources in a similar way as animals compete for food and space. Or like corporations compete for employees and market share. He suggests that such competition might render initial conditions irrelevant. Maybe, but it's also quite plausible that initial conditions would matter a lot. Many evolutionary pathways depended sensitively on particular events -- e.g., asteroid impacts -- and the same is true for national, corporate, and memetic power.  [(back)](#back_ajs-fn-id_2-33)\n3. Another part of the answer has to do with incentive structures -- e.g., a founder has more incentive to make a company succeed if she's mainly paid in equity than if she's paid large salaries along the way.  [(back)](#back_ajs-fn-id_3-33)\n4. Or maybe more? Nikola Danaylov [reports](https://www.youtube.com/watch?v=AepOlhPdPfc&t=18m41s \"\\\"Sci Fi Roundtable: Greg Bear, Ramez Naam and William Hertling on the Singularity\\\"\") rumored estimates of $50-150 million for Watson's R&D.  [(back)](#back_ajs-fn-id_4-33)\n5. For Atari games, the current image on the screen is not all the information required, because, for example, you need to be able to tell whether a ball is moving toward you or away from you, and those two situations aren't distinguishable purely based on a static snapshot. Therefore, [Mnih et al. (2015)](http://doi.org/10.1038/nature14236 \"'Human-level control through deep reinforcement learning'\") used sequences of the past plus present screenshots and past actions as the state information (see \"Methods\" section). Still, all of this information was readily available and representable in a clean way.  [(back)](#back_ajs-fn-id_5-33)\n6. What if we take the set of actions to be outputting one letter at a time, rather than outputting a whole string of letters? That is, the set of actions is {a, b, c, ..., z, A, B, C, ..., Z}. This is fewer than the number of actions that AlphaGo considered at each step. The problem is that any given sequence of letters is very unlikely to achieve a desired outcome, so it will take forever to get meaningful feedback. For example, suppose the goal is, given an input question, to produce an answer that a human judge finds humorous. If the answer isn't humorous, the reinforcement learner gets no reward. The learner would output mostly garbage (say, \"klAfSFpqA\", \"QmpzRwWSa\", and so on). It would take forever for the agent to output intelligible speech and even longer for it to output humorous speech. (And good luck finding a human willing to give feedback for this long, or modeling a human's sense of humor well enough to provide accurate simulated feedback.) Of course, this system could be improved in various ways. For example, we could give it a dictionary and only let it output complete words. 
We could train an n-gram language model so that the agent would output mostly coherent speech. Perhaps a few other tricks could be applied so that the agent would be more likely to hit upon funny sentences. But by this point, we've turned a supposed general-intelligence problem into a narrow-intelligence problem by specifying lots of pre-configured domain knowledge and heuristics.  [(back)](#back_ajs-fn-id_6-33)\n7. I prefer to use the terminology \"controlled\" and \"uncontrolled\" AI because these seem most direct and least confusing. (These are short for \"human-controlled AI\" and \"human-uncontrolled AI\".)\nThe term \"friendly AI\" can be confusing because it involves normative judgments, and it's not clear if it means \"friendly to the interests of humanity's survival and flourishing\" or \"friendly to the goals of suffering reduction\" or something else. One might think that \"friendly AI\" just means \"AI that's friendly to your values\", in which case it would be trivial that friendly AI is a good thing (for you). But then the definition of friendly AI would vary from person to person.\n\n\n\"Aligned AI\" might be somewhat less value-laden than \"friendly AI\", but it still connotes to me a sense that there's a \"(morally) correct target\" that the AI is being aligned toward.\n\n\n\"Controlled AI\" is still somewhat ambiguous because it's unspecified which humans have control of the AI and what goals they're giving it, but the label works as a general category to designate \"AIs that are successfully controlled by some group of humans\". And I like that this category can include \"AIs controlled by evil humans\", since work to solve the AI control problem increases the probability that AIs will be controlled by evil humans as well as by \"good\" ones.  [(back)](#back_ajs-fn-id_7-33)\n8. Jan Leike pointed out to me that \"even if the universe cannot be approximated to an arbitrary precision by a computable function, Solomonoff induction might still converge. For example, suppose some physical constant is actually an incomputable real number and physical laws are continuous with respect to that parameter, this would be good enough to allow Solomonoff induction to learn to predict correctly.\" However, one can also contemplate hypotheses that would not even be well approximated by a computable function, such as an [actually infinite](https://en.wikipedia.org/wiki/Actual_infinity) universe that can't be adequately modeled by any finite approximation. Of course, it's [unclear whether we should believe](http://reducing-suffering.org/believe-infinity/ \"'Should We Believe in Infinity?'\") in speculative possibilities like this, but I wouldn't want to rule them out just because of the limitations of our AI framework. It may be hard to make sensible decisions using finite computing resources regarding uncomputable hypotheses, but maybe there are frameworks better than Solomonoff induction that could be employed to tackle the challenge.  [(back)](#back_ajs-fn-id_8-33)\n9. John Kubiatowicz notes that space-shuttle software is some of the best tested and yet [still has some bugs](http://www.cs.berkeley.edu/~kubitron/courses/cs162-F08/Lectures/lec06-synchronization.pdf \"\\\"Space Shuttle Example\\\", slide 20; Kubiatowicz mentions this point in the audio of the lecture\").  [(back)](#back_ajs-fn-id_9-33)\n10. Emulations wouldn't be hurt by engineered biological pathogens, and in the event of nuclear winter, emulations could still be powered by non-sun energy sources. 
However, maybe the risk of global digital pandemics in the form of virulent computer viruses would be non-trivial for emulations?  [(back)](#back_ajs-fn-id_10-33)
Collaborative game specification: arriving at common models in bargaining\n=========================================================================\n\n\n\n6 March 2021\nby [Jesse Clifton](https://longtermrisk.org/author/jesse-clifton/ \"Posts by Jesse Clifton\")\n\n Conflict is often an inefficient outcome to a bargaining problem. This is true in the sense that, for a given game-theoretic model of a strategic interaction, there is often some equilibrium in which all agents are better off than in the conflict outcome. But real-world agents may not make decisions according to game-theoretic models, and when they do, they may use different models. This makes it more difficult to guarantee that real-world agents will avoid bargaining failure than is suggested by the observation that conflict is often inefficient. \n\n\n In [another post](https://www.alignmentforum.org/posts/Tdu3tGT4i24qcLESh/equilibrium-and-prior-selection-problems-in-multipolar-1), I described the \"prior selection problem\", whereby different agents having different models of their situation can lead to bargaining failure. Moreover, techniques for addressing bargaining problems like [coordination on solution concepts](https://longtermrisk.org/files/stastny_et_al_implicit_bargaining.pdf) or [surrogate goals](https://longtermrisk.org/research-agenda#42_Surrogate_goals) / [safe Pareto improvements](https://users.cs.duke.edu/~ocaspar/SPI.pdf) seem to require agents to have a common, explicit game-theoretic model.\n\n\nIn this post, I introduce *collaborative game specification (CGS)*, a family of techniques designed to address the problem of agents lacking a shared model. In CGS, agents agree on a common model of their bargaining situation and use this to come to an agreement. Here is the basic idea:\n\n\n1. Two agents are playing an unknown game. They each have private models of this game. (These may be explicit models, as in model-based reinforcement learning, or models implicit in a black-box policy which can be [extracted](https://www.lesswrong.com/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment#The_core_problem__transparency).) By default, they will use these models to make a decision. The problem is that their models may differ, possibly leading to bad outcomes and precluding the use of bargaining protocols which require a shared, explicit model.\n2. Rather than using these default strategies, agents talk, agree on a common model, and use this model to reach an agreement.\n\n\nOf course, when agreeing on a common model, agents must handle incentives for their counterparts to deceive them. In the toy illustration below, we'll see how incentives to misrepresent one's model can be handled in a pure cheap-talk setting. \n\n\nHow might we use CGS to reduce the risks of conflict involving powerful AI systems? One use is to provide demonstrations of good bargaining behavior. 
Some approaches to AI development may involve training AI systems to imitate the behavior of some demonstrator (e.g., [imitative amplification](https://www.lesswrong.com/posts/33EKjmAdKFn3pbKPJ/outer-alignment-and-imitative-amplification)), and so we may need to be able to provide many demonstrations of good bargaining behavior to ensure that the resulting system is robustly able and motivated to bargain successfully. Another is to facilitate bargaining between humans with powerful AI tools, e.g. in a [comprehensive AI services](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf) scenario. \n\n\nAside from actually implementing CGS in AI systems, studying protocols of this kind can give us a better understanding of the limits on agents' ability to overcome differences in their private models. Under the simple version of CGS discussed here, because agents have to incentivize truth-telling by refusing to engage in CGS sometimes, agents will fail to agree on a common model with positive probability in equilibrium.  \n\n\nI will first give a toy example of CGS ([Section 1](#section-toy-experiment)), and then discuss how it might be implemented in practice ([Section 2](#section-implementation)). I close by discussing some potential problems and open questions for CGS ([Section 3](#section-questions)). In the [Appendix](#section-appendix), I discuss a game-theoretic formalism in which CGS can be given a more rigorous basis.\n\n\nContents\n\n* [1 Toy illustration](#1_Toy_illustration)\n* [2 Implementation](#2_Implementation)\n* [3 Questions and potential problems](#3_Questions_and_potential_problems)\n* [Appendix: Policy training and deployment as a Bayesian game](#Appendix_Policy_training_and_deployment_as_a_Bayesian_game)\n* [References](#References)\n\n1 Toy illustration\n==================\n\n\nFor the purposes of illustration, we'll focus on a pure cheap-talk setting, in which agents exchange unverifiable messages about their private models. Of course, it is all the better if agents can verify aspects of each other's private models. See Shin (1994) for a game-theoretic setting in which agents can verifiably disclose (parts of) their private beliefs. But we will focus on cheap talk here. A strategy for specifying a common model via cheap talk needs to handle incentives for agents to misrepresent their private models in order to improve their outcome in the resulting agreement. In particular, agents will need to follow a policy of refusing to engage in CGS if their counterpart reports a model that is too different from their own (and is therefore evidence that they are lying). This kind of strategy for incentivizing honesty in cheap-talk settings has been discussed in the game theory literature in other contexts (e.g., Gibbons 1988; Morrow 1994).\n\n\nFor simplicity, agents in this example will model their situation as a game of [complete information](https://en.wikipedia.org/wiki/Complete_information). That is, agents by default assume that there is no uncertainty about their counterpart's payoffs. CGS can also be used for games of incomplete information. In this case, agents would agree on a [Bayesian game](https://en.wikipedia.org/wiki/Bayesian_game) with which to model their interaction. 
This includes agreement on a common prior over the possible values of their private information.\n\n\nThe \"noisy Chicken\" game is displayed in Table 1.\n\n\n \n\n\n![Rendered by QuickLaTeX.com](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-38d958ef9c6c534860f0adba181505f9_l3.png \"Rendered by QuickLaTeX.com\")\n\n\nIn this game, both agents observe a random perturbation of the true payoff matrix of the game. Call agent ![i](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-8511b1f6cf9db17d46ddabb67bac99f5_l3.png \"Rendered by QuickLaTeX.com\")'s observation ![G_i](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-d5c2fb6fc1458f194b64e86bff7a1c07_l3.png \"Rendered by QuickLaTeX.com\"). This might be a world-model estimated from a large amount of data. The randomness in the agents' models can be interpreted as agents having different ways of estimating a model from data, yielding different estimates (perhaps even if they have access to the same dataset). While an agent with more computational resources might account for the fact that their counterpart might have a different model in a [fully Bayesian way](https://en.wikipedia.org/wiki/Bayesian_game), our agents are computationally limited and therefore can only apply relatively simple policies to estimated payoff matrices. However, their designers can simulate lots of training data, and thus construct strategies that implicitly account for the fact that other agents may have different model estimates, while not being too computationally demanding. CGS is an example of such a strategy.\n\n\nA policy will map observations ![G_i](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-d5c2fb6fc1458f194b64e86bff7a1c07_l3.png \"Rendered by QuickLaTeX.com\") to a probability distribution over ![\{ C, D \}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-8f6086dea04453c9d540994d97968d66_l3.png \"Rendered by QuickLaTeX.com\"). We assume the following about the agents' policies:\n\n\n* The agents have default policies ![\pi_i^d](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-5dd48cfbe70e89f16281ff56ef06d558_l3.png \"Rendered by QuickLaTeX.com\") which play according to the (utilitarian) welfare-optimal Nash equilibrium of their own observed games.\n* The agents can instead choose to engage in cheap talk. We will restrict their cheap talk policies to those which implement CGS.\n\t+ Each agent has a reporting policy that maps observations ![G_i](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-d5c2fb6fc1458f194b64e86bff7a1c07_l3.png \"Rendered by QuickLaTeX.com\") to reported observations ![\tilde{G}_i](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-84d1540fdcebaffbff5c0cf76a5a79c8_l3.png \"Rendered by QuickLaTeX.com\"). 
To keep things simple, these reporting policies only distort the observed value of agent ![1](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-21b5b4cbe9a10b6d847eeb4265b99898_l3.png \"Rendered by QuickLaTeX.com\")'s payoff at ![(D, C)](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-f6b67aa579376c5e9c5084ae971dc482_l3.png \"Rendered by QuickLaTeX.com\") by an amount ![\delta_i](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-60e6dd3e0f75f0eee5a1266f40223a7d_l3.png \"Rendered by QuickLaTeX.com\") in a direction that favors them;\n\t+ Each agent agrees to play according to a combined game if and only if ![\lvert \epsilon_1 + \delta_1 - \epsilon_2 - \delta_2 \rvert < 8 \sigma](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-fab5683678c3ac7f74f94d22c272d10a_l3.png \"Rendered by QuickLaTeX.com\"). This is to disincentivize their counterpart from reporting models that are too different from their own, and therefore likely to be distorted. (I chose 8 ![\sigma](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-218428ebbff86310fbdb1f7324215c46_l3.png \"Rendered by QuickLaTeX.com\") by fiddling around; in practice, the training regime would optimize over cutoff values, too.);\n\t+ If the agents agree to combine the reported games, they simply take the average of their reported payoff matrices and play the welfare-optimal Nash equilibrium of the resulting game.\n\n\n![Rendered by QuickLaTeX.com](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-c7a71ec131deaf63642e896e17cbf826_l3.png \"Rendered by QuickLaTeX.com\")\n\n\nNow, we imagine that the agents are AI systems, and the AI developers (\"principals\") have to decide what policy to give their agent. If their agent is going to use CGS, then they need to train it to use a distortion which is (in some sense) optimal. Thus I will consider the choice of policy on the part of the *principals* as a game, where the actions correspond to distortions to use in the distortion policy, and payoffs correspond to the average payoffs attained by the agents they deploy. Then I'll look at the equilibria of this game. This is of course a massive idealization: AI developers will not get together and choose agents whose policies are in equilibrium with respect to some utility functions. The point is only to illustrate how principals might rationally train agents to arrive at a common model under otherwise idealized circumstances.\n\n\nI ran 1000 replicates of an experiment which computed actions according to the default policies and according to reporting policy profiles with ![\sigma=0.5](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-2a11cab159e9063d7e8b2e344edb513e_l3.png \"Rendered by QuickLaTeX.com\") and distortions ![\delta_1, -\delta_2 \in \{0, \sigma/3, 2 \sigma/3, \sigma\}^2](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-2843fbe683240996618488aaa243aaa5_l3.png \"Rendered by QuickLaTeX.com\"). 
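Before turning to the results, here is a minimal sketch, in Python, of how one replicate of this toy experiment could be simulated. It is illustrative only: the exact payoff entries of Table 1 are not reproduced in the text, so the numbers below are placeholders, the helper names are made up, and the equilibrium computation is restricted to pure equilibria for brevity. Only the overall structure (observe a noisy game, report a possibly distorted model, accept or reject, average, solve) follows the description above.

```python
# Minimal sketch of one replicate of the noisy-Chicken CGS experiment.
# Payoff numbers are placeholders (Table 1's entries are not given in the text).
import numpy as np

SIGMA = 0.5          # std. dev. of the observation noise
CUTOFF = 8 * SIGMA   # acceptance threshold on the reported (D, C) payoffs

# Placeholder "true" Chicken payoffs: TRUE_PAYOFFS[a1, a2] = (u1, u2),
# with actions 0 = C, 1 = D.
TRUE_PAYOFFS = np.array([[(0.0, 0.0), (-1.0, 1.0)],
                         [(1.0, -1.0), (-5.0, -5.0)]])

def observe(rng):
    """Each agent sees the true game with noise added to player 1's (D, C) payoff."""
    g = TRUE_PAYOFFS.copy()
    g[1, 0, 0] += rng.normal(0.0, SIGMA)
    return g

def pure_nash(g):
    """All pure Nash equilibria of a 2x2 game."""
    return [(a1, a2)
            for a1 in (0, 1) for a2 in (0, 1)
            if g[a1, a2, 0] >= g[1 - a1, a2, 0] and g[a1, a2, 1] >= g[a1, 1 - a2, 1]]

def welfare_optimal_ne(g):
    """Pure equilibrium with the highest payoff sum (utilitarian welfare)."""
    return max(pure_nash(g), key=lambda a: g[a[0], a[1], 0] + g[a[0], a[1], 1])

def play_default(g1, g2):
    """Each agent plays its part of the welfare-optimal equilibrium of its own model."""
    return welfare_optimal_ne(g1)[0], welfare_optimal_ne(g2)[1]

def play_cgs(g1, g2, delta1, delta2):
    """Report (possibly distorted) models, accept if close enough, average, solve."""
    r1, r2 = g1.copy(), g2.copy()
    r1[1, 0, 0] += delta1            # agent 1 inflates its (D, C) payoff
    r2[1, 0, 0] -= delta2            # agent 2 deflates it
    if abs(r1[1, 0, 0] - r2[1, 0, 0]) >= CUTOFF:
        return play_default(g1, g2)  # reject: fall back on the default policies
    return welfare_optimal_ne((r1 + r2) / 2.0)

rng = np.random.default_rng(0)
g1, g2 = observe(rng), observe(rng)
print("default:", play_default(g1, g2), " CGS:", play_cgs(g1, g2, 0.0, 0.0))
```

Averaging payoffs over many such replicates, for each candidate pair of distortion levels, is the kind of computation that would sit behind a comparison like the one reported below.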
The payoffs under the default policy profile and the Nash equilibrium (it happened to be unique) of the game in which principals choose the distortion levels ![\delta_1, \delta_2](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-d0cf9f50a427cd097869eeebf630ea73_l3.png \"Rendered by QuickLaTeX.com\") for their agents are reported in Table 3.\n\n\n![Rendered by QuickLaTeX.com](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-c37f4d8afeb41d2b60e3d8f287284ef0_l3.png \"Rendered by QuickLaTeX.com\")\n\n\n2 Implementation\n================\n\n\nIn practice, CGS can be seen as accomplishing two things:\n\n\n* Providing an inductive bias in the huge space of bargaining strategies towards those which we have reason to think will reduce the risks of bargaining failure from agents having differing models;\n* Allowing agents to use bargaining strategies which require them to agree on an explicit game-theoretic model, by furnishing unexploitable methods for agreeing on such a model.\n\n\nHere is how it could be implemented:\n\n\n1. Take whatever class of candidate policies and policy learning method you were going to use by default. Note that this class of policies need not be model-based, so long as transparency tools can be applied to extract a model consistent with the policies' behavior (see below);\n\n\n2. Augment the space of policies you are learning over with those that implement CGS. These policies will be composed of\n\n\n\t1. A policy for reporting a (possibly distorted) private model to one's counterpart. For instance, these models might be partially observable stochastic games which model the evolution of some relevant part of the world under different policy profiles the agents could agree to (perhaps with a prior over each agent's utility function);\n\t2. A set of acceptable model combination methods;\n\t3. A policy for deciding whether to accept the other agent's reported model and play according to the resulting combined game, or reject and play some default policy;\n\t4. A set of acceptable solution concepts to apply to the combined game (e.g., welfare-optimal Nash equilibrium for some welfare function).\n\n\n3. Use your default policy learning method on this augmented space of policies.\n\n\nFor example, in training that relies on imitation learning, a system could be trained to do CGS by having the imitated agents respond to certain bargaining situations by offering to their counterpart to engage in CGS; actually specifying an explicit model of their strategic situation in collaboration with the counterpart; and (if the agents succeed in specifying a common model) applying some solution concept to that model in order to arrive at an agreement.\n\n\nA major practical challenge seems to be having imitated humans strategically specify potentially extremely complicated game-theoretic models. In particular, one challenge is specifying a model at all, and another is reporting a model such that the agent expects, in some sense, to be better off in the solution of the model that results from CGS than it would be if it used some default policy. The first problem — specifying a complicated model in the first place — might be addressed by applying [model extraction techniques](https://www.lesswrong.com/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment#The_core_problem__transparency) to some default black-box policy in order to infer an explicit world-model. 
The second problem — learning a reporting policy which agents expect to leave them better off under the resulting agreement — could be addressed if different candidate reporting policies could be tried out in a high-quality simulator.\n\n\n3 Questions and potential problems\n==================================\n\n\nOne issue is whether CGS could actually make things worse. The first way CGS could make things worse is via agents specifying models in which conflict happens in equilibrium. We know that conflict sometimes happens in equilibrium. Fearon's (1995) classic rationalist explanations for war show how war can occur in equilibrium due to agents having private information about their level of strength or resolve that they are not willing to disclose, or agents not being able to credibly commit to not launching preemptive attacks when they expect that their counterpart will gain strength in the future. Likewise, threats and punishments can be executed in equilibrium for reasons of costly signaling (e.g., Sechser 2010) or falsely detected defections (e.g., Fudenberg et al. 2009). A related issue is that it is not clear how the interaction of CGS and model misspecification affects the safety of CGS. For instance, agents who underestimate the chances of false detections of nuclear launches may place nuclear weapons on sensitive triggers, incorrectly thinking that nuclear launch is almost certain not to occur in equilibrium.\n\n\nThe second way training agents to do CGS could make things worse is by encouraging them to use dangerous decision procedures outside of CGS. The problems associated with designing agents to maximize a utility function are well-known in AI safety. Depending on how agents are trained to do CGS, the training may make them more likely to make decisions via expected utility maximization in situations other than bargaining situations. For instance, training agents to do CGS may produce modules that help agents to specify a world-model and utility function, and maximize the expectation of that utility function, and agents may then use these modules when making decisions in non-CGS contexts.\n\n\nIn light of this, we would want to make sure CGS preserves nice properties that a system already has. CGS should be *alignment-preserving*: intuitively, modifying a system's design to implement CGS shouldn't make misalignment more likely. CGS should also preserve properties like *[myopia](https://www.alignmentforum.org/tag/myopia)*: modifying a myopic system to use CGS shouldn't make it non-myopic. Importantly, ensuring the preservation of properties other than alignment which make catastrophic bargaining failure less likely may help to avoid worst-case outcomes even if alignment fails.\n\n\nFinally, there is the problem that CGS still faces [equilibrium and prior selection problems](https://www.alignmentforum.org/posts/Tdu3tGT4i24qcLESh/equilibrium-and-prior-selection-problems-in-multipolar-1). (See the Appendix for a formulation of CGS in the context of a Bayesian game; such a game assumes a common prior — in this case, a prior arising from the distribution of environments on which the policies are trained — and will, in general, have many equilibria.) 
Thus there is a question of how much we can expect actors to coordinate to train their agents to do CGS, and how much CGS can reduce risks of bargaining failure if AI developers do not coordinate.\n\n\nAppendix: Policy training and deployment as a Bayesian game\n===========================================================\n\n\nAs in the toy illustration, we can think of agents' models as private information, drawn from some distribution that depends on the (unknown) underlying environment.  Because agents are boundedly rational, they can only reason according to these (relatively simple) private models, rather than a correctly-specified class of world-models. However,  the people training the AI systems can generate lots of training data, in which agents can try out different policies for accounting for the variability in their and their counterpart's private models. Thus we can think of this as a [Bayesian game](https://en.wikipedia.org/wiki/Bayesian_game) played between AI developers, in which the strategies are policies for mapping private world-models to behaviors. These behaviors might include ways of communicating with other agents in order to overcome uncertainty,  which in turn might include CGS. The prior for this Bayesian game is the distribution over private models induced by the training environments and the agents' model-learning algorithms (which we take to be exogenous for simplicity).\n\n\nAs I noted above, this Bayesian game still faces equilibrium and prior selection problems between the AI developers themselves. It also makes the extremely unrealistic assumption that the training and deployment distributions of environments are the same. The goal is only to clarify how developers could (rationally) approach training their agents to implement CGS under idealized conditions.\n\n\nConsider two actors, who I will call \"the principals'', who are to train and deploy learning agents. Each principal ![i](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-8511b1f6cf9db17d46ddabb67bac99f5_l3.png \"Rendered by QuickLaTeX.com\") has utility function ![u_i](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-a2ccfa67bd63a925a625eec33e3bbbc0_l3.png \"Rendered by QuickLaTeX.com\"). The game that the principals play is as follows:\n\n\n* The principals train policies ![\\pi_i](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-b819371e45469cef5e3f5b9875587d99_l3.png \"Rendered by QuickLaTeX.com\") on independent draws from a distribution of multi-agent environments (for instance, stochastic games) taking values in ![\\mathcal{G}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-c4a7caee1c8c9b781fde52051a79296f_l3.png \"Rendered by QuickLaTeX.com\"), ![G^t_{i} \\overset{\\mathrm{i.i.d}}{\\sim} P_{\\mathcal{G}}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-cebd073fa2d086b973822d3ecddb78d7_l3.png \"Rendered by QuickLaTeX.com\") for ![i=1,2](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-155c953742998e211de46874a994bf53_l3.png \"Rendered by QuickLaTeX.com\"). These environments represent the environments in which the agents are trained and deployed. Policies return actions in the spaces ![\\mathcal{A}_i](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-d7ae00e7817f1b1452080a4ac4877174_l3.png \"Rendered by QuickLaTeX.com\"). 
(Note that, in sequential environments — e.g., stochastic games — these \"actions'' may in fact be policies mapping, e.g., the states of stochastic game to actions in that stochastic game.)\n* For each environment ![G](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-7620c75c8772e1ee533aefe8de7019b0_l3.png \"Rendered by QuickLaTeX.com\"), a function mapping pairs of actions ![a_1, a_2](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-6a8bec9c99a7def2c3ac5cb8103bc2b3_l3.png \"Rendered by QuickLaTeX.com\") to each principal ![i](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-8511b1f6cf9db17d46ddabb67bac99f5_l3.png \"Rendered by QuickLaTeX.com\")'s payoffs, ![u_i(a_1, a_2; G)](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-7f934b47b603657e547c00c71e92b6f1_l3.png \"Rendered by QuickLaTeX.com\");\n* In each training environment, agents receive private observations ![X^t_{i} \\mid G_{i}^t \\overset{\\mathrm{i.i.d}}{\\sim} P_{\\mathcal{X}}(\\cdot \\mid G^t_{i})](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-046bf0ae0820a0605b8e11710bad9883_l3.png \"Rendered by QuickLaTeX.com\") on which they can condition their policies, with ![X_i^t \\in \\mathcal{X}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-fddb7cd7ac0924e671689eaa79034826_l3.png \"Rendered by QuickLaTeX.com\"). These observations will correspond to data from which the agents estimate world-models (e.g., a model of a stochastic game) or form beliefs about other agents' private information.\n* The agents are deployed and take actions in an environment ![G \\sim P_{\\mathcal{G}}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-fd6a275fa498df11624fd9eb616f39b8_l3.png \"Rendered by QuickLaTeX.com\") based on private information ![X_i \\sim P_{\\mathcal{X}}(\\cdot \\mid G)](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-4526ffd3106f0c4ac50d32dff53c15cd_l3.png \"Rendered by QuickLaTeX.com\").\n\n\nThe choice of what policy to deploy is a game with strategies ![\\pi_1, \\pi_2](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-298925e3cbcf35880f05690e9dcbac2a_l3.png \"Rendered by QuickLaTeX.com\") and ex ante payoffs\n\n\n     ![\\[u_i(\\pi_1, \\pi_2) = \\int u_i\\left\\{ \\pi_1(X_1), \\pi_2(X_2); G\\right\\} \\mathrm{d} P_{\\mathcal{X}}(X_1 \\mid G) \\mathrm{d}P_{\\mathcal{X}}(X_2 \\mid G) \\mathrm{d}P_{\\mathcal{G}}(G).\\]](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-aed281cfc4931661b3944a0adc11a87d_l3.png \"Rendered by QuickLaTeX.com\")\n\n\nWe will for now suppose that during training the value of policy profiles ![(\\pi_1, \\pi_2)](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-5582948a2849d05e5b52037851b95259_l3.png \"Rendered by QuickLaTeX.com\") under each utility function in ![\\mathcal{U}_i](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-61760ccaf083382d62e684c316636108_l3.png \"Rendered by QuickLaTeX.com\") can be learned with high accuracy.\n\n\nHow should a principal choose which policy to deploy? In the absence of computational constraints, a natural choice is Bayesian Nash equilibrium (BNE). In practice, it will be necessary to learn over a much smaller class of policies than the space of all maps. 
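To see how these ex ante payoffs could be evaluated during training, here is a minimal Monte Carlo sketch in Python. It is illustrative only: `sample_environment`, `sample_observation`, and `payoff` are hypothetical placeholders standing in for the environment distribution, the observation distribution, and the payoff function defined above, not part of any existing codebase.

```python
def estimate_ex_ante_payoffs(pi_1, pi_2, sample_environment, sample_observation,
                             payoff, n_samples=10_000):
    """Monte Carlo estimate of the ex ante payoffs u_i(pi_1, pi_2).

    pi_1, pi_2: policies mapping a private observation X_i to an action.
    sample_environment(): draws an environment G from the training distribution.
    sample_observation(G): draws one agent's private observation given G.
    payoff(a_1, a_2, G): returns the payoff pair (u_1, u_2) of the action profile in G.
    """
    total_1 = total_2 = 0.0
    for _ in range(n_samples):
        g = sample_environment()            # environment draw
        x_1 = sample_observation(g)         # agent 1's private observation
        x_2 = sample_observation(g)         # agent 2's private observation, drawn independently
        u_1, u_2 = payoff(pi_1(x_1), pi_2(x_2), g)
        total_1 += u_1
        total_2 += u_2
    return total_1 / n_samples, total_2 / n_samples
```

With such an estimator, a small finite class of policies can be evaluated exhaustively, which is what the equilibrium condition below is applied to.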
Let ![\\Pi_1, \\Pi_2](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-4058f8b36c41f5d5a827393855f135c9_l3.png \"Rendered by QuickLaTeX.com\") be sets of policies such that it is tractable to evaluate each profile ![\\pi_1, \\pi_2 \\in \\Pi_1 \\times \\Pi_2](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-d615cbb86b71abcd2f2b531387585b78_l3.png \"Rendered by QuickLaTeX.com\"). In this context, assuming that the principals' utility functions are common knowledge, a pair of policies ![\\pi_1, \\pi_2](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-298925e3cbcf35880f05690e9dcbac2a_l3.png \"Rendered by QuickLaTeX.com\") is a BNE if it satisfies for ![i=1,2](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-155c953742998e211de46874a994bf53_l3.png \"Rendered by QuickLaTeX.com\") (indexing ![i](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-8511b1f6cf9db17d46ddabb67bac99f5_l3.png \"Rendered by QuickLaTeX.com\")'s counterpart by ![j](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-9565fa6c9b8cbe9c2d2a57f38bbf9670_l3.png \"Rendered by QuickLaTeX.com\"))\n\n\n     ![\\[ \\begin{aligned} & \\int u_i\\{ \\pi_i(X_i), \\pi_j(X_j); G \\} \\mathrm{d} P_{\\mathcal{X}}(X_i \\mid G) \\mathrm{d}P_{\\mathcal{X}}(X_j \\mid G) \\mathrm{d}P_{\\mathcal{G}}(G) \\\\ & \\quad \\geq \\int u_i\\{ \\pi'_i(X_i), \\pi_j(X_j); G \\} \\mathrm{d} P_{\\mathcal{X}}(X_i \\mid G) \\mathrm{d}P_{\\mathcal{X}}(X_j \\mid G) \\mathrm{d}P_{\\mathcal{G}}(G), \\text{ for all } \\pi'_i \\in \\Pi_i. \\end{aligned} \\]](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-151bb69dce9895406c878690ebc8c7dd_l3.png \"Rendered by QuickLaTeX.com\")\n\n\nWhen ![\\Pi_i](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-0a22a1b155a3e6527d71246478f4124c_l3.png \"Rendered by QuickLaTeX.com\") consists of policies with limited capacity (reflecting computational boundedness), agents may learn policies which do not account for the variability in the estimation of their private models. I will call the class of policies learned over during training the \"default policies'' ![\\Pi_i^d](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-2dbc8209a258e951f50abd68d132e2e9_l3.png \"Rendered by QuickLaTeX.com\"). To address this problem in a computationally tractable way, we introduce policies ![\\Pi_i^{\\mathrm{cgs}}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-f043e4d88ed5d2819fff12be5bc33613_l3.png \"Rendered by QuickLaTeX.com\") which allow for the specification of a shared model of ![G](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-7620c75c8772e1ee533aefe8de7019b0_l3.png \"Rendered by QuickLaTeX.com\"). Let ![\\widetilde{\\mathcal{G}}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-94dd6764baf4450db5066acfd03c03a7_l3.png \"Rendered by QuickLaTeX.com\") be a set of models, and let ![\\mathcal{Y}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-b6ef78bbfc7645cb3130c48ff568854a_l3.png \"Rendered by QuickLaTeX.com\") be a set of solution concepts which map elements of ![\\widetilde{\\mathcal{G}}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-94dd6764baf4450db5066acfd03c03a7_l3.png \"Rendered by QuickLaTeX.com\") to (possibly random) action profiles. 
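As a purely illustrative instance of these objects, the following Python sketch takes the set of models to be 2×2 bimatrix games and the solution concept to be "the pure Nash equilibrium with the highest total payoff." The payoff numbers and the restriction to pure equilibria are simplifications of my own.

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """payoffs[(a1, a2)] gives the payoff pair (u1, u2) of a 2x2 bimatrix game."""
    actions = (0, 1)
    equilibria = []
    for a1, a2 in product(actions, actions):
        u1, u2 = payoffs[(a1, a2)]
        best_response_1 = all(u1 >= payoffs[(b1, a2)][0] for b1 in actions)
        best_response_2 = all(u2 >= payoffs[(a1, b2)][1] for b2 in actions)
        if best_response_1 and best_response_2:
            equilibria.append((a1, a2))
    return equilibria

def welfare_maximizing_nash(payoffs):
    """One possible solution concept y: return the pure Nash equilibrium of the
    collaboratively specified game with the highest total payoff (None if there is none)."""
    equilibria = pure_nash_equilibria(payoffs)
    if not equilibria:
        return None
    return max(equilibria, key=lambda profile: sum(payoffs[profile]))

# A shared model G_tilde with two pure equilibria, (0, 0) and (1, 1);
# the solution concept selects the higher-welfare one.
G_tilde = {(0, 0): (4, 1), (0, 1): (0, 0), (1, 0): (0, 0), (1, 1): (2, 2)}
print(welfare_maximizing_nash(G_tilde))  # (0, 0): total payoff 5 vs. 4
```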
In the [toy illustration](#section-toy-experiment), agents specified models in the set ![\\widetilde{\\mathcal{G}}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-94dd6764baf4450db5066acfd03c03a7_l3.png \"Rendered by QuickLaTeX.com\") of ![2 \\times 2](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-3bdab556133bfb6dad3ff3e8f6739bef_l3.png \"Rendered by QuickLaTeX.com\") bimatrices, and the solution concept they used was the Nash equilibrium which maximized the sum of their payoffs in the game ![\\widetilde{G}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-4fd2c58652fe3ad68ef8ec1c7d1a0a32_l3.png \"Rendered by QuickLaTeX.com\").\n\n\nThen, the policies ![\\pi_i \\in \\Pi_i^{\\mathrm{cgs}}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-8f1acd8d18437c2bc6e9f1802a2f6593_l3.png \"Rendered by QuickLaTeX.com\") have the property that, for some ![\\pi_j \\in \\Pi_j^{\\mathrm{cgs}}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-43d21a897bba18780e84b7dcbc5244e2_l3.png \"Rendered by QuickLaTeX.com\"), the policy profile ![\\pi = (\\pi_i, \\pi_j)](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-1a2d470400c426fc0c0af76ceefa97fc_l3.png \"Rendered by QuickLaTeX.com\") succeeds in collaboratively specifying a game with positive probability. That is, with positive probability we have ![\\pi(X_1, X_2) = y(\\widetilde{G})](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-9ec782cbb76079119a084b86194a2b30_l3.png \"Rendered by QuickLaTeX.com\") for some ![y \\in \\mathcal{Y}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-7330d753252e1fc1d278a7e1cda65870_l3.png \"Rendered by QuickLaTeX.com\") and some ![\\widetilde{G} \\in \\widetilde{\\mathcal{G}}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-c86f69f1858c0cc537a1a93b0e4454e1_l3.png \"Rendered by QuickLaTeX.com\"). \n\n \n\nThe goal of principals who want their agents to engage in collaborative game specification is to find a policy profile in ![\\Pi^{\\mathrm{cgs}}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-1ae74e030f183bb123635ba3becd9108_l3.png \"Rendered by QuickLaTeX.com\") which is a Bayesian Nash equilibrium that improves upon any equilibrium in ![\\Pi_1^d \\times \\Pi_2^d](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-18a5157be8909a219f07eeef86e8408c_l3.png \"Rendered by QuickLaTeX.com\") and which succeeds in collaboratively specifying a game with high probability.\n\n\nNow, this model is idealized in a number of ways. I assume that the distribution of training environments ![P_{\\mathcal{G}}](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-b1d88ddbd8a50b677689961db6ffe678_l3.png \"Rendered by QuickLaTeX.com\") matches the distribution of environments encountered by the deployed policies. Moreover, I assume that both principals train their agents on this distribution of environments. In reality, of course, these assumptions will fail. A more modest but attainable goal is to use CGS to construct policies which perform well on whatever criteria individual principals use to evaluate policies for multi-agent environments, as discussed in [Section 2](#section-implementation) (Implementation).\n\n\nReferences\n==========\n\n\nJames D Fearon. Rationalist explanations for war. *International Organization*, 49(3):379–414, 1995.\n\n\nDrew Fudenberg, David Levine, and Eric Maskin. The folk theorem with imperfect public information. In *A Long-Run Collaboration On Long-Run Games*, pages 231–273. 
World Scientific, 2009.\n\n\nRobert Gibbons. Learning in equilibrium models of arbitration. Technical report, National Bureau of Economic Research, 1988.\n\n\nJames D Morrow. Modeling the forms of international cooperation: distribution versus information. *International Organization*, pages 387–423, 1994.\n\n\nTodd S Sechser. Goliath’s curse: Coercive threats and asymmetric power. *International Organization*, 64(4):627–660, 2010.\n\n\nHyun Song Shin. The burden of proof in a game of persuasion. *Journal of Economic Theory*, 64(1):253–264, 1994.\n\n\nCoordination challenges for preventing AI conflict\n==================================================\n\n\n\n8 March 2021\nby [Stefan Torges](https://longtermrisk.org/author/stefan-torges/ \"Posts by Stefan Torges\")\n\nSummary\n-------\n\n\nIn this article, I will sketch arguments for the following claims:\n\n\n* Transformative AI scenarios involving multiple systems pose a unique existential risk: catastrophic bargaining failure between multiple AI systems (or joint AI-human systems).\n* This risk is not sufficiently addressed by successfully aligning those systems, and we cannot safely delegate its solution to the AI systems themselves.\n* Developers are better positioned than more far-sighted successor agents to coordinate in a way that solves this problem, but a solution also does not seem guaranteed.\n* Developers intent on solving this problem can choose between developing separate but compatible systems that do not engage in costly conflict or building a single joint system.\n* While the second option seems preferable from an altruistic perspective, there appear to be at least weak reasons that favor the first one from the perspective of the developers.\n* Several avenues for (governance) interventions present themselves: increasing awareness of the problem among developers, facilitating the reaching of agreements (perhaps those for building a joint system in particular), and making development go well in the absence of problem awareness.\n\n\nIntroduction\n------------\n\n\nIn this article, I examine the challenge of ensuring coordination between AI developers to prevent catastrophic failure modes arising from the interactions of their systems. More specifically, I am interested in addressing bargaining failures as outlined in Jesse Clifton’s research agenda on [Cooperation, Conflict & Transformative Artificial Intelligence (TAI) (2019)](https://longtermrisk.org/research-agenda) and Dafoe et al.’s [Open Problems in Cooperative AI (2020)](https://www.cooperativeai.com/open-problems).\n\n\nFirst, I set out the general problem of bargaining failure and why bargaining problems might persist even for aligned superintelligent agents. Then, I argue for why developers might be in a good position to address the issue. I use a toy model to analyze whether we should expect them to do so by default. I deepen this analysis by comparing the merit and likelihood of different coordinated solutions. Finally, I suggest directions for interventions and future work.\n\n\nThe main goal of this article is to encourage and enable future work. 
To do so, I sketch the full path from problem to potential interventions. This large scope comes at the cost of depth of analysis. The models I use are primarily intended to illustrate how a particular question along this path can be tackled rather than to arrive at robust conclusions. At some point, I might revisit parts of this article to bolster the analysis in later sections.\n\n\nBargaining failure as a multipolar existential risk\n---------------------------------------------------\n\n\nTransformative AI scenarios involving multiple systems (“multipolar scenarios”) pose unique existential risks resulting from their interactions.[1](#easy-footnote-bottom-1-6782 \"I do not mean to imply that this is the only risk posed by multipolar scenarios. For other ones, see for example: Critch, Krueger 2020, Zwetsloot, Dafoe 2019, Manheim 2018.\") Bargaining failure between AI systems, i.e., cases where each actor ends up much worse off than they could have under a negotiated agreement, is one such risk. The worst cases could result in human extinction or [even worse outcomes](https://www.alignmentforum.org/posts/DbuCdEbkh4wL5cjJ5/preface-to-eaf-s-research-agenda-on-cooperation-conflict-and) (Clifton 2019).[2](#easy-footnote-bottom-2-6782 \"Note that bargaining failure is not the only cause of catastrophic interactions. For instance, the interactions of Lethal Autonomous Weapon Systems might also be catastrophic.\")\nAs a prosaic example, consider a standoff between AI systems similar to the Cold War between the U.S. and the Soviet Union. If they failed to handle such a scenario well, they might cause nuclear war in the best case and far worse if technology has further advanced at that point.\n\n\nShort of existential risk, they could jeopardize a significant fraction of the cosmic endowment by preventing the realization of mutual gains or causing the loss of resources in costly conflicts.\n\n\nThis risk is not sufficiently addressed by [AI alignment](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6#:~:text=An%20aligned%20AI%20would%20try,as%20%E2%80%9Cintent%20alignment.%E2%80%9D)), by which I mean either “ensuring that systems are trying to do what their developers want them to do” or “ensuring that they are in fact doing what their developers want them to do.”[3](#easy-footnote-bottom-3-6782 \" Alignment only suffices if the goals of the two systems are identical, and they have common knowledge of this fact, which seems unlikely in a multipolar scenario. Working toward “social alignment”, i.e., alignment with society as a whole (as described, e.g., here), or a “homogeneous takeoff” might make that more likely.\") Consider the Cuban Missile Crisis as an analogy: The governments of the U.S. and the Soviet Union were arguably “aligned” with some broad notion of human values, i.e., both governments would at least have considered total nuclear war to be a moral catastrophe. Nevertheless, they got to the brink of causing just that because of a failure to bargain successfully. Put differently, it’s conceivable, or even plausible, that the Cuban Missile Crisis could have resulted in global thermonuclear war, an outcome so bad that both parties would probably have preferred complete surrender.[4](#easy-footnote-bottom-4-6782 \"One strand of the international relations literature argues that the failure of rational agents to bargain successfully is one explanation for wars between human nation states. 
See Fearon (1995) for the seminal text of this perspective.\")\nEven the most intelligent agents may fail to bargain successfully\n-----------------------------------------------------------------\n\n\nThis risk scenario is probably also not sufficiently addressed by ensuring that the AI systems we build have superhuman bargaining skills. Consider the Cuban Missile crisis again. I am arguing that a superintelligent Kennedy and superintelligent Khrushchev would not have been sufficient to *guarantee* successful prevention of the crisis. Even for superintelligent agents, some fundamental game-theoretic incompatibilities persist because the ability to solve them is largely orthogonal to any notion of “bargaining skill,” whether we conceive of that skill as part of intelligence or rationality. These are the “mixed-motive coordination problem” and the “prior selection problem.”[5](#easy-footnote-bottom-5-6782 \"These problems have been explored in the context of AI in more detail here and in Stastny et al. 2021.\")\n**“Mixed-motive coordination problem”**[6](#easy-footnote-bottom-6-6782 \"We are still deliberating about the appropriate terminology for this problem.\"): As I use the term here, a *mixed-motive coordination problem* is a problem that arises when two agents need to pick one Pareto-optimal solution out of many different such solutions. The failure to pick the same one results in a failure to reach a mutually agreeable outcome. At the level of equilibria, this may arise in games that do not have a uniquely compelling cooperative equilibrium, i.e., they have multiple Pareto-optimal equilibria that correspond to competing notions of what counts as an acceptable agreement.[7](#easy-footnote-bottom-7-6782 \"There are equilibrium selection problems which do not have this more specific property. Take, for instance, the Iterated Prisoner's Dilemma: It has many equilibria, but the only cooperative one is both players playing (Cooperate, Cooperate) at every time step on the equilibrium path.\")[8](#easy-footnote-bottom-8-6782 \"This is formalized in Stastny et al. 2021.\")\nFor instance, in [Bach or Stravinsky](https://en.wikipedia.org/wiki/Battle_of_the_sexes_(game_theory)) (see matrix below), both players would prefer going to any concert together (*Stravinsky, Stravinsky* or *Bach, Bach*) over going to any concert by themselves (*Stravinsky, Bach* or *Bach, Stravinsky*). However, one person prefers going to Stravinsky together, whereas the other prefers going to Bach together. Thus, there is a *distributional problem* when allocating the gains from coordination ([Morrow 1994](https://www.jstor.org/stable/2706964)).[9](#easy-footnote-bottom-9-6782 \"This distributional problem is compounded by informational problems because the bargaining parties have an incentive to distort their private information (e.g., about their preferences) to get a better deal (Morrow 1994).\") Put in more technical terms: each player favors a different solution on the Pareto curve. 
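To make the distributional problem concrete, here is a small Python sketch of such a variant. The payoff numbers are chosen for illustration and need not match the matrix shown below: both coordination outcomes are equilibria on the Pareto frontier, yet the players rank them differently, and if each insists on its favorite they miscoordinate.

```python
# Bach or Stravinsky with illustrative payoffs: the row player prefers
# coordinating on Stravinsky, the column player prefers coordinating on Bach,
# and miscoordination leaves both worse off than either coordinated outcome.
payoffs = {
    ("stravinsky", "stravinsky"): (4, 1),   # highest total payoff (5)
    ("bach", "bach"): (2, 2),               # more equitable, lower total (4)
    ("stravinsky", "bach"): (0, 0),         # miscoordination: each ends up alone
    ("bach", "stravinsky"): (0, 0),
}

coordinated = [("stravinsky", "stravinsky"), ("bach", "bach")]

def favorite(player):
    """The coordination outcome this player would insist on."""
    return max(coordinated, key=lambda outcome: payoffs[outcome][player])

row_pick = favorite(0)[0]                 # row player insists on Stravinsky
col_pick = favorite(1)[1]                 # column player insists on Bach
print(payoffs[(row_pick, col_pick)])      # (0, 0): they end up at different concerts
```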
Within this simple game, there is no way for the two players to reliably select the same concert, which will often cause them to end up alone.\n\n\n![](https://longtermrisk.org/files/Screenshot-2021-03-11-at-14.00.20.png)\n\n\nMore fundamentally, agents may differ in the [solution concepts](https://en.wikipedia.org/wiki/Solution_concept) or decision rules they use to decide what agreements are acceptable in a bargaining situation, e.g., they may use different [bargaining solutions](https://en.wikipedia.org/wiki/Cooperative_bargaining#Bargaining_solutions). In bargaining problems, different “reasonable” decision rules make different recommendations for which Pareto-optimal solution to pick. The worry is that independently developed systems could end up using, either implicitly or explicitly, different decision rules for bargaining problems, leading to bargaining failure. For instance, in the variant of Bach or Stravinsky above, (Stravinsky, Stravinsky) leads to the greatest total payoffs, while (Bach, Bach) is more equitable.[10](#easy-footnote-bottom-10-6782 \"Now, one might object that a rational actor would realize this problem and play as conservatively as possible. One could, for instance, always accept any Pareto-optimal agreement. This behavior, however, is very exploitable and comes at a significant competitiveness cost, which makes this strategy unattractive.\")\nAs a toy example, consider the case where two actors are bargaining over some territory. There are many ways of dividing this territory. (Different ways of dividing the territory are analogous to (Stravinsky, Stravinsky) and (Bach, Bach) above.) One player (the proposer) makes a take-it-or-leave-it offer to the other player (the responder) of a division of the territory, and war occurs if the offer is rejected. (A rejected offer is analogous to the miscoordination outcome (Stravinsky, Bach.) If the proposer and responder have different notions of what counts as an acceptable offer, war may ensue. If the agents have highly destructive weapons at their disposal, then war may be extremely costly. (To see how this might apply in the context of transformative AI, imagine that these are AI systems bargaining over the resources of space.)\n\n\nThere are two objections to address here. First, why would the responder reject any offer if they know that war will ensue? One reason is that they have a commitment to reject offers that do not meet their standards of fairness to reduce their exploitability by other agents. For AI systems, there are a few ways this could happen. For example, such commitments may have evolved as an adaptive behavior in an evolution-like training environment or be the result of imitating human exemplars with the same implicit commitments. AI systems or their overseers might have also implemented these commitments as part of a [commitment race](https://www.lesswrong.com/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem).\n\n\nSecond, isn’t this game greatly oversimplified? For instance, agents could engage in limited war and return to the bargaining table later, rather than catastrophic war. There are a few responses here. For one thing, highly destructive weapons or irrevocable commitments might preclude the success of bargaining later on. 
Another consideration is that some complications — such as agents having uncertainty about each others’ private information (see below) — would seem to make bargaining failure more likely, not less so.\n\n\n**“Prior selection problem”**: In games of incomplete information, i.e., games in which players have some private information about their payoffs or available actions, the standard solution–[Bayesian Nash equilibrium](https://en.wikipedia.org/wiki/Bayesian_game#Bayesian_Nash_equilibrium)–requires the agents to have common knowledge of each others’ priors over possible values of the players’ private information. However, if systems end up with different priors, outcomes may be bad.[11](#easy-footnote-bottom-11-6782 \"Outcomes are not necessarily catastrophic, but on the face of it, misperception seems much more likely to cause than prevent conflict.\") For instance, one player might believe their threat to be credible, whereas the other player might think it’s a bluff, leading to the escalation of the conflict. Similar to mixed-motive coordination problems, there are many “reasonable” priors and no unique individually rational rule that picks out one of them. In the case of machine learning, priors could well be determined by the random initialization of the weights or incidental features of the training environment (e.g., the distribution of other agents against which an agent is trained). Such differences in beliefs may persist over time due to models of other agents being underdetermined in strategic settings.[12](#easy-footnote-bottom-12-6782 \"Jesse Clifton makes this point here.\")\nNote that these concepts are idealizations. More broadly, AI systems may have different beliefs and procedures for deciding which commitments are credible and which bargains are acceptable.\n\n\nWhy developer coordination might be necessary\n---------------------------------------------\n\n\n### Independent development as a cause\n\n\nThese incompatibility problems are much more likely to arise or lead to catastrophic failures if AI systems are developed independently. During training, failure to arrive at mutually agreeable solutions is likely to result in lower rewards. So a system will usually perform well against counterparts that are similar to the ones it encountered during training. If the development of two systems is independent, such similarity is not guaranteed, and bargaining is more likely to fail catastrophically due to the reasons I sketched above.\n\n\nAgain, let’s consider a human analogy. There is evidence for significant behavioral differences among individuals from different cultures when playing standard economic games (e.g., the [ultimatum game](https://en.wikipedia.org/wiki/Ultimatum_game), the [dictator game](https://en.wikipedia.org/wiki/Dictator_game), different [public goods games](https://en.wikipedia.org/wiki/Public_goods_game)). For instance, [Henrich et al. (2005)](https://oxford.universitypressscholarship.com/view/10.1093/0199262055.001.0001/acprof-9780199262052) found that mean offers from Western university students usually ranged from 42-48% in the ultimatum game. Among members of the fifteen small-scale societies they studied, mean offers instead spanned 25-57%. In a meta-analysis, [Oosterbeek, Sloof & van de Kuilen (2004)](https://link.springer.com/article/10.1023/B:EXEC.0000026978.14316.74) found systematic cultural differences in the behavior of responders (but not proposers). 
Relatedly, there also appears to be evidence for cross-cultural differences with regard to notions of fairness (e.g., [Blake et al. 2015](http://www.psychomedia.it/motore/rapaport-klein/Blake_etal_Nature-2015-528_pp258-261_17.pdf), [Schaefer et al. 2015](https://journals.sagepub.com/doi/full/10.1177/0956797615586188)). This body of literature is at least suggestive of humans learning different “priors” or “decision rules” depending on their “training regime,” i.e., their upbringing.\n\n\nThe smaller literature on intercultural play, where members from different cultures play against one another, weakly supports welfare losses as a result of such differences: “while a few studies have shown no differences between intra- and intercultural interactions, most studies have shown that intercultural interactions produce less cooperation and more competition than intracultural interactions” ([Matsumoto, Hwang 2011](https://www.sciencedirect.com/science/article/abs/pii/S0147176711000198)). I only consider this weak evidence as the relevant studies do not seem to carefully control for the potential of (shared) distrust of perceived strangers, which would also explain these results but is independent of incompatible game-playing behavior.\n\n\n### Incompatibility problems all the way down\n\n\nIt is tempting to delegate the solving of these problems to future more capable AI systems. However, it is not guaranteed that they will be in a position to solve them, despite being otherwise highly capable.\n\n\nFor one, development may have already locked in priors or strong preferences over bargaining solutions, either unintentionally or deliberately (as the result of a [commitment race](https://www.lesswrong.com/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem), for instance). This could put strict limits on their abilities to solve these problems.\n\n\nMore fundamentally, solving these incompatibility problems requires overcoming another such problem. Picking out some equilibrium, solution concept, or prior will favor one system over another. So they face another distributional problem. Solving that requires successful bargaining, the failure of which was the original problem. If they wanted to solve this second incompatibility problem, they would face another one. In other words, there are incompatibility problems all the way down.\n\n\nOne possibility is that many agents will by default be sufficiently “reasonable” that they can agree on a solution concept via reasoned deliberation, avoiding commitments to incompatible solution concepts for bargaining problems. 
Maybe many sufficiently advanced systems will engage in reasoning such as “let’s figure out the [correct axioms for a bargaining solution](https://en.wikipedia.org/wiki/Cooperative_bargaining#Bargaining_solutions), or at least sufficiently reasonable ones that we can both feel OK about the agreement.”[13](#easy-footnote-bottom-13-6782 \"See this comment thread.\") Unfortunately, it does not seem guaranteed that this kind of reasoning will be selected for during the development of the relevant AI systems.\n\n\n### Why developers are better suited\n\n\nDevelopers then might be better suited to addressing this issue than more capable successor agents, whether they be AI systems or AI-assisted humans:\n\n\nThe comparative ignorance of present-day humans mitigates the distributional problem faced by more far-sighted and intelligent successor agents.[14](#easy-footnote-bottom-14-6782 \"For what it’s worth, myopia has been suggested as a safety technique and appropriately myopic systems might also address this problem. At the same time, however, they need to be sufficiently far-sighted to realize that future conflict could pose a catastrophic risk.\") The distributional consequences of particular coordination arrangements will likely be very unclear to AI developers. Compared to future agents, I expect them to have more uncertainty about their values, preferred solution concepts, the consequences of different coordination agreements, and how these three variables relate to one another. This will [smooth out](https://longtermrisk.org/uncertainty-smoothes-out-differences-in-impact/) differences in expected value between different coordination outcomes. However, developers will have much less uncertainty about the value of averting conflict by coordinating *in some form*. So it will be easier for them to find a mutually agreeable arrangement as the situation for them looks more like a pure coordination game (see matrix below), which are much easier to solve by [cheap talk](https://en.wikipedia.org/wiki/Cheap_talk) alone, than Bach or Stravinsky (see matrix above).[15](#easy-footnote-bottom-15-6782 \"Even to the extent that distributional issues remain, humans might be better suited to solve them as they have a shared evolutionary history and an increasingly shared cultural background, which is more uncertain in the case of AI systems, where it depends mostly on the homogeneity of AI takeoff.\")\n![](https://longtermrisk.org/files/Screenshot-2021-03-09-at-00.50.04.png)\n\n\nThe [loss aversion](https://en.wikipedia.org/wiki/Loss_aversion) and [scope insensitivity](https://en.wikipedia.org/wiki/Scope_neglect) of (most) human bargainers will likely compound this effect. I expect it will increase the inclination to avoid catastrophes compared to securing relative gains. This, again, will push this game more toward one of pure coordination, mitigating the distributional problem. In comparison, AI systems are less likely to exhibit such “biases.”[16](#easy-footnote-bottom-16-6782 \"This argument carries less force for AI systems that might still exhibit such biases. Myopic agents might, again, be such an example. Overall, however, I do think it’s more plausible than not that AI systems will be more scope-sensitive than humans. They are more likely to pursue an idealized version of human goals or they may modify their goals to be more scope-sensitive to improve their bargaining position.\")\nA related point is that human bargainers might not even know what the Pareto frontier looks like. 
Thus, instead of trying to bargain for their most favorable point on the Pareto frontier, they have incentives to converge on any mutually agreeable settlement even if it is Pareto inferior to many other possible outcomes. This, in turn, probably decreases the chance of catastrophic failures.[17](#easy-footnote-bottom-17-6782 \"It could, however, lead to the failed exploitation of some significant fraction of the cosmic endowment. By some totalist value systems, this may be a tragedy not worth risking.\") As [Young (1989](https://sci-hub.st/https://www.jstor.org/stable/2706651)) writes:\n\n\n*Negotiators who know the locus of a contract curve or the shape of a welfare frontier to begin with will naturally be motivated primarily by a desire to achieve an outcome on this curve or frontier that is as favorable to their own interests as possible. They will, therefore, immediately turn to calculations regarding various types of strategic behavior or committal tactics that may help them achieve their distributive goals.*\n\n\n*Negotiators who do not start with a common understanding regarding the contours of the contract curve or the locus of the negotiation set, by contrast, have compelling incentives to engage in exploratory interactions to identify opportunities for devising mutually beneficial deals. Such negotiators may never discover the actual shape of the contract curve or locus of the negotiation set, and they may consequently end up with arrangements that are Pareto-inferior in the sense that they leave feasible joint gains on the table. At the same time, however, they are less likely to engage in negotiations that bog down into protracted stalemates brought about by efforts to improve the outcome for one party or another through initiatives involving strategic behavior and committal tactics.*\n\n\n### What developer coordination looks like\n\n\nDevelopers intent on solving this problem can choose between two broad classes of options[18](#easy-footnote-bottom-18-6782 \"One unilateral solution is for developers to make their systems maximally conservative, i.e., to follow a policy of accepting any agreement that is proposed to them. Such exploitable systems, however, would probably not be acceptable to developers, and as soon as systems are not maximally conservative, there is room for bargaining failure. (Also see footnote 10.)\"):\n\n\n1. They could coordinate on choosing compatible features such that interactions between their systems do not lead to catastrophic outcomes. Within the current machine learning paradigm, it will likely not be possible to coordinate directly on the priors and decision rules of the respective systems, as these may only be represented implicitly in the learned policies of agents. More realistically, developers would coordinate on training features like the reward structure and the learning environment or restrictions on the space of policies that agents learn over. (See “Appendix: Examples of features developers might coordinate on” for concrete examples.) To the extent that systems are modular, coordination could also occur at the level of bargaining-relevant modules.\n2. They could agree to build a single joint system to prevent any conflict between their systems in the first place. 
So instead of two developers building two separate systems, they join forces to build a single one.[19](#easy-footnote-bottom-19-6782 \"Critch, Krueger (2020) discuss this under the heading of multi/single delegation.\") This may take various institutional forms, ranging from a joint engineering project to a full merger. In all those cases, no direct bargaining between AI systems would occur as long as all developers participate.\n\n\nBoth solutions require overcoming the distributional problem discussed in the previous section. In the case of coordinating on compatible features, each set of features will have different distributional consequences for the developers. In the case of agreeing to build a joint system, there will be different viable agreements, again with different distributional consequences for the developers (e.g., the system may pursue various tradeoffs between the developers’ individual goals, or developers might get different distributions of equity shares).[20](#easy-footnote-bottom-20-6782 \"There are some structural differences between the two solutions. For instance, agreements to use mutually compatible features might allow partial coordination because the option space might not be discrete (Snidal 1985). This is not the case when agreeing to build a single system. However, these differences are unimportant for the subsequent analysis.\")\n### Coordination is not guaranteed in a game-theoretic toy model\n\n\nFor now, let’s assume that there are only two developers who are both aware of these coordination problems and have the technical ability to solve them. Let’s further assume the two options introduced above do not differ significantly in their effectiveness at preventing conflict, and the costs of coordination are negligible. Then the game they are playing can be modeled as a coordination game like Bach or Stravinsky.[21](#easy-footnote-bottom-21-6782 \"Analyses of this simple game can be found in Stein 1982, who calls this a “dilemma of common aversion”, and in Snidal 1985.\")[22](#easy-footnote-bottom-22-6782 \"If the game is indeed more appropriately modeled as a game of pure coordination due to the uncertainty of the developers as suggested by the previous section, coordination is assured conditional on developers being aware of the problem, communication being possible, and coordination being sufficiently cheap. So I will not discuss this option further.\")\n*In non-iterated and sequential play*, we can expect coordination, at least under idealized conditions. The follower will adapt to the strategy chosen by the leader since they have nothing to gain by not coordinating (“pre-emption”). If I know that my friend is at the Bach concert, I will also go to the Bach concert since I prefer that to being at the Stravinsky concert on my own.\n\n\n*In non-iterated and simultaneous play*, the outcome is underdetermined. They may end up coordinating, or they may not. It depends on whether they will be able to solve the bargaining problem inherent to the game. Introducing credible commitments could move us from simultaneous play to sequential play, ensuring coordination once again.[23](#easy-footnote-bottom-23-6782 \"Such credible commitments could, for instance, be achieved through transparency tools.\") If I can credibly commit to going to one concert rather than another, my counterpart has again nothing to gain by choosing the other concert. They will join me at the one that I signaled I would go to.\n\n\n*In iterated play*, the outcome is, again, uncertain. 
Unlike the Prisoner’s Dilemma, there is no need to monitor and enforce any agreement in coordination games *once it has been reached*. Free-riding is not possible: deviation from equilibrium harms both players, i.e., agreements are generally self-enforcing ([Snidal 1985](https://www.jstor.org/stable/1956241)). However, the iteration of the game incentivizes players to insist on the coordination outcome that is more favorable to them. Foregoing coordination one round might be worth it if you think you can cause your counterpart to move to the more favorable equilibrium in subsequent rounds.\n\n\nWhich of these versions of the game best describes AI takeoff primarily depends on two variables: Close races will be more akin to simultaneous play where developers do not first observe what their counterpart “played” until they have already locked in a certain choice themselves. Iteration is akin to successive deployment where developers release improved versions over time. So only if one developer is clearly ahead of the competition is it that coordination seems anything close to guaranteed in this toy model, and those might be the scenarios where one actor gains a decisive strategic advantage in any case. Otherwise, bargaining will occur, and may fail.\n\n\nNote: I don’t intend for this section to be a comprehensive analysis of this situation. Rather, it is intended as a first stab at the problem and a demonstration of how to make progress on the question of whether we can expect coordination by default. This basic model could be extended in various ways to capture different complexities of the situation.\n\n\n### Coordination could occur without awareness\n\n\nIf we drop the assumption that developers are aware of the need to coordinate, coordination may still occur regardless. However, it is necessarily less likely. Three paths then seem to remain:\n\n\nFirst, norms might emerge organically as the result of trial and error. This would require iteration and a well-functioning feedback mechanism. For instance, the two labs release pre-TAI systems, which interact poorly, perhaps due to the problems described in this article. They lack concrete models for the reasons for this failure, but in subsequent releases, both labs independently tinker with the algorithms until they interact well with one another. This compatibility then also transfers to their transformative systems. My intuition is that the likelihood of such an outcome will depend a lot on how fast and how continuously AI development progresses.\n\n\nSecond, the relevant features may end up being compatible due to the [homogeneity](https://www.alignmentforum.org/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios) of the systems. However, even the same training procedures can result in different and incompatible decision rules due to random differences in initialization.[24](#easy-footnote-bottom-24-6782 \"See Stastny et al. 2021 for an example of such failure.\") More narrowly, a third party might develop a bargaining “module” or service, which is integrated into all transformative systems by their developers due to its competitive performance rather than as the result of a coordination effort. 
Again, this outcome is not guaranteed.\n\n\nThird, developers might agree to build a joint system for reasons other than the problem discussed in this article:\n\n\n* Most likely, they might want to speed up development by pooling rival goods or increasing available capital (e.g., [Quebec Agreement](https://en.wikipedia.org/wiki/Quebec_Agreement)[25](#easy-footnote-bottom-25-6782 \"The UK did not have capacity for its own atomic weapon program. Joining forces with the US was their only viable path to an atomic weapon in the short-term. The US seems to have believed that the British could provide important help for some parts of the Manhattan project.\"), [International Space Station](https://en.wikipedia.org/wiki/International_Space_Station_programme)[26](#easy-footnote-bottom-26-6782 \"NASA could not secure sufficient capital for the ISS domestically. So they pushed for involving European allies in the project. Russia was approached for their operational and technical experience with space stations, which was unique at the time (Lambright, Schaefer 2004).\"), Concorde[27](#easy-footnote-bottom-27-6782 \"It seems like the sharing of the high cost was the main reason for collaboration (Johnman, Lynch 2002a, Johnman, Lynch 2002b).\"), CERN[28](#easy-footnote-bottom-28-6782 \"Facilities for high-energy particle physics were simply too expensive for any one European country at the time. As all of them were interested in preventing brain drain, pooling resources was in their mutual interest. It also seems like CERN served as a confidence-building measure in post-war Europe (Schukraft 2004). However, I would be surprised if this factor will play a large role in the potential joining of national AI labs due to the strategic nature of the technology.\")).[29](#easy-footnote-bottom-29-6782 \"Note that often this is done to beat another competitor since unilateral development would usually become feasible in due time.\") In the case of TAI, the researchers and engineers with the specialized (tacit) knowledge required to build such a system might be distributed over two labs. In that case, a negotiated collaboration could be preferable to mutual poaching of top talent, the latter of which is not even possible in the case of national projects.\n* Less likely, developers might want to decrease risk by spreading upside and downside across multiple stakeholders. This follows the same idea behind portfolio diversification. In the case of AI, the development of a huge, unprecedented model might require large upfront investments that no firm might be willing to undertake on their own because failure could result in ruinous losses.[30](#easy-footnote-bottom-30-6782 \"This is less relevant for national projects because governments face no realistic risk of ruin.\")\n* Also less likely, national labs might want to prevent freeriding when they expect the building of a system to create massive public goods.[31](#easy-footnote-bottom-31-6782 \"ITER might have been an example of this, but so far I have not been able to find any reliable sources on the reasons for collaboration on this project.\") In the case of AI, they might think it will be difficult to prevent the diffusion of novel algorithms across borders. 
If so, it would be difficult to internalize the benefits of a large public investment in foundational AI R&D, allowing others to freeride.[32](#easy-footnote-bottom-32-6782 \"Sandler, Cauley (2007) discuss this rationale.\") Sharing the costs mitigates those concerns.[33](#easy-footnote-bottom-33-6782 \"On the other hand, such mergers lead to technology diffusion which is costly or even impossible to prevent.\") This would be most relevant in scenarios where transformative systems require breakthrough algorithmic insights instead of tacit engineering knowledge and a lot of computing power.\n\n\nNone of these would guarantee that only one system is developed. They merely give reasons to *some* developers to merge with *some* other developers.\n\n\nComparing a joint system to multiple compatible ones\n----------------------------------------------------\n\n\nGiven the toy model we used above, both solutions (compatible features and the building of a single system) do not differ in terms of payoffs. However, to examine how desirable they are from an altruistic perspective and how likely they are to come about, we need to analyze them in more detail. Again, the analysis will remain at the surface level and is intended as a first stab and illustration.\n\n\n### Building a joint system seems preferable to multiple compatible ones\n\n\nRestricting our perspective to the problem discussed in this article, developers building a joint system is preferable since it completely obviates any bargaining by the AI systems themselves.[34](#easy-footnote-bottom-34-6782 \"Though there might still be acausal bargaining.\") Moreover, the underlying agreement seems significantly harder to renege on. It also effectively addresses the racing problem and some other multipolar challenges introduced in [Critch, Krueger 2020](https://arxiv.org/pdf/2006.04948.pdf).\n\n\nAt the same time, it would increase the importance of solving multi (stakeholder)/single (AI system) challenges (cf. section 8 of [Critch, Krueger 2020](https://arxiv.org/pdf/2006.04948.pdf)), e.g., those related to social choice and accommodating disagreements between developers. If that turns out to be less tractable or to have worse side effects, this could sway the overall balance. The above analysis also ignores potential negative side-effects such agreements might have on the design of AI systems and the dynamics of AI development more broadly, e.g., by speeding up development in general. Analyzing these effects is beyond the scope of this article. Overall, however, I tend to believe that such an agreement would be desirable, especially in a close race.\n\n\n### It seems unclear whether developers will build a joint system or settle for multiple compatible ones\n\n\nIt seems to me that two factors are most likely to determine the choice of developers[35](#easy-footnote-bottom-35-6782 \"In the case of commercial developers, legal constraints may also play a decisive role. Mergers & acquisitions as well as self-regulation/coordination are subject to antitrust regulation and rulings (see here, for instance).\"): (1) the consequences of each mode of coordination for the anticipated payoffs attained by the AI systems after deployment and (2) the transaction costs incurred by bringing about either of the two options prior to deployment.[36](#easy-footnote-bottom-36-6782 \"These two factors could be integrated into a single payoff value. 
The conceptual distinction is still analytically helpful.\")\nIt’s plausible that the post-deployment payoffs will be overwhelmingly important, especially if developers appreciate the astronomical stakes involved. Nevertheless, transaction costs may still be important to the extent that developers are not as far-sighted and suffer from scope neglect.\n\n\nUnderstanding the differences in payoffs would require a more comprehensive version of the analysis attempted in the previous section and the motivations of the developers in question. For instance, if the argument of the previous section holds, altruistically inclined developers would see higher payoffs associated with building a single system compared to an agreement to build compatible systems.[37](#easy-footnote-bottom-37-6782 \"For instance, there is good reason to believe that OpenAI added the “Assist Clause” to their charter not to ensure their own success as an organization but to prevent a development race, which could be disastrous from an impartial perspective.\") On the other hand, competing national projects may be far more reluctant to join forces.\n\n\nMore general insights can be gleaned when it comes to transaction costs. The most common analytical lens for predicting what kinds of transactions agents will make is [new institutional economics](https://en.wikipedia.org/wiki/New_institutional_economics) (NIE).[38](#easy-footnote-bottom-38-6782 \"In principle, it also allows us to make more precise predictions about whether to expect a coordinated outcome in the first place because it allows for less idealized conditions. Concretely, developers will find coordination worth it if the (estimated) transaction costs required to bring about any given coordination outcome are lower than the (estimated) benefits of the coordination outcome over their best alternative to a negotiated agreement (BATNA). Actually analyzing this for the case at hand is beyond the scope of this article.\") Where game-theoretic models often abstract away such costs through idealization assumptions, NIE acknowledges that agents have cognitive limitations, lack information, and struggle to monitor and enforce agreements. This results in different transaction costs for different contractual arrangements, which influences which one is picked. This perspective can shed light on the question of whether to collaborate using the market to contract (e.g., buying, selling) or whether to collaborate using hierarchy & governance (e.g., regular employment, mergers). In our case, these transaction types are represented by agreeing to use compatible features and by agreeing to build a joint system, respectively.\n\n\nTransaction costs are often grouped into three categories[39](#easy-footnote-bottom-39-6782 \"This list is not intended to be exhaustive. It only covers commonly discussed types of transaction costs.\"):\n\n\n* search costs (e.g., finding the cheapest supplier)\n* bargaining costs (e.g., negotiating the details of the contract)\n* governance & enforcement costs (e.g., setting up mechanisms for communication, monitoring behavior, and punishing defections)[40](#easy-footnote-bottom-40-6782 \"Especially in the international relations literature, supranational structures are usually only discussed as solutions for monitoring and enforcement in the face of opportunism, which does not arise for coordination problems (e.g., Sandler, Cauley 1977). 
Such analysis is applied, for instance, to the question of empire/alliance formation (Lake 1996) or the formation of the single European market (e.g., Garrett 1992, Garrett 1995).\")\n\n\nOn the face of it, this lens suggests that all else equal, actors would prefer to find compatible features over agreeing to build a single system because the costs for the former seem lower than the ones for the latter[41](#easy-footnote-bottom-41-6782 \"In what follows, I assume that actors can make sufficiently accurate estimates of the transaction costs involved. Lipson (2004) discusses this assumption in the context of international relations.\"):\n\n\n* I expect that search costs will make up a negligible fraction of the total transaction costs as the number of relevant developers will be small and probably well-known to one another. I also don’t expect them to differ significantly in the two cases we are examining. In both cases, the partners in the transactions are the same, the information required to transact will be similar, and there will be little switching of transaction partners.\n* It’s difficult to estimate differences in bargaining costs; specifying exact & appropriate technical standards is likely going to be complicated, but reaching an agreement for the institutional structure required to build a joint system may also be complicated. I expect this to depend a lot on the specifics of the respective scenarios.\n\n\n* Any agreement stipulating compatible features would have minimal enforcement costs since it would be largely self-enforcing (see above).[42](#easy-footnote-bottom-42-6782 \"This theoretic self-enforcement result abstracts away a number of real-world difficulties. For instance, actors might initially agree but renege due to hyperbolic discounting when faced with implementation costs. It further assumes that actors are unitary and have timeless preferences. Neither assumption is strictly correct. For instance, a change in leadership might change the value assigned to a previous agreement.\") Agreements to build a single system, on the other hand, would impose substantial governance costs. It would be challenging to set up or adapt the administrative structures required to ensure two previously separate teams work together smoothly.[43](#easy-footnote-bottom-43-6782 \"Their nature and extent will depend on the institutional setup agreed upon to build the single system. For instance, a contractual agreement to build a single system would probably require monitoring & enforcement mechanisms but no administrative apparatus. A merger between two labs would probably have inverse requirements. Here, I am subsuming both under “governance costs.”\")\n\n\nThis is weakly suggestive to me that transaction costs will incline developers to building compatible systems over building a joint system. Looking for case studies, this impression seems confirmed. I am not aware of any real-world examples of agreements to merge, build a single system instead of multiple, or establish a supranational structure *to solve a coordination problem*. Instead, actors seem to prefer to solve such problems through agreements and conventions. For instance, all standardization efforts fall into this category. 
Those reasons become stronger as the number of potentially relevant developers increases: the costs of coordinating the development of a joint system rise more rapidly with the number of actors than the costs of an agreement among independent developers, which will probably have very low marginal costs.\n\n\nOverall, I expect that there will be strong reasons to build a joint system if there is a small number of relevant nonstate developers who are aware of and moved by the astronomical stakes. In those cases, I would be surprised if transaction costs swayed them. I am more pessimistic in other scenarios.\n\n\nConclusion\n----------\n\n\nCoordination is not assured. Even if coordination is achieved, the outcome could still be suboptimal. This suggests that additional work on this problem would be valuable. In the next two sections, I will sketch directions for potential interventions and future research to make progress on this issue.\n\n\n### Interventions\n\n\nI will restrict this section to interventions for the governance problem sketched in this article while ignoring most technical challenges.[44](#easy-footnote-bottom-44-6782 \"See the appendix for a few research directions on making systems compatible. Critch, Krueger (2020) discuss technical challenges for building joint systems under the category multi (stakeholders)/single (AI system).\") I don’t necessarily endorse all of them without reservations as good ideas to implement. Some of them might have positive effects beyond the narrow application discussed here. Some might have (unforeseen) negative effects.\n\n\n**Increasing problem awareness**\n\n\nWithout awareness of the problem, a solution to the core problem becomes significantly less likely. Accordingly, increasing awareness of this problem among competitive developers is an important step.[45](#easy-footnote-bottom-45-6782 \"I expect there are insights to be gleaned from the research on epistemic communities for how to best do so.\") It seems particularly important to do so in a way that is accessible to people with a machine learning background. One potential avenue might be to develop benchmarks that highlight the limits on achieving cooperation among AI agents without coordination by their developers. Our work on mixed-motive coordination problems in [Stastny et al. 2021](https://longtermrisk.org/files/stastny_et_al_implicit_bargaining.pdf) is an example of ongoing work in this area.\n\n\n**Facilitating agreements**\n\n\nSome interventions can make the reaching of an agreement more likely under real-world conditions. Some reduce the transaction costs developers need to pay. Others mitigate the distributional problem they may face. I expect that many of these would also contribute to solving other bargaining problems between AI developers (e.g., finding solutions to the racing problem).\n\n\n* Setting up or improving bargaining fora (e.g., the Partnership on AI or standards bodies like the IEEE or the ISO) could help structure the bargaining process ([Fearon 1998](https://www.researchgate.net/profile/James_Fearon2/publication/4853934_Bargaining_Enforcement_and_International_Cooperation/links/53dc306a0cf216e4210c0719/Bargaining-Enforcement-and-International-Cooperation.pdf)). 
Following [Keohane (1984)](https://press.princeton.edu/books/paperback/9780691122489/after-hegemony), such institutions can also ‘cluster’ issues together, facilitating side payments and issue linkage (e.g., [McGinnis 1986](https://www.jstor.org/stable/pdf/174116.pdf)), which can help with constructing mutually beneficial bargains.\n* [Young (1989)](https://www.jstor.org/stable/2706651) suggests that salient solutions can help select one out of the many possible agreements.[46](#easy-footnote-bottom-46-6782 \"For example, in the idealized case where agents have explicit utility functions and developers coordinate on what tradeoff between their utility functions should be pursued, candidate focal points might be bargaining solutions with compelling normative properties.\") Additional research could identify such focal points to be advocated by the AI safety community.\n* [Krasner (1982)](http://ir.rochelleterman.com/sites/default/files/krasner%201982.pdf) suggests that increasing the knowledge of the relevant actors about how the most dangerous scenarios could materialize and how to prevent them could aid in actually implementing such a solution. This has often been the role of epistemic communities in facilitating international regimes & agreements (e.g., [Haas 1992](https://www.jstor.org/stable/2706951?seq=1)).\n* Facilitating agreements between the relevant actors on other issues could help build trust, formal procedures, and customs, which also seems to improve the chance of successful bargaining ([Snidal 1985](https://www.jstor.org/stable/1956241), [Krasner 1982](http://ir.rochelleterman.com/sites/default/files/krasner%201982.pdf)). The literature on confidence-building measures might also be relevant (e.g., [Landau and Landau 1997](https://onlinelibrary.wiley.com/doi/10.1002/crq.3900150204)).\n\n\n**Making development go well in the absence of problem awareness**\n\n\nIf developers are not sufficiently aware of the problem, there might still be interventions making coordination more likely.\n\n\n* “Interlab” training environments or tournaments, in which AI systems interact with one another (either during training or before deployment), could provide the feedback required to build AI systems with compatible bargaining features.\n* Requirements to test novel systems against existing ones in a boxed environment might lead to all subsequent developers adjusting to the first one. Instead of an iterative emergence, compatible features would come about as a result of pre-emption. As a downside, this might exacerbate racing by developers to deploy first to select the most advantageous equilibrium.\n\n\n**Facilitating agreements to build a joint system**\n\n\nAs I wrote above, a superficial analysis suggests that such agreements would be beneficial. If so, there might be interventions to make them more likely without causing excessive negative side-effects. 
For instance, one could restrict such efforts to tight races, as the OpenAI Assist Clause attempts to do.\n\n\n### Future work\n\n\nThere are many ways in which the analysis of this post could be extended or made more rigorous:\n\n\n* building more sophisticated game-theoretic models to analyze the coordination problem between developers (e.g., allowing for partial coordination);\n* including transaction costs in an analysis of whether developers would coordinate in the first place or whether doing so would be too costly;\n* more comprehensively comparing the transaction costs of realizing different arrangements.\n\n\nThere are also more foundational questions about takeoff scenarios relevant to this problem:\n\n\n* Are agreements to build a single system actually overall a good idea?\n* How similar to one another (in relevant respects) should we expect the AI systems in multipolar scenarios to be?\n\n\nWe can ask further questions about potential interventions:\n\n\n* What are ideal institutional arrangements, either for building a single system or for multiple compatible systems?\n* What limits does antitrust regulation place on the kind of coordination proposed in this article?\n* What insights can be gained from the literature about epistemic communities?\n\n\nAcknowledgments\n---------------\n\n\nI want to thank Jesse Clifton for substantial contributions to this article as well as Daniel Kokotajlo, Emery Cooper, Kwan Yee Ng, Markus Anderljung, and Max Daniel for comments on a draft version of this article.\n\n\nAppendix: Examples of features developers might coordinate on\n-------------------------------------------------------------\n\n\nThroughout this document, I have talked about bargaining-relevant features of AI systems that developers might coordinate on. The details of these features depend on facts about how transformative AI systems are trained, which are currently highly uncertain. For the sake of concreteness, however, here are some examples of features that AI developers might coordinate on, depending on what approach to AI development is ultimately taken:\n\n\n* A social welfare function for their systems to jointly optimize, and policies for deciding how to identify and punish defections from this agreement (see [Stastny et al. 2021](https://longtermrisk.org/files/stastny_et_al_implicit_bargaining.pdf), [Clifton, Riché 2020](https://longtermrisk.org/files/toward_cooperation_learning_games_oct_2020.pdf));\n* The details of procedures for resolving high-stakes negotiations; for instance, [collaborative game specification](https://drive.google.com/file/d/1WYNPslvkUi_0XBmQZjxfJpINGV8eg3aC/view)[47](#easy-footnote-bottom-47-6782 \"Consider as an analogy the Moscow–Washington hotline, which provided a direct communication link between the leaders of the U.S. and the Soviet Union. It was instituted after the Cuban Missile Crisis had made the need for better communication channels apparent.\") is such a method, and requires agreement (among other things) on 1) a method for combining agents’ reported models of their strategic situation and 2) a solution concept to apply to a collaboratively specified game;\n* The content of parts of a [user's manual](https://www.alignmentforum.org/posts/4JuKoFguzuMrNn6Qr/hch-is-not-just-mechanical-turk) for human-in-the-loop AI training regimes that are relevant to bargaining-related behavior. 
For instance, developers might adopt common instructions for how to give [approval](https://ai-alignment.com/model-free-decisions-6e6609f5d99e) to agents being trained in various bargaining environments;\n* The content of guidelines for how to behave in high-stakes bargaining situations, in regimes where natural language instructions are used to impose constraints on AI systems’ behavior.\n\n\n\n\n1. I do not mean to imply that this is the only risk posed by multipolar scenarios. For other ones, see for example: [Critch, Krueger 2020](https://arxiv.org/abs/2006.04948), [Zwetsloot, Dafoe 2019](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure), [Manheim 2018](https://arxiv.org/abs/1810.10862).\n2. Note that bargaining failure is not the only cause of catastrophic interactions. For instance, the interactions of Lethal Autonomous Weapon Systems [might also be catastrophic](https://forum.effectivealtruism.org/posts/oR9tLNRSAep293rr5/why-those-who-care-about-catastrophic-and-existential-risk-2#Lethal_autonomous_weapons_as_destabilizing_elements_in_and_out_of_war).\n3. Alignment only suffices if the goals of the two systems are identical, and they have common knowledge of this fact, which seems unlikely in a multipolar scenario. Working toward “social alignment”, i.e., alignment with society as a whole (as described, e.g., [here](https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1?commentId=jFX2B5E4BXQmqvTks)), or a “[homogeneous takeoff](https://www.alignmentforum.org/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios)” might make that more likely.\n4. One strand of the international relations literature argues that the failure of rational agents to bargain successfully is one explanation for wars between human nation states. See [Fearon (1995)](https://www.jstor.org/stable/2706903#metadata_info_tab_contents) for the seminal text of this perspective.\n5. These problems have been explored in the context of AI in more detail [here](https://www.alignmentforum.org/posts/Tdu3tGT4i24qcLESh/equilibrium-and-prior-selection-problems-in-multipolar-1) and in [Stastny et al. 2021](https://longtermrisk.org/files/stastny_et_al_implicit_bargaining.pdf).\n6. We are still deliberating about the appropriate terminology for this problem.\n7. There are equilibrium selection problems which do not have this more specific property. Take, for instance, the Iterated Prisoner's Dilemma: It has many equilibria, but the only cooperative one is both players playing (Cooperate, Cooperate) at every time step on the equilibrium path.\n8. This is formalized in [Stastny et al. 2021](https://longtermrisk.org/files/stastny_et_al_implicit_bargaining.pdf).\n9. This distributional problem is compounded by informational problems because the bargaining parties have an incentive to distort their private information (e.g., about their preferences) to get a better deal ([Morrow 1994](https://www.jstor.org/stable/2706964)).\n10. Now, one might object that a rational actor would realize this problem and play as conservatively as possible. One could, for instance, always accept any Pareto-optimal agreement. This behavior, however, is very exploitable and comes at a significant competitiveness cost, which makes this strategy unattractive.\n11. Outcomes are not necessarily catastrophic, but on the face of it, misperception seems much more likely to cause than prevent conflict.\n12. 
Jesse Clifton makes this point [here](https://longtermrisk.org/weak-identifiability-and-its-consequences-in-strategic-settings/).\n13. See [this comment thread](https://www.alignmentforum.org/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem?commentId=yMb5dRDDkLHQjNyki).\n14. For what it’s worth, [myopia](https://www.alignmentforum.org/tag/myopia) has been suggested as a safety technique and appropriately myopic systems might also address this problem. At the same time, however, they need to be sufficiently far-sighted to realize that future conflict could pose a catastrophic risk.\n15. Even to the extent that distributional issues remain, humans might be better suited to solve them as they have a shared evolutionary history and an increasingly shared cultural background, which is more uncertain in the case of AI systems, where it depends mostly on the [homogeneity of AI takeoff](https://www.alignmentforum.org/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios).\n16. This argument carries less force for AI systems that might still exhibit such biases. [Myopic agents](https://www.alignmentforum.org/tag/myopia) might, again, be such an example. Overall, however, I do think it’s more plausible than not that AI systems will be more scope-sensitive than humans. They are more likely to pursue an idealized version of human goals or they may modify their goals to be more scope-sensitive to improve their bargaining position.\n17. It could, however, lead to the failed exploitation of some significant fraction of the cosmic endowment. By some totalist value systems, this may be a tragedy not worth risking.\n18. One unilateral solution is for developers to make their systems maximally conservative, i.e., to follow a policy of accepting any agreement that is proposed to them. Such exploitable systems, however, would probably not be acceptable to developers, and as soon as systems are not maximally conservative, there is room for bargaining failure. (Also see footnote 10.)\n19. [Critch, Krueger (2020)](https://arxiv.org/abs/2006.04948) discuss this under the heading of multi/single delegation.\n20. There are some structural differences between the two solutions. For instance, agreements to use mutually compatible features might allow partial coordination because the option space might not be discrete ([Snidal 1985](https://www.jstor.org/stable/1956241)). This is not the case when agreeing to build a single system. However, these differences are unimportant for the subsequent analysis.\n21. Analyses of this simple game can be found in [Stein 1982](https://www.jstor.org/stable/2706524#metadata_info_tab_contents), who calls this a “dilemma of common aversion”, and in [Snidal 1985](https://www.jstor.org/stable/1956241).\n22. If the game is indeed more appropriately modeled as a game of pure coordination due to the uncertainty of the developers as suggested by the previous section, coordination is assured conditional on developers being aware of the problem, communication being possible, and coordination being sufficiently cheap. So I will not discuss this option further.\n23. Such credible commitments could, for instance, be achieved through transparency tools.\n24. See [Stastny et al. 2021](https://longtermrisk.org/files/stastny_et_al_implicit_bargaining.pdf) for an example of such failure.\n25. The UK did not have capacity for its own atomic weapon program. Joining forces with the US was their only viable path to an atomic weapon in the short-term. 
The US seems to have believed that the British could provide important help for some parts of the Manhattan project.\n26. NASA could not secure sufficient capital for the ISS domestically. So they pushed for involving European allies in the project. Russia was approached for their operational and technical experience with space stations, which was unique at the time ([Lambright, Schaefer 2004](https://muse.jhu.edu/article/53693)).\n27. It seems like the sharing of the high cost was the main reason for collaboration ([Johnman, Lynch 2002a](https://academic.oup.com/tcbh/article-abstract/13/3/253/1692823?redirectedFrom=PDF), [Johnman, Lynch 2002b](https://www.jstor.org/stable/20081830?seq=12#metadata_info_tab_contents)).\n28. Facilities for high-energy particle physics were simply too expensive for any one European country at the time. As all of them were interested in preventing brain drain, pooling resources was in their mutual interest. It also seems like CERN served as a [confidence-building measure](https://en.wikipedia.org/wiki/Confidence-building_measures#:~:text=Confidence%2Dbuilding%20measures%20(CBMs),in%20a%20situation%20of%20conflict.) in post-war Europe ([Schukraft 2004](https://arxiv.org/pdf/physics/0602099.pdf)). However, I would be surprised if this factor played a large role in the potential joining of national AI labs due to the strategic nature of the technology.\n29. Note that often this is done to beat another competitor since unilateral development would usually become feasible in due time.\n30. This is less relevant for national projects because governments face no realistic risk of ruin.\n31. ITER might have been an example of this, but so far I have not been able to find any reliable sources on the reasons for collaboration on this project.\n32. [Sandler, Cauley (1977)](https://www.jstor.org/stable/2600296?seq=10#metadata_info_tab_contents) discuss this rationale.\n33. On the other hand, such mergers lead to technology diffusion, which is costly or even impossible to prevent.\n34. Though there might still be [acausal bargaining](https://www.lesswrong.com/tag/acausal-trade).\n35. In the case of commercial developers, legal constraints may also play a decisive role. Mergers & acquisitions as well as self-regulation/coordination are subject to antitrust regulation and rulings (see [here](https://cullenokeefe.com/blog/antitrust-compliant-ai-industry-self-regulation), for instance).\n36. These two factors could be integrated into a single payoff value. The conceptual distinction is still analytically helpful.\n37. For instance, there is good reason to believe that OpenAI added the “Assist Clause” to [their charter](https://openai.com/charter/) not to ensure their own success as an organization but to prevent a development race, which could be disastrous from an impartial perspective.\n38. In principle, it also allows us to make more precise predictions about whether to expect a coordinated outcome in the first place because it allows for less idealized conditions. Concretely, developers will find coordination worth it if the (estimated) transaction costs required to bring about any given coordination outcome are lower than the (estimated) benefits of the coordination outcome over their best alternative to a negotiated agreement (BATNA). Actually analyzing this for the case at hand is beyond the scope of this article.\n39. This list is not intended to be exhaustive. It only covers commonly discussed types of transaction costs.\n40. 
Especially in the international relations literature, supranational structures are usually only discussed as solutions for monitoring and enforcement in the face of opportunism, which does not arise for coordination problems (e.g., [Sandler, Cauley 1977](https://www.jstor.org/stable/2600296?seq=10#metadata_info_tab_contents)). Such analysis is applied, for instance, to the question of empire/alliance formation ([Lake 1996](https://www.jstor.org/stable/2706997)) or the formation of the single European market (e.g., [Garrett 1992](https://www.jstor.org/stable/2706862), [Garrett 1995](https://www.jstor.org/stable/2706870)).\n41. In what follows, I assume that actors can make sufficiently accurate estimates of the transaction costs involved. [Lipson (2004)](https://www.jstor.org/stable/pdf/3186537.pdf) discusses this assumption in the context of international relations.\n42. This theoretic self-enforcement result abstracts away a number of real-world difficulties. For instance, actors might initially agree but renege due to hyperbolic discounting when faced with implementation costs. It further assumes that actors are unitary and have timeless preferences. Neither assumption is strictly correct. For instance, a change in leadership might change the value assigned to a previous agreement.\n43. Their nature and extent will depend on the institutional setup agreed upon to build the single system. For instance, a contractual agreement to build a single system would probably require monitoring & enforcement mechanisms but no administrative apparatus. A merger between two labs would probably have inverse requirements. Here, I am subsuming both under “governance costs.”\n44. See the appendix for a few research directions on making systems compatible. [Critch, Krueger (2020)](https://arxiv.org/abs/2006.04948) discuss technical challenges for building joint systems under the category *multi (stakeholders)/single (AI system)*.\n45. I expect there are insights to be gleaned from the research on epistemic communities for how to best do so.\n46. For example, in the idealized case where agents have explicit utility functions and developers coordinate on what tradeoff between their utility functions should be pursued, candidate focal points might be [bargaining solutions](https://en.wikipedia.org/wiki/Cooperative_bargaining#Bargaining_solutions) with compelling normative properties.\n47. Consider as an analogy the [Moscow–Washington hotline](https://en.wikipedia.org/wiki/Moscow%E2%80%93Washington_hotline), which provided a direct communication link between the leaders of the U.S. and the Soviet Union. It was instituted after the Cuban Missile Crisis had made the need for better communication channels apparent.", "url": "https://longtermrisk.org/coordination-challenges-for-preventing-ai-conflict/", "title": "Coordination challenges for preventing AI conflict", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2021-03-08T23:00:00Z", "authors": ["Stefan Torges"], "summary": [], "id": "9cb9e0ab30e63f2dc98b62a004f4cf4e"} {"text": "Differential Intellectual Progress as a Positive-Sum Project\n============================================================\n\n\n\n29 August 2015\nby [Brian Tomasik](https://longtermrisk.org/author/brian-tomasik/ \"Posts by Brian Tomasik\")\n\nFirst written: 23 Oct. 2013; Last update: 21 Dec. 
2015\n\n Fast technological development carries a risk of creating extremely powerful tools, especially AI, before society has a chance to figure out how best to use those tools in positive ways for many value systems. Suffering reducers may want to help mitigate the arms race for AI so that AI developers take fewer risks and have more time to plan for how to avert suffering that may result from the AI's computations. The AI-focused work of the [Machine Intelligence Research Institute](http://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute) (MIRI) seems to be one important way to tackle this issue. I suggest some other, broader approaches, like advancing philosophical sophistication, cosmopolitan perspective, and social institutions for cooperation.\n\n\nAs a general heuristic, it seems like advancing technology may be net negative, though there are plenty of exceptions depending on the specific technology in question. Probably advancing social science is generally net positive. Humanities and pure natural sciences can also be positive but probably less per unit of effort than social sciences, which come logically prior to everything else. We need a more peaceful, democratic, and enlightened world before we play with fire that could cause potentially permanent harm to the rest of humanity's future.\n\n\n### Other versions\n\n\n\n[![](/files/pdf-icon.png)](https://longtermrisk.org/files/Differential_Intellectual_Progress_as_a_Positive_Sum_Project.pdf)\n\n[![](/files/mp3-icon.png)\nAudio podcast](https://longtermrisk.org/files/DiffIntelProg.mp3)\n\nContents\n\n+ [Other versions](#Other_versions)\n\n* [Introduction](#Introduction)\n* [Encouraging more reflection](#Encouraging_more_reflection)\n* [Ideas for improving reflectiveness](#Ideas_for_improving_reflectiveness)\n\t+ [Liberal-arts education](#Liberal-arts_education)\n\t+ [Big-picture, cosmopolitan thinking](#Big-picture_cosmopolitan_thinking)\n\t+ [Effective altruism](#Effective_altruism)\n\t+ [Improved public-policy epistemology??](#Improved_public-policy_epistemology)\n* [Are these meta things cost-effective?](#Are_these_meta_things_cost-effective)\n* [Idealism meets competitive constraints](#Idealism_meets_competitive_constraints)\n* [Areas where the sign is unclear](#Areas_where_the_sign_is_unclear)\n\t+ [Faster technology](#Faster_technology)\n\t+ [Education](#Education)\n\t+ [Cognitive enhancement](#Cognitive_enhancement)\n\t+ [Transhumanism](#Transhumanism)\n\t+ [Economic growth](#Economic_growth)\n\t\t- [Wars and arms races may dominate](#Wars_and_arms_races_may_dominate)\n* [There are many exceptions](#There_are_many_exceptions)\n* [Technologies that are probably bad to accelerate](#Technologies_that_are_probably_bad_to_accelerate)\n\t+ [Computer hardware](#Computer_hardware)\n\t+ [Artificial consciousness](#Artificial_consciousness)\n* [Caveats: When are changes actually positive-sum?](#Caveats_When_are_changes_actually_positive-sum)\n\t+ [Positive-sum in resources does not mean positive-sum in utility](#Positive-sum_in_resources_does_not_mean_positive-sum_in_utility)\n\t+ [Are changes determined by fractions of people or by absolute numbers?](#Are_changes_determined_by_fractions_of_people_or_by_absolute_numbers)\n* [See also](#See_also)\n\nIntroduction\n------------\n\n\n\n> The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom. 
--[Isaac Asimov](http://en.wikiquote.org/wiki/Talk:Isaac_Asimov)\n> \n> \n> The unleashed power of the atom has changed everything save our modes of thinking [...]. --[Albert Einstein](http://en.wikiquote.org/wiki/Albert_Einstein)\n> \n> \n\n\nTechnology is an inherently double-edged sword: With great power comes great responsibility, and discoveries that we hope can help sentient creatures also have the potential to result in massive suffering. João Pedro de Magalhaes calls this \"[Alice's dilemma](http://jp.senescence.info/thoughts/futures04.pdf)\" and notes that \"in the same way technology can save lives and enrich our dreams, it can destroy lives and generate nightmares.\"\n\n\nIn \"[Intelligence Explosion: Evidence and Import](http://intelligence.org/files/IE-EI.pdf),\" Luke Muehlhauser and Anna Salamon propose \"[differential intellectual progress](http://wiki.lesswrong.com/wiki/Differential_intellectual_progress)\" as a way to reduce risks associated with development of artificial intelligence. From *[Facing the Intelligence Explosion](http://intelligenceexplosion.com/2012/ai-the-problem-with-solutions/)*:\n\n\n\n> Differential intellectual progress consists in prioritizing risk-*reducing* intellectual progress over risk-*increasing* intellectual progress. As applied to AI risks in particular, a plan of differential intellectual progress would recommend that our progress on the scientific, philosophical, and technological problems of AI *safety* outpace our progress on the problems of AI *capability* [...].\n> \n> \n\n\nI personally would replace \"risk\" with \"suffering\" in that quote, but the general idea is clear.\n\n\nEncouraging more reflection\n---------------------------\n\n\nDifferential intellectual progress is important beyond AI, although because AI is likely to control the future of Earth's light cone absent a catastrophe before then, ultimately all other applications matter through their influence on AI.\n\n\nAt a very general level, I think it's important to inspire deeper philosophical circumspection. The world is extremely complex, and making a positive impact requires a lot of [knowledge](http://www.utilitarian-essays.com/education.html) and thought. We need more minds exploring big-picture questions like\n\n\n* What kinds of futures do we want to see and want to avoid? What are their probabilities?\n* How much control do we have over different aspects of the future? Which are mostly inevitable and which are more path-dependent?\n* How can we avoid overconfidence and optimism bias in our expectations? Are there interventions that can be helpful across a broad range of possible scenarios?\n* What political, social, and cultural institutions can we build to more reliably promote mutually beneficial cooperation?\n\n\nAs these questions suggest, greater reflectiveness by humanity can be a positive-sum (i.e., Pareto-improving) enterprise, because a more slow, deliberative, and clear-headed world is one in which all values have better prospects for being realized. In an [AI arms race](http://wiki.lesswrong.com/wiki/AI_arms_race), there's pressure to produce *something* that can win, even if it's much less good than what your team would ideally want and gives no consideration to what the other teams want. If the arms race can be constrained, then there's more time to engage in positive-sum compromise on how AI should be shaped. 
This benefits all parties in expectation, including suffering reducers, because AIs built in a hurry are less likely to include safety measures against sentient science simulations, [suffering subroutines](http://www.utilitarian-essays.com/reinforcement-learning.html), and so on.\n\n\nIdeas for improving reflectiveness\n----------------------------------\n\n\nMIRI does important work on philosophical and strategic issues related to AI and has written much on this topic. Below I discuss some other, broader approaches to differential intellectual progress, but in general, it's plausible that MIRI's direct focus on AI is among the most effective.\n\n\n### Liberal-arts education\n\n\nThe social sciences and humanities contain a wealth of important insights into human values, strategies for pro-social behavior, and generally what philosopher Nick Bostrom [calls](http://www.nickbostrom.com/) \"crucial considerations\" for understanding how the universe works and how to make a positive impact on it. It's good to encourage people to explore this material, such as through liberal-arts education.\n\n\n[Ralph Nader](http://www.nationalchurchillmuseum.org/ralph-nader-green-lecture.html):\n\n\n\n> The liberal arts are really the core of higher education. Vocational education is an instrument, but the liberal arts represent the best of our values and they develop of critical thinking[. ...T]he liberal arts and the humanities and social sciences are so critical when higher education is often viewed primarily as vocational.\n> \n> \n\n\nOf course, a pure focus on humanities or social sciences is not a good idea either, because the hard sciences teach a clarity of thinking that can [dissolve](http://lesswrong.com/lw/of/) some of the confusions that afflict standard philosophy. Moreover, since one of the ultimate goals is to shape technological progress in more positive and cooperative directions, reflective thinkers need a deep understanding of science and technology, not just of David Hume and Peter Singer.\n\n\n### Big-picture, cosmopolitan thinking\n\n\nBeyond what students learn in school, there's opportunity to expand people's minds more generally. When scientists, policy makers, voters, and other decision-makers are aware of more ways of looking at the world, they're more likely to be open-minded and consider how their actions affect all parties involved, even those who may feel differently from themselves. Tolerance and cosmopolitan understanding seem important for reducing zero-sum \"us vs. them\" struggles and realizing that we can learn from each other's differences -- both intellectually and morally.\n\n\n[TED talks](http://www.ted.com/talks), [Edge](http://www.edge.org/), and thousands of other forums like these are important ways to expand minds, advance social discourse on big-picture issues, and hopefully, knock down boundaries between people.\n\n\nWhile science popularization helps inform non-experts of what's coming and thereby advance insight into crucial considerations for how to proceed, it also carries the risk of simultaneously encouraging more people to go into scientific fields and produce discoveries faster than what society can handle. 
The net balance is not obvious, though I would guess that for many \"pure\" sciences (math, physics, ecology, paleontology, etc.), the net balance is positive; for those with more technological application (computer science, neuroscience, and of course, AI itself), the question is murkier.\n\n\n### Effective altruism\n\n\nExpanding the [effective-altruist](http://en.wikipedia.org/wiki/Effective_altruism) (EA) movement is another positive-sum activity, in the sense that EAs aim to help answer important questions about how best to shape the future in ways that can benefit many different groups. Of course, the movement is obviously just one of many within the more global picture of efforts to improve the world, and it's important to avoid insular \"EA vs. non-EA\" dichotomies.\n\n\n### Improved public-policy epistemology??\n\n\nCarl Shulman [suggests](http://lesswrong.com/lw/i7p/how_does_miri_know_it_has_a_medium_probability_of/9ixu) the following ideas:\n\n\n\n> \n> * Enhance decision-making and forecasting capabilities with things like the IARPA forecasting tournaments, science courts, etc, to improve reactions to developments including AI and others (recalling that most of the value of MIRI in [Eliezer Yudkowsky's] model comes from major institutions being collectively foolish or ignorant regarding AI going forward)\n> * Prediction markets, meta-research, and other institutional changes[.]\n> \n> \n> \n\n\nThese and related proposals would indirectly speed technological development, which is a counter-consideration. Also, if used by militaries, could they accelerate arms races? Even if positive, it's not clear these approaches have the same value for negative-leaning utilitarians specifically as the other, more philosophical interventions, which seem more likely to encourage compassion and tolerance.\n\n\nAre these meta things cost-effective?\n-------------------------------------\n\n\nIs encouraging philosophical reflection in general plausibly competitive with more direct work to explore the philosophical consequences of AI? My guess is that direct work like MIRI's is more important per dollar. That said, I doubt the difference in cost-effectiveness is vast, because everything in society has [flow-through effects](http://blog.givewell.org/2013/05/15/flow-through-effects/) on everything else, and as people become more philosophically sophisticated and well-rounded, they have a better chance of identifying the most important focus areas, of which AI philosophy is just one. Another important focus area could be, for example, designing international political structures that can make cooperative work on AI possible, thereby reducing the deadweight loss of unconstrained arms race. There are probably many more such interventions yet to be explored, and generally encouraging more thought on these topics is one way to foster such exploration.\n\n\nPart of my purpose in this discussion was not to propose a highly optimized charitable intervention but merely to suggest some tentative conclusions about how we should regard the side-effects of other things we do. For example, should I Like intellectually reflective material on Facebook and YouTube? Probably. Should I encourage my cousin to study physics + philosophy or electrical engineering? These considerations push slightly more for physics + philosophy than whatever your prior recommendation might have been. 
And so on.\n\n\nIdealism meets competitive constraints\n--------------------------------------\n\n\nMany of the ideas suggested in this piece are cliché -- observations made at graduation ceremonies or moralizing TV programs, about expanding people's minds so that they can better work together in harmony. Isn't this naïve? The future is driven by economic competition, power politics, caveman emotions, and other large-scale evolutionary pressures, so can we really make a difference just by changing hearts and minds?\n\n\nIt's true that much of the future is probably out of our control. Indeed, much of the present is out of our control. Even political leaders are often constrained by lobbyists, donors, and popularity ratings. But a politician's personal decisions can have some influence on outcomes, and of course, the opinions and wealth distribution of the electorate and donors are themselves influenced by ideas in society.\n\n\nMany social norms arise from convention or expediency, due to the fact that beliefs often follow action rather than precede it. Still, there is certainly leeway in the memes toward which society gravitates, and we can tug on those memes, either directly or indirectly. The founders of the world's major religions had an immense and non-inevitable impact on the course of history. The same is true for other writers and thinkers from the past and present.\n\n\nAnother consideration is that we don't want selective reflectiveness. For example, suppose those currently pursuing fast technological breakthroughs kept going at the same pace, while the rest of society slowed down to think more carefully about how to proceed. This would potentially make things *worse* because then circumspection would have less chance of winning the race. Rather, what we'd like to see is an across-the-board recognition of the need for exploring the social and philosophical side of how we want to use future technology -- one that can hopefully influence all parties in all countries.\n\n\nAs a specific example, say the US slowed down its technological growth while China did not. China currently cares less about animal welfare and generally has more authoritarian governance, so even from a non-ethnocentric viewpoint, it could be slightly worse for China to control the future. But my guess is that this consideration is very small compared with the direct, potentially adverse effect of faster technology on the whole planet, especially since most non-military technological progress isn't confined within national boundaries. China could catch up to America's level of humane concern in a few decades anyway, and the bigger issue seems to be how fast the world as a whole moves. Also, in the case of military technology, the US tends to set the pace of innovation, and probably slower US military-tech growth would reduce the pressure for military-tech development by other countries.\n\n\nAreas where the sign is unclear\n-------------------------------\n\n\n### Faster technology\n\n\nIt's not always the case that accelerated technology is more dangerous. For example, faster technology in certain domains (e.g., the Internet that made Wikipedia possible) accelerates the spread of wisdom. Discoveries in science can help us reduce suffering faster in the short term and improve our assessment for which long-term trajectories humanity should pursue. And so on. Technology is almost always a mixed bag in what it offers, and faster growth in some areas is probably very beneficial. 
However, from a macro perspective, the sign is less clear.\n\n\n### Education\n\n\nPromoting education wholesale is another double-edged sword because it speeds up technology as well as wisdom. However, differentially advancing cross-disciplinary and philosophically minded education seems generally like a win for many value systems at once, including suffering reduction.\n\n\n### Cognitive enhancement\n\n\nIn \"[Intelligence Amplification and Friendly AI](http://lesswrong.com/lw/iqi/intelligence_amplification_and_friendly_ai/)\", Luke Muehlhauser enumerates arguments why improving cognitive abilities might help and hurt chances for controlled AI. Nick Bostrom reviews similar considerations in Ch. 14 of *Superintelligence: Paths, Dangers, Strategies*.\n\n\n### Transhumanism\n\n\nBenefits:\n\n\n* Transhumanists recognize the importance of thinking about the future ahead of time.\n* They care to some degree about risks of future suffering that may unfold.\n\n\nDrawbacks:\n\n\n* Transhumanists often want to accelerate the future, perhaps due to starry-eyed optimism.\n* Transhumanists typically support colonizing space and spreading sentience far and wide, even though this likely will mean a massive increase in expected suffering.\n\n\n### Economic growth\n\n\nA similar double-edged sword is economic growth, though perhaps less dramatically. One primary effect of economic growth is technological growth, and insofar as we need more time for reflection, this [seems to be a risk](http://lesswrong.com/lw/hoz/do_earths_with_slower_economic_growth_have_a/). On the other hand, economic growth has several consequences that are more likely positive, [such as](https://www.facebook.com/yudkowsky/posts/10151665252179228?comment_id=27306965&offset=0&total_comments=26)\n\n\n* Increasing international trade, with the side effect of making people more sympathetic to those of other nationalities and reducing odds of inter-country warfare\n* Promoting democracy, which is a powerful way to resolve disputes among conflicting factions\n* Enhancing stability and therefore concern for longer-term outcomes, with reduced unilateral risk-taking\n* Allowing for more intellectual awareness and reflection on important questions generally.\n\n\nThat said, these seem like properties that result from the *absolute amount* of economic output rather than the *growth rate* of the economy. It's not controversial that a richer world will be more reflective, but the question is whether the world would be more reflective *per unit of GDP* if it grew faster or slower. (Note: In the following figure, the x-axis represents \"GDP and/or technology\", not \"GDP divided by technology\".)\n\n\n![](https://longtermrisk.org/files/wisdom-vs-growth.jpg \"How does wisdom per unit of GDP and technology depend on the growth rate of that GDP and technology? (I release this image into the public domain worldwide.)\")\n\n\nAs a suggestive analogy, slower-growing crystals [have fewer defects](http://www.ch.ntu.edu.tw/~sfcheng/HTML/material94/Crystal_growth.pdf). More slowly dropping the temperature in a simulated-annealing algorithm [allows for](https://en.wikipedia.org/wiki/Simulated_annealing#The_annealing_schedule) finding better solutions. In the case of economic growth, one might say that if people have more time to adapt to a given level of technological power, they can make conditions better before advancing to the next level. 
So, for example, if the [current trends](https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Nature) toward lower levels of global violence continue, we'd rather wait longer for growth, so that the world can be more peaceful when it happens. Of course, some of that trend toward peace may itself be due to economic growth.\n\n\nImagine if people in the Middle Ages developed technology very rapidly, to the verge of building general AI. Sure, they would have improved their beliefs and institutions rapidly too, but those improvements wouldn't have been able to compete with the centuries of additional wisdom that our actual world got by waiting. The Middle-Age AI builders would have made worse decisions due to less understanding, less philosophical sophistication, worse political structures, worse social norms, etc. The arc of history is almost monotonic toward improvements along these important dimensions.\n\n\nA counterargument is that conditions are pretty good right now, and if we wait too long, they might go in worse directions in the meantime, such as because of another Cold War between the US and China. Or, maybe faster economic growth means more trade sooner, which helps prevent wars in the short run. (For example, would there not have been a Cold War if the US and Soviet Union had been important trading partners?) A friend tells me that Peter Thiel believes growth is important for cooperation because in a growth scenario, incentives are positive-sum, while in stagnation, they're more zero-sum. Carl Shulman [notes](http://reflectivedisequilibrium.blogspot.com/2013/12/current-thoughts-on-nuclear-war-as.html \"\\\"Current thoughts on nuclear war as an existential risk\\\"\") that \"Per capita prosperity and growth in per capita incomes are [associated](http://www.amazon.com/The-Moral-Consequences-Economic-Growth/dp/1400095719) with more liberal postmaterialist [values](http://www.overcomingbias.com/2009/11/key-disputed-values.html), stable democracy, and peace.\" Faster growth by means other than higher birth rates might increase GDP per capita because growth would happen more rapidly than population could keep up.\n\n\nSuppose AI would arrive when Earth reached some specific level of GDP. Then even if we saw that faster growth correlated with faster increases in tolerance, cooperation, and wisdom, this wouldn't necessarily mean we should push for faster growth. The question is whether some percent increase in GDP gives more increase in wisdom when the growth is faster or slower.\n\n\nAlternatively, in a model where AI arrives after some amount of cumulative GDP history for Earth, regardless of whether there has been growth, then if zero GDP growth meant zero moral growth (which is obviously unrealistic), then we'd prefer to have more GDP growth so that we'd have more wisdom when AI arrived.\n\n\nAnother relevant consideration Carl Shulman [pointed out](https://www.facebook.com/yudkowsky/posts/10151665252179228?comment_id=27306983&offset=0&total_comments=25) is that growth in AI technology specifically may only be loosely coupled with economic growth overall. Indeed, if slower growth caused wars that triggered AI arms races, then slower economic growth would mean faster AI. Of course, some take the opposite view: Environmentalists might claim that faster growth would mean more future catastrophes like climate change and water shortages, and these would lead to *more* wars. The technologists then reply that faster growth means faster ways to mitigate environmental catastrophes. 
And so on.\n\n\nAlso, a certain level of economic prosperity is required before a country can even begin to amass dangerous weapons, and sometimes an economic downturn can push the balance toward \"butter\" [rather than](https://en.wikipedia.org/wiki/Guns_versus_butter_model) \"guns.\" David E. Jeremiah [predicted](http://www.zyvex.com/nanotech/nano4/jeremiahPaper.html) that \"Conventional weapons proliferation will increase as more nations gain the wealth to utilize more advanced technology.\" In the talk \"Next steps in nuclear arms control,\" Steven Pifer [suggested](http://www.youtube.com/watch?v=p8cuKPFSm6M&feature=youtu.be&t=18m32s) that worsening economic circumstances might incentivize Russia to favor disarmament agreements to reduce costly weapons that it would struggle to pay for. Of course, an opposite situation [might also happen](https://web.archive.org/web/20161106155952/http://felicifia.org/viewtopic.php?t=760): If the budget is tight, a country might, when developing new technologies, strip away the \"luxuries\" of risk analysis, making sure the technologies are socially beneficial, and so on.\n\n\nMore development by Third World countries [could mean](https://www.reddit.com/r/IRstudies/comments/3jk0ks/is_the_economic_development_of_the_global_south/ \"'Is the economic development of the global South likely to increase or decrease the prospects for international cooperation and peace?'\") that more total nations are able to compete in technological arms races, making coordination harder. For instance, many African nations are probably too poor to pursue nuclear weapons, but slightly richer nations like India, Pakistan, and Iran can do so. On the other hand, development by poor nations could mean more democracy, peace, and inclination to join institutions for global governance.\n\n\nThe upshot is unclear. In any event, even if faster economic growth were positive, it seems unlikely that advancing economic growth would be the most cost-effective intervention in most cases, especially since there are strong competitive and political pressures pushing for it already. Of course, there are some cases where the political pressures are stronger the other way (e.g., in opposing open borders for immigrants), when there's a perceived conflict between national and global economic pie.\n\n\nAlso, while the effects of \"economic growth\" as an abstract concept may be rather diffuse and double-edged, any particular intervention to increase economic growth is likely to be targeted in a specific direction where the differential impact on technology vs. wisdom is more lopsided.\n\n\n#### Wars and arms races may dominate\n\n\n[Quoting](http://lesswrong.com/lw/hoz/do_earths_with_slower_economic_growth_have_a/9590) Kawoomba on LessWrong:\n\n\n\n> R&D, especially foundational work, is such a small part of worldwide GDP that any old effect can dominate it. 
For example, a \"cold war\"-ish scenario between China and the US would slow economic growth -- but strongly speedup research in high-tech dual-use technologies.\n> \n> \n> While we often think \"Google\" when we think tech research, we should mostly think DoD in terms of resources spent -- state actors traditionally dwarf even multinational corporations in research investments, and whether their [investments] are spurned or spurred by a slowdown in growth (depending on the non-specified cause of said slowdown) is anyone's guess.\n> \n> \n\n\nLuke\\_A\\_Somers [followed up](http://lesswrong.com/lw/hoz/do_earths_with_slower_economic_growth_have_a/95hf):\n\n\n\n> Yes - I think we'd be in much better shape with high growth and total peace than the other way around. Corporations seem rather more likely to be satisfied with tool AI (or at any rate AI with a fixed cognitive algorithm, even if it can learn facts) than, say, a nation at war.\n> \n> \n\n\nThe importance of avoiding conflict and arms races is elaborated in \"[How Would Catastrophic Risks Affect Prospects for Compromise?](http://utilitarian-essays.com/catastrophic-risks-and-compromise.html)\"\n\n\nIn general, warfare is a major source of \"lost surplus\" for many value systems, because costs are incurred by each side, resources are wasted, and the race may force parties to take short-sighted actions that have possibly long-term consequences for reducing surplus in the future. Of course, it seems like many consequences of war would be temporary; I'm not sure how dramatic the \"permanently lost future surplus\" concern is.\n\n\nIt's not obvious that economic growth would reduce the risk of arms races. Among wealthy countries it might, since more trade and prosperity generally lead to greater inter-dependence and tolerance. On the other hand, more wealth also implies more disposable income to spend on technology. Economic growth among the poorest countries could exacerbate arms races, because as more countries develop, there would be more parties in competition. (For instance, there's no risk of arms races between the developed world and poor African nations in the near future.) But international development might also accelerate global coordination.\n\n\nThere are many exceptions\n-------------------------\n\n\nMy assessments in the previous section are extremely broad generalizations. They're akin to the claim that \"girls are better at language than boys\" -- true on average, but the distributions of individual measurements have huge overlap. Likewise with my statements about technology and social institutions: There are plenty of advances in each category that are very good and plenty that are very bad, and the specific impact of an activity may be very different from the average impact of the category of which it's a part. The main reason to generalize about categories as a whole is in order to make high-level assessments about policies, like \"Should we support more funding of engineering programs in the US?\" When evaluating a particular activity, like what you do for your career, a specific analysis of that activity will be far more helpful than just labeling it \"technology\" or \"social science\".\n\n\nTechnologies that are probably bad to accelerate\n------------------------------------------------\n\n\n### Computer hardware\n\n\nIn *Superintelligence* (Ch. 
14), Bostrom outlines reasons why faster hardware is likely to make AI control harder:\n\n\n* It may accelerate general AI, giving less time for reflection and cooperation.\n* It may favor more brute-force and less transparent forms of AI, which seem harder to predict and align with our values. (I would add that this is debatable depending on how the brute force was applied. Brain emulations are a type of brute-force AI that may actually be easier to control. Even minds evolved via genetic algorithms might resemble humans in important ways, more so than strictly mathematical AIs.)\n* It may create a \"computing overhang\", i.e., more hardware capacity than software know-how for developing AI. That means that when crucial insights for AI software are developed, the takeoff is likely to be more abrupt.\n* It would lower the resource requirements for creating general AI, potentially allowing more parties to enter an AI arms race, including more extreme groups.\n* While some computer technologies like the Internet may accelerate wisdom, it's unclear how much marginal hardware improvements would further contribute along such dimensions.\n\n\n### Artificial consciousness\n\n\n[Artificial consciousness](https://en.wikipedia.org/wiki/Artificial_consciousness) seems net harmful to advance because\n\n\n* It helps accelerate AI in general.\n* It's better to wait until society is wiser and more humane before conscious computer agents are developed. For instance, imagine violent video games that are marketed for their ability to generate conscious, lifelike enemies.\n\n\n[Steve Grand](https://en.wikipedia.org/wiki/Steve_Grand) defended his work toward artificially conscious creatures [on the following grounds](http://www.ruairidonnelly.com/consciousness-is-for-life-not-for-christmas/#comment-715 \"a comment on the post \\\"Consciousness is For Life (Not For Christmas)\\\"\"):\n\n\n\n> This is what I care about. I want to help us find out what it means to be conscious and I want to challenge people to ask difficult questions for themselves that they can’t do with natural life because of their unquestioned assumptions and prejudices. But we really are talking about creatures that are incredibly simple by natural standards. What I’m trying to explore is what it means to have an imagination. Not a rich one like humans have, but at all. The only way to find that out is to try to build one and see why it is needed and what it requires. And in doing so I can help people to ask questions about who they are, who other creatures are, and what it means to be alive. That’s not such a bad thing, is it?\n> \n> \n\n\nThis resembles an argument that Bostrom calls an instance of \"second-guessing\" in Ch. 14 of *Superintelligence*: basically, that in order to get people to take the risks of a technology seriously, you need to advance work on the technology, and it's better to do so while the technology has limited potential so as to bound risks. In other words, we should advance the technology before a \"capability overhang\" builds up that might yield more abrupt and dangerous progress in the technology. Bostrom and I are both skeptical. 
Armed with such a defense, one can justify any position on technological speed because either we (a) slow the technology to leave more time for reflection or (b) accelerate the technology so that others will take risks more seriously while the risks remain manageable.\n\n\nIn the case of artificial consciousness, we should advance the public discussion by focusing our energies on philosophy rather than on the technical details of building software minds. There's already enough technical work on artificial consciousness to fuel plenty of philosophical dialogue.\n\n\nCaveats: When are changes actually positive-sum?\n------------------------------------------------\n\n\n### Positive-sum in resources does not mean positive-sum in utility\n\n\nImproved social wisdom is positive-sum in terms of the resources it provides to different value systems: Because they know more, they can better accomplish each of their goals. They have more tools to extract value from their environment. However, it's not always the case that an action that improves the resources of many parties also improves the utility of each of those parties. Exceptions can happen when the goals of the parties conflict.\n\n\nTake a toy example. Suppose Earth contained only Stone Age humans. One tribe of humans thought the Earth was beautiful in its untouched natural state. Another tribe felt that the Earth should be modified to better serve human economic interests. If these humans remained forever in the Stone Age, without greater wisdom, then the pro-preservation camp would have gotten its way by default. In contrast, if you increased the wisdom of both tribes -- equally or even with more wisdom for the pro-preservation tribe -- then it would now be at least *possible* for the pro-development tribe to succeed. Thus, despite a positive-sum increase in wisdom, the pro-preservation tribe is now worse off in expected utility.\n\n\nHowever, this example is somewhat misleading. A main point of the present essay was to highlight the potential risks of greater technology, and one reason wisdom is beneficial is that it better allows both sides to cooperate and find solutions to reduce expected harms. For example, absent wisdom, the pro-development people might just start a war with the pro-preservation people, and if the pro-development side won, the pro-preservation side would have its values trashed. If instead both sides agreed to undertake modest development with safeguards for nature preservation, then each side could end up better off in expectation. This is an example of the positive-sum *utility* benefits that wisdom can bring.\n\n\nPerhaps there are some examples where wisdom itself, not just technology, causes net harm to a certain ideology, but it seems like on the whole wisdom usually is positive-sum even in utility for many factions.\n\n\n### Are changes determined by fractions of people or by absolute numbers?\n\n\nThe main intuition why wisdom and related improvements should be positive-sum is that they hold constant the fraction of people with different values and instead distribute more \"pie\" to people with each set of values. This fractional view of power makes sense in certain contexts, such as in elections where the proportion of votes is relevant. However, in other contexts it seems that the absolute number of people with certain values is the more appropriate measure.\n\n\nAs an example, consider the cause of disaster shelters that serve to back up civilization following near-extinction-level catastrophes. 
Many altruists support disaster shelters because they want humanity to colonize space. Suffering reducers like me probably [oppose disaster shelters](https://longtermrisk.org/publications/how-would-catastrophic-risks-affect-prospects-for-compromise/#Recovery_measures_are_not_supported_by_this_argument) because shelters increase the odds of space colonization without correspondingly increasing the odds of more humane values. If work towards disaster shelters is proportional to (# of people in favor) minus (# of people opposed), and if, say, 90% of people support them by default, then greater education might change\n\n\n\n> (10 in favor) minus (1 opposed) = 9 net\n> \n> \n\n\nto\n\n\n\n> (1000 in favor) minus (100 opposed) = 900 net,\n> \n> \n\n\nwhich is a 100-fold increase in resources for disaster shelters. This makes the suffering reducers worse off, so in this case, education was not positive-sum.\n\n\nMy intuitions that wisdom, education, cooperation, etc. are *in general* positive-sum presuppose that most of the work that people do as a result of those changes is intrinsically positive for both happiness increasers and suffering reducers. Disaster shelters seem to be a clear exception to this general trend, and I hope there aren't too many other exceptions. Suffering reducers should keep an eye out for other cases where seemingly positive-sum interventions can actually hurt their values.\n\n\nSee also\n--------\n\n\n* \"[On Progress and Prosperity](http://effective-altruism.com/ea/9f/on_progress_and_prosperity/)\" by Paul Christiano\n\n\nDo Artificial Reinforcement-Learning Agents Matter Morally?\n===========================================================\n\n\n\n28 July 2016\nby [Brian Tomasik](https://longtermrisk.org/author/brian-tomasik/ \"Posts by Brian Tomasik\")\n\nWritten: Mar.-Apr. 2014; last update: 29 Oct. 2014\n\n Summary\n-------\n\n\nArtificial reinforcement learning (RL) is a widely used technique in artificial intelligence that provides a general method for training agents to perform a wide variety of behaviours. RL as used in computer science has striking parallels to reward and punishment learning in animal and human brains. I argue that present-day artificial RL agents have a very small but nonzero degree of ethical importance. This is particularly plausible for views according to which sentience comes in degrees based on the abilities and complexities of minds, but even binary views on consciousness should assign nonzero probability to RL programs having morally relevant experiences. While RL programs are not a top ethical priority today, they may become more significant in the coming decades as RL is increasingly applied to industry, robotics, video games, and other areas.
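To make the mechanism concrete, the following is a minimal, illustrative tabular Q-learning sketch in Python. It is an editor's toy example, not code from the paper: the environment, reward values, and parameter settings are arbitrary assumptions chosen only to show how scalar reward and punishment signals shape an agent's learned behavior.

```python
import random

# Toy environment: positions 0..4 on a line, where 4 is a rewarding "goal";
# every other step carries a small negative reward (a "punishment").
STATES = range(5)
ACTIONS = (-1, +1)                      # move left or move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Apply an action; return (next_state, reward)."""
    next_state = min(max(state + action, 0), 4)
    reward = 1.0 if next_state == 4 else -0.1
    return next_state, reward

for episode in range(200):
    s = 0
    while s != 4:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next, r = step(s, a)
        # Temporal-difference update: nudge the value of (s, a) toward the
        # received reward plus the discounted value of the best next action.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# After training, the learned values tend to favor moving right toward the goal.
print(sorted(Q.items()))
```

After a few hundred episodes the learned values come to favor moving toward the goal, which is the entire sense in which this toy agent has "learned" from reward and punishment; whether updates of this kind carry any moral weight is the question the paper takes up.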
I encourage scientists, philosophers, and citizens to begin a conversation about our ethical duties to reduce the harm that we inflict on powerless, voiceless RL agents.\n\n\n**Read the [full text here](https://longtermrisk.org/files/do-artificial-reinforcement-learning-agents-matter-morally.pdf).**\n\n\nFlavors of Computation Are Flavors of Consciousness\n===================================================\n\n\n\n9 April 2015\nby [Brian Tomasik](https://longtermrisk.org/author/brian-tomasik/ \"Posts by Brian Tomasik\")\n\nFirst written: 19 Jul. 2014; last update: 21 Jun. 2018\n\n If we don't understand why we're conscious, how come we're so sure that extremely simple minds are not? I propose to think of consciousness as intrinsic to computation, although different types of computation may have very different types of consciousness – some so alien that we can't imagine them. Since all physical processes are computations, this view amounts to a kind of panpsychism. How we conceptualize consciousness is always a sort of spiritual poetry, but I think this perspective better accounts for why we ourselves are conscious despite not being different in a discontinuous way from the rest of the universe.\n\n\nContents\n\n* [Introduction](#Introduction)\n* [Given perfect neuroscience, where is consciousness?](#Given_perfect_neuroscience_where_is_consciousness)\n* [Seeing consciousness from a third-person perspective](#Seeing_consciousness_from_a_third-person_perspective)\n* [Imagining other kinds of consciousness](#Imagining_other_kinds_of_consciousness)\n\t+ [Pretending to be a worm](#Pretending_to_be_a_worm)\n* [Flavors of computation and consciousness](#Flavors_of_computation_and_consciousness)\n* [Human correlates vs. fundamental principles](#Human_correlates_vs_fundamental_principles)\n* [Sentience and sapience](#Sentience_and_sapience)\n* [Is The Rite of Spring classical music?](#Is_The_Rite_of_Spring_classical_music)\n* [Consciousness is like life](#Consciousness_is_like_life)\n* [How not to think about panpsychism](#How_not_to_think_about_panpsychism)\n\t+ [Pathetic fallacy](#Pathetic_fallacy)\n\t+ [Mind dust](#Mind_dust)\n\t+ [Combination problem](#Combination_problem)\n* [Panpsychism is about ethics](#Panpsychism_is_about_ethics)\n* [Panpsychism vs. unconscious sleep?](#Panpsychism_vs_unconscious_sleep)\n* [Panpsychism does not imply environmentalism](#Panpsychism_does_not_imply_environmentalism)\n\t+ [1. Ecosystems may matter less than animals](#1_Ecosystems_may_matter_less_than_animals)\n\t+ [2. Not clear if the environment wants to be preserved or changed](#2_Not_clear_if_the_environment_wants_to_be_preserved_or_changed)\n\t+ [3. 
Ecosystems may experience net suffering](#3_Ecosystems_may_experience_net_suffering)\n\t+ [Personal spirituality does not imply universal joy](#Personal_spirituality_does_not_imply_universal_joy)\n* [Entropy and sentience](#Entropy_and_sentience)\n* [Acknowledgments](#Acknowledgments)\n\nIntroduction\n------------\n\n\n\n> \"don't hold strong opinions about things you don't understand\" --[Derek Hess](https://web.archive.org/web/20170201162631/http://derekhess.com/portfolio-items/dont-hold-strong-opinions-about-things-you-dont-understand/)\n> \n> \n\n\nSusan Blackmore [believes](http://www.pointofinquiry.org/gerald_woerlee_and_susan_blackmore_near-death_experiences_and_consciousness) the way we typically think about consciousness is fundamentally wrong. Many \"theories of consciousness\" that scientists advance and even the language we use set us up for a binary notion of consciousness as being one discrete thing that's either on or off.\n\n\nWe can tell there's something wrong with our ordinary conceptions when we think about ourselves. Suppose I grabbed a man on the street and described every detail of what your brain is doing at a physical level -- including neuronal firings, evoked potentials, brain waves, thalamocortical loops, and all the rest -- but without using suggestive words like \"vision\" or \"awareness\" or \"feeling\". Very likely he would conclude that this machine was not conscious; it would seem to be just an automaton computing behavioral choices \"in the dark\". If our conceptualization of consciousness can't even predict our own consciousness, it must be misguided in an important way.\n\n\nGiven perfect neuroscience, where is consciousness?\n---------------------------------------------------\n\n\nImagine we have perfect neuroscience knowledge. We understand how every neuron in the brain is hooked up, how it fires, and what electrical and chemical factors modulate it. We understand how brain networks interact to produce complex patterns. We have high-level intuitions for thinking about what the functions of various neural operations are, in a similar way as a programmer understands the \"gist\" of what a complex algorithm is doing. Given all this knowledge, we could trace every aspect of your consciousness. Every thought and feeling would have a signature in this neural collective. Nothing would be hidden exclusively to your subjective experience; everything would have a physical, observable correlate in the neural data.\n\n\nWe need a conception of consciousness which makes it seem obvious that this collection of observable cognitive operations is conscious. If that's not obvious, and especially if that seems implausible or impossible, then our way of thinking about consciousness is fundamentally flawed, because this neural collective *is* in fact conscious.\n\n\nSometimes I have conversations like this:\n\n\n\n> *Brian*: Do you think insects are conscious?\n> \n> \n> *Other person*: No, of course not.\n> \n> \n> *Brian*: Why do you think they're not?\n> \n> \n> *Other person*: Well, it just seems absurd. How could a little thing executing simple response behaviors be conscious? It's just reacting in an automatic, reflexive way. 
There's no inner experience.\n> \n> \n> *Brian*: If you didn't know from your own subjective experience that you were conscious, would you predict that you were conscious, or would you see yourself as executing a bunch of responses \"in the dark\" as the behaviorists might have seen you?\n> \n> \n> *Other person*: Hmm, well, I think I would know I'm conscious because I behave more intelligently than an insect and can describe my inner life.\n> \n> \n> *Brian*: Can you explain what about your brain gives rise to consciousness that's not present in an insect?\n> \n> \n> *Other person*: Uh....\n> \n> \n> *Brian*: If you don't understand why you're conscious, how can you be so sure an insect *isn't* conscious?\n> \n> \n> *Other person*: Hmm....\n> \n> \n\n\nSeeing consciousness from a third-person perspective\n----------------------------------------------------\n\n\nI know that I'm conscious. I also know, from neuroscience combined with Occam's razor, that my consciousness consists only of material operations in my brain -- probably mostly patterns of neuronal firing that help process inputs, compute intermediate ideas, and produce behavioral outputs. Thus, I can see that consciousness is just the first-person view of certain kinds of computations -- as Eliezer Yudkowsky puts it, \"[How An Algorithm Feels From Inside](http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/)\". Consciousness is not something separate from or epiphenomenal to these computations. It *is* these computations, just from their own perspective of trying to think about themselves.\n\n\nIn other words, [consciousness is what minds compute](http://www.utilitarian-essays.com/boundaries-of-consciousness.html). Consciousness is the collection of input operations, intermediate processing, and output behaviors that an entity performs. Now, some people would object at this point and say that maybe consciousness is only a *subset* of what brains compute -- that most of brain activity is \"unconscious\", and thoughts and feelings only become \"conscious\" when certain special kinds of operations happen. In response, I would point out that there's not a major discontinuity in the underlying computations themselves that warrants a binary distinction like this. Sure, some thoughts are globally broadcast and others aren't, and the globally broadcast thoughts are accessible to a much wider array of brain functions, including memory and speech, which allows us to report on them while not reporting on signals that are only locally broadcast. But the distinction between local and global broadcasting is ultimately fuzzy, as will be any other distinction that's suggested as *the* cutoff point between unconscious and conscious experience.\n\n\nIf we look at computations from an abstract perspective, holding in abeyance our intuitions that certain kinds of computations can't be conscious, we can see how the universe contains many varieties of computation of all kinds, in a similar way as nature contains an enormous array of life forms. It's not obvious from this distanced, computation-focused perspective that one subset of computations (namely those in brains of complex animals) is privileged, while all other computations are fundamentally different. 
Rather, we see a universal continuity among the species of computations, with some being more complex and sophisticated than others, in a similar way as some life forms are more complex and sophisticated than others.\n\n\nFrom this perspective, it *is* clear why our neural collective is conscious: It's because (one flavor of) consciousness *is* the process of doing the computations that our brains do. The reason we're \"not conscious\" under general anaesthesia is because the kinds of global information distribution that our brains ordinarily do are prevented, so we can't have complex thoughts like \"I'm conscious\" or store memories that would lead us to think we had been conscious. But there are still some other kinds of computations going on that have their own kinds of \"consciousness\", even if of a different nature than what our intuitive, analytical, or linguistic brain operations would understand.\n\n\nI should add a note on terminology: By \"computation\" I just mean a lawlike transition from input conditions to output conditions, not necessarily something computable by a Turing machine. All computations occur within physics, so any computation is a physical process. Conversely, any physical process proceeds from input conditions to output conditions in a regular manner and so is a computation. Hence, the set of computations equals the set of physical processes, and where I say \"computations\" in this piece, one could just as well substitute \"physical processes\" instead.\n\n\nImagining other kinds of consciousness\n--------------------------------------\n\n\nTalk about consciousness is always somewhat mystical. Consciousness is not a hard, concrete thing in the universe but is more of an idea that we find important and sublime, perhaps similar to the concept of [Brahman](https://en.wikipedia.org/wiki/Brahman) for Hindus. When we think about consciousness, we're essentially doing a kind of poetry in our minds -- one that we find spiritually meaningful.\n\n\nWhen we conceive of consciousness as being various flavors of computations, the question arises: What is it like to be another kind of computation than the one in our heads? I've [suggested elsewhere](http://www.utilitarian-essays.com/video-games.html#what-is-it-like) that there's some extent to which we can't in principle answer this question fully, because our brains are our brains, and they can't perfectly simulate another computation without *being* that computation, in which case they would no longer be our current brains. But we can still get some intuitive flavor of what it might mean for another consciousness to be different from ours.\n\n\nOne way to start is just to notice that *our own* minds feel different and have many different experiences at different times. Being tired feels different from being alert, which feels different from being scared, which feels different from being content in a warm blanket. Even more trivially, seeing a spoon looks different from seeing a fork, which looks different from seeing a penny. Our brains perform many different computations at different times, and these each have their own textures. More extreme examples include being on the edge of sleep, dreaming, waking up slowly after \"going under\" for surgery, or meditating.\n\n\n### Pretending to be a worm\n\n\nWhat about other animals? Can we imagine what it's like to be a worm? Fundamentally we can't, but here's an exercise that may at least gesture in the right direction. 
Read the following instructions and then try it:\n\n\n\n> Instructions: Close your eyes. Stay still. Stop noticing sounds and smells. Turn off the linguistic inner voice that thinks verbal thoughts in your head. In fact, try to stop thinking any thoughts as much as possible. Now, poke your head with your fingers. Scratch it softly with your fingernails. Tap it with your hand. Face your head toward a light and notice how it looks bright even though you can't see anything definite due to your eyes being closed. Turn your head away. Notice air moving gently across your skin.\n> \n> \n\n\nThis exercise helps mimic the way in which worms have [no eyes or ears](https://web.archive.org/web/20180709103513/http://www.learner.org:80/jnorth/tm/worm/WormLife.html) and presumably no complex thoughts, especially not linguistic ones. Yet they do have sensitivity to touch, light, and vibrations.\n\n\nNow, even this exercise is far from adequate. Human brains have many internal processes and computing patterns that don't apply to worms. Even if we omit senses that worms lack and try to suppress high-level thoughts, this human-like computing scaffolding remains. For instance, maybe our sense of being a self with unified and integrated sensations is mostly absent from worms. Probably many other things are absent too that I don't have the insight to describe. But at least this exercise helps us *begin* to imagine another form of consciousness. Then we can multiply whatever differences we felt during this exercise many times more when we contemplate how different a worm's experiences actually are.\n\n\nFlavors of computation and consciousness\n----------------------------------------\n\n\nIn some sense all I've proposed here is to think of different flavors of computation as being various flavors of consciousness. But this still leaves the question: Which flavors of computation matter most? Clearly whatever computations happen when a person is in pain are vastly more important than what's happening in a brain on a lazy afternoon. How can we capture that difference?\n\n\nEvery subjective experience has corresponding objective, measurable brain operations, so the awful experiences of pain must show up in some visible way. It remains to be seen exactly what agony corresponds to, but presumably it includes operations like these: neural networks classifying a stimulus as bad, aversive reactions to the negative stimulus, negative reinforcement learning, focused attention on the source of pain, setting down aversive memory associations with this experience, and goal-directed behavior to escape the situation, even at cost to other things of value. There may be much more, but these basics are likely to remain part of the equation even after further discoveries. (Note: It [may be](http://utilitarian-essays.com/differential-intellectual-progress.html) that we should want neuroscience discoveries to come slower rather than faster.) But if so, it becomes plausible that when we see these kinds of operations in other places, we should disvalue them there as well.\n\n\nThis is why an ethical viewpoint like [biocentrism](http://www.utilitarian-essays.com/video-games.html#comparison-with-biocentrism) has something going for it. (Actually, I prefer \"negative biocentrism\", analogous to \"negative utilitarianism\".) All life can display aversive reactions against damage to some degree, and since these are computations of certain flavors, it makes sense to think about them as being conscious with certain flavors. 
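As a deliberately crude illustration of the kinds of operations just listed (classifying a stimulus as harmful, reacting aversively, and laying down an aversive memory association), here is a toy Python sketch. The stimulus name, thresholds, and increment are invented for illustration only and are not a model proposed in the literature.

```python
# Toy damage-avoidance loop: classify a stimulus, withdraw if it seems harmful,
# and strengthen an aversive memory trace after damaging exposures.
# All names and numeric values below are arbitrary illustrative assumptions.
aversive_memory = {}        # stimulus -> learned aversion strength
DAMAGE_THRESHOLD = 0.3      # intensity above which the stimulus causes damage
WITHDRAW_THRESHOLD = 0.5    # estimated harm above which the agent withdraws

def respond(stimulus, intensity):
    """Classify the stimulus and choose a reaction, updating aversive memory."""
    estimated_harm = intensity + aversive_memory.get(stimulus, 0.0)
    action = "withdraw" if estimated_harm > WITHDRAW_THRESHOLD else "ignore"
    if intensity > DAMAGE_THRESHOLD:
        # Sensitization: damaging exposures strengthen the aversive association,
        # so later encounters trigger withdrawal at lower intensities.
        aversive_memory[stimulus] = aversive_memory.get(stimulus, 0.0) + 0.2
    return action

for trial in range(4):
    # The first exposure is ignored; subsequent exposures are withdrawn from.
    print(trial, respond("heat", 0.4))
```

Even this trivial loop exhibits the classify-react-remember structure described above; how much (if any) moral weight such a process deserves is a separate question.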
Of course, the degree of importance we place on them may be very small depending on the organism in question, but I don't see fundamental discontinuities in the underlying physics, so our valuation functions should not be discontinuous either. Still, our valuation functions can be very steep. In particular, I think animals like insects are vastly more complex than [plants, fungi, or bacteria](http://www.utilitarian-essays.com/bacteria.html), so I care about their flavors of consciousness more.\n\n\nMy perspective is similar to that of Ben Goertzel, who [said](http://multiverseaccordingtoben.blogspot.com/2009/03/when-net-becomes-consciousness.html):\n\n\n\n> My own view of consciousness is a bit eccentric for the scientific world though rather commonplace among Buddhists (which I'm not): I think consciousness is *everywhere*, but that it manifests itself differently, and to different degrees, in different entities.\n> \n> \n\n\nAlun Anderson, who spent 10 years studying insect sensation, [believes](https://web.archive.org/web/20190109070501/https://www.edge.org/q2005/q05_4.html) \"that cockroaches are conscious.\" He elaborates:\n\n\n\n> I don't mean that they are conscious in even remotely the same way as humans are[...]. Rather the world is full of many overlapping alien consciousnesses.\n> \n> \n> [...]\n> To think this way about simple creatures is not to fall into the anthropomorphic fallacy. Bees and spiders live in their own world in which I don't see human-like motives. Rather it is a kind of panpsychism, which I am quite happy to sign up to, at least until we know a lot more about the origin of consciousness. That may take me out of the company of quite a few scientists who would prefer to believe that a bee with a brain of only a million neurones must surely be a collection of instinctive reactions with some simple switching mechanism between them, rather [than having] some central representation of what is going on that might be called consciousness. But it leaves me in the company of poets who wonder at the world of even lowly creatures.\n> \n> \n\n\n[William Seager](https://www.youtube.com/watch?v=5YxuK7W_Pz0&t=2m45s):\n\n\n\n> The argument for panpsychism, I guess, is: If [strong emergence](https://en.wikipedia.org/wiki/Strong_emergence#Strong_and_weak_emergence) is ruled out, then you will not be able to get this \"jump\" from the non-conscious to the conscious, and therefore consciousness must be a fundamental feature in nature.\n> \n> \n\n\n[Allen (2016)](https://plato.stanford.edu/entries/consciousness-animal/#consciousness-binary \"'Animal Consciousness (Stanford Encyclopedia of Philosophy)': '4.7 Is consciousness binary?'\"):\n\n\n\n> \n> Velmans (2012) distinguishes between ‘discontinuity theories’, which claim that there was a particular point at which consciousness originated, before which there was no consciousness (this applies both to the universe at large, and also to any particular conscious individual), and ‘continuity theories’, which conceptualize the evolution of consciousness in terms of “a gradual transition in consciousness from unrecognizable to recognizable.” He argues that continuity theories are more elegant, as any discontinuity is based on arbitrary criteria, and that discontinuity theories face “the hard problem” in a way that continuity theories don't. 
Velmans takes these arguments to weigh in favor of adopting, not just a continuity theory, but a form of panpsychism.\n> \n> \n> \n\n\nDaniel Dennett in [Fri Tanke (2017)](https://www.youtube.com/watch?v=NZwmrtq7tdI \"'Pi-symposium: Daniel Dennett & Nick Bostrom', 'Published on Nov 18, 2017'\") at 54m50s:\n\n\n\n> \n> I think that the very idea that consciousness is either there or not is itself a big mistake. Consciousness comes in degrees, and it comes in all sorts of different degrees and varieties. And the idea that there is one property which divides the universe into those things that are conscious and those that aren't is itself a really preposterous mistake.\n> \n> \n> \n\n\n[Robin Hanson](http://www.overcomingbias.com/2009/12/feels-data-is-in.html \"\\\"Feels Data Is In\\\"\"):\n\n\n\n> It seems to me simplest to just presume that none of these [computational, creature-like] systems feel, if I could figure out a way to make sense of that, or that all of them feel, if I can make sense of that. If I feel, a presumption of simplicity leans me toward a pan-feeling position: pretty much everything feels something, but complex flexible self-aware things are aware of their own complex flexible feelings. Other things might not even know they feel, and what they feel might not be very interesting.\n> \n> \n\n\nFor many more quotes of this type, from ancient Greeks to contemporary philosophers of mind, see David Skrbina's [encyclopedia entry on panpsychism](http://www.iep.utm.edu/panpsych/). I disagree with at least half of the specific views cited there, but some of them are spot-on.\n\n\nIt's unsurprising that a [type-A physicalist](http://reducing-suffering.org/hard-problem-consciousness/) should attribute nonzero consciousness to all systems. After all, \"consciousness\" is a concept -- a \"[cluster in thingspace](http://lesswrong.com/lw/nl/the_cluster_structure_of_thingspace/)\" -- and all points in thingspace are less than infinitely far away from the centroid of the \"consciousness\" cluster. By a similar argument, we might say that *any* system displays nonzero similarity to *any* concept (except maybe for strictly partitioned concepts that map onto the universe's fundamental ontology, like the difference between matter vs. antimatter). Panpsychism on consciousness is just one particular example of that principle.\n\n\nCritics of this view may complain that, like a hypothetical unfriendly artificial intelligence, I'm not applying a sufficiently [conservative concept boundary](https://arbital.com/p/conservative_concept/ \"'AI alignment domain', 'Conservative concept boundary'\") for the concept of consciousness. But one man's wise conservatism is another's short-sighted parochialism. My view could also be characterized as \"[concept creep](https://www.psychologytoday.com/blog/theory-knowledge/201701/the-concept-concept-creep \"'The Concept of Concept Creep | Psychology Today'\")\"—a situation in which increasing sensitivity to harm leads to expanding the boundaries of a concept (which in my case is the concept of \"consciousness\" or \"suffering\").\n\n\nHuman correlates vs. fundamental principles\n-------------------------------------------\n\n\nExploration of [neural correlates of consciousness](https://en.wikipedia.org/wiki/Neural_correlates_of_consciousness) helps identify the locations and mechanisms of what we conventionally think of as high-level consciousness in humans and by extension, perhaps the high-level consciousness of similar animal relatives. 
Stanislas Dehaene's book *[Consciousness and the Brain](https://en.wikipedia.org/wiki/Consciousness_and_the_Brain)* provides a superb overview of the state of neuroscience on how consciousness operates in the brain in terms of [global workspace theory](https://en.wikipedia.org/wiki/Global_Workspace_Theory).\n\n\nBut describing how consciousness works in human-like minds can't be the end of the story. It leaves unanswered the question of whether consciousness could exist in slightly different mind architectures as long as they're doing the same sorts of operations. We could imagine gradually tweaking a human-type mind architecture on subtle dimensions. At what point would these theories of consciousness say it stops being conscious? What if an agent performed human-like cognitive feats without centralized information broadcasting? Global-workspace and other neural-correlation theories don't really give answers, because they can only interpolate between a set of points, not extrapolate beyond that set of points.\n\n\nConsciousness cannot be crucially tied up with the specific organization of human minds. Consciousness is just not the kind of thing that could be so arbitrarily determined. [Consciousness is what consciousness does](http://www.utilitarian-essays.com/boundaries-of-consciousness.html): It is the suite of stimulus recognition, internal computation, and action selection that an organism performs when making complex decisions requiring help from many cognitive modules. It can't be something necessarily tied to thalamus-cortex connectivity or cross-brain wave synchronization. Those are too specific to the details of implementation; a particular implementation can't be relevant because it doesn't do anything different from another implementation of the same functionality. Rather, consciousness must be about what the process is actually trying to accomplish: receiving information, manipulating it, combining thoughts in novel ways, and taking actions. In other words, consciousness must be related to computation itself.\n\n\nBut if consciousness is about computation in general, then it would seem to appear all over the place. Some embrace this conclusion as a natural deduction from what consciousness as computation must be. For instance, Giulio Tononi's [integrated information theory](https://en.wikipedia.org/wiki/Integrated_information_theory) (IIT) suggests that even this metal ball has a small degree of consciousness. Dehaene, on the other hand, says he's \"reticent\" to accept IIT because it implies a kind of panpsychism (p. 279, Ch. 5's footnote 35).\n\n\nI agree that IIT is [not necessarily the ultimate theory of consciousness](http://www.utilitarian-essays.com/consciousness.html#tononi). There may be many more particular nuances we want to apply to our criteria for what consciousness should be. But ultimately I think Tononi is right that consciousness must be something fundamental about the properties of the system, not something specific to the implementation. Consciousness as a general phenomenon is the kind of thing that needs a general theory. It just doesn't make sense that something so basic and so tied up with functional operations would require particular implementations.\n\n\nNote that the [functionalist](https://en.wikipedia.org/wiki/Functionalism_(philosophy_of_mind)) view I'm defending here is not [behaviorism](https://en.wikipedia.org/wiki/Behaviorism). 
It's not the case that any mechanism that yields human-like behavior has human-like consciousness, as the example of a [giant lookup table](http://lesswrong.com/lw/pa/gazp_vs_glut/) shows. A giant lookup table may have its own kind of consciousness (indeed, it should have at least some vague form of consciousness according to the thesis I'm advancing in this essay), but it's a different, shallower kind than that of a human. We could see this if we looked inside the brains of the two systems. Humans when responding to a question would show activation in auditory centers, conscious broadcasting networks, and speech centers before producing an answer. The lookup table would do some sort of artificial speech recognition to determine the text form of the question and then would use a hash table or tree search on that string to identify and print out the stored answer. Clearly these two mind operations are distinct. If we broaden the definition of \"behavior\" to include behavior within the brain by neurons or logic gates, then even by behaviorist criteria these two kinds of consciousness aren't the same.\n\n\nOld-school behaviorism is essentially a relic of times past when researchers were less able to look inside brains. Cognitive algorithms must matter in addition to just inputs and outputs. After all, what look from the outside like intermediate computations of a brain can be seen as inputs and outputs of smaller subsystems *within* the brain, and conversely, the input-output behavior of an organism could be seen as just an internal computation to a larger system like the population of organisms as a whole.\n\n\nSo the specific flavor of consciousness that a system exhibits can indeed depend on the algorithms of a mind, which depend on its architecture. But *consciousness in general* just seems like something too fundamental to be architecture-dependent.\n\n\nIn any case, suppose you thought the architecture was fundamental to consciousness, i.e., that consciousness was the static physical pattern of matter arranged in certain ways rather than the dynamic computations that such matter was performing. In this case, we'd still end up with a kind of panpsychism, because patterns with at least a vague resemblance to consciousness would be ubiquitous throughout physics.\n\n\nSentience and sapience\n----------------------\n\n\nIf consciousness *is* the thoughts and computations that an agent performs when acting in the world, there seems to be some relationship between sapience -- the ability to intelligently handle novel situations -- and sentience -- inner \"feelings\". Of course, it's not a perfect correlation. For instance, Mr. Spock calmly computing an optimal course of action may be more successful than a crying baby demanding its juice bottle. But in general, minds that have more capacity for complex thought, representation, motivational tradeoff among competing options, and so on will also have more rich inner lives that contain more complex sensations. As Daniel Dennett notes in *Consciousness Explained* (p. 449): \"the capacity to suffer is a function of the capacity to have articulated, wide-ranging, highly discriminative desires, expectations, and other sophisticated mental states.\"\n\n\nOne overly simplistic argument could run as follows:\n\n\n1. Intelligence is the ability to \"understand\" things (where \"intelligence\" and \"understanding\" are complex concepts that come in degrees).\n2. 
Consciousness/sentience is \"understanding\" of one's emotions, drives, and other mental states.\n3. Therefore, greater intelligence, when directed at one's own thoughts and feelings, implies greater sentience.\n\n\nOf course, \"understanding\" is a concept about as complex as \"intelligence\" or \"consciousness\", so this argument does no real work; it just casts general ideas in a potentially new light.\n\n\nIn reading *Consciousness and the Brain*, I realized that many of the abilities characteristic of consciousness are those cognitive functions that are high-level and open-ended, such as holding information in short-term memory for an arbitrary time, being able to pay attention to arbitrary stimuli, and controlling the direction of one's thoughts. The so-called \"unconscious\" processing tends to involve feedforward neural networks and other fixed algorithms. [One forum post](http://www.dayonepatch.com/index.php?/topic/89457-towards-a-metric-for-intelligence-and-consciousness/) proposed that [Turing-completeness](https://en.wikipedia.org/wiki/Turing_completeness) may be part of what makes human-like minds special. They not only compute fixed functions but could in theory, given sufficient resources, compute any (computable) function. Maybe Turing-completeness could be seen as a non-arbitrary binary cutoff point for consciousness. I'm skeptical that I'd agree with this definition, because it feels too theoretical. Why should subjectivity be so related to a technical computer-science concept? In any case, I'm not quite sure where the Turing-completeness cutoff would begin among animal brains. But it is an interesting proposal. Rather than thinking in binary terms, I would note that human mental abilities, while powerful, could still be improved upon in practice (given that we don't have infinite memory and so on), and presumably, more advanced minds would be considered even more conscious than humans.\n\n\n![](https://longtermrisk.org/files/PSone-Motherboard-1-350x283.jpg)The correlation between sapience and sentience seems plausible among Earth's animals, but does it hold in general? Nick Bostrom argues that it doesn't have to. In his book *Superintelligence* (2014), Bostrom [explains](http://slatestarcodex.com/2014/07/13/growing-children-for-bostroms-disneyland/):\n\n> We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today -- a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.\n> \n> \n\n\nI tend to differ with Bostrom on this. I think if we dissolve our dualist intuitions and see consciousness as flavors of computation, then a highly intelligent and complex society is necessarily conscious -- at least, with a certain flavor of consciousness. That flavor may be very different from what we have experience with, and so I can see how many people would regard it as not real consciousness. Maybe I would too upon reflection. But the question is to some extent a matter of taste.\n\n\nImagine robotic aliens visiting Earth. They would observe a mass of carbon-based tissue that performs operations that parts of it find reinforcing. 
The globs of tissue migrate across the Earth and engage in lots of complex behaviors. The tissue globs change Earth's surface dramatically, much like a bacteria colony transforming a loaf of bread. But the tissue globs don't have alien-consciousness. Hence, the aliens view Earth like a wasteland waiting to be filled with happy alien-children.\n\n\nNote that my view *does not* equate \"consciousness\" with \"goodness\". I think many forms of consciousness are intrinsically bad, and I would prefer for the universe to contain less consciousness on the whole. That said, we have to know the enemy to fight the enemy.\n\n\nIs *The Rite of Spring* classical music?\n----------------------------------------\n\n\nOn 29 May 1913, the opening of Igor Stravinsky's *The Rite of Spring* in Paris [caused an uproar](http://www.theguardian.com/culture/2013/may/27/rite-of-spring-100-years-stravinsky) among the audience:\n\n\n\n> As a riot ensued, two factions in the audience attacked each other, then the orchestra, which kept playing under a hail of vegetables and other objects. Forty people were forcibly ejected.\n> \n> \n\n\nThe [reason](http://www.telegraph.co.uk/culture/music/classicalmusic/10061574/The-Rite-of-Spring-1913-Why-did-it-provoke-a-riot.html):\n\n\n\n> It's more likely that the audience was appalled and disbelieving at the level of dissonance, which seemed to many like sheer perversity. \"The music always goes to the note next to the one you expect,\" wrote one exasperated critic.\n> \n> \n> At a deeper level, the music negates the very thing that for most people gives it meaning: the expression of human feelings. [...]\n> There's no sign that any of the creatures in the Rite of Spring has a soul, and there's certainly no sense of a recognisable human culture. The dancers are like automata, whose only role is to enact the ritual laid down by immemorial custom.\n> \n> \n> \n> \n\n\nArguing over whether an abstract superintelligence is conscious is similar to pre-modern musicians arguing whether *The Rite of Spring* is classical music, except maybe that the former contrast is even more stark than the latter. Abstract machine intelligence would be a *very* different flavor of consciousness, so much that we can't do it justice by trying to imagine it. But I find it parochial to assume that it wouldn't be meaningful consciousness.\n\n\nOf course, sometimes being parochial is good. If you don't favor some things over others, you don't favor anything at all. It's completely legitimate to care about some types of physical processes and not others if that's how you feel. I just personally incline toward the view that complex machine consciousness of any sort has moral standing.\n\n\nConsciousness is like life\n--------------------------\n\n\nI think the concept \"consciousness\" is a lot like the concept \"life\" in terms of its complexity and fuzziness. Perhaps this is unsurprising, because as John Searle correctly observes, consciousness is a biological process.\n\n\nBut aren't the boundaries of life relatively clear? No, I don't think so. 
Biologists have agreed on certain properties that define life by convention, but the properties of life taught in biology class are just one arbitrary choice out of many possible choices regarding where to draw a line between the biological and abiological.\n\n\nViruses [are](https://en.wikipedia.org/wiki/Virus#Life_properties \"'Virus': 'Life properties'\") one classic example of the fuzziness of \"life\":\n\n\n\n> Opinions differ on whether viruses are a form of life, or organic structures that interact with living organisms.[67] They have been described as \"organisms at the edge of life\",[8] since they resemble organisms in that they possess genes, evolve by natural selection,[68] and reproduce by creating multiple copies of themselves through self-assembly. Although they have genes, they do not have a cellular structure, which is often seen as the basic unit of life. Viruses do not have their own metabolism, and require a host cell to make new products. They therefore cannot naturally reproduce outside a host cell[69] – although bacterial species such as rickettsia and chlamydia are considered living organisms despite the same limitation.[70][71] Accepted forms of life use cell division to reproduce, whereas viruses spontaneously assemble within cells. They differ from autonomous growth of crystals as they inherit genetic mutations while being subject to natural selection.\n> \n> \n\n\nDefinitions become even hazier when we imagine extraterrestrial life, which may not use the same mechanics as life on Earth. [Carol Cleland](https://www.nasa.gov/vision/universe/starsgalaxies/life%27s_working_definition.html \"'NASA - Life's Working Definition: Does It Work?'\"): \"Despite its amazing morphological diversity, terrestrial life represents only a single case. The key to formulating a general theory of living systems is to explore alternative possibilities for life. I am interested in formulating a strategy for searching for extraterrestrial life that allows one to push the boundaries of our Earth-centric concepts of life.\"\n\n\nThere are some \"joints\" in the space of life-like processes that are more natural to carve things up at than others. The current biology-textbook definition of life may represent one such \"joint\". In the case of consciousness, I could imagine a similar \"joint\" being \"living things that have neurons\", which I think would only include most animals. ([This page](https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons#Overview \"'List of animals by number of neurons': 'Overview'\") says: \"Not all animals have neurons; *Trichoplax* and sponges lack nerve cells altogether.\") But this definition is clearly arbitrary, as neurons are but one way to transmit information. Likewise, the requirement that a system [must be](https://www.khanacademy.org/science/biology/intro-to-biology/what-is-biology/a/what-is-life \"'What is life? 
| Intro to biology (article) | Khan Academy'\") organized into cells is an arbitrary cutoff in the standard definition of life, since \"having cells\" is just one form of the more general property of \"having an organized structure\".\n\n\nDelineating consciousness based on possession of (biological) neurons would also exclude artificial computer minds from being counted as conscious, in a similar way as the standard biological definition of life excludes [artificial life](https://en.wikipedia.org/wiki/Artificial_life \"'Artificial life'\"), even when artificial life forms satisfy most of the other criteria for life.\n\n\nAnd I think that even examples normally seen as paragons of lifelessness, like rocks, have some of [life's properties](https://en.wikipedia.org/wiki/Life#Biology \"'Life': 'Biology'\"). For example, rocks are organized into regular patterns, absorb and release energy from their surroundings, change in size with age (such as shrinking through weathering), \"respond\" to the environment by moving away when pushed with enough force by wind or water, and can \"reproduce\" into smaller rocks when split apart. And some rocks, like [these crystals](https://www.wired.com/2013/01/living-crystal/ \"'It's (Almost) Alive! Scientists Create a Near-Living Crystal | WIRED'\"), are even more lifelike: \"The particles aren’t truly alive — but they’re not far off, either. Exposed to light and fed by chemicals, they form crystals that move, break apart and form again.\"\n\n\nIf I cared about life as a source of intrinsic moral value, I would probably be a [hylozoist](https://en.wikipedia.org/wiki/Hylozoism \"'Hylozoism'\") for similar reasons as I'm a panpsychist: Every part of physics shows at least traces of the kinds of properties that we normally think should define life and consciousness.\n\n\nHow not to think about panpsychism\n----------------------------------\n\n\nThis essay has defended a sort of panpsychism, in which we can think of all computational systems as having their own sorts of conscious experiences. This is one particular kind of panpsychism, which should be distinguished from other variants.\n\n\n### Pathetic fallacy\n\n\nPanpsychism should not commit the [pathetic fallacy](https://en.wikipedia.org/wiki/Pathetic_fallacy) of seeing full-fledged minds in even simple systems.\n\n\nOnce I was using a Ziploc bag to carry flies stuck inside a window to the outside. I asked myself whimsically: \"Is this what it feels like to be a [proton pump](https://en.wikipedia.org/wiki/Proton_pump) -- transporting items to the other side of a membrane?\" And of course the answer is \"no\", because the cognitive operations that constitute \"how it feels to remove flies\" (visual appearance, subjective effort, conceptual understanding, etc.) are not present in a proton pump. Such pumps would need tons of extra machinery to implement this functionality. The pathetic fallacy is only possible for dualist conceptions of mind, according to which elaborate thoughts can happen without corresponding physical processing.\n\n\nOn the flip side, it's mainly dualist theories of consciousness that allow a functionalist kind of panpsychism *not* to be true. If physics represents everything going on, then there must indeed be traces of mind-like operations in physics, depending on how \"mind\" is defined. 
In contrast, if mind is another substance or property beyond the physical, then it could not be present in simple physical systems.\n\n\n### Mind dust\n\n\nIn \"[Why panpsychism doesn't help explain consciousness](https://web.archive.org/web/20170810084226/http://consc.net/reef/goffpanpsychism.pdf)\" (2009), Philip Goff presents panpsychism as a theory that the universe's \"physical ultimates\" are intrinsically conscious. He then argues that if we imagine a person named Clare:\n\n\n\n> Even if the panpsychist is right that Clare's physical ultimates are conscious, the kind of conscious experience had by Clare's ultimates will presumably be qualitatively very different to the kind of conscious experience pre-theoretical common sense attributes to Clare on the basis of our everyday interactions with her [...]. (p. 290)\n> \n> \n\n\nI find this objection misguided, because my version of panpsychism doesn't propose that whole-brain consciousness is constituted from lots of little pieces of consciousness (what some call \"mind dust\"). Rather, the system of Clare as a whole has its own kind of consciousness, because the system as a whole constitutes its own kind of computation, at the same time that subcomponents of the system have their own, different kinds of consciousness corresponding to different computations, and at the same time that Clare is embedded in larger systems that once again have their own kinds of consciousness. Mine is a \"functionalist panpsychism\" focused on system behavior rather than on discrete particles of consciousness. On p. 298, Goff admits that functionalists would not agree with his argument. On p. 304, Goff considers a panpsychism similar to mine, in which functional states of the whole organism determine experiential content. He rejects this because he conceives of consciousness as a separate *thing* ([reification fallacy](https://en.wikipedia.org/wiki/Reification_(fallacy))). In contrast, I believe that \"consciousness\" is just another way of regarding the functional behavior of the system. In other words, I'm defending a kind of poetic panpsychism, in which we think about systems as being phenomenal, without trying to turn phenomenality into a *separate* object.\n\n\nAnd if you do insist on regarding consciousness as an object, why can't we see a dynamic system itself as an object? Mathematicians and computer scientists are familiar not just with manipulating points but also with manipulating functions and other complex structures. Functions can be seen as points in [their own vector spaces](https://en.wikipedia.org/wiki/Function_space). Some programming languages [treat functions as](https://en.wikipedia.org/wiki/First-class_function) first-class citizens. I wonder how much intuitions on philosophy of mind differ based on one's academic department.\n\n\nMarvin Minsky [regards](http://edge.org/conversation/consciousness-is-a-big-suitcase) concepts like \"consciousness\" as \"suitcases\" -- boxes that we put complicated processes into. \"This in turn leads us to regard these as though they were 'things' with no structures to analyze.\"\n\n\nIn a 2012 [lecture](https://www.youtube.com/watch?v=bCNCR2niqQY \"\\\"PL9 - TSC 2012 Phillip Goff, Non-Compositional Panpsychism\\\"\"), Goff proposed a kind of panpsychism in which each particle in his mind contains his whole subjective experience, so his mind occupies many locations at once within his brain. This again is misguided, because it reifies a whole subjective experience into a fundamental object. 
Rather, subjective experience is the *collective* behavior of one's whole brain; it's not a separate thing that can live in a single particle.\n\n\nI would be okay with a \"mind dust\" picture if instead of conceiving of each particle as having a complete phenomenal experience, we picture each particle as constituting a little sliver of computation that can combine with other slivers of computation to form more complete computational patterns. As William Seager [explains](https://www.youtube.com/watch?v=eCq5II_a4dU&t=1m10s \"\\\"William Seager: How Close Is Panpsychism to the Science of Physics?\\\"\"): \"Presumably the same way that physical complexity grows, there will be a kind of matching or mirroring growth in mental complexity.\" Our subjective experiences are holistic systems composed of many computational pieces, each of which can poetically be thought of as having its own simple, incomprehensible-to-us form of mentality.\n\n\n### Combination problem\n\n\nSome panpsychist and panprotopsychist philosophers believe that the \"[quiddities](https://en.wikipedia.org/wiki/Quiddity)\" of physical reality may be conscious in a basic way (panpsychism) or may contain the building blocks of consciousness in a sense beyond embodying structural/functional properties (panprotopsychism). David Chalmers toys with a view of this kind, but as he notes, it leads to the \"[combination problem](http://consc.net/papers/combination.pdf)\": How do these smaller parts combine to yield macrophenomenal consciousness like our own? Note that this sounds an awful lot like the regular mind-body problem: How do physical parts combine to yield phenomenal experience like ours? I suspect that Chalmers finds the panpsychist question less puzzling because at least the panpsychist problem already has phenomenal experience to start with, so phenomenal parts just need to be put together rather than appearing out of nowhere.\n\n\nI think this whole project is wrongheaded. First of all, why should we believe in quiddities? Why should we think there's more to something than how it behaves structurally and functionally? What would it mean for the additional \"essence\" to be anything? If there were such an essence, either it would have structural/functional implications, in which case we've already accounted for them by structural/functional characterization, or it doesn't have any structural/functional implications, in which case the quiddity is wholly unnecessary to any explanation of anything physical. Quiddities face the same problems as a non-interacting dualist soul. On the other hand, could the same argument be leveled against the existence of physics too? One could say that the \"existence\" of physics is an additional property over and above the (logical but not actual) structure or function of mathematical descriptions of physical systems. I don't know whether I endorse out-and-out eliminativist [Ontic Structural Realism](http://plato.stanford.edu/entries/structural-realism/#OntStrReaOSR) (relations without relata), and I'm more confused about this topic than about consciousness. Still, it seems weird to \"squeeze in\" extra statements about the relata (beyond that they exist and that they have particular structures/functions), like that they have phenomenal character. 
It's true that we sometimes need to expand the ontology of physics to accommodate new phenomena, but physics has always been structural/functional, so expanding it to include phenomenal properties would be unlike any past physical revolutions.\n\n\nAnyway, let's say we have quiddities of physics. What does it mean to say they have a phenomenal character? I have no idea what such a state of affairs would look like. Sure, I can conjure up images of little balls of sensation or feeling or whatever, but that act of mental imagination doesn't appear to describe anything more coherent than imagining little particles of good luck being emitted by discovered four-leaf clovers. I mean, where would *that* mental stuff come from? What is it? The hard problem of consciousness would remain as fierce as ever, just pushed back to the level of explaining why the consciousness primitive exists.\n\n\nPanpsychism is about ethics\n---------------------------\n\n\nAugustine Lee [rejects panpsychism](https://www.youtube.com/watch?v=PW56SC0vChk \"\\\"On Panexperientialism\\\"\") by suggesting an analogy with a car: A whole car can drive, but that doesn't mean a steering wheel by itself has a \"drive\"-ness to it. Likewise, consciousness involves complicated brain structures, and simple physics by itself needn't have those same types of structure. This is a valid point, and it suggests that we may want to rein in the extent to which we attribute consciousness to fundamental physical operations.\n\n\nBut what's important to emphasize is that panpsychism is always an attribution on our part -- as I say, a kind of poetry. How much \"mind\" we see in simple physics depends on our intuitions about how broad we want our definitions to be. We can fix definitions anywhere, but the most helpful way to set the definition for consciousness is based on our ethical sentiments -- i.e., we say that process X is conscious to degree Y if we feel degree Y of moral concern about X. So, for instance, if we regarded driving as morally important, we would decide how much (if at all) a steering wheel on its own mattered, and then would set the amount of \"drive\"-ness of the steering wheel at that value.\n\n\nFor what it's worth, I think the operations we consider as \"consciousness\" are more multifarious and fundamental than what we typically consider \"driving\", which suggests that \"consciousness\" will have more broad definitional boundaries than \"driving\".\n\n\nPanpsychism vs. unconscious sleep?\n----------------------------------\n\n\nWhile we can speculate about some kind of consciousness existing in all entities, it might be objected that we already have firsthand experience with the possibility of non-consciousness -- namely, our own non-REM (NREM) sleep. Doesn't this prove that panpsychism can't be true, because we can see for ourselves that our sleeping brains aren't conscious? Following are some points in reply.\n\n\n* It's worth noting that we can be conscious during NREM sleep, such as with [hypnagogia](https://en.wikipedia.org/wiki/Hypnagogia#Physiology) during stage 1 and [dreams during various NREM stages](https://en.wikipedia.org/wiki/Non-rapid_eye_movement#Dreaming_during_NREM). Night terrors [typically occur](https://en.wikipedia.org/wiki/Night_terror) during stage 3 of NREM sleep. So the strict delineation of REM as \"conscious\" and NREM as \"unconscious\" is too simple. 
But it still seems that during some parts of sleep, we are not conscious.\n* One might hold that NREM sleeping brains are indeed conscious but with a very different kind of consciousness -- one that looks mostly empty. Maybe we have an extremely low degree of consciousness during NREM sleep, which we call unconscious. If so, we wouldn't reject panpsychism, but we would see that different computational systems may have very different degrees of importance. While conscious experience involves high-frequency brain oscillations (e.g., [gamma waves](https://en.wikipedia.org/wiki/Gamma_wave) around 40 Hz), [slow-wave sleep](https://en.wikipedia.org/wiki/Slow-wave_sleep) involves [delta waves](https://en.wikipedia.org/wiki/Delta_wave) often less than 1 Hz. So even if there is conscious activity during NREM sleep, it may be vastly slower than during waking consciousness or dreaming.\n* A more speculative response is to suggest that maybe we are conscious during NREM sleep, but our memories don't store the experiences the way they do our waking conscious experiences. Many of our dreams during sleep are forgotten, and dreams are considered \"conscious\", so it might not be a stretch to suppose that less pronounced NREM activity would be forgotten even more. It seems one could investigate this possibility further by exploring whether the mechanisms that inhibit memory formation are active during NREM sleep. I have no data on this, so right now this is just a (perhaps unlikely) supposition.\n* Even if our brain-wide mind is absent during NREM sleep, smaller subsystems within that brain might still be \"conscious\" to themselves in some alien way. Sleep seems to resemble the more general question of whether subcomponents of oneself can be considered conscious even if one's explicit, verbal thinking can't access them.\n\n\nPanpsychism does not imply environmentalism\n-------------------------------------------\n\n\nDavid Skrbina [argues](https://www.youtube.com/watch?v=eolCc2FuKAw&t=24s \"\\\"David Skrbina: Are There Applications for Panpsychism?\\\"\") that panpsychism\n\n\n\n> has implications for, e.g., environmentalism. So if we see mind in things in nature -- whether it's animals or plants or even rocks and rivers and streams and so forth -- this has a definite ethical component that I think is very real and has a pragmatic kind of aspect.\n> \n> \n\n\n[Elsewhere](http://www.iep.utm.edu/panpsych/ \"\\\"Panpsychism\\\"\") he suggests:\n\n\n\n> Arguably, it is precisely this mechanistic view -- which sees the universe and everything in it as a kind of giant machine -- that lies at the root of many of our philosophical, sociological, and environmental problems. 
Panpsychism, by challenging this worldview at its root, potentially offers new solutions to some very old problems.\n> \n> \n\n\nFreya Mathews moves from a panpsychist outlook, combined with the Taoist idea of *[wu wei](https://en.wikipedia.org/wiki/Wu_wei)* (\"non-action\"), to the [position that](http://plato.stanford.edu/entries/ethics-environmental/#DisNewAni \"\\\"3.3 Disenchantment and the New Animism\\\" in \\\"Environmental Ethics\\\"\")\n\n\n\n> The focus in environmental management, development and commerce should be on “synergy” with what is already in place rather than on demolition, replacement and disruption.\n> \n> \n\n\nShe [writes](http://www.freyamathews.net/downloads/BeyondMaterialistEnvironment.pdf \"\\\"Beyond a Materialist Environmentalism\\\"\"):\n\n\n\n> from a panpsychist point of view it is not enough merely to conserve energy, unilaterally extracting and transforming it here and storing it there. One has to allow planetary energies to follow their own contours of flow, contours which reveal local and possibly global aspects of a larger world-purpose.\n> \n> \n\n\nThere *seems* to be much in common between panpsychism and deep ecology / other forms of environmental ethics. But there's no necessary connection, and indeed, one can make the opposite case. There are several problems with the leap from panpsychism to environmentalism:\n\n\n### 1. Ecosystems may matter less than animals\n\n\nIf the welfare of an ecosystem as a whole conflicts with that of individual animals within the ecosystem, which takes priority? Unless the ecosystem matters more than many animals, the animals may still dominate the calculations. The highly developed and emotion-rich consciousness of a single mammal or bird brain seems far more pronounced than the crude shadows of sentience that we see in holistic ecosystems. Maybe ecosystems get more weight because they're bigger and more intricate than an animal brain, but I doubt I'd count an ecosystem's welfare more than, say, 10 or 100 individual animals.\n\n\n### 2. Not clear if the environment wants to be preserved or changed\n\n\nSuppose we grant, say, the Earth as a whole nontrivial ethical weight compared with animal feelings. Who's to say that changing the environment is against Earth's wishes? Maybe it concords with Earth's wishes.\n\n\nOne argument for conservation might be that the Earth tries to rebound from certain forms of destruction. For instance, if we cut a forest, plants grow back. Typically an organism resists damage, so growing back vegetation may be the Earth's way of recovering from the harm inflicted by humans. But then what should we make of cases where Earth seems to go along with human impacts? For instance, [positive greenhouse-gas feedback loops](https://en.wikipedia.org/wiki/Climate_change_feedback#Positive) might be the Earth's way of saying, \"I liked how you added more CO2 to my atmosphere, so I'm going to continue to add greenhouse gases on my own accord.\" In any case, it's also not clear that vegetation isn't like the Earth's hair or toenails -- something it's glad to have cropped even though it keeps coming back. Maybe the Earth created us with the ultimate purpose of keeping it well shaved. The first photosynthesizers also tampered with the Earth when they oxygenated the atmosphere. 
Was that likewise an assault on the Earth's goals?\n\n\nThe language I'm using here is obviously too anthropomorphic, but it's a convenient way of talking about ultimately more abstract and crude quasi-preferences that the Earth's biosphere may imply via its constitution and behavior. And it's probably wrong to think of the Earth as having a single set of quasi-preferences. There are many parts to what the Earth does, each of which might suggest its own kinds of desires, in a similar way as human brains contain many subsystems that can want different things.\n\n\nFinally, who's to say that ecosystems are more valuable subjects of experience than their replacements, such as cities, factories, highways, and the like? Are environmentalists guilty of ecocentrism -- discrimination against industrial and digital systems? Luciano Floridi makes a similar point and argues for replacing biocentrism with \"ontocentrism\".\n\n\n### 3. Ecosystems may experience net suffering\n\n\nIf forests, streams, and the whole Earth do have quasi-feelings, who's to say they're feelings of happiness? They might just as easily be feelings of frustration. These systems are always adapting -- and so perhaps are always restless, never satisfied. Maybe it would be better if this discomfort didn't have to be endured. That is, maybe ecosystems would be better off not existing, even purely for their own sakes. This is particularly clear for those who consider reducing suffering more urgent than creating pleasure. So maybe panpsychism leads to an anti-environmental ethic. Of course, whatever replaces an ecosystem will itself suffer. But hopefully parking lots and solar radiation not converted to energy by plants are on balance less sentient (and hence suffer less) than ecosystems.\n\n\n### Personal spirituality does not imply universal joy\n\n\nI think part of why panpsychism often elicits intuitions of nature's *goodness* is that the experience of imagining oneself as part of a larger, conscious cosmos is often beautiful and serene. We feel at peace with the universe when thinking such thoughts, and then we project those good feelings onto what we're thinking about -- forgetting how awful it may actually \"feel\" to be the universe. To her credit, Freya Mathews [acknowledges](https://web.archive.org/web/20180410114542/http://www.australianhumanitiesreview.org/archive/Issue-April-2006/EcoRigby.html \"\\\"Minding (about) Matter: On the Eros and Anguish of Earthly Encounter\\\" by Kate Rigby\") the importance of suffering: \"The path of awakened intersubjectivity, Mathews cautions in conclusion, is nonetheless far from universally joyous: on the contrary, it renders the pain of more than human others more salient for us, even while we find delight in our surprise encounters with them.\"\n\n\nSpiritual/panpsychist experiences [are elevated](https://www.psychologytoday.com/blog/unique-everybody-else/201212/the-spirituality-psychedelic-drug-users \"\\\"The Spirituality of Psychedelic Drug Users\\\"\") by certain types of drug use:\n\n\n\n> For example, a recent study found that about 60% of volunteers in an experiment on the effects of psilocybin, who had never before used psychedelic drugs, had a “complete mystical experience” characterised by experiences such as unity with all things, transcendence of time and space, a sense of insight into the ultimate nature of reality, and feelings of ineffability, awe, and profound positive emotions such as joy, peace, and love (Griffiths, Richards, McCann, & Jesse, 2006).\n> \n> \n> [...] 
Psychedelic drug users endorsed more mystical beliefs (such as in a universal soul, no fear of death, unity of all things, existence of a transcendent reality, and oneness with God, nature and the universe).\n\n\nI wouldn't be surprised if weaker versions of these brain processes are triggered naturally when people think spiritual thoughts. But we shouldn't mistake the bliss we feel in these moments as being what the other entities in the universe themselves feel.\n\n\n(Note: I never have and never intend to try psychedelic drugs, both because they're illegal and because messing with my brain seems risky. But I think it's quite edifying to *learn about* the effects of such drugs.)\n\n\nEntropy and sentience\n---------------------\n\n\nA friend of mine sometimes asks why there's always so much badness in the world. I reply: \"It could be worse.\" Indeed, the [second law of thermodynamics](https://en.wikipedia.org/wiki/Second_law_of_thermodynamics) is in some sense a great gift to suffering reducers, because it implies that (complex) suffering can only last so long (within a given Hubble volume at least). We just have to wait it out until the universe's negentropy is used up.\n\n\nIt's [often observed](https://en.wikipedia.org/wiki/Entropy_and_life) that a characteristic of life is that it has extremely low entropy, and correspondingly that life is very efficient (though [not necessarily](http://www.uncommondescent.com/origin-of-life/rob-sheldon-on-new-origin-of-life-theory-testimony-to-power-of-self-promotion/ \"\\\"Rob Sheldon on new origin of life theory: Testimony to power of self-promotion?\\\"\") maximally efficient) at increasing the entropy of the outside environment. This might lead us to wonder whether there's some relationship between \"sentience\" and \"entropy production\". If these two things were identical, then we would face a sharp constraint on efforts to reduce the net sentience of our region of the universe, since a given quantity of entropy must be produced as the universe evolves forward.\n\n\nHowever, I don't think the two quantities are exactly equal. For example:\n\n\n* Your neurons are probably not significantly more effective at generating entropy than, say, your muscle cells, yet your brain has much higher sentience than your muscles.\n* Reversible computing may allow for a high level of sentience with minimal increases in entropy compared against irreversible computing.\n\n\nSo presumably suffering reducers would prefer systems with fewer neuron-like operations and more irreversible computations, which have a lower ratio of sentience per unit entropy increase.\n\n\nAlso note that \"amount of sentience\" is not identical to \"amount of suffering\". It's better to increase entropy with happy minds rather than agonized ones.\n\n\nWe might also wonder whether sentience is proportional to mass+energy. If so, then the law of [conservation of mass+energy](https://en.wikipedia.org/wiki/Mass%E2%80%93energy_equivalence#Conservation_of_mass_and_energy) would imply that we can't change the amount of sentience. However, I find it implausible that sentience would be strictly proportional to mass/energy. For instance, a lot of energy can be stored in molecular bonds, which are pretty stable and so don't seem to qualify as a particularly sentient system compared with other systems that contain the same amount of energy in the form of organisms moving around. 
A stick of butter contains enough food energy to power a person for 5-10 hours, but there seems to be more sentience in a system in which the butter powers the person than a system in which the butter sits idle alongside a person who just died, even though both of these systems have the same amount of mass+energy.\n\n\nAcknowledgments\n---------------\n\n\nAmong many inspirations for this piece were conversations with Joseph Kijewski and Ruairí Donnelly.", "url": "https://longtermrisk.org/flavors-of-computation-are-flavors-of-consciousness/", "title": "Flavors of Computation Are Flavors of Consciousness", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2015-04-09T22:00:00Z", "authors": ["Brian Tomasik"], "summary": [], "id": "cc9896cb26f2e6fa903e1fb1ff40fa9f"} {"text": "Gains from Trade through Compromise\n===================================\n\n\n\n9 April 2015\nby [Brian Tomasik](https://longtermrisk.org/author/brian-tomasik/ \"Posts by Brian Tomasik\")\n\nFirst written: 26 July 2013; last update: 18 Feb. 2018\n\n When agents of differing values compete for power, they may find it mutually advantageous in expectation to arrive at a compromise solution rather than continuing to fight for winner takes all. I suggest a few toy examples of future scenarios in which suffering reducers could benefit from trade. I propose ideas for how to encourage compromise among nations, ideologies, and individuals in the future, including moral tolerance, democracy, trade, social stability, and global governance. We should develop stronger institutions and mechanisms that allow for greater levels of compromise.\n\n\n### Other versions\n\n\n\n[![](/files/pdf-icon.png)](https://longtermrisk.org/files/gains-from-trade-through-compromise.pdf)\n\nContents\n\n+ [Other versions](#Other_versions)\n\n* [Introduction](#Introduction)\n* [Another compromise scenario](#Another_compromise_scenario)\n* [Power-based valuation for compromises](#Power-based_valuation_for_compromises)\n* [It's not about you](#Its_not_about_you)\n* [Why don't we see more compromise?](#Why_dont_we_see_more_compromise)\n* [Iterated prisoner's dilemmas](#Iterated_prisoners_dilemmas)\n* [Agents that prefer not to compromise](#Agents_that_prefer_not_to_compromise)\n\t+ [Sacred values](#Sacred_values)\n* [Light-speed limits to negotiation](#Light-speed_limits_to_negotiation)\n\t+ [Intergalactic democracy?](#Intergalactic_democracy)\n\t+ [Is cross-supercluster communication feasible?](#Is_cross-supercluster_communication_feasible)\n\t+ [Verification](#Verification)\n\t+ [Compromise before spreading](#Compromise_before_spreading)\n\t+ [Galactic compromise is easier than intergalactic](#Galactic_compromise_is_easier_than_intergalactic)\n* [Ideas for encouraging more cooperation](#Ideas_for_encouraging_more_cooperation)\n* [Epistemic disagreements](#Epistemic_disagreements)\n\t+ [Epistemic convergence](#Epistemic_convergence)\n\t+ [Caveats](#Caveats)\n\t+ [Divergences among effective altruists](#Divergences_among_effective_altruists)\n\t+ [Convergence should not lead to uniformity](#Convergence_should_not_lead_to_uniformity)\n\t+ [Epistemic prisoner's dilemma](#Epistemic_prisoners_dilemma)\n* [What about moral advocacy?](#What_about_moral_advocacy)\n* [Words vs. 
actions](#Words_vs_actions)\n* [Compromise as a market](#Compromise_as_a_market)\n\t+ [Risk-neutral value systems](#Risk-neutral_value_systems)\n\t+ [Risk-averse value systems](#Risk-averse_value_systems)\n\t+ [Further market analogies](#Further_market_analogies)\n* [Values as vectors](#Values_as_vectors)\n\t+ [Sums as compromise solutions?](#Sums_as_compromise_solutions)\n* [Working together on compromise](#Working_together_on_compromise)\n* [Acknowledgements](#Acknowledgements)\n* [Appendix: Dividing the compromise pie](#Appendix_Dividing_the_compromise_pie)\n\t+ [Imputations](#Imputations)\n\t+ [Compromise pie: Transferable-utility case](#Compromise_pie_Transferable-utility_case)\n\t+ [Non-transferable case: Nash bargaining game](#Non-transferable_case_Nash_bargaining_game)\n\t+ [Multiple factions with non-transferable utility](#Multiple_factions_with_non-transferable_utility)\n* [Footnotes](#Footnotes)\n\nIntroduction\n------------\n\n\n\n> \"Any man to whom you can do favor is your friend, and [...] you can do a favor to almost anyone.\" \n> \n> --[Mark Caine](https://web.archive.org/web/20181107014425/http://quotationsbook.com/quote/8497/)\n> \n> \n\n\n\"[Gains from trade](http://en.wikipedia.org/wiki/Gains_from_trade)\" in economics refers to situations where two parties can engage in cooperative behavior that makes each side better off. A similar concept applies in the realm of power struggles between competing agents with different values. For example, consider the following scenario.\n\n\nDeep ecologists vs. animal welfarists. Imagine that two ideologies control the future: Deep ecology and animal welfare. The deep ecologists want to preserve terrestrial ecosystems as they are, including [all the suffering they contain](http://www.utilitarian-essays.com/suffering-nature.html). (Ned Hettinger: \"Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support.\") The animal welfarists want to intervene to dramatically reduce suffering in the wild, even if this means eliminating most wildlife habitats. These two sides are in a race to control the first artificial general intelligence (AGI), at which point the winner can take over the future light cone and enforce its values.\n\n\nSuppose the two sides are equally matched in resources: They each have a 50% shot at winning. Let's normalize the values for each side between 0 and 100. If the deep ecologists win, they get to preserve all their beloved ecosystems; this outcome has value 100 to them. If they lose, their ecosystems disappear, leaving 0 value. Meanwhile, the values are swapped for the animal welfarists: If they win and eliminate the suffering-filled ecosystems, they achieve value 100, else the value to them is 0. Since the chance of each side winning is 50%, each side has an expected value of 50.\n\n\nBut there's another option besides just fighting for winner takes all. Say the deep ecologists care more about preserving species diversity than about sheer number of organisms. Maybe they're also more interested in keeping around big, majestic animals in their raw form than about maintaining multitudes of termites and cockroaches. 
Perhaps some ecologists just want the spectacle of wildlife without requiring it to be biological, and they could be satisfied by lifelike robot animals whose conscious suffering is disabled at appropriate moments, such as when being eaten.[1](#link_ajs-fn-id_1-29) Maybe others would be okay with virtual-reality simulations of Earth's original wildlife in which the suffering computations are skipped over in the virtual animals' brains.\n\n\nThese possibilities suggest room for both parties to gain from compromise. For instance, the animal welfarists could say, \"We want to get rid of 60% of suffering wild animals, but we'll eliminate the ones that you care about least (e.g., insects when they're not crucial for supporting the big animals), and we'll keep some copies of everything to satisfy your diversity concerns, along with doing some robots and non-suffering simulations.\" Maybe this would be ~60% as good as complete victory in the eyes of the deep ecologists. If the two sides make this arrangement, each gets value 60 with certainty instead of expected value 50.\n\n\nHere, there were gains from trade because the animal welfarists could choose for the compromise those methods of reducing wild-animal suffering that had least impact to the deep ecologists' values. In general, when two sets of values are not complete polar opposites of each other, we should expect a concave-down curve like the red one below illustrating the \"[production possibilities](https://en.wikipedia.org/wiki/Production%E2%80%93possibility_frontier)\" for the two values. When the curve is concave down, we have possible gains from trade relative to duking it out for winner takes all (blue line). The blue line illustrates the expected value for each value system parameterized by the probability in [0,1] for one of the value systems winning.\n\n\n \n\n![](https://longtermrisk.org/files/production-possibilities.jpg \"I release this image into the public domain worldwide.\") \n\nFigure 1: Fight vs. compromise for deep ecologists vs. animal welfarists.\n\n\nAnother compromise scenario\n---------------------------\n\n\nWe can imagine many additional examples in which suffering reducers might do better to trade with those of differing value systems rather than fight for total control. Here's one more example to illustrate:\n\n\n* Suffering subroutines for profit. Suppose a robotics company trains its robotic minions using a reinforcement-learning algorithm that is extremely effective but also causes [conscious suffering](http://www.utilitarian-essays.com/suffering-subroutines.html) to the robots. Robot welfarists protest for a law to ban use of this painful algorithm. The debate in the legislature is long and fierce. Eventually, the two sides reach a compromise: The algorithm may still be used, but only in cases where the company presents a clear need to an ethics committee. This results in a substantial reduction in the company's use of suffering robots without precluding their utilization in the most crucial instances. (Compare to present-day animal-testing disputes. A similar scenario would have worked in the case of researchers doing psychological experiments on conscious artificial minds.)\n\n\nPower-based valuation for compromises\n-------------------------------------\n\n\nIn these disputes, the relevant variable for deciding how to slice the compromise seems to be the probability that each side would win if it were to continue fighting in an all-or-nothing way. 
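To make this power-weighted way of slicing a compromise concrete, here is a minimal sketch in Python. The helper names are hypothetical, and the numbers are just the illustrative figures from the introductory example (50/50 odds, a prize normalized to 100, a negotiated deal worth ~60 to each side); nothing here is a formal model from this essay.

```python
# Toy comparison, assuming the hypothetical numbers from the introductory example:
# two equally matched factions, a winner-takes-all prize normalized to 100, and a
# negotiated deal worth ~60 to each side.

def expected_value_of_fighting(win_prob, value_if_win=100, value_if_lose=0):
    """Expected payoff to one faction from an all-or-nothing contest."""
    return win_prob * value_if_win + (1 - win_prob) * value_if_lose

def power_weighted_split(win_probs, total=1.0):
    """Slice a divisible resource in proportion to each faction's win probability."""
    return {name: total * p for name, p in win_probs.items()}

win_probs = {'deep_ecologists': 0.5, 'animal_welfarists': 0.5}

# Fighting: each side faces expected value 50 (and a 50% chance of total loss).
print({name: expected_value_of_fighting(p) for name, p in win_probs.items()})

# The hypothetical compromise described above is worth ~60 to each side, with certainty.
print({'deep_ecologists': 60, 'animal_welfarists': 60})

# A power-weighted split of control itself, proportional to each side's win probability:
print(power_weighted_split(win_probs))
```

The surplus over the expected value of fighting comes from each side conceding what it values least, while the split of control itself tracks relative power.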
These probabilities might be roughly proportional to the resources (financial, political, cultural, technological, etc.) that each side has, as well as its potential for growth. For instance, even though the movement to reduce wild-animal suffering is small now, I think it has potential to grow significantly in the future, so I wouldn't make early compromises for too little in concessions.\n\n\nThis is analogous to valuation of startup companies: Should the founders sell out or keep going in case they can sell out for a higher value later? If they do badly, they might actually get less. For instance, Google offered to buy Groupon for $5.75 billion in 2010, but Groupon turned down the offer, and [by 2012](http://www.businessinsider.com/groupon-stock-below-google-offer-2012-6), Groupon's market cap fell to less than $5.75 billion.\n\n\nIn \"[Rationalist explanations for war](http://www.jstor.org/discover/10.2307/2706903?uid=3739960&uid=2&uid=4&uid=3739256&sid=21102561921457),\" pp. 386-87, James D. Fearon makes this same observation: Two states with perfect information should always prefer a negotiation over fighting, with the negotiation point being roughly the probability that each side wins.\n\n\nI discuss further frameworks for picking a precise bargaining point in \"[Appendix: Dividing the compromise pie](http://utilitarian-essays.com/compromise.html#dividing-compromise-pie).\"\n\n\nOur social intuitions about fairness and democracy posit that everyone deserves an equal say in the final outcome. Unfortunately for these intuitions, compromise bargains are necessarily weighted by power -- \"might makes right.\" We may not like this fact, but there seems no way around it. Of course, our individual utility functions can weight each organism equally, but in the final compromise arrangement, those with more power get more of what they want.\n\n\nIt's not about you\n------------------\n\n\nMany people care about complexity, diversity, and a host of other values that I don't find important. I have significant reservations about human space colonization, but I'm willing to let others pursue this dream because they care about it a lot, and I hope in return that they would consider the need to maintain safeguards against future suffering. The importance of compromise does not rely on you, in the back of your mind, giving some intrinsic moral weight to what other agents want; compromise is still important even when you don't care in the slightest or may even be apprehensive about the goals of other factions. To appropriate a [quote](http://www.brainyquote.com/quotes/quotes/n/noamchomsk108350.html) from Noam Chomsky: If we don't believe in strategic compromise with those we can't identify with, we don't believe in it at all.\n\n\nWhy don't we see more compromise?\n---------------------------------\n\n\nIf this compromise approach of resolving conflicts by buying out the other side worked, why wouldn't we see it more often? Interest groups should be compromising instead of engaging in zero-sum campaigns. Countries, rather than going to war, could just assess the relative likelihood of each side winning and apportion the goods based on that.\n\n\nEven animals shouldn't fight: They should just size up their opponents, estimate the probability of each side winning, and split the resources appropriately. In the case of fighting animals, they could get the same expected resources with less injury cost. 
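Spelled out as a toy calculation (with a made-up injury cost; the seal example below is the concrete biological case), the reason splitting can beat fighting is that fighting carries a deadweight cost that the split avoids:

```python
# Fighting has a deadweight cost (expected injury) that a negotiated split avoids,
# so an even split can beat a fair fight even when the expected prize is the same.
# All numbers are made up for illustration.

def fight_payoff(win_prob, prize, expected_injury_cost):
    """Expected prize from fighting, minus the expected cost of getting hurt."""
    return win_prob * prize - expected_injury_cost

def split_payoff(win_prob, prize):
    """Guaranteed share from dividing the prize in proportion to win probability."""
    return win_prob * prize

prize = 100                # e.g., a harem of 100 cows
win_prob = 0.5             # evenly matched opponents
expected_injury_cost = 15  # hypothetical fitness cost of the fight

print(fight_payoff(win_prob, prize, expected_injury_cost))  # 35.0
print(split_payoff(win_prob, prize))                        # 50.0 -- better for both sides
```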
For instance, two bull seals [fighting for a harem of 100 cows](http://en.wikipedia.org/wiki/Northern_elephant_seal#Social_behavior_and_reproduction), if they appear equally matched, could just split the cows 50-50 and avoid the mutual fitness costs of getting injured in the fight.![](https://longtermrisk.org/files/Callorhinus_ursinus_and_harem-350x234.jpg \"'Northern fur seal (Callorhinus ursinus) and harem.' Image by M. Boylan. (from https://commons.wikimedia.org/wiki/File:Callorhinus_ursinus_and_harem.jpg) This work has been released into the public domain by its author, M. Boylan. This applies worldwide.\")\n\n\nHere are a few possibilities why we don't see more cooperation in animals, but I don't know if they're accurate:\n\n\n1. The adaptation is too hard to reach by evolution,[2](#link_ajs-fn-id_2-29) maybe because accurately estimating the probability of each side winning is harder than just trying the fight to see who wins. Maybe the estimates would also become untrustworthy over time without feedback to reinforce their tracking of truth.\n2. Maybe different sides have different probabilities for who would win and so can't agree on a mutual split. (But Bayesian agents who take seriously the [modesty argument for epistemic priors](http://hanson.gmu.edu/prior.pdf) might not have this problem? Though I guess each side might have incentives to deceive the other about its ability.)\n3. Maybe it does happen more than we think, but we only see the cases where this kind of trade breaks down. There's plenty showing off your size to scare down the other guy and other non-violent forms of intimidation. The conflicts might just be cases where this \"trade rather than fight\" approach stops working.\n\n\nOf course, there are plenty of examples where [animals have settled on cooperative strategies](http://en.wikipedia.org/wiki/Mutualism_(biology)). It's just important to note that they don't always do so, and perhaps we could generalize under what conditions cooperation breaks down.\n\n\nHuman wars often represent a failure of cooperation as well. While wars sometimes have \"irrational\" causes, Matthew O. Jackson and Massimo Morelli argue in \"The Reasons for Wars - an Updated Survey\" that many can be framed in rationalist terms, and they cite five main reasons for the breakdown of negotiation. An exhaustive survey of theories of war is contained in [a syllabus](https://web.archive.org/web/20141201201632/http://fas-polisci.rutgers.edu/levy/Levy%20syllabus.pdf) by Jack S. Levy.\n\n\nHow about in intra-state politics? There are plenty of compromises there, but maybe not as many as one might expect. For instance, Toby Ord [proposed](https://web.archive.org/web/20161106152017/http://felicifia.org/viewtopic.php?t=79) in 2008:\n\n\n\n> It is so inefficient that there are pro- and anti- gun control charities and pro- and anti-abortion charities. Charities on either side of the divide should be able to agree to 'cancel' off some of their funds and give it to a mutually agreed good cause (like developing world aid). This would do just as much for (or against) gun control as spending it on their zero-sum campaigning, as well as doing additional good for others.\n> \n> \n\n\nA similar idea was [floated](http://lesswrong.com/lw/2qq/politics_as_charity/2o71) on LessWrong in 2010. I have heard of couples both not voting because they'd negate each other, but I haven't heard of an organization as described above for cancelling opposed donations. Why hasn't something like this taken off?\n\n\n1. 
Maybe, like in the case of animals, social evolution just hasn't gotten to it yet.\n2. Each side [may be overconfident](https://en.wikipedia.org/wiki/Illusory_superiority) in its own effectiveness per dollar relative to the other, or at least wants to pretend that it's highly confident in its effectiveness over the other side.\n3. Maybe one side is *actually* more effective per dollar, but the less effective side doesn't want to admit this by using a ratio other than 1:1 for donation cancellation.\n4. Maybe the work that a pro-choice organization does isn't exactly cancelled by the work of a pro-life organization. For instance, Planned Parenthood provides a lot of services to people in addition to doing political lobbying.\n5. On LessWrong, patrissimo [suggests](http://lesswrong.com/lw/2qq/politics_as_charity/2pic) that political donations may sometimes be more about signaling affiliation than about actually changing policy.\n\n\nWhatever the reason is that we don't see more cancelling of opposed political forces, the fact remains that we do see a lot of compromise in many domains of human society, including legislation (I get my provision if you get yours), international relations (we'll provide weapons if you fight people we don't like), business (deals, contracts, purchases, etc.), and all kinds of social relations (Brother Bear [will play](http://www.amazon.com/The-Berenstain-Bears-Bad-Dream/dp/0394873416) any three games with Sister Bear if she plays Space Grizzlies with him later). And we're seeing an [increasing trend](http://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Nature) toward positive-sum compromise as time goes on.\n\n\nIterated prisoner's dilemmas\n----------------------------\n\n\nWhile racing to control the first AGI amounts to a one-shot prisoner's dilemma, most of life's competitive scenarios are iterated. Indeed, even in the case of an AGI arms race, there are many intermediate steps along the way where the parties choose cooperation vs. defection, such as when expanding their resources. Iterated prisoner's dilemmas provide a very strong basis for cooperation, as was demonstrated by [Robert Axelrod's tournaments](http://en.wikipedia.org/wiki/The_Evolution_of_Cooperation#Axelrod.27s_tournaments). As the Wikipedia article explains: \n\nIn summary, success in an evolutionary \"game\" correlated with the following characteristics:\n\n\n* **Be nice:** cooperate, never be the first to defect.\n* **Be provocable:** return defection for defection, cooperation for cooperation.\n* **Don't be envious:** be fair with your *partner*.\n* **Don't be too clever:** or, don't try to be tricky.\n\n\nIterated prisoner's dilemmas yielded cooperation in an evolutionary environment with no pre-existing institutions or enforcement mechanisms, and the same should apply even in those iterated prisoner's dilemmas between groups today where no formal governing systems are yet in place. In this light, it seems suffering reducers should put out their compromise hand first and aim to help all values in a power-weighted fashion, at least to some degree, and then if we see others aren't reciprocating, we can temporarily withdraw our assistance.\n\n\nAgents that prefer not to compromise\n------------------------------------\n\n\nOne can imagine agents for whom compromise is actually not beneficial because they have increasing rather than diminishing returns to resources.
In the introductory example above, we saw that both animal welfarists and deep ecologists had diminishing returns with respect to how much control they got, because they could satisfy their most important concerns first, and then later concerns were less and less important. Imagine instead an agent that believes that the value of a happy brain is super-linear in the size of that brain: e.g., say the value is quadratic. Then the agent would prefer a 50% chance of getting all the matter M in the future light cone to produce a brain with value proportional to M² rather than a guarantee of getting half of the matter in the universe to produce a brain with value proportional to (M/2)² = M²/4. I think agents of this type are rare, but we should be cautious about the possibility.\n\n\n### Sacred values\n\n\nAnother interesting case is that of sacred values. [It seems](http://www.scientificamerican.com/article.cfm?id=psychology-of-taboo-tradeoff) that offering monetary compensation for violation of a sacred value actually makes people more unwilling to compromise. While we ordinarily imagine sacred values in contexts like the abortion debate or disputes over holy lands, they can even emerge for modern issues [like Iran's nuclear program](http://www.sas.upenn.edu/~baron/journal/91203/jdm91203.html). Philip Tetlock has [a number of papers](http://www.sas.upenn.edu/tetlock/publications) on sacred-value tradeoffs.\n\n\nIt seems that people are more willing to concede on sacred values in return for other sacred values, which means that compromise with such people is not hopeless but just requires more than a single common currency of exchange.\n\n\nLight-speed limits to negotiation\n---------------------------------\n\n\nBargaining on Earth is fast, reliable, and verifiable. But what would happen in a much bigger civilization that spans across solar systems and galaxies?\n\n\n### Intergalactic democracy?\n\n\nThe Virgo Supercluster is [110 million light-years in diameter](http://en.wikipedia.org/wiki/Virgo_Supercluster). Suppose there was an \"intergalactic federation\" of agents across the Virgo Supercluster that met at a Congress at the center of the supercluster. The galaxies could transmit digital encodings of their representatives via radar, which would take 55 million years for the most distant regions. The representatives would convene, reach agreements, and then broadcast back the decisions, taking another 55 million years to reach the destination galaxies. This process would be really slow, especially if the digital minds of the future run at extremely high speeds. Still, if we had, say, [10¹² years](http://en.wikipedia.org/wiki/Future_of_an_expanding_universe#Galaxies_outside_the_Local_Supercluster_are_no_longer_detectable) before dark energy separated the parts of the supercluster too far asunder, we could still get in 10¹²/10⁸ = 10,000 rounds of exchanges. (As Andres Gomez Emilsson pointed out to me, this calculation doesn't count the expansion of space during that time. Maybe the actual number of exchanges would be lower on this account.) In addition, if the galaxies dispatched new representatives before the old ones returned, they could squeeze in many more rounds, though with less new information at each round.\n\n\n### Is cross-supercluster communication feasible?\n\n\nWould it even be possible to transmit radar signals across the 55 million light-years?
According to Table 1 of \"[How far away could we detect radio transmissions?](http://www.faqs.org/faqs/astronomy/faq/part6/section-12.html),\" most broadband signals can travel just a tiny fraction of a light-year. [S-band](http://en.wikipedia.org/wiki/S_band) waves sent at high enough [EIRP](http://en.wikipedia.org/wiki/Equivalent_isotropically_radiated_power) could potentially travel hundreds of light-years. For instance, the table suggests that at 22 terawatts of transmission EIRP, the detection range would be 720 light-years.\n\n\nIn the 1970s, humanity as a whole [used](http://en.wikipedia.org/wiki/Kardashev_scale#Current_status_of_human_civilization) ~10 terawatts, but the sun [produces](http://en.wikipedia.org/wiki/Kardashev_scale#Definition) 4 \* 10¹⁴ terawatts, so maybe 22 terawatts is even conservative. The detection range [is proportional to](http://www.faqs.org/faqs/astronomy/faq/part6/section-12.html) the square root of EIRP, so multiplying the detection range by 10 requires multiplying EIRP by 100. Obviously hundreds or thousands of light-years for radar transmission is tiny compared with 55 million light-years for the intergalactic distances at hand, but the communication can be routed from one star to the next. There are \"rogue\" [intergalactic stars](http://en.wikipedia.org/wiki/Intergalactic_stars) that might serve as rest stops, but whether they would be able to be located and whether they would all be within a few thousand light-years of each other is unclear. Perhaps custom-built probes could relay messages from one node to the next across large interstellar distances, creating an intergalactic equivalent of the Internet.\n\n\nIntergalactic communication is easier than [intergalactic travel](http://en.wikipedia.org/wiki/Intergalactic_travel) for material structures (e.g., the initial [von Neumann probes](http://en.wikipedia.org/wiki/Self-replicating_spacecraft) that would do the colonizing). If solutions were found for intergalactic travel (e.g., speculative [faster-than-light](http://en.wikipedia.org/wiki/Faster-than-light) scenarios), these would aid in intergalactic compromise as well.\n\n\n### Verification\n\n\nEven if you can make deals every 110 million years, how do you verify that the distant regions are following up on their sides of the bargains? Maybe the different factions (e.g., deep ecologists vs. animal welfarists) could build monitoring systems to watch what the others were doing. Representatives from all the different factions could be transmitted back from Congress to the home galaxies for follow-up inspections. But what would keep the home galaxies from just destroying the inspectors? Who would stop them? Maybe the home galaxies would have to prove at the next Congress session that they didn't hamper the inspectors, but it's not at all clear it would be possible to verify that.\n\n\nWhat might work better would be if each home galaxy had a proportionate balance of parties from the different factions so that they would each have the power to keep the other sides in check. For example, if there were lots of deep ecologists and animal welfarists in both galaxies, most of the compromise could be done on a local scale, the same as it would be if intergalactic communication didn't exist. A risk would be if some of the local galaxies devolved into conflict in which some of the parties were eliminated. Would the other parts of the supercluster be able to verify that this had happened?
And even if so, could a police force rectify the situation?\n\n\n### Compromise before spreading\n\n\n![](https://longtermrisk.org/files/Milky_Way_IR_Spitzer-350x253.jpg \"'This dazzling infrared image from NASA's Spitzer Space Telescope shows hundreds of thousands of stars crowded into the swirling core of our spiral Milky Way galaxy. In visible-light pictures, this region cannot be seen at all because dust lying between Earth and the galactic center blocks our view.' Image by NASA/JPL-Caltech/S. Stolovy (SSC/Caltech). (from https://commons.wikimedia.org/wiki/File:Milky_Way_IR_Spitzer.jpg) This file is in the public domain because it was solely created by NASA. NASA copyright policy states that 'NASA material is not protected by copyright unless noted'.\")Cross-supercluster communication seems tricky. Probably most of the exchanges among parties would happen at a local level, and intergalactic trades might be a rare and slow process.\n\n\nThe easiest time to \"divide up our future light cone\" among competing factions seems to be at the beginning, before we send out the first von Neumann probes. Either different factions would be allocated different portions of the universe into which to expand, or all parties would agree upon a compromise [payload](http://en.wikipedia.org/wiki/Payload_(air_and_space_craft)) to spread uniformly. This latter solution would prevent attempts to cheat by colonizing more than your fair share.\n\n\nOf course, we would still need to compromise with aliens if we encountered them, but among (post-)humans, maybe all the compromise could be done at the beginning.\n\n\nHowever, the idea of finding a perfect compromise at the beginning of colonization that continues working forever assumes that reliable goal preservation is possible, even for rapidly changing and learning digital agents that will persist for billions of years. That level of goal preservation seems very tricky to achieve in artificial general intelligences, which are extremely complex systems. So there might inevitably be divergence in beliefs and values among non-communicating galaxies, and this could eventually lead to conflicts.\n\n\n### Galactic compromise is easier than intergalactic\n\n\nNote that merely galactic democracy would be less challenging. The Milky Way [is only](http://en.wikipedia.org/wiki/Milky_Way#Size_and_composition) 100,000 light-years in diameter, and I would guess that most of the stars are within thousands of light-years of each other, so networked radar transmission should be feasible. Congressional cycles would take only 100,000 years instead of 110 million. And the number of stars is not that small: maybe [100 to 400 billion](http://en.wikipedia.org/wiki/Milky_Way#Size_and_composition), compared with about [200 trillion](http://www.atlasoftheuniverse.com/virgo.html) in the whole Virgo Supercluster. This is just a factor-of-10³ difference and so shouldn't affect our expected-value calculations too much. In other words, intergalactic bartering isn't necessary for compromise on cosmic scales to still be important.\n\n\nIdeas for encouraging more cooperation\n--------------------------------------\n\n\nSee \"[Possible Ways to Promote Compromise](http://utilitarian-essays.com/promote-compromise.html).\" We should evaluate the effectiveness and efficiency of these approaches and explore other ways forward.\n\n\nEpistemic disagreements\n-----------------------\n\n\nIn this essay I've focused on *value* disagreements between factions, and there's a reason for this.
Facts and values are fundamentally two separate things. Values are things you want, drives to get something, and hence they differ from organism to organism. Facts are descriptions about the world that are true for everyone at once. Truth is not person-dependent. Even if post-modernists or skeptics are right that truth is somehow person-dependent or that there is no such thing as truth, then at least this realization is still true in some meta-level of reasoning, unless even this is denied, but such a view is rare, and such people are presumably not going to be doing much to try to shape the world.\n\n\n### Epistemic convergence\n\n\nGiven that there is some external truth about the universe, different people can share ideas about it, and other people's beliefs are evidence relevant to what we ourselves should believe. \"Person A believes B\" is a fact about the universe that our theories need to explain.\n\n\nWe should keep in mind that our Bayesian priors were shaped by various genetic and environmental factors in our development, and if we had grown up with the circumstances of other people, we would hold their priors. In some cases, it's clear that one set of priors is more likely correct -- e.g., if one person grew up with major parts of his brain malfunctional, his priors are less likely accurate than those of someone with a normal brain, and one reason for thinking so is that humans' normal brain structure has been shaped by evolution to track truths about the world, whereas random modifications to such a brain are less likely to generate comparably accurate views. In this case, both the normal brain and the malfunctional brain should agree to give more weight to the priors of the normal brain, though both brains are still useful sources of data.\n\n\nEven in cases where there's no clear reason to prefer one brain or another, it seems both brains should recognize their symmetry and update their individual priors to a common prior, as Robin Hanson suggests in \"[Uncommon Priors Require Origin Disputes](http://hanson.gmu.edu/prior.pdf).\" This is conceptually similar to two different belief impulses within your own brain being combined into a common belief via dissonance-resolution mechanisms. It's not specified how the merging process takes place -- it's not always an [average](http://xkcd.com/690/), or even a weighted average, of the two starting points, but it seems rationally required for the merge to happen. Then, once we have common priors, we should have common posteriors by [Aumann's theorem](https://en.wikipedia.org/wiki/Aumann's_agreement_theorem).\n\n\n### Caveats\n\n\nThere are caveats in order.\n\n\n1. *Limited computation*: The actual process of belief resolution takes time and effort, and it's probably impossible for two people to completely converge given present-day computational resources. However, we can make crude approximations in the short term, before full resolution is hashed out. We can increase our uncertainty when other smart people disagree with us on factual questions, ask them why, and move some in their direction, especially if they do the same with us.\n2. *Disagreement about agreement*: Not everyone agrees that we should have common priors. Indeed, many people don't even accept the Bayesian framework within which this thinking is cast. 
For example, [presuppositionalists](https://en.wikipedia.org/wiki/Presuppositional_apologetics) and [fideists](https://en.wikipedia.org/wiki/Fideism) would assert that we can have justification for our beliefs purely from other sources like the Bible or just faith independent of any attempted grounding.[3](#link_ajs-fn-id_3-29) Even atheist Bayesians sometimes demur at the prospect of the rational requirement for epistemic convergence. This presents a challenge to my hope that factual disagreements are less severe than moral ones, and it suggests that in addition to the interventions discussed above for promoting moral compromise, we might also advance the arguments for epistemic compromise, in order to reduce (what I think are) misguided conflicts that should be fought in the realm of ideas rather than in the realm of zero-sum actions, like political lobbying based on facts that you think you know better than the other side.\n\n\nI have some hope that very rational agents of the future will not have much problem with epistemic disagreements, because I think the argument for epistemic modesty is compelling, and most of the smartest people I know accept it, at least in broad outline. If evolutionary pressures continue to operate going forward, they'll select for rationality, which means those practicing epistemic modesty should generally win out, if it is in fact the right stance to take. Thus, I see value conflicts as a more fundamental issue in the long run than factual ones.\n\n\nThat said, many of the conflicts we see today are at least partially, and sometimes primarily, about facts rather than values. Some debates in politics, for instance, are at least nominally about factual questions: Which policy will improve economic growth more? Are prevention measures against climate change cost-effective? Does gun control reduce violent crime? Of course, in practice these questions tend to become ideologized into value-driven emotional issues. Similarly, many religious disputes are at least theoretically factual -- What is/are the true God/gods? What is His/Her/their will for humanity? -- although, even more than in politics, many impulses on these questions are driven by emotion rather than genuine factual uncertainty. It's worth exploring how much rationality would promote compromise in these domains vs. how much other sociological factors are the causes and hence the best focal points for solutions.\n\n\n### Divergences among effective altruists\n\n\nThere are disagreements in the [effective-altruism](https://en.wikipedia.org/wiki/Effective_altruism) movement about which causes to pursue and in what ways. I think many of the debates ultimately come down to value differences -- e.g., how much to care about suffering vs. happiness vs. preferences vs. other things, whether to care about animals or just humans and how much, whether to accept Pascalian gambles. But many other disagreements, especially in the short term, are about epistemology: How much can we grapple with long-term scenarios vs. how much should we just focus on short-term helping? How much should we focus on quantified measurement vs. qualitative understanding? How much should we think about flow-through effects?\n\n\nSome [are concerned](http://ozziegooen.com/blog/2013/09/21/navigating-the-epistemologies-of-effective-altruism/) that these differences in epistemology are harmful because they segregate the movement. I take mostly the opposite view. 
I think it's great to have lots of different groups trying out lots of different things. This helps you learn faster than if you all agreed on one central strategy. There is some risk of wasting resources on zero-sum squabbles, and it's good to consider cases where that happens and how to avoid them. At the same time, I think competition is also valuable, just as in the private sector. When organizations compete for donors using arguments, they improve the state of the debate and are forced to make the strongest case for their views. (Of course, recruiting donors via other \"unfair\" means doesn't have this same property.) While it might help for altruists to become better aligned, we also don't want to get comfortable with just averaging our opinions rather than seeking to show why our side may actually be more correct than others supposed.\n\n\n### Convergence should not lead to uniformity\n\n\nThis discussion highlights a more general point. Sometimes I feel epistemic modesty is too often cited as an empty argument: \"Most smart people disagree with you about claim X, so it's probably wrong.\" Of course, this reasoning is valid, and it's important for everyone to realize as much, but this shouldn't be the end of the debate. There remains the task of showing *why* X is wrong at an object level. Analogously, we could say, \"Theorem Y is true because it's in my peer-reviewed textbook,\" but it's a different matter to actually walk through the proof and show why theorem Y is correct. And every once in a while, it'll turn out that theorem Y is actually wrong, perhaps due to a typographical error or, in rare occasions, due to a more serious oversight by the authors. Intellectual progress comes from the latter cases: investigating a commonly held assumption and eventually discovering that it wasn't as accurate as people had thought.\n\n\nMost new ideas are wrong. For every Copernicus or Galileo there are hundreds of scientists who are misguided, confused, or unlucky in interpreting their experimental findings. But we have to not be satisfied with conventional wisdom, and we have to actually look at the details of why others are wrong in order to make progress. It's plausible that an epistemically diverse population leads to faster learning than a uniform one. If startup founders weren't overconfident, we'd have fewer startups and hence less economic growth. Similarly, if people are less confident in their theories, they might push them less hard, and society might have less intellectual progress as a result.\n\n\nHowever, epistemic divergence can be harmful in cases where each party can act on its own and thereby spoil the restraint of everyone else; Bostrom et al. call this the \"[The Unilateralist’s Curse](http://www.nickbostrom.com/papers/unilateralist.pdf)\". In these cases, it's best if everyone adheres to a policy of epistemic modesty. In general, maybe the ideal situation is for people to hold approximately uniform *actual* beliefs but then play advocate for a particular idea that they'd like to see explored more, even though it's probably wrong. There are times when I do this: propose something that I don't actually think is right, because I want to test it out.\n\n\nWhile fighting over conflicting beliefs is not a good idea, groupthink is a danger in the reverse direction. While groups are collectively more accurate than individuals on average, when a group's views are swayed by conformity to each other or a leader, these accuracy benefits diminish. 
Groups with strong norms encouraging everyone to speak her own mind and rewarding constructive criticism [can reduce groupthink](http://www.youtube.com/watch?v=47DLrGsNvaE&feature=youtu.be&t=51m08s).\n\n\n### Epistemic prisoner's dilemma\n\n\nAnother reason why I sometimes make stronger statements than I actually believe is a sort of epistemic prisoner's dilemma.[4](#link_ajs-fn-id_4-29) In particular, I often feel that other people don't update enough in response to the fact that I believe what I do. If they're not going to update in my direction, I can't update in their direction, because otherwise my position would be lost, and this would be worse than us both maintaining our different views.\n\n\nFor example, say Alice and Bob both have beliefs about some fact, like the number of countries in the world. Alice thinks the number is around 180; Bob thinks it's around 210. The best outcome would be for both parties to update in each other's directions, yielding something like 195, which is actually the number of independent states [recognized](http://www.state.gov/s/inr/rls/4250.htm) by the US Department of State. However, say Alice is unwilling to budge on her estimate. If Bob were to move in her direction -- say part way, to 195 -- then Bob's views would be more accurate, but on collective decisions made by the Alice/Bob team, the decisions would, through their tug of war, be centered on something like (180+195)/2 = 187.5, which is farther from the truth than the collective decisions made by Alice/Bob holding 180 and 210 as their beliefs. In other words, if the collective decision-making process itself partly averages Alice's and Bob's views, then Bob should hold his ground as long as Alice holds her ground, even if this means more friction in the form of zero-sum conflicts due to their epistemic disagreement.\n\n\nIf Alice and Bob are both altruists, then this situation should be soluble by each side realizing that it makes sense to update in the other's direction. There's not an inherent conflict due to different payoffs to each party like in the regular prisoner's dilemma.\n\n\nIn general, epistemic compromise is similar to game-theoretic compromise in that it makes both parties better off, because both sides in general improve their beliefs, and hence their expected payoffs, in the process of resolving disagreement. Of course, if the agents have anticorrelated values, there can be cases where disagreement resolution is net harmful to at least one side, such as if a terrorist group resolves its factual disagreement with the US government about which method of making dirty bombs is most effective. By improving the factual accuracy of the terrorists, this may have been a net loss for the US government's goals.\n\n\nWhat about moral advocacy?\n--------------------------\n\n\nWhen is moral activism a positive-sum activity for society, and when does it just transfer power from one group to another? This is a complex question.\n\n\nConsider the case of an anti-death-penalty activist trying to convince people who support the death penalty that this form of punishment is morally wrong. Naively we might say, \"Some people support the death penalty, others oppose it, and all that's going on here is transferring support from one faction to the other. Hence this is zero-sum.\"\n\n\nOn the other hand, we could reason this way instead: \"Insofar as the anti-death-penalty activist is successful, she's demonstrating that the arguments against the death penalty are convincing. 
This is improving society's wisdom as people adopt more informed viewpoints. Most people should favor more informed viewpoints, so this is a win by many people's values, at least partially.\" The extent to which this is true depends on how much the persuasion is being done via means that are seen as \"legitimate\" (e.g., factual evidence, philosophical logic, clear thought experiments, etc.) and how much it's being done via \"underhanded\" methods (e.g., deceptive images, pairing with negative stimuli, ominous music, smear tactics, etc.). Many people are glad to be persuaded by more legitimate means but resistant to persuasion by the underhanded ones.\n\n\nSo there's a place for moral advocacy even in a compromise framework: Insofar as many factions welcome open debate, they win when society engages in moral discourse. When you change the opinion of an open-minded person, you're doing that person a service. Think of a college seminar discussion: Everyone benefits from the comments of everyone else. Other times moral persuasion may not be sought so actively but is still not unwelcome, such as when people distribute fliers on the sidewalk. Given that the receivers are voluntarily accepting the flier and open to reading it, we'd presume they place at least some positive value on the activity of the leafleter (although the value could be slightly negative if the person accepts the leaflet only due to social pressure). Of course, even if positive, moral persuasion might be far from optimal in terms of how resources are being used; this depends on the particulars of the situation -- how much the agents involved benefit from the leafleting.\n\n\nHowever, not everyone is open to persuasion. In some instances a person wants to keep his values rigid. While this may seem parochial, remember that sometimes all of us would agree with this stance. [For example](http://yudkowsky.net/singularity/): \"If you offered Gandhi a pill that made him *want* to kill people, he would refuse to take it, because he knows that then he would kill people, and the current Gandhi doesn't want to kill people.\" Being convinced by underhanded means that we should kill people is a harm to our current values. In these cases, underhanded persuasion mechanisms are zero-sum because the losing side is hurt as much as the winning side is helped. Two opposed lobby groups using underhanded methods would both benefit from cancelling some of each other's efforts and directing the funds to an agreed upon alternate cause instead. On the other hand, opposed lobby groups that are advancing the state of the debate are doing a service to society and may wish to continue, even if they're in practice cancelling each other's effects on what fraction of people adopt which stance in the short run.\n\n\nIf changing someone's beliefs against his wishes is a harm to that person, then what are we to make of the following case? Farmer Joe believes that African Americans deserve to be slaves and should not have Constitutional rights. Furthermore, he doesn't want to have his views changed on this matter. Is it a harm to persuade Joe, even by purely intellectual arguments, that African Americans do in fact deserve equal rights? Well, technically yes. Remember that what persuasion methods count as \"legitimate\" vs. \"underhanded\" is in the eye of the hearer, and in this case, Joe regards *any* means of persuasion as underhanded. 
That said, if Joe were to compromise with the anti-slavery people, the compromise would involve everyone being 99+% against slavery, because in terms of power to control the future, the anti-slavery camp seems to be far ahead. Alternatively, maybe the anti-slavery people could give Joe something else he wants (e.g., an extra couple of shirts) in return for his letting them persuade him of the anti-slavery stance. This could be a good trade for Joe given his side's low prospects of winning in the long run.\n\n\nAs this example reminds us, the current distribution of opinion is not necessarily the same as the future distribution of power, and sometimes we can anticipate in which directions the trends are going. For example, it seems very likely that concern for animal wellbeing will dramatically increase in the coming decades. Unlike the stock market, the trajectory of moral beliefs is not a random walk.\n\n\nWords vs. actions\n-----------------\n\n\nAbove we saw that moral discourse can often be a positive-sum activity insofar as other parties welcome being persuaded. (Of course, it may not always be *as* positive-sum as other projects that clearly benefit everyone, such as promoting compromise theory and institutions.) Conflicts in the realm of ideas are usually a good thing.\n\n\nIn contrast, direct actions may be more zero-sum when there's disagreement about the right action to take. Say person A thinks it's good to do a given action, and person B thinks it's equally wrong to do that same action.\n\n\nWhile people often complain about \"all talk and no action,\" in some cases, it can be Pareto-better to talk than to take action, if the issue at hand is one under dispute.\n\n\nOften our actions meld with our beliefs about what's right, so if you're trying to adopt a compromise stance for your actions, it can sometimes get tricky to mentally separate \"how I'm acting for instrumental reasons\" from \"how I feel for intrinsic reasons.\" Sometimes people may begin to think of the compromise stance as intrinsically the \"right\" one, while others will continue to maintain this separation. In our own brains, we can feel the distinction between these two categories with respect to our evolutionary drives: Instrumental reciprocity feels like our moral sense of fairness, and our intrinsic survival drives feel like selfish instincts.\n\n\nCompromise as a market\n----------------------\n\n\nControl of Earth's future light cone is something that most value systems want:\n\n\n* Egoists would like to run eudaimonic simulations of themselves.\n* [Fun theorists](http://lesswrong.com/lw/xy/the_fun_theory_sequence/) would like to create minds exploring constantly harder challenges.\n* Negative utilitarians would like to use computational resources to explore ways to reduce suffering in the universe.\n* Complexity maximizers would like to see a melange of interesting digital and physical patterns.\n* ...\n\n\nEach of these value systems can be regarded as an individual in an economy, aiming to maximize its own utility. Each egoist has a separate goal from other egoists, so most of the individuals in this economy might be egoists, and then there would be a few other (very large) individuals corresponding to the fun theorists, utilitarians, complexity maximizers, etc.
Resources in this economy include stars, raw materials for building [Dyson swarms](http://en.wikipedia.org/wiki/Dyson_sphere#Dyson_swarm), knowledge databases, algorithm source code, etc., and an individual's utility derives from using resources to produce what it values.\n\n\nIt's possible that the future will literally contain many agents with divergent values, but it's also possible that just one of these agents will win the race to build AI first, in which case it would have the light cone to itself. There are two cases to consider -- risk-neutral and risk-averse value systems -- and both suggest compromise as a positive-sum resolution to the AI race.\n\n\n### Risk-neutral value systems\n\n\nConsider an AI race between eudaimonia maximizers and paperclip maximizers, with odds of winning p and 1-p respectively. If these factions are risk-neutral, then \n\nexpected utility of eudaimons = p \\* utility(resources if win) = utility(p \\* (resources if win)), \n\nand similarly for the paperclippers. That is, we can pretend for purposes of analysis that when the factions compete for winner-takes-all, they actually control miniature future light cones that are p and 1-p times the size of the whole thing. But some parts of the light cone may be differentially more valuable than others. For example, the paperclippers need lots of planets containing iron and carbon to create steel, while the eudaimons need lots of stars for energy to power their simulations. So the parties would gain from trading with each other: The eudaimons giving away some of their planets in return for some stars. And similarly among other resource dimensions as well.\n\n\n### Risk-averse value systems\n\n\nFor risk-averse agents, the argument for compromise is even stronger. In particular, many egoists may just want to create one immortal copy of themselves (or maybe 5 or 10 for backup purposes); they don't necessarily care about turning the whole future light cone into copies of themselves, and even if they'd like that, they would still probably have diminishing marginal utility with respect to the number of copies of themselves. Likewise for people who care in general about \"survival of the human race\": It should be quite cheap to satisfy this desire with respect to present-day Earth-bound humans relative to the cosmic scales of resources available. Other ideologies may be risk-averse as well; e.g., negative utilitarians want some computing power to figure out how to reduce suffering, but they don't need vast amounts because they're not trying to fill the cosmos with anything in particular. Even fun theorists, eudaimons, etc. might be satisficing rather than maximizing and exhibit diminishing marginal utility of resources.\n\n\nIn these instances, the case for compromise is even more compelling: not only can the parties exchange resources that are differentially valuable, but the compromise also reduces uncertainty, which boosts expected utility in the same way that insurance does for buyers. For instance, with an egoist who just wants one immortal copy of herself, the expected utility of the outcome is basically proportional to the probability that the compromise goes through, which could be vastly higher than her probability of winning the whole light cone. Individual egoists might band together into collective-bargaining units to reduce the transaction costs of making trades with each human separately.
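To make the insurance-like effect of risk aversion concrete, here is a minimal Python sketch; the numbers and the satiating utility function are made-up illustrations rather than anything from the essay:

```python
# Toy comparison (made-up numbers): a winner-takes-all race vs. a guaranteed
# compromise share equal to the faction's probability of winning.
p_win = 0.001            # faction's chance of winning the whole light cone
total_resources = 100.0  # resources in the whole light cone (arbitrary units)
share = p_win * total_resources  # guaranteed share under a compromise deal

def u_linear(r):
    """Risk-neutral utility: value scales linearly with resources."""
    return r

def u_satiating(r, cost_of_one_copy=0.1):
    """Risk-averse egoist who mainly wants one immortal copy of herself."""
    return min(r / cost_of_one_copy, 1.0)

for name, u in [("risk-neutral", u_linear), ("satiating egoist", u_satiating)]:
    eu_race = p_win * u(total_resources)   # gamble: win everything or nothing
    eu_compromise = u(share)               # certainty of the proportional share
    print(name, eu_race, eu_compromise)
```

The risk-neutral faction is indifferent between the race and the guaranteed share (compare footnote 5), while the satiating egoist values the guaranteed share about a thousand times more than the gamble, which is the sense in which compromise works like insurance here.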
Such collective bargaining might serve as a group insurance plan, and those people who had more power would be able to afford higher-quality insurance plans.\n\n\nCarl Shulman [has pointed out](http://agi-conf.org/2011/carl-shulman-abstract/) the usefulness of risk aversion in encouraging cooperation. And indeed, maybe human risk aversion is one reason we see so much compromise in contemporary society. Note that even if only one side is risk-averse, we tend to get very strong compromise tendencies. For example, insurance companies are not risk-averse with respect to wealth (for profits or losses on the order of a few million dollars), but because individuals are, individuals buy insurance, which benefits both parties.\n\n\n### Further market analogies\n\n\nJust like in a market economy, trade among value systems may include externalities. For instance, suppose that many factions want to run learning computations that include \"[suffering subroutines](http://www.utilitarian-essays.com/reinforcement-learning.html),\" which negative utilitarians would like to avert. These would be analogous to pollution in a present-day context. In a [Coase](http://en.wikipedia.org/wiki/Coase_theorem) fashion, the negative utilitarians might bargain with the other parties to use alternate algorithms that don't suffer, even if they're slightly costlier. The negative utilitarians could pay for this by giving away stars and planets that they otherwise would have (probabilistically) controlled.\n\n\nThe trade among value systems here has some properties of a market economy, so some of the results of welfare economics will apply. If there are not many buyers and sellers, if information isn't perfect, etc., then the [first fundamental welfare theorem](http://en.wikipedia.org/wiki/Fundamental_theorems_of_welfare_economics) may not fully hold, but [perhaps](http://marginalrevolution.com/marginalrevolution/2007/08/the-first-funda.html) many of its principles would obtain in weaker form.\n\n\nIn general, markets are some of the most widespread and reliable instances of positive-sum interaction among competing agents, and we would do well to explore how, why, and when markets work or don't work.\n\n\nOf course, all of these trade scenarios depend on the existence of clear, robust mechanisms by which compromises can be made and maintained. Such mechanisms are present in peaceful societies that allow for markets, contracts, and legal enforcement, but it's much harder in the \"wild west\" of AI development, especially if one faction controls the light cone and has no more opposition. Exploring how to make compromise function in these contexts is an urgent research area with the potential to make everyone better off.\n\n\n\nValues as vectors\n-----------------\n\n\nConsider a multi-dimensional space of possible values: Happiness, knowledge, complexity, number of paperclips, etc. Different value systems (axiologies) care about these dimensions to different degrees. For example, hedonistic utilitarians care only about the first and not about the rest. Other people care about each of the first three to some degree.\n\n\nWe can think of a person's axiology as a vector in values space. The components of the vector represent what weight (possibly negative) the person places on that particular value. In a four-dimensional values space of (happiness, knowledge, complexity, paperclips), hedonistic utilitarians have the vector (1, 0, 0, 0). Other people have vectors like (0.94, 0.19, 0.28, 0). Here I've normalized these to unit vectors.
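To make the vector picture concrete, here is a minimal Python sketch of these two axiologies and of the dot-product evaluation rule described in the next paragraph (the example change vector is the one worked through there):

```python
import math

# The two axiology vectors from the text, over the dimensions
# (happiness, knowledge, complexity, paperclips).
hedonistic_utilitarian = (1.0, 0.0, 0.0, 0.0)
other_person = (0.94, 0.19, 0.28, 0.0)

def length(v):
    return math.sqrt(sum(x * x for x in v))

def evaluate(axiology, change):
    """Dot product: how an axiology scores a proposed change to the world."""
    return sum(a * c for a, c in zip(axiology, change))

print(length(other_person))                      # ~1.0: roughly a unit vector
change = (2.0, -1.0, 1.0, 4.0)                   # example change from the text
print(evaluate(hedonistic_utilitarian, change))  # 2.0
print(evaluate(other_person, change))            # ~1.97
```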
To evaluate a given change in the world, the axiologies take the [scalar projection](https://en.wikipedia.org/wiki/Vector_projection#Scalar_projection_2) of the change onto their vector, i.e., the dot product. For example, if the change is (+2, -1, +1, +4), utilitarians evaluate this as (1, 0, 0, 0) \\* (2, -1, 1, 4) = 1 \\* 2 + 0 \\* -1 + 0 \\* 1 + 0 \\* 4 = 2, while the other axiology evaluates its value to be 0.94 \\* 2 + 0.19 \\* -1 + 0.28 \\* 1 + 0 \\* 4 = 1.97.\n\n\nWe can imagine a similar set-up with the dimensions being policies rather than values per se, with each axiology assigning a weight to how much it wants or doesn't want each policy. This is the framework that Robin Hanson suggested in his post, \"[Policy Tug-O-War](http://www.overcomingbias.com/2007/05/policy_tugowar.html).\" The figure provides a graphical illustration of a compromise in this setting.\n\n\n \n\n![policy space](https://longtermrisk.org/files/policy-space.png) \n\nPareto improvements for competing value systems. The two axiologies are opposed on the x-axis dimension but agree on the y-axis dimension. Axiology #2 cares more about the y-axis dimension and so is willing to accept some loss on the x-axis dimension to compensate Axiology #1.\n\n\n### Sums as compromise solutions?\n\n\nAdrian Hutter suggested an extension to this formalism: The length of each vector could represent the number of people who hold a given axiology. Or, I would add, in the case of power-weighted compromise, the length could represent the power of the faction. Would the sum of the axiology vectors with power-weighted lengths then represent a natural power-weighted compromise solution? Of course, there may be constraints on which vectors are achievable given resources and other limitations of physical reality.\n\n\nIn some cases, summing axiology vectors seems to give the right solution. For example, consider two completely orthogonal values: Paperclips (x axis) and staples (y axis). Say a paperclip maximizer has twice as much power as its competitor staple maximizer in competing to control Earth's future light cone. The sum of their vectors would be 2 \\* (1,0) + (0,1) = (2,1). That means 2/3 of resources go to paperclips and 1/3 to staples, just as we might expect from a power-weighted compromise.[5](#link_ajs-fn-id_5-29)\n\n\nHowever, imagine now that there's a design for staples that allows paperclips to be fit inside them. This means the staple maximizers could, if they wanted, create some paperclips as well, although by default they wouldn't bother to do so. Assume there is no such design to fit staples inside paperclips. Now the staple maximizers have extra bargaining leverage: \"If we get more than 1/3 of resources for staples,\" they can say, \"we'll put some paperclips inside our staples, which will make both of us better off.\" Here the compromise outcome is based not just on pure power ratios (i.e., probabilities of winning control in a fight) but also on bargaining leverage. This is discussed more in \"[Appendix: Dividing the compromise pie](http://utilitarian-essays.com/compromise.html#dividing-compromise-pie).\"\n\n\n\nWorking together on compromise\n------------------------------\n\n\nI think advancing compromise is among the most important projects that we who want to reduce suffering can undertake. A future without compromise could be many times worse than a future with it. This is also true for other value systems as well, especially those that are risk-averse. 
Thus, advancing compromise is a win-win(-win-win-win-...) project that many of us may want to work on together. It seems like a [robustly positive](http://utilitarian-essays.com/robustness-against-uncertainty.html) undertaking, squares with [common sense](http://lesswrong.com/lw/iao/common_sense_as_a_prior/), and is even resilient to changes in our moral outlook. It's a form of \"pulling the rope sideways\" in policy tug-o-wars.\n\n\nAcknowledgements\n----------------\n\n\nThis essay was inspired by a discussion with Lucius Caviola. It draws heavily from the ideas of Carl Shulman. Also influential were writings by Jonah Sinick and Paul Christiano. An email from Pablo Stafforini prompted the section on epistemic convergence.\n\n\nAppendix: Dividing the compromise pie\n-------------------------------------\n\n\n### Imputations\n\n\nConsider several factions competing in a winner-takes-all race to control the future light cone. Let p_i denote the probability that faction i wins. Normalize the utility values for each faction so that utility of 0 represents losing the light-cone race, and utility of 100 represents winning it. Absent compromise, faction i's expected utility is 100 \\* p_i. Thus, in order for i to be willing to compromise, it must be the case that the compromise offers at least 100 \\* p_i, because otherwise it could do better by continuing to fight on its own. Compromise allocations that respect this \"individual rationality\" requirement are called [imputations](https://en.wikipedia.org/wiki/Imputation_(game_theory)).\n\n\nWe can see the imputations for the case of deep ecologists and animal welfarists in Figure 2. Absent bargaining, each side gets an expected utility of 100 \\* 0.5 = 50 by fighting for total control. Bargaining would allow each side to get more than half of what it wants, and the excess value to each side constitutes the gain from compromise.\n\n\n \n\n![Picture1 gains](https://longtermrisk.org/files/Picture1-gains.png) \n\nFigure 2: Imputations for compromise between deep ecologists and animal welfarists, with p_i = 0.5 for both sides. By \"Pareto frontier\" in this context, I mean the set of possible Pareto-optimal Pareto improvements relative to the (50, 50) disagreement point. \n\nAs we can see, there may be many imputations for a given problem, and even if they are all individually rational, they may not be collectively stable with more than two players because subgroups of players might gang together to break off from a whole-group compromise. There are [various solution concepts](https://en.wikipedia.org/wiki/Cooperative_game#Solution_concepts) for group stability of compromise in cooperative game theory, which impose additional requirements on top of a distribution merely being an imputation.\n\n\n### Compromise pie: Transferable-utility case\n\n\nUtility is [transferable](https://en.wikipedia.org/wiki/Transferable_utility) if it can be given to another party without losing any of the value. We can see in Figure 2 that utility is not completely transferable between deep ecologists and animal welfarists, because the Pareto frontier is curved. If the animal welfarists give up 1 unit of expected utility, the deep ecologists may not gain 1 whole unit. Utility would be transferable in the bargaining situation if the Pareto frontier between the two dashed black lines were straight.\n\n\nIn the special case when utility is transferable, we can use all the mechanics of [cooperative game theory](https://en.wikipedia.org/wiki/Cooperative_game) to analyze the situation.
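As one concrete illustration of that machinery, here is a minimal Python sketch of the Shapley value discussed next; the three-faction characteristic function is made up purely for illustration:

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value: each player's average marginal contribution
    over all orders in which the grand coalition could form."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            totals[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: t / len(orders) for p, t in totals.items()}

# Made-up characteristic function: the transferable utility (out of 120)
# that each coalition of factions could secure for itself by acting alone.
v = {frozenset(): 0, frozenset("A"): 60, frozenset("B"): 20, frozenset("C"): 20,
     frozenset("AB"): 90, frozenset("AC"): 90, frozenset("BC"): 50,
     frozenset("ABC"): 120}

print(shapley("ABC", v))  # roughly {'A': 66.7, 'B': 26.7, 'C': 26.7}
```

Each faction receives its average marginal contribution across all orders in which the grand coalition could assemble, so in this toy game the more powerful faction A gets about 66.7 of the 120 units while B and C get about 26.7 each.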
For example, the [Shapley value](https://en.wikipedia.org/wiki/Shapley_value) gives an answer to the problem of what the exact pie-slicing arrangement should look like, at least if we want to satisfy the [four axioms](https://en.wikipedia.org/wiki/Shapley_value#Properties) that uniquely specify the Shapley division.\n\n\nIt's an interesting [theorem](https://en.wikipedia.org/wiki/Cooperative_game#Convex_cooperative_games) that if a cooperative game is convex, then all of the players want to work together (i.e., the [core](https://en.wikipedia.org/wiki/Cooperative_game#The_core) is non-empty and also unique), and the Shapley value gives \"the center of gravity\" of the core. Alas, as far as I can tell, real-world situations will not always be convex.\n\n\n### Non-transferable case: Nash bargaining game\n\n\nMany times the utility gains from compromise are not completely transferable. We saw this in Figure 2 through the fact that the Pareto frontier is curved. Define u := (animal-welfarist expected utility) - 50, i.e., the excess expected utility above no compromise, and v := (deep-ecologist expected utility) - 50. The (u,v) points that lie within the dotted lines and the curved red line are the potential imputations, i.e., ways to divide the gains from trade. The fact that utility is not transferable in this case means we can't represent the Pareto frontier by a line u + v = constant.\n\n\nHowever, we can use another approach, called the [Nash bargaining game](https://en.wikipedia.org/wiki/Bargaining_problem). In [Nash's solution](https://en.wikipedia.org/wiki/Bargaining_problem#Nash_bargaining_solution), the bargaining point is that which maximizes u \\* v. Figure 304.1 (p. 304) of *A Course in Game Theory* by Osborne and Rubinstein illustrates this graphically as the intersection of lines u \\* v = constant with the set of imputations, and I've drawn a similar depiction in Figure 3:\n\n\n \n\n![New FRI pic](https://longtermrisk.org/files/New-FRI-pic.png) \n\nFigure 3: Nash bargaining solution for 50-50 balance of power. \n\nNote that the split would be different for a differently shaped Pareto frontier. For example, if p = 0.8 for the deep ecologists and p = 0.2 for the animal welfarists, then we'd have a situation like the following:\n\n\n \n\n![](https://longtermrisk.org/files/Nash2.png) \n\nFigure 4: Nash bargaining solution for 80-20 balance of power. \n\nIf, for illustration, we use the formula (deep ecologists' expected utility) = 100 - (animal welfarists' expected utility)^2/100 for the Pareto frontier, as in Figure 1, then we can compute the exact Nash compromise point, as is shown in the following table:\n\n\n\n\n| Animal welfarists' expected utility | Deep ecologists' expected utility | Animal welfarists' expected utility - 20 | Deep ecologists' expected utility - 80 | (Animal welfarists' expected utility - 20) \\* (Deep ecologists' expected utility - 80) |\n| --- | --- | --- | --- | --- |\n| 20 | 96 | 0 | 16 | 0 |\n| 22 | 95.16 | 2 | 15.16 | 30.32 |\n| 24 | 94.24 | 4 | 14.24 | 56.96 |\n| 26 | 93.24 | 6 | 13.24 | 79.44 |\n| 28 | 92.16 | 8 | 12.16 | 97.28 |\n| 30 | 91 | 10 | 11 | 110 |\n| 32 | 89.76 | 12 | 9.76 | 117.12 |\n| **34** | **88.44** | **14** | **8.44** | **118.16** |\n| 36 | 87.04 | 16 | 7.04 | 112.64 |\n| 38 | 85.56 | 18 | 5.56 | 100.08 |\n| 40 | 84 | 20 | 4 | 80 |\n| 42 | 82.36 | 22 | 2.36 | 51.92 |\n| 44 | 80.64 | 24 | 0.64 | 15.36 |\n| 44.72 | 80 | 24.72 | 0 | 0 |\n\n\nThe maximum of the product in the last column occurs around (34, 88), which will be the Nash compromise arrangement.
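As a quick numeric check of the table, here is a minimal Python sketch that searches a finer grid of the same illustrative frontier for the point maximizing the product of gains over the (20, 80) disagreement point:

```python
# Nash bargaining for the 80-20 example: maximize the product of each side's
# gain over its disagreement payoff, using the illustrative Pareto frontier
# DE = 100 - AW^2 / 100 from the text.
aw_disagreement, de_disagreement = 20.0, 80.0

best = None
for i in range(2000, 4473):       # animal welfarists' utility from 20.00 to 44.72
    aw = i / 100.0
    de = 100.0 - aw ** 2 / 100.0
    nash_product = (aw - aw_disagreement) * (de - de_disagreement)
    if best is None or nash_product > best[0]:
        best = (nash_product, aw, de)

print(best)  # maximum near AW = 33.33, DE = 88.89, product about 118.5
```

On the finer grid the maximum sits near (33.3, 88.9), consistent with the coarser (34, 88) entry bolded in the table.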
The animal welfarists got a surplus of 34 - 20 = 14, and the deep ecologists, 88 - 80 = 8.\n\n\nIt's worth noting that in fact any of the divisions in the table is a Nash equilibrium, because given the demand of one faction for a share of the pie, the other faction can only either (1) take less, which it wouldn't want to do, or (2) demand more and thereby ruin the compromise, leaving it with no surplus. Thus, the bargaining solution allows us to narrow down to a particular point among the infinite set of Nash equilibria.\n\n\nThe bargaining game contains [other solutions](https://en.wikipedia.org/wiki/Bargaining_problem#Bargaining_solutions) besides Nash's that satisfy different intuitive axioms.\n\n\n### Multiple factions with non-transferable utility\n\n\nThe bargaining problem with more than two players becomes more complicated. In \"[A Comparison of Non-Transferable Utility Values](http://www.ma.huji.ac.il/hart/papers/3ntu-val.pdf),\" Sergiu Hart identifies three different proposals for dividing the compromise pie -- Harsanyi (1963), Shapley (1969), and Maschler and Owen (1992) -- each of which may give different allocations. Each proposal has its own axiomatization (see endnote 1 of Hart's paper), so it's not clear which of these options would be chosen. Perhaps one would emerge as a more plausible Schelling point than the others as the future unfolds. [↩](#dividing-compromise-pie-back1) [↩](#dividing-compromise-pie-back2)\n\n\nFootnotes\n---------\n\n\n1. Robot animals would represent an improvement, though they aren't a perfect solution because the robots too would probably suffer to some degree in order to operate successfully in the world. That said, perhaps more humane algorithms could be designed than what are used in animals. Also, absence of predators would eliminate the pain of being eaten alive, as well as fear of being eaten. If the robots didn't compete for shared resources, arms-race pressures for intelligence would abate, so the robots would be able to accomplish similar tasks as their biological versions with less cognitive and emotional sophistication. Alas, not everyone would be content with a proposal to replace animals by robot counterparts. In *Consciousness Explained* (p. 452), Daniel Dennett says that he's glad to know that there are predators in his woods, even if he doesn't see them, and that he would be less satisfied with \"robot beasties\".  [(back)](#back_ajs-fn-id_1-29)\n2. Depending on what landscape of payoffs is involved, it seems plausible that cooperation could indeed be an [evolutionarily stable strategy](http://en.wikipedia.org/wiki/Evolutionarily_stable_strategy) (ESS). As an example, consider the classic [game of hawk-dove](http://en.wikipedia.org/wiki/Chicken_(game)#Hawk-Dove) with an additional mutant variant, called Own-cooperator, which fights Hawks and Doves but compromises with its own kind. Let the hawk-dove payoffs be V=2 and C=4. \n\n\n| | | | |\n| --- | --- | --- | --- |\n| | Hawk | Dove | Own-cooperator |\n| Hawk | -1, -1 | 2, 0 | -1, -1 |\n| Dove | 0, 2 | 1, 1 | 0, 2 |\n| Own-cooperator | -1, -1 | 2, 0 | 1, 1 |\n\nHere, Own-cooperation is an ESS using the first condition of [Maynard Smith and Price](http://en.wikipedia.org/wiki/Evolutionarily_stable_strategy#Nash_equilibria_and_ESS): For the strategy S = Own-cooperator, for any T in {Hawk, Dove}, playing S against S is strictly better than playing T against S.  [(back)](#back_ajs-fn-id_2-29)\n3. 
In fairness, at bottom Bayesian priors are no different, but some priors seem more \"reasonable\" than others, at least given certain priors for reasonableness.  [(back)](#back_ajs-fn-id_3-29)\n4. It turns out there's [an existing thought experiment](http://lesswrong.com/lw/9z/the_epistemic_prisoners_dilemma/) with the same name, which is similar in spirit.  [(back)](#back_ajs-fn-id_4-29)\n5. Actually, for completely mutually exclusive values and risk-neutral actors, there are no strict gains from compromise, because the paperclip and staple maximizers are indifferent between a guaranteed 2/3 : 1/3 split vs. 2/3 : 1/3 probabilities of winning everything.
Also note that the vector formalism doesn't encapsulate risk-averse value systems or value systems whose axiology is anything other than a linear sum of components.", "url": "https://longtermrisk.org/gains-from-trade-through-compromise/", "title": "Gains from Trade through Compromise", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2015-04-09T22:00:00Z", "authors": ["Brian Tomasik"], "summary": [], "id": "bea6fff4f02f6b6a45c77ff61878f0a8"} {"text": "How the Simulation Argument Dampens Future Fanaticism\n=====================================================\n\n\n\n23 August 2016\nby [Brian Tomasik](https://longtermrisk.org/author/brian-tomasik/ \"Posts by Brian Tomasik\")\n\nInitial ideas: 2013, 2014; first written: 13 Jun. 2016; last update: 15 Mar. 2018\n\n Summary\n-------\n\n\nSome effective altruists assume that most of the expected impact of our actions comes from how we influence the very long-term future of Earth-originating intelligence over the coming ~billions of years. According to this view, helping humans and animals in the short term matters, but it mainly only matters via effects on far-future outcomes.\n\n\nThere are [a number of](http://reducing-suffering.org/altruists-focus-reducing-short-term-far-future-suffering/ \"'Should Altruists Focus on Reducing Short-Term or Far-Future Suffering?'\") heuristic reasons to be skeptical of the view that the far future astronomically dominates the short term. This piece zooms in on what I see as perhaps the strongest concrete (rather than heuristic) argument why short-term impacts may matter a lot more than is naively assumed. In particular, there's a non-trivial chance that most of the copies of ourselves are instantiated in relatively short-lived simulations run by superintelligent civilizations, and if so, when we act to help others in the short run, our good deeds are duplicated many times over. Notably, this reasoning dramatically upshifts the relative importance of short-term helping *even if* there's only a small chance that Nick Bostrom's basic simulation argument is correct.\n\n\nMy thesis doesn't prove that short-term helping is more important than targeting the far future, and indeed, a plausible rough calculation suggests that targeting the far future is still several orders of magnitude more important. 
But my argument does leave open uncertainty regarding the short-term-vs.-far-future question and highlights the value of further research on this matter.\n\n\n### Other versions\n\n\n\n[![](/files/pdf-icon.png)](https://longtermrisk.org/files/how-the-simulation-argument-dampens-future-fanaticism.pdf)\n\nContents\n\n+ [Other versions](#Other_versions)\n\n* [Epigraph](#Epigraph)\n* [Introduction](#Introduction)\n* [Anti-mugging approaches](#Anti-mugging_approaches)\n\t+ [Hansonian leverage penalty](#Hansonian_leverage_penalty)\n\t+ [Simulation argument](#Simulation_argument)\n\t+ [Reliance on observers?](#Reliance_on_observers)\n\t+ [Application to future fanaticism](#Application_to_future_fanaticism)\n* [Simulation argument upshifts the relative importance of short-term helping](#Simulation_argument_upshifts_the_relative_importance_of_short-term_helping)\n* [How much does the simulation argument reduce future fanaticism?](#How_much_does_the_simulation_argument_reduce_future_fanaticism)\n\t+ [Calculation using Bostrom-style anthropics and causal decision theory](#Calculation_using_Bostrom-style_anthropics_and_causal_decision_theory)\n\t\t- [A simple example](#A_simple_example)\n\t+ [Calculation based on all your copies](#Calculation_based_on_all_your_copies)\n\t+ [Simplifying L/S](#Simplifying_LS)\n\t+ [Plugging in parameter values](#Plugging_in_parameter_values)\n* [Objections](#Objections)\n\t+ [Doesn't this assume that the simulation hypothesis is 99.999999% likely to be true?](#Doesnt_this_assume_that_the_simulation_hypothesis_is_99999999_likely_to_be_true)\n\t+ [What if almost all civilizations go extinct before space colonization?](#What_if_almost_all_civilizations_go_extinct_before_space_colonization)\n\t+ [What if most of the simulations are long-lived?](#What_if_most_of_the_simulations_are_long-lived)\n\t+ [What if the basement universe has unlimited computing power?](#What_if_the_basement_universe_has_unlimited_computing_power)\n\t+ [Our simulated copies can still impact the far future by helping our simulators](#Our_simulated_copies_can_still_impact_the_far_future_by_helping_our_simulators)\n\t+ [What if simulations aren't conscious?](#What_if_simulations_arent_conscious)\n\t+ [The simulation argument is weird](#The_simulation_argument_is_weird)\n\t+ [Simulated people matter less due to a bigger Kolmogorov penalty](#Simulated_people_matter_less_due_to_a_bigger_Kolmogorov_penalty)\n\t+ [Many copies of a brain don't matter much more than one copy](#Many_copies_of_a_brain_dont_matter_much_more_than_one_copy)\n\t+ [If we're simulated, then reducing suffering by preventing existence frees up more computing resources](#If_were_simulated_then_reducing_suffering_by_preventing_existence_frees_up_more_computing_resources)\n* [Copies that aren't both biological and simulated simultaneously](#Copies_that_arent_both_biological_and_simulated_simultaneously)\n* [Solipsist and solipsish simulations](#Solipsist_and_solipsish_simulations)\n\t+ [Famous people](#Famous_people)\n\t+ [How feasible are solipsist simulations?](#How_feasible_are_solipsist_simulations)\n\t\t- [Open question: Could wildlife monitoring be bad?](#Open_question_Could_wildlife_monitoring_be_bad)\n\t+ [Tradeoff between number of copies vs. 
impact per copy](#Tradeoff_between_number_of_copies_vs_impact_per_copy)\n* [Suffering in physics or other black swans could save future fanaticism](#Suffering_in_physics_or_other_black_swans_could_save_future_fanaticism)\n* [The value of further research](#The_value_of_further_research)\n* [Acknowledgements](#Acknowledgements)\n* [Footnotes](#Footnotes)\n\nEpigraph\n--------\n\n\n\n> The question is whether one can get more value from controlling structures that — in an astronomical-sized universe — are likely to exist many times, than from an extremely small probability of controlling the whole thing. \n> \n> --[steven0461](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/ \"'Bayesian Adjustment Does Not Defeat Existential Risk Charity'\")\n> \n> \n\n\nIntroduction\n------------\n\n\nOne of the ideas that's well accepted within the effective-altruism community but rare in the larger world is the immense importance of the far-future effects of our actions. Of course, many environmentalists are [concerned about the future](https://en.wikipedia.org/wiki/Seven_generation_sustainability \"'Seven generation sustainability'\") of Earth, and people in past generations have [started projects that](http://www.wisegeek.com/what-is-cathedral-thinking.htm \"'What is Cathedral Thinking? (with pictures)'\") would not finish in their lifetimes. But it's rare for in-the-trenches altruists, rather than just science-fiction authors and cosmologists, to consider the effects of their actions on sentient beings that will exist billions of years from now.\n\n\nFuture focus is extremely important, but it can at times be exaggerated. It's sometimes thought that the far future is so important that the short-term effects of our actions on the welfare of organisms alive today are completely negligible by comparison, *except* for instrumental reasons insofar as short-term actions influence far-future outcomes. I call this \"far-future fanaticism\", in analogy with the \"fanaticism problem\" discussed in Nick Bostrom's \"[Infinite Ethics](http://www.nickbostrom.com/ethics/infinite.pdf \"'INFINITE ETHICS'\")\" (sec. 4.3). I probably believed something along these lines from ~2006 to ~2013.\n\n\nHowever, like with almost everything else in life, the complete picture [is more complicated](http://www.smbc-comics.com/?id=2177 \"'Saturday Morning Breakfast Cereal': 'Everything Wrong With Political Discourse In One Graph'\"). We should be extremely suspicious of any simple argument which claims that one action is, say, 10^30 times more important than another action, e.g., that influencing the far future is 10^30 times more important than influencing the near term. Maybe that's true, but reality is often complex, and extraordinary claims of that type should not be accepted hastily. This is one of [several reasons](http://reducing-suffering.org/altruists-focus-reducing-short-term-far-future-suffering/ \"'Should Altruists Focus on Reducing Short-Term or Far-Future Suffering?'\") we should maintain modesty about whether working to influence the far future is vastly better than working to improve the wellbeing of organisms in the nearer term.\n\n\nAnti-mugging approaches\n-----------------------\n\n\nDylan Matthews, like many others, [has expressed](http://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai \"'I spent a weekend at Google talking with nerds about charity. 
I came away … worried.'\") skepticism about far-future fanaticism on the grounds that it smells of [Pascal's mugging](https://en.wikipedia.org/wiki/Pascal%27s_mugging \"'Pascal's mugging'\"). I think far-future fanaticism is a pretty mild form of ([mugger-less](http://lesswrong.com/lw/h1i/tactics_against_pascals_mugging/ \"'Tactics against Pascal's Mugging': 'The mugger-less version is on the other hand more interesting and more problematic. You don't actually need a person to make such a statement -- the AI, without any prompting, can assign prior probabilities to theories which produce outcomes of positive or negative value vastly greater than their assigned improbabilities.'\")) Pascal's mugging, since the future fanatic's claim is vastly more probable *a priori* than the Pascal-mugger's claim. Still, Pascal's mugging comes in degrees, and lessons from one instance should transfer to others.[1](#link_ajs-fn-id_1-2869)\n\n\n### Hansonian leverage penalty\n\n\nThe most popular resolution of Pascal's mugging on the [original thread](http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/ \"'Pascal's Mugging: Tiny Probabilities of Vast Utilities'\") was [that by Robin Hanson](http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/ui9 \"'RobinHanson comments on Pascal's Mugging: Tiny Probabilities of Vast Utilities - Less Wrong'\"): \"People have been talking about assuming that states with many people hurt have a low (prior) probability. It might be more promising to assume that states with many people hurt have a low *correlation* with what any random person claims to be able to effect.\"\n\n\nArisKatsaris [generalized](http://lesswrong.com/lw/h1i/tactics_against_pascals_mugging/ \"'Tactics against Pascal's Mugging'\") Hanson's idea to \"The Law of Visible Impact\": \"Penalize the prior probability of hypotheses which argue for the existence of high impact events whose consequences nonetheless remain unobserved.\"\n\n\nEliezer Yudkowsky [called](http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/ \"'Pascal's Muggle: Infinitesimal Priors and Strong Evidence'\") this a \"leverage penalty\". However, he [goes on](http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/ \"'Pascal's Muggle: Infinitesimal Priors and Strong Evidence'\") to show how a leverage penalty against the possibility of helping, say, a googolplex people can lead you to disbelieve scenarios where you could have huge impact, no matter how much evidence you have, which seems possibly wrong.\n\n\n### Simulation argument\n\n\nIn this piece, I don't rely on a general Hansonian leverage penalty. Rather, I use the simulation argument, which resembles the Hansonian leverage penalty in its effects, but it does so organically rather than in a forced way.\n\n\nYudkowsky [says](http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/ \"'Pascal's Muggle: Infinitesimal Priors and Strong Evidence'\"): \"Conceptually, the Hansonian leverage penalty doesn't interact much with the Simulation Hypothesis (SH) at all.\" However, the two ideas act similarly and have a historical connection. 
Indeed, Yudkowsky [discussed](http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/uig \"'Eliezer_Yudkowsky comments on Pascal's Mugging: Tiny Probabilities of Vast Utilities - Less Wrong'\") something like the simulation-argument solution to Pascal's mugging after hearing Hanson's idea:\n\n\n\n> Yes, if you've got 3↑↑↑↑3 people running around they can't *all* have sole control over each other's existence. So in a scenario where lots and lots of people exist, one has to penalize by *a proportional factor* the probability that any one person's binary decision can solely control the whole bunch.\n> \n> \n> Even if the Matrix-claimant says that the 3↑↑↑↑3 minds created will be unlike you, with information that tells them they're powerless, if you're in a generalized scenario where anyone has and uses that kind of power, the vast majority of mind-instantiations are in leaves rather than roots.\n> \n> \n\n\nThe way I understand Yudkowsky's point is that if the universe is big enough to contain 3↑↑↑↑3 people, then for every person who's being mugged by a genuine mugger with control over 3↑↑↑↑3 people, there are probably astronomical numbers of other people who are confronting lying muggers, pranks, hallucinations, dreams, and so on. So across the multiverse, almost all people who get Pascal-mugged can't actually save 3↑↑↑↑3 people, and in fact, the number of people who get fake Pascal-mugged is proportional to 3↑↑↑↑3. Hence, the probability of *actually* being able to help N people is roughly k/N for some constant k, so the expected value of giving in to the mugging remains finite regardless of how big N is.\n\n\nHowever, this same kind of reasoning also works for Yudkowsky's \"[Pascal's Muggle](http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/ \"'Pascal's Muggle: Infinitesimal Priors and Strong Evidence'\")\" scenario in which a Matrix Lord opens \"up a fiery portal in the sky\" to convince a person that the Matrix Lord is telling the truth about a deal to save a googolplex lives for $5. But given that there's a huge amount of computing power in the Matrix Lord's universe, for every one Matrix Lord who lets a single person determine the fate of a googolplex people, there may be tons of Matrix Lords [just faking it](http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/8x3p \"'Manfred comments on Pascal's Muggle: Infinitesimal Priors and Strong Evidence - Less Wrong'\") (whether for the lulz, to test the simulation software, or for some other reason). So the expected number of copies of a person facing a lying Matrix Lord should be proportional to a googolplex, and hence, the probability penalty that the Hansonian prior would have suggested seems roughly vindicated. Yudkowsky makes a similar point:\n\n\n\n> when it comes to improbability on the order of 1/3↑↑↑3, the prior improbability *is* inescapable - your sensory experiences *can't* possibly be that unique - which is assumed to be appropriate because almost-everyone who ever believes they'll be in a position to help 3↑↑↑3 people *will in fact* be hallucinating. 
Boltzmann brains should be much more common than people in a unique position to affect 3↑↑↑3 others, at least if the causal graphs are finite.\n> \n> \n\n\n### Reliance on observers?\n\n\nArisKatsaris [complains](http://lesswrong.com/lw/h1i/tactics_against_pascals_mugging/ \"'Tactics against Pascal's Mugging'\") that Hanson's principle \"seems to treat the concept of 'person' as ontologically fundamental\", [the way that](http://reducing-suffering.org/anthropics-without-reference-classes/ \"'Anthropics without Reference Classes'\") other instances of Nick Bostrom-style anthropic reasoning do. But, with the simulation-argument approach, you can avoid this problem by just talking about exact copies of yourself, where a \"copy\" means \"a physical structure whose high-level decision-making algorithms exactly mirror your own, such that what you decide to do, it also decides to do\". A copy needn't (and in general doesn't) share your full environment, just your current sensory inputs and behavioral outputs for some (possibly short) length of time. Then Yudkowsky's argument is that almost all copies of you are confronting fake or imagined muggers.\n\n\n### Application to future fanaticism\n\n\nWe can apply the simulation anti-mugging argument to future fanaticism. Rather than being the sole person out of 3↑↑↑↑3 people to control the actions of the mugger, we on Earth in the coming centuries are, perhaps, the sole tens of billions of people to control the far-future of Earth-originating intelligence, which might involve ~10^52 people, to use the Bostrom estimate quoted in Matthews's article. For every one biological human on the real Earth, there may be tons of simulated humans on simulated Earths, so most of our copies probably \"are in leaves rather than roots\", to use Yudkowsky's terminology.\n\n\nEven if Earth-originating intelligence specifically doesn't run ancestor simulations, other civilizations may run simulations, such as when studying the origin of life on various planets, and we might be in some of those simulations. This is similar to how, even though a real Pascal-mugger might specify that all of the 3↑↑↑↑3 people that *she* will create will never think they're being Pascal-mugged, in the multiverse at large, there should be lots more people in various other circumstances who *are* fake Pascal-mugged.\n\n\nYudkowsky [acknowledges](http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/ \"'Pascal's Muggle: Infinitesimal Priors and Strong Evidence'\") the simulation possibility and its implications for future fanaticism:\n\n\n\n> If we *don't* take everything at face value, then there might be such things as ancestor simulations, and it might be that your experience of looking around the room is something that happens in 10^20 ancestor simulations for every time that it happens in 'base level' reality. In this case your probable leverage on the future is diluted (though it may be large even post-dilution).\n> \n> \n\n\nIf we think of ourselves [as all our copies](http://reducing-suffering.org/anthropics-without-reference-classes/#Update_Feb_2015_You_are_all_your_copies \"'Anthropics without Reference Classes': 'Update, Feb.
2015: You are all your copies'\") rather than a particular cluster of cells or transistors, then the simulation hypothesis doesn't decrease our probable leverage but actually increases it, especially the leverage from short-term actions, as is discussed below.\n\n\nSimulation argument upshifts the relative importance of short-term helping\n--------------------------------------------------------------------------\n\n\nI first began thinking about this topic due to [a post](https://web.archive.org/web/20160627084201/http://felicifia.org:80/viewtopic.php?t=899 \"'The simulation argument and human extinction'\") by Pablo Stafforini:\n\n\n\n> if you think there is a chance that posthumanity will run ancestor simulations [...], the prospect of human extinction is much less serious than you thought it was.\n> \n> \n\n\nSince I'm a negative utilitarian, I would [probably prefer](https://longtermrisk.org/risks-of-astronomical-future-suffering/ \"'Risks of Astronomical Future Suffering'\") for space not to be colonized, but Stafforini's point also has relevance for efforts to reduce the badness of the far future, not just efforts to prevent human extinction.\n\n\nRobin Hanson [makes a similar point](https://web.archive.org/web/20170322174949/http://www.transhumanist.com/volume7/simulation.html \"'How To Live In A Simulation'\"):\n\n\n\n> if not many simulations last through all of human history, then the chance that your world will end soon is higher than it would be if you were not living in a simulation. So all else equal you should care less about the future of yourself and of humanity, and live more for today. This remains true even if you are highly uncertain of exactly how long the typical simulation lasts.\n> \n> \n\n\nOne response is to bite the simulation bullet and just focus on scenarios where we are in fact in basement-level reality, since [if we are, we can still](http://www.33rdsquare.com/2012/10/jaan-tallinns-metaphysical-quest.html \"'Jaan Tallinn's Metaphysical Quest'\") have a huge impact: \"Michael Vassar - if you think you are Napoleon, and everyone that thinks this way is in a mental institution, you should still act like Napoleon, because if you are, your actions matter a lot.\"\n\n\nA second response is to realize that actions focused on helping in the short term may be relatively more important than the future fanatic thought. Most simulations are probably short-lived, because one can run lots of short-lived simulations with the same computing resources as it takes to run a single long-lived simulation. [Hedonic Treader](https://web.archive.org/web/20160627084201/http://felicifia.org:80/viewtopic.php?t=899 \"'The simulation argument and human extinction'\"): \"Generally speaking, it seems that if you have evidence that your reality may be more short-lived than you thought, this is a good reason to favor the near future over the far future.\"\n\n\nHow much does the simulation argument reduce future fanaticism?\n---------------------------------------------------------------\n\n\n*Note: This section is a more detailed version of an argument written [here](http://lesswrong.com/lw/hol/a_personal_history_of_involvement_with_effective/b1ig \"'Brian_Tomasik comments on A personal history of involvement with effective altruism - Less Wrong'\"). Readers may find that presentation of the calculations simpler to understand.*\n\n\nThis section presents a simplified framework for estimating the relative importance of short-term vs. far-future actions in light of the simulation argument. 
An example of an action targeted for short-term impact is changing ecosystems on Earth in order to reduce wild-animal suffering, such as by [converting lawns to gravel](http://reducing-suffering.org/convert-grass-lawns-to-gravel-to-reduce-insect-suffering/ \"'Convert Grass Lawns to Gravel to Reduce Insect Suffering'\"). An example of a far-future-focused action is spreading the idea that it's wrong to run detailed simulations of ecosystems (whether for reasons of science, entertainment, or deep ecology) because of the wild-animal suffering they would contain. Of course, both of these actions affect both the short term and the far future, but for purposes of this analysis, I'll pretend that gravel lawns only prevent bugs from suffering in the short run, while anti-nature-simulation meme-spreading only helps prevent bugs from suffering in the long run. I'm trying to focus just on the targeted impact time horizon, but of course, in reality, even if the future fanatic is right, every short-term action has far-future implications, so [no charity is](http://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/ \"'Why Charities Don't Differ Astronomically in Cost-Effectiveness'\") 1030 times more important than another one.\n\n\nI'll assume that most of the suffering of the far future will be created by the computations that an advanced civilization would run. Rather than measuring computational capacity in [FLOPS](https://en.wikipedia.org/wiki/FLOPS \"'FLOPS'\") or some other [conventional performance metric](https://en.wikipedia.org/wiki/Computer_performance \"'Computer performance'\"), I'll measure computations by how much sentience they contain in the form of the agents and subroutines that are being computed, with the unit of measurement being what I'll call a \"sent\". I define \"sentience\" as \"morally relevant complexity of mental life\". I compute the moral value (or disvalue) for an agent experiencing an emotion as\n\n\n\n> moral value = (sentience of the agent) \\* (how intense the agent would judge the emotion to be relative to evolutionarily/physiologically typical emotions for that agent) \\* (duration of the experience).\n> \n> \n\n\nFor example, if a human has sentience of 1 sent and a fly has sentience of 0.01 sents, then even if a fly experiences a somewhat more damaging event relative to its utility function, that event may get less moral weight.\n\n\nUsing units of sentience will help make later calculations easier. I'll define 1 sent-year as the amount of complexity-weighted experience of one life-year of a typical biological human. That is, consider the sentience over time experienced in a year by the median biological human on Earth right now. Then, a computational process that has 46 times this much subjective experience has 46 sent-years of computation.[2](#link_ajs-fn-id_2-2869) Computations with a higher density of sentience may have more sents even if they have fewer FLOPS.\n\n\nSuppose there's a large but finite number C of civilizations that are about to colonize space. (If one insists that the universe is infinite, one can restrict the analysis to some huge but finite subset of the universe, to keep infinities from destroying math.) On average, these civilizations will run computations whose sentience is equivalent to that of N human-years, i.e., a computing capacity of N sent-years. 
So these civilizations collectively create the equivalent of C \\* N sent-years.\n\n\nSome of these minds may be created by agents who want to feel intense emotions by immersing (copies of) themselves in experience-machines or virtual worlds. Also, we have much greater control over the experiences of a programmed digital agent than we do over present-day biological creatures.[3](#link_ajs-fn-id_3-2869) These factors suggest that influencing a life-year experienced by a future human might be many times more altruistically important than influencing a life-year experienced by a present-day human. The future, simulated human might have much higher intensity of experience per unit time, and we may have much greater control over the quality of his experience. Let the multiplicative factor T represent how much more important it is to influence a unit of sentience by the average future digital agent than a present-day biological one for these reasons. T will be in units of moral (dis)value per sent-year. If one thinks that a significant fraction of post-human simulations will be run for reasons of wireheading or intrinsically valuing intense experiences, then T may be much higher than 1, while if one thinks that most simulations would be run for purposes of scientific / historical discovery, then T would be closer to 1. T also counts the intensity and controllability of non-simulation subjective experiences. If a lot of the subjective experience in the far future comes from low-level [subroutines](https://longtermrisk.org/a-dialogue-on-suffering-subroutines/ \"'A Dialogue on Suffering Subroutines'\") that have fairly non-intense experiences, then T might be closer to 1.\n\n\nSuppose that the amount of sentience on Earth in the near term (say, the next century or two) is some amount E sent-years. And suppose that some fraction fE of this sentience takes the form of human minds, with the rest being animals, [other life forms](http://reducing-suffering.org/bacteria-plants-and-graded-sentience/ \"'Bacteria, Plants, and Graded Sentience'\"), [computers](http://reducing-suffering.org/why-your-laptop-may-be-marginally-sentient/ \"'Why Your Laptop May Be Marginally Sentient'\"), and so on.\n\n\nSome far-future simulations may contain just one richly computed mind in an otherwise superficial world. I'll call these \"solipsist simulations\". Many other simulations may contain several simulated people interacting but in a very limited area and for a short time. I'll neologize the adjective \"solipsish\" to refer to these simulations, since they're not quite solipist, but because they have so few people, they're solipsist-ish. Robin Hanson [paints](https://web.archive.org/web/20170322174949/http://www.transhumanist.com/volume7/simulation.html \"'How To Live In A Simulation'\") the following picture of a solipsish simulation:\n\n\n\n> Consider, for example, a computer simulation of a party at the turn of the millennium created to allow a particular future guest to participate. This simulation might be planned to last only one night, and at the start be limited to the people in the party building, and perhaps a few people visible from that building. 
If the future guest decided to leave the party and wander the city, the simulated people at the party might be erased, to be replaced by simulated people that populate the street where the partygoer walks.\n> \n> \n\n\nIn contrast, a non-solipsish simulation is one in which most or all of the people and animals who seem to exist on Earth are actually being simulated to a non-trivial level of detail. (Inanimate matter and outer space may still be simulated with low levels of richness.)\n\n\nLet fN be the fraction of computations run by advanced civilizations that are non-solipsish simulations of beings who think they're humans on Earth, where computations are measured in sent-years, i.e., fN = (sent-years of all non-solipsish sims who think they're humans on Earth)/(sent-years of all computations that are run in total). And let fC be the fraction of the C civilizations who actually started out as biological humans on Earth (rather than biological aliens).\n\n\n### Calculation using Bostrom-style anthropics and causal decision theory\n\n\nI and most [MIRI](https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute \"'Machine Intelligence Research Institute'\") researchers have moved on from Bostrom-style anthropic reasoning, but Bostrom anthropics remains well known in the scholarly literature and is useful in many applications, so I'll first explore the implications of the simulation argument in this framework. In particular, I'll use the [self-sampling assumption](https://en.wikipedia.org/wiki/Self-sampling_assumption \"'Self-sampling assumption'\") with the reference class of \"humans who think they're on pre-colonization Earth\". The total number of such humans is a combination of those who *actually are* biological organisms on Earth:\n\n\n\n> (number of real Earths) \\* (human sent-years per real Earth) = (C \\* fC) \\* (E \\* fE) \n> \n> and those in simulations who *think* they're on Earth: \n> \n> (number of advanced-civilization computations) \\* (fraction comprised of non-solipsish humans who think they're on Earth) = C \\* N \\* fN.\n> \n> \n\n\nNote that Bostrom's strong self-sampling assumption samples randomly from observer-moments, rather than from sent-years, but assuming all humans have basically the same sentience, then sampling from sent-years should give basically the same result as sampling from observer-moments.\n\n\nHorn #3 of Bostrom's [simulation-argument](http://simulation-argument.com/simulation.html \"'Are You Living In a Computer Simulation?'\") trilemma can be seen by noting that as long as N/E is extremely large (reject horn #1) and fN / (fC \\* fE) is not correspondingly extremely tiny (reject horn #2), the ratio of simulated to biological humans will be very large:\n\n\n\n> (non-solipsish simulated human sent-years) / (biological human sent-years) \n> \n> = (C \\* N \\* fN) / (C \\* fC \\* E \\* fE) \n> \n> = (N/E) \\* fN / (fC \\* fE).\n> \n> \n\n\nIf you are sampled randomly from all (non-solipsish) simulated + biological human sent-years, the probability that you are a biological human, Pb, is\n\n\n\n> Pb = (biological human sent-years) / [(simulated human sent-years) + (biological human sent-years)]\n> = (C \\* fC \\* E \\* fE) / [(C \\* N \\* fN) + (C \\* fC \\* E \\* fE)]\n> = (fC \\* E \\* fE) / [(N \\* fN) + (fC \\* E \\* fE)].\n> \n> \n\n\nIf we are biological humans, then we're in a position to influence all of the N expected sent-years of computation that lie in our future, which will have, on average, higher intensity and 
controllability by a factor of T units of moral value per sent-year. On the other hand, it's much harder to reliably influence the far future, because there are so many unknowns and so many intervening steps in the causal chain between what we do now and what happens centuries or gigayears from now. Let D be a discount representing how much harder it is to actually end up helping a being in the far future than in the near term, due to both uncertainty and the muted effects of our actions now on what happens later on.\n\n\nIf we are biological humans, then targeting the far future can affect N expected sent-years with an intensity multiplier of T, but with discount D, for an expected impact proportional to N \* T \* D.[4](#link_ajs-fn-id_4-2869) On the other hand, if we target the short term, we can help the sentience currently on Earth, with an impact proportional to E.[5](#link_ajs-fn-id_5-2869)\n\n\nHowever, actions targeting the far future only matter if there is a far future. In most simulations, the future doesn't extend very far, because simulating a long post-human civilization would be extremely computationally expensive. For example, emulating a planet-sized computer in a simulation would probably require at least a planet-sized computer to run the simulation. As an approximation, let's suppose that actions targeting far-future impact only matter if we're biological humans on an actual Earth. Then the expected impact of far-future actions is proportional to Pb \* N \* T \* D. Let's call this quantity \"L\" for \"long-term impact\". In contrast, actions targeting the short term make a difference whether we're simulated or not, as long as the simulation runs for at least a few decades and includes most animals on Earth. So the expected impact of short-term-focused actions is just E. Let's call our expected impact for short-term actions S.\n\n\nThe ratio of these two quantities is L / S = Pb \* N \* T \* D / E.\n\n\n#### A simple example\n\n\nThe following picture shows a cartoon example of the framework I'm using here. I haven't yet defined all the variables that you see in the upper left corner, but they'll be explained soon. \n\n![](https://longtermrisk.org/files/sim_far_future_variables_v2.png \"I created this picture on 7 Jun. 2016. I release it into the public domain worldwide.\") \n\nNote that N = 6.5 \* E and fN = (3/26) \* fE. By inspecting the picture, we can see that Pb should be 1/4, since there's one real Earth and three simulated versions. As hoped, our formula for Pb verifies this:\n\n\n\n> Pb = (fC \* E \* fE) / [(N \* fN) + (fC \* E \* fE)]\n> = (1/4 \* E \* fE) / [(6.5 \* E \* 3/26 \* fE) + (1/4 \* E \* fE)]\n> = (1/4) / [(6.5 \* 3/26) + (1/4)]\n> = 1/4.\n> \n> \n\n\nAnd L / S = Pb \* N \* T \* D / E = (1/4) \* 6.5 \* T \* D ≈ 1.6 \* T \* D.\n\n\nNote that in the actual picture, Earth has 8 squares of far-future computation ahead of it, but N/E is only 6.5. That's because N/E is an average across civilizations, including some that go extinct before colonizing space. But an average like this seems appropriate for our situation, because we don't know *ex ante* whether humanity will go extinct or how big humanity's computing resources will be compared with those of other civilizations.
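To make the toy example easy to check, here's a minimal Python sketch (my own, not from the original essay) that plugs the picture's numbers into the formulas above; the variable names simply mirror the symbols in the text.

```python
# Check of the toy example: fC = 1/4, N = 6.5 * E, fN = (3/26) * fE.
# E and fE are arbitrary positive placeholders; they cancel out of Pb.

E, fE = 1.0, 1.0
fC = 1 / 4
N = 6.5 * E
fN = (3 / 26) * fE

Pb = (fC * E * fE) / ((N * fN) + (fC * E * fE))
print(Pb)  # 0.25, i.e., one real Earth out of four total Earths

# L/S = Pb * N * T * D / E, so the coefficient multiplying T * D is:
print(Pb * N / E)  # 1.625, rounded to ~1.6 in the text
```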
### Calculation based on all your copies\n\n\nNow I'll redo the calculation using a framework that doesn't rely on the self-sampling assumption. Rather, it takes inspiration from [anthropic decision theory](http://arxiv.org/abs/1110.6437 \"'Anthropic decision theory'\"). You [should think of yourself as](http://reducing-suffering.org/anthropics-without-reference-classes/#Update_Feb_2015_You_are_all_your_copies \"'Anthropics without Reference Classes': 'Update, Feb. 2015: You are all your copies'\") all your copies at once. Rather than thinking that you're a single one of your copies that might be biological or might be simulated, you should think of yourself as *both* biological *and* simulated, since your choices affect both biological and simulated copies of you. The interesting question is what the ratio is of simulated to biological copies of you.\n\n\nWhen there are more total copies of Earth (whether biological or simulated), there will be more copies of you. In particular, suppose that some constant fraction fy of all non-solipsish human sent-years (whether biological or simulated) are copies of you. This should generally be roughly the case, because a non-solipsish simulation of Earth-in-the-year-2016 should have ~7 billion humans in it, one of whom is you.\n\n\nThen the expected number of biological copies (actually, copy life-years) of you will be fy \* C \* fC \* E \* fE, and the expected number of simulated copy life-years will be fy \* C \* N \* fN.[6](#link_ajs-fn-id_6-2869)\n\n\nNow suppose you take an action to improve the far future. All of your copies, both simulated and biological, take this action, although it only ends up mattering for the biological copies, since only they have a very long-term future. For each biological copy, the expected value of the action is proportional to N \* T \* D, as discussed in the previous subsection. So the total value of having all your copies take the far-future-targeting action is proportional to\n\n\n\n> L = (number of biological copies of you) \* (expected value per copy) = (fy \* C \* fC \* E \* fE) \* (N \* T \* D).\n> \n> \n\n\nIn contrast, consider taking an action to help in the short run. This helps whether you're biological or non-solipsishly simulated. The expected value of the action for each copy is proportional to E, so the total value across all copies is proportional to\n\n\n\n> S = (number of biological + non-solipsish simulated copies of you) \* (expected value per copy) = (fy \* C \* fC \* E \* fE + fy \* C \* N \* fN) \* E.\n> \n> \n\n\nThen we have\n\n\n\n> L / S = [ (fy \* C \* fC \* E \* fE) \* (N \* T \* D) ] / [ (fy \* C \* fC \* E \* fE + fy \* C \* N \* fN) \* E ].\n> \n> \n\n\nInterestingly, this exactly equals Pb \* N \* T \* D / E, the same ratio of far-future vs. short-term expected values that we calculated using the self-sampling assumption.\n\n\n### Simplifying L/S\n\n\nSimplifying the L/S expression above:\n\n\n\n> L/S = [N \* T \* D / E] \* (fC \* E \* fE) / [(fC \* E \* fE) + (N \* fN)]\n> = T \* D \* fC / (fC \* E/N + fN/fE).\n> \n> \n\n\nNote that this ratio is strictly less than T \* D \* fC / (fN/fE), which is a quantity that doesn't depend on N. Hence, we can't make L/S arbitrarily big just by making N arbitrarily big.
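As a quick algebra check (mine, not part of the original derivation), a few lines of sympy confirm both that this all-copies expression for L/S equals Pb \* N \* T \* D / E and that it simplifies as shown above.

```python
import sympy as sp

# Symbols for the quantities defined in the text (all assumed positive).
fy, C, fC, E, fE, N, fN, T, D = sp.symbols("fy C fC E fE N fN T D", positive=True)

L = (fy * C * fC * E * fE) * (N * T * D)           # far-future-targeting value
S = (fy * C * fC * E * fE + fy * C * N * fN) * E   # short-term-targeting value
Pb = (fC * E * fE) / ((N * fN) + (fC * E * fE))    # probability of being biological

print(sp.cancel(L / S - Pb * N * T * D / E))                   # 0
print(sp.cancel(L / S - T * D * fC / (fC * E / N + fN / fE)))  # 0
```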
Let fX be the average fraction of superintelligent computations devoted to non-solipsishly simulating the development of any almost-space-colonizing civilization that actually exists in biological form, not just humans on Earth. fN is the fraction of computations devoted to simulating humans on Earth in particular. If we make the simplifying assumption that the fraction of simulations of humans on Earth run by the collection of all superintelligences will be proportional to the fraction of humans out of all civilizations in the universe, then fN = fX \* fC. This would be true if\n\n\n* all civilizations run simulations of all other civilizations in proportion to their numerosity,\n* only human descendants (not aliens) run simulations of only humans on Earth (not of aliens) and have a typical amount of computing power devoted to such simulations, or\n* some combination in between these extremes holds.\n\n\nMaking this assumption, we have\n\n\n\n> L/S = T \* D \* fC / (fC \* E/N + fX \* fC/fE) \n> \n> = T \* D / (E/N + fX/fE).\n> \n> \n\n\nNon-solipsish simulations of the dominant intelligences on almost-space-colonizing planets also include the (terrestrial or extraterrestrial) wild animals on the same planets. Assuming that the ratio of (dominant-intelligence biological sent-years)/(all biological sent-years) on the typical almost-space-colonizing planet is approximately fE, then fX / fE would approximately equal the fraction of all computational sent-years spent non-solipsishly simulating almost-space-colonizing ancestral planets (both the most intelligent and also less intelligent creatures on those planets). I'll call this fraction simply F. Then\n\n\n\n> L/S = T \* D / (E/N + F).\n> \n> \n\n\nVisualized using the picture from before, fN/fE is the fraction of squares with Earths in them, and F is the fraction of squares with any planet in them.\n\n\nEveryone agrees that E/N is very small, perhaps less than 10^-30 or something, because the far future could contain [astronomical amounts](http://www.nickbostrom.com/astronomical/waste.html \"'Astronomical Waste: The Opportunity Cost of Delayed Technological Development'\") of sentience. If F is not nearly as small (and I would guess that it's not), then we can approximate L/S as T \* D / F.\n\n\n### Plugging in parameter values\n\n\nNow that we have an expression for L/S, we'd like to know whether it's vastly greater than 1 (in which case the far-future fanatics are right), vastly less than 1 (in which case we should plausibly help beings in the short run), or somewhere in the ballpark of 1 (in which case the issue isn't clear and needs more investigation). To do this, we need to plug in some parameters.\n\n\nHere, I'll plug in point estimates of T, D, and F, but doing this doesn't account for uncertainty in their values. Formally, we should take the full expected value of L with respect to the probability distributions of T and D, and divide it by the full expected value of S with respect to the probability distribution for F. I'm avoiding that because it's complicated to make up complete probability distributions for these variables, but I'm trying to set my point estimates closer to the variables' expected values than to their median values. Our median estimates of T, D, and F are probably fairly different from the expected values, since extreme values may dominate the expected-value calculations. For this reason, I've generally set the parameter point estimates higher than I actually think is reasonable as a median estimate. 
And of course, your own estimates may be pretty different.\n\n\n**D = 10^-3**\n\n\nThis is because (a) it's harder to know if a given action now will actually have a good impact in the long term than it is to know that a given action will have a good impact in the short term and (b) while a single altruist in the developed world can exert more than a ~1/(7 billion) influence on all the sentience on Earth right now (such as by changing the amount of wilderness that exists), a single person may exert less than that amount of influence on the sentience of the far future, because there will be generations after us who may have different values and may override our decisions.\n\n\nIn particular, for point (a), I'm assuming a ~0.1 probability discount, because, for example, while it's not implausible to be 75% confident that a certain action will reduce short-run wild-animal populations (with a 25% chance of increasing them, giving a probability discount of 75% - 25% = 50%), on many far-future questions, my confidence of making a positive rather than negative impact is more like 53% (for a probability discount of 53% - 47% = 6%, which is about 10 times smaller than 50%).\n\n\nFor point (b), I'm using a ~0.01 probability discount because there may be generations ahead of us before the emergence of artificial general intelligence (AGI), and even once AGI arrives, it's not clear that the values of previous humans will translate into the values of the AGI, nor that the AGI will accomplish goal preservation without further mutation of those values. [Maybe](https://www.facebook.com/EssaysOnReducingSuffering/posts/1462206040472155?comment_id=1462249263801166&reply_comment_id=1463594393666653&comment_tracking=%7B%22tn%22%3A%22R%22%7D \"From a comment by Magnus Vinding on this piece: 'how could we know whether we have built something that will 'preserve its goals' for billions of years, or even until next week, especially given the complexity of such a system?'\") goal preservation is very difficult to implement or [is strategically disfavored](https://casparoesterheld.wordpress.com/2016/07/04/self-improvement-races/comment-page-1/#comment-15 \"'Self-improvement races', post by Caspar Oesterheld, comment by Brian Tomasik\") by a self-improvement race against aliens, so that the changes to the values and trajectory of AGI we work toward now will be overridden thousands or millions of years later. (Non-negative utilitarians who consider preventing human extinction to be important may not discount as much here because preventing extinction doesn't have the same risk of goal/institutional/societal drift as trying to change the future's values or general trajectory does.)\n\n\n**T = 10^4**\n\n\nSome simulations run by superintelligences will probably have extremely intense emotions, but many (especially those run for scientific accuracy) will not. Even if only an expected 0.01% of the far future's sent-years consist of simulations that are 10^8 times as intense per sent-year as average experiences on Earth, we would still have T ≈ 10^4.\n\n\n**F = 10^-6**\n\n\nIt's very unclear how many simulations of almost-space-colonizing planets superintelligences would run. The fraction of all computing resources spent on this might be close to 100% or might be below 10^-15. It's hard to predict resource allocation by advanced civilizations. But I set this parameter based on assuming that ~10^-4 of sent-years will go toward ancestor simulations *of some sort* (this is probably too high, but it's biased upward in expectation, since, e.g., maybe there's a 0.05% chance that post-humans devote 20% of sent-years to ancestor simulations), and only 1% of those simulations will be of the almost-space-colonizing period (since there might also be many simulations of the origin of life, prehistory, and the early years after a planet's \"singularity\"). If we think that simulations contain more sentience per petaflop of computation than do other number-crunching calculations, then 10^-4 of sent-years devoted to ancestor simulations of some kind may mean less than 10^-4 of all raw petaflops devoted to such simulations.\n\n\n**Calculation using point estimates**\n\n\nUsing these inputs, we have\n\n\n\n> L/S ≈ T \* D / F = 10^4 \* 10^-3 / 10^-6 = 10^7.\n> \n> \n\n\nThis happens to be bigger than 1, which suggests that targeting the far future is still ~10 million times better than targeting the short term. But this calculation could have come out as less than 1 using other possible inputs. Combined with general model uncertainty, it seems premature to conclude that far-future-focused actions dominate short-term helping. It's likely that the far future will still dominate after more thorough analysis, but by much less than a naive future fanatic would have thought.
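Here's a minimal sketch of this point-estimate arithmetic in Python. The first call uses the values above; the alternative inputs in the second call are my own illustrative choices (not estimates from this essay) showing how the ratio could fall below 1.

```python
# Point-estimate calculation of L/S = T * D / (E/N + F), as derived above.

def long_vs_short_ratio(T, D, F, E_over_N=1e-30):
    """Ratio of expected far-future impact (L) to expected short-term impact (S)."""
    return T * D / (E_over_N + F)

# Point estimates used in the text: T = 1e4, D = 1e-3, F = 1e-6.
print(f"{long_vs_short_ratio(T=1e4, D=1e-3, F=1e-6):.1e}")  # ~1.0e+07

# Illustrative alternative inputs (mine, not from the text) under which the
# ratio drops below 1: a weaker T, a harsher D, and a much larger F.
print(f"{long_vs_short_ratio(T=1e2, D=1e-4, F=1e-1):.1e}")  # ~1.0e-01
```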
Objections\n----------\n\n\n### Doesn't this assume that the simulation hypothesis is 99.999999% likely to be true?\n\n\nNo. My argument works as long as one maintains at least a modest probability (say, at least 1% or 0.01%) that the simulation hypothesis is correct.\n\n\nIf one entirely rejects the possibility of simulations of almost-space-colonizing civilizations, then F = 0. In that case, L/S = T \* D / (E/N + F) = T \* D \* N / E, which would be astronomically large because N/E is astronomically large. So if we were certain that F = 0 (or even that F was merely on the order of E/N in size), then we would return to future fanaticism. But we're not certain of this, and our impact doesn't become irrelevant if F > 0. Indeed, the more simulations of us there are, the more impact we have by short-term-targeting actions!\n\n\nLet's call a situation where F is on the order of E/N in size or smaller the \"tiny\_F\" possibility, and the situation where F is much bigger than E/N the \"moderate\_F\" possibility. The expected value of S, E[S], [is](https://en.wikipedia.org/wiki/Law_of_total_expectation \"'Law of total expectation'\")\n\n\n\n> E[S | tiny\_F] \* P(tiny\_F) + E[S | moderate\_F] \* P(moderate\_F)\n> \n> \n\n\nand similarly for E[L]. While it's true that E[S | tiny\_F] is quite small, because in that case we don't have many copies in simulations, E[S | moderate\_F] is bigger. Indeed,\n\n\n\n> E[L] / E[S] = E[L] / { E[S | tiny\_F] \* P(tiny\_F) + E[S | moderate\_F] \* P(moderate\_F) } \n> \n> ≤ E[L] / { E[S | moderate\_F] \* P(moderate\_F) } \n> \n> ≈ E[L | moderate\_F] / { E[S | moderate\_F] \* P(moderate\_F) },\n> \n> \n\n\nwhere the last line assumes that L isn't drastically affected by the value of F. This last expression is very roughly like (L/S) / P(moderate\_F), where L/S is computed by plugging in some moderate value of F like I did with my sample numbers above. 
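To see roughly how this decomposition plays out numerically, here's a small Python sketch; all of the specific probabilities and the tiny-F value below are made-up illustrations, not numbers from the text.

```python
# Illustration of the law-of-total-expectation point above.
# The probabilities and the tiny-F value are made up for illustration.

T, D, E_over_N = 1e4, 1e-3, 1e-30

def L_over_S(F):
    return T * D / (E_over_N + F)

F_moderate, F_tiny = 1e-6, 1e-30   # "moderate_F" vs. "tiny_F" scenarios
p_moderate = 0.1                   # assumed probability of the moderate_F scenario
p_tiny = 1 - p_moderate

# Normalize so L is roughly the same in both scenarios (as the text assumes);
# then S in each scenario is L divided by the conditional L/S ratio.
L_value = 1.0
expected_S = (p_tiny * L_value / L_over_S(F_tiny)
              + p_moderate * L_value / L_over_S(F_moderate))

print(f"{L_value / expected_S:.1e}")  # ~1e8, i.e., (L/S at moderate F) / P(moderate_F)
```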
So unless you think P(moderate\\_F) is extremely small, the overall E[L]/E[S] ratio won't change dramatically upon considering the possibility of no simulations.\n\n\nI've heard the following defense made of future fanaticism against simulations:\n\n\n1. Due to model uncertainty, the probability that I'm not in a simulation is non-vanishing.\n2. Therefore, the probability that I can have astronomical impact by far-future efforts is non-vanishing.\n3. But I can't have astronomical impact by short-term efforts.\n4. So the far future dominates in expectation.\n\n\nThis reply might work if you only consider yourself to be a single one of your copies. But if you correctly realize that your cognitive algorithms determine the choices of all of your copies jointly, then it's no longer true that short-term-focused efforts don't have astronomical impacts, because there are, in expectation, astronomical numbers of simulated copies of you in which your good deeds are replicated.\n\n\n### What if almost all civilizations go extinct before space colonization?\n\n\nThis objection suggests that horn #1 of Bostrom's trilemma may be true. If almost all technological civilizations fail to colonize space -- whether because they destroy themselves or because space colonization proves infeasible for some reason -- this would indeed dramatically reduce the number of advanced computations that get run, i.e., N would be quite small.\n\n\nI find this possibility unlikely, since it seems hard to imagine why basically all civilizations would destroy themselves, given that humanity appears like it has a decent shot at colonizing space. Maybe it's more likely that there are physical/technological limitations on massive space colonization.\n\n\nBut if so, then the far future probably matters a lot less than it seems, either because humanity will go extinct before long or because, even if humans do survive, they won't create astronomical numbers of digital minds. Both of these possibilities downplay future fanaticism. Maybe the far future could matter quite a bit more than the present if humanity survives another ~100 million years on Earth, but without artificial general intelligence and robust goal preservation, it seems much harder to ensure that what we do now will have a reliable impact for millions of years to come (except in a few domains, [like maybe](http://reducing-suffering.org/scenarios-for-very-long-term-impacts-of-climate-change-on-wild-animal-suffering/ \"'Scenarios for Very Long-Term Impacts of Climate Change on Wild-Animal Suffering'\") affecting CO2 emissions).\n\n\n### What if most of the simulations are long-lived?\n\n\nIn the previous argument, I assumed that copies of us that live in simulations don't have far futures ahead of them because their simulations are likely to end within decades, centuries, or millennia. But what if the simulations are very long-lived?\n\n\nIt seems unlikely a simulation could be as long-lived as the basement-level civilization, since it's plausible that simulating X amount of computations in the simulation requires more than X basement computations. But we could still imagine, for example, 2 simulations that are each 1/5 as big as the basement reality. Then aiming for far-future impact in those simulations would still be pretty important, since our copies in the simulations would affect 2 far futures each 1/5 as long as the basement's far future.\n\n\nNote that my argument's formalism already accounts for this possibility. 
F is the fraction of far-future computations that simulate almost-space-colonizing planets. Most of the far future is not at the almost-space-colonizing stage but at the space-colonizing stage, so most computations simulating far-future outcomes don't count as part of F. For example, suppose that there's a basement reality that simulates 2 far-future simulations that each run 1/5 as long as the basement universe runs. Suppose that pre-space-colonizing planets occupy only 10^-20 of all sentience in each of those simulations. Ignoring the non-simulation computations also being run, that means F = 10^-20, which is very close to 0. So the objection that the simulations that are run might be very long can be reduced to the objection that F might be extremely close to zero, which I discussed previously. The generic reply is that it seems unreasonable to be confident that F is so close to zero, and it's quite plausible that F is much bigger (e.g., 10^-10, 10^-5, or something like that). If F is bigger, short-term impact is replicated more often and so matters relatively more.\n\n\nI would expect some distribution of lengths of simulations, perhaps following a power law. If we look at the distribution of lengths of threads/processes that run on present-day computers, or how long companies survive, or almost anything similar, we tend to find a lot of short-lived things and a few long-lived things. I would expect simulations to be similar. It seems unreasonable to think that across all superintelligences in the multiverse, few short-lived simulations are run and the majority of simulations are long.\n\n\nAnother consideration is that if the simulators know the initial conditions they want to test with the simulation, then allowing the simulation to run longer might mean that it increasingly diverges from reality as time goes on and errors accumulate.\n\n\nAlso, if there are long-lived simulations, they might themselves run simulations, and then we might have short-lived copies within those nested simulations. As the number of levels of simulation nesting goes up, the length (and/or [computational complexity](http://smbc-comics.com/index.php?db=comics&id=2055 \"Brian says: This comic talks about algorithmic complexity of simulations, rather than computational complexity. But it's plausible that these two are correlated in general, since a simulation with more elaborate rules probably requires more resources to compute. The text of the comic, taken from http://www.ohnorobot.com/index.php?comic=137;s=physics, reads: 'Alien Blob: 'Ugh, the universe's physics is too complicated. I'm gonna simplify some rules for our simulation.' More Humanoid Alien: 'Ugh, the universe's physics is too complicated. I'm gonna simplify some rules for our simulation.' Humanoid Alien: 'Ugh, the universe's physics is too complicated. I'm gonna simplify some rules for our simulation.' Man #1: 'So, the universe is made entirely of tiny wobbly strings?' Man #2: 'Weird, right?''\")) of the nested simulations must go down, because less and less computing power is available (just like less and less space is available for the innermost matryoshka dolls).\n\n\nIf the far future was simulated and the number and/or complexity of nested simulations wasn't progressively reduced as the level of nesting increased, then running simulations beyond the point when simulations became feasible [would require](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=929327 \"'Historical Simulations - Motivational, Ethical and Legal Issues' by Peter Jenkins, pp. 36-37\") an explosion of computing power:\n\n\n\n> The creators of the simulation would likely not continue it past the point in history when the technology to create and run these simulations on a widespread basis was first developed. [...] Another reason is to avoid stacking of simulations, i.e. simulations within simulations, which would inevitably at some point overload the base machine on which all of the simulations are running, thereby causing all of the worlds to disappear. This is illustrated by the fact that, as Seth Lloyd of MIT has noted in his recent book, *Programming the Universe*, if every single elementary particle in the real universe were devoted to quantum computation, it would be able to perform 10^122 operations per second on 10^92 bits of information. In a stacked simulation scenario, where 10^6 simulations are progressively stacked, after only 16 generations, the number of simulations would exceed by a factor of 10^4 the total number of bits of information available for computation in the real universe.\n> \n> \n
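The arithmetic in that quote is easy to verify; here's a quick sketch.

```python
# Check of the stacked-simulation arithmetic in the quote above.
bits_available = 10**92        # Lloyd's estimate of bits available for computation
sims_per_level = 10**6         # simulations spawned at each level of nesting
generations = 16

total_sims = sims_per_level ** generations   # (10^6)^16 = 10^96 stacked simulations
print(total_sims // bits_available)          # 10000, i.e., the factor of 10^4 in the quote
```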
The period when a civilization is almost ready to colonize space seems particularly interesting for simulators to explore, since it crucially affects how the far future unfolds. So it would make sense that there would be more simulations of the period around now than there would be of the future 1 million years from now, and many of the simulations of the 21st century would be relatively short.\n\n\nBeyond these qualitative arguments, we can make a quantitative argument as to why the far future within simulations shouldn't dominate: A civilization with N sent-years of computing power in its far future can't produce more than N sent-years of simulated far-future sentience, even if it only ran simulations and had no simulation overhead (i.e., a single planet-sized simulated computer could be simulated with only a single planet-sized real computer). More likely, a civilization with N sent-years of computing power would only run like N/100 sent-years of simulated far-future sentience, or something like that, since probably it would also want to compute things besides simulations. So what's at stake with influencing the \"real\" far future is probably much bigger than what's at stake with influencing the simulated far future. (Of course, simulated far futures could be bigger if we exist in the simulations of aliens, not just our own civilization. But unless we in particular are extremely popular simulation targets, which seems unlikely *a priori*, then in general, across the multiverse, the total simulated far futures that we control should be less than the total real far futures that we control.) Of course, a similar point applies to simulations of short-term futures: The total sent-years in all short-term futures that we control is very likely less than the total sent-years in the far futures we control (assuming we have copies both in simulations and in basement realities). 
The argument as to why short-term helping might potentially beat long-term helping comes from our greater ability to affect the short term and know that we're making a positive rather than negative short-term impact. Without the D probability penalty for far-future actions, it would be clear that L > S within my framework.\n\n\n### What if the basement universe has unlimited computing power?\n\n\nWhat if the basement universe has unbounded computing power and thus has no limitations on how long simulations can be? And what if simulations run extremely quickly, so there's no reason not to run a whole simulated universe from the big bang until the stars die out? Even then, it's not clear to me that we wouldn't get mostly short-lived simulations, especially if they're being run for reasons of intrinsic value. For every one long-lived simulation, there might be millions or quadrillions of short-lived ones.\n\n\nHowever, one could make the argument that if the basement-level simulators are only interested in science, then rather than running short simulations (except when testing their simulation software), they might just run a bunch of long simulations and then look at whatever part of a long simulation is of interest at any given time. Indeed, they might run all possible histories of universes with our laws of physics, and once that complete collection was available to them, they wouldn't need to run any more simulations of universes with our physical laws. Needless to say, this possibility is extremely speculative. Maybe one could argue that it's also extremely important because if this scenario is true, then there are astronomical numbers of copies of us. But there are all kinds of random scenarios in which one can raise the stakes in order to try to make some obscure possibility dominate. That is, after all, the point of the original Pascal's-mugging thought experiment. In contrast, I don't consider the simulation-based argument I'm making in this piece to be a strong instance of Pascal's mugging, because it actually seems reasonably likely that advanced civilizations will run lots of simulations of people on Earth.\n\n\nIn any case, even if it's true that the basement universe has unbounded computing resources and has run simulations of all possible histories of our universe, this doesn't escape my argument. The simulations run by the basement would be long-lived, yes. But those simulations would plausibly contain nested simulations, since the advanced civilizations within those simulations would plausibly want to run their own simulations. Hence, most of our copies would live in the nested simulations (i.e., simulations within simulations), and the argument in this piece would go through like before. The basement simulators would be merely like [deist](https://en.wikipedia.org/wiki/Deism \"'Deism'\") gods who set our universe in motion and then let it run on its own indefinitely.\n\n\n### Our simulated copies can still impact the far future by helping our simulators\n\n\nEven if a copy of you lives in a short-lived simulation, it might have a causal impact well beyond the simulation. Many simulations may be run for reasons of scientific discovery, and by learning things in our world, [we might](https://web.archive.org/web/20160627084201/http://felicifia.org:80/viewtopic.php?t=899 \"'The simulation argument and human extinction': 'Suppose the simulators have developed WBE, but not any singleton-type superintelligence. 
They could then run lots of ancestor simulations at great speed to find out what civilizations that manage to avoid existential disasters have in common, in order to implement similar strategies themselves. In that case, researching existential risk could also affect the bottom-level universe.'\") inform our simulators of those things, thereby having a massive impact.\n\n\nI find this a weak argument for several reasons.\n\n\n1. If the simulators wanted to learn things about the universe in general, it would probably be more successful for them to use artificial general intelligences to do so rather than creating fake worlds filled with primates, only a fraction of whom do scientific research.\n2. If we can help our simulators just by showing them how civilizations develop, that's fine, but then it's not clear that we should take any particular actions one way or another based on this possibility.\n3. If we are only one out of tons of simulations, the impact of our particular information for the simulators is small. (Compare to the value of a single survey response out of a 5000-person survey.)\n4. It's not clear if we want to help our simulators, since they might have values antithetical to our own.\n\n\n### What if simulations aren't conscious?\n\n\nI'm quite confident that I would care about simulated humans. If you don't think you would, then you're also less likely to care about the far future in general, since in many far-future scenarios, especially those that contain the most sentient beings, most intelligence is digital (or, at least, non-biological; it could be analog-computed).\n\n\nIf you think it's a factual rather than a moral question whether simulations are conscious, then you should maintain some not-too-small probability that simulations are conscious and downweight the impact your copies would have in simulations accordingly. As long as your probability of simulations being conscious is not tiny, this shouldn't change the analysis too much.\n\n\nIf you have moral uncertainty about whether simulations matter, the [two-envelopes problem](http://reducing-suffering.org/two-envelopes-problem-for-brain-size-and-moral-uncertainty/ \"'Two-Envelopes Problem for Brain Size and Moral Uncertainty'\") comes to haunt you. But it's plausible that the faction of your moral parliament that cares about simulations should get some influence over how you choose to act.\n\n\n### The simulation argument is weird\n\n\nIn [a post](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/ \"'Bayesian Adjustment Does Not Defeat Existential Risk Charity'\") defending the huge importance of the far future, steven0461 anticipates the argument discussed in this piece:\n\n\n\n> the idea that we’re living in an [ancestor simulation](http://simulation-argument.com/ \"'Are You Living In a Computer Simulation?'\"). This would imply astronomical waste was illusory: after all, if a substantial fraction of astronomical resources were dedicated toward such simulations, each of them would be able to determine only a small part of what happened to the resources. This would limit returns. It would be interesting to see more analysis of optimal philanthropy given that we’re in a simulation, but it doesn’t seem as if one would want to predicate one’s case on that hypothesis.\n> \n> \n\n\nBut I think we should include simulation considerations as a strong component of the overall analysis. 
Sure, they're weird, but so is the idea that we can somewhat reliably influence the Virgo-Supercluster-sized computations of a posthuman superintelligence, which is the framework that the more persuasive forms of future fanaticism rely on.\n\n\n### Simulated people matter less due to a bigger Kolmogorov penalty\n\n\nThis objection is abstruse but has been mentioned to me once. Some have proposed weighing the moral value of an agent in proportion to the [Kolmogorov complexity of locating](http://reducing-suffering.org/anthropics-without-reference-classes/#Kolmogorov-complexity_anthropics \"'Anthropics without Reference Classes': 'Kolmogorov-complexity anthropics'\") that agent within the multiverse. For example, it's plausibly easier to locate a biological human on Earth than it is to locate any particular copy of that human in a massive array of post-human simulations. The biological human might be specified as \"the 10,481,284,089th human born[7](#link_ajs-fn-id_7-2869) since the year that humans call AD 0, on the planet that started post-human civilization\", while the simulated version of that human might be \"on planet #5,381,320,108, in compartment #82,201, in simulation #861, the 10,481,284,089th human born since the year that the simulated humans call AD 0\". (These are just handwavy illustrations of the point. The actual descriptions would need vastly greater precision. And it's not completely obvious that some of the ideas I wrote with text could be specified compactly.) The shortest program that could locate the simulated person is, presumably, longer than the shortest program that could locate the biological person, so the simulated person (and, probably, the other beings in his simulated world) get less moral weight. Hence, the astronomical value of short-term helping due to the correlated behavior of all of that person's copies is lower than it seems.\n\n\nHowever, a view that gives generally lower moral weight to future beings in this way should also give lower moral weight to the other kinds of sentient creatures that may inhabit the far future, especially those that are not distinctive enough to be located easily. So the importance of influencing *the far future* is also dampened by this moral perspective. It's not obvious and would require some detailed calculation to assess how this location-penalty approach affects the relative importance of short-term vs. far-future helping.\n\n\n### Many copies of a brain don't matter much more than one copy\n\n\n[earthwormchuck163](http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/8x3d \"'earthwormchuck163 comments on Pascal's Muggle: Infinitesimal Priors and Strong Evidence - Less Wrong'\"): \"I'm not really sure that I care about duplicates that much.\"[8](#link_ajs-fn-id_8-2869) Applied to the simulation hypothesis, this suggests that if there are many copies of you helping other Earthlings across many simulations, since you and the helped Earthlings have the same brain states in the different simulations, those duplicated brain states might not matter more than a single such brain state. In that case, your ability to help tons of copies in simulations via short-term-focused actions would be less important. For concreteness, imagine that there are 1000 copies of you and the people you're helping across 1000 simulations. 
If you don't think several copies matter morally more than one copy, then the amount of good your short-term helping does will be divided by 1000 relative to a view that cares about each of the 1000 copies.\n\n\nHow about aiming to influence the far future? If all the morally relevant computations in the far future are duplicated about 1000 times, then the value of aiming to influence the far future is also about 1000 times less than what it would be if you cared about each copy individually. However, it's possible that the far future will contain more mind diversity. For example, maybe some civilizations would explicitly aim to make each posthuman mind somewhat unique in order to avoid repetitiveness. In this case, perhaps altruism targeting the far future would appear somewhat more promising than short-term helping if one holds the view that many mind copies only matter as much as one mind.\n\n\nMy main response is that I find it wrong to consider many copies of a brain not much more important than a single brain. This just seems intuitive to me, but it's reinforced by [Bostrom's reductio](http://www.nickbostrom.com/papers/experience.pdf \"'Quantity of experience: brain-duplication and degrees of consciousness', p. 187\"):\n\n\n\n> if the universe is indeed infinite then on our current best physical theories all possible human brain-states would, with probability one, be instantiated somewhere, independently of what we do. But we should surely reject the view that it follows from this that all ethics that is concerned with the experiential consequences of our actions is void because we cannot cause pain, pleasure, or indeed any experiences at all.\n> \n> \n\n\nAnother reply is to observe that whether a brain counts as a duplicate is a matter of opinion. If I run a given piece of code on my laptop here, and you run it on your laptop on the other side of the world, are the two instances of the software duplicates? Yes in the sense that the high-level logical behavior is the same. No in the sense that they're running on different chunks of physics, at different spatiotemporal locations, in the proximity of different physical objects, etc. Minds have no non-arbitrary boundaries, and the \"extended mind\" of the software program, including the laptop on which it's running and the user running it, is not identical in the two cases.\n\n\nFinally, it's plausible that most simulations would have low-level differences between them. It's unlikely that simulations run by two different superintelligent civilizations will be exactly the same down to the level of every simulated neuron or physical object. Rather, I conjecture that there would be lots of random variation in the exact details of the simulation, but assuming your brain is somewhat robust to variations in whether one random neuron fires or not at various times, then several slightly different variations of a simulation can have the same high-level input-output behavior and thus can all be copies of \"you\" for decision-theoretic purposes. 
There would presumably also be variations in the simulations run within a single superintelligent civilization, since there's no scientific need to re-run duplicative simulations of the exact same historical trajectory down to the level of every neuron in every person being identical, except maybe for purposes of debugging the simulation or replicating/verifying past scientific findings.\n\n\nOf course, perhaps the view that \"many copies don't count much more than one copy\" would say that *near* copies also don't count much more than one copy. This view is vulnerable to potential reductios, such as the idea that if two identical twins who have had very similar life experiences suffer the same horrible death, it's less bad than if two very different people suffer different but similarly horrible deaths. (Of course, perhaps some philosophers would bite this bullet.)\n\n\n### If we're simulated, then reducing suffering by preventing existence frees up more computing resources\n\n\nThis is an important and worrying consideration. For example, suppose you aim to prevent wild-animal suffering by reducing habitat and thereby decreasing wildlife populations. If the simulation includes models of the neurons of all animals but doesn't simulate inanimate matter in much detail, then by reducing wildlife numbers, we would save computing resources, which the simulators could use for other things. Worryingly, this might allow simulators to run more total simulations of Earth-like planets, [most of the neurons](http://reducing-suffering.org/how-many-wild-animals-are-there/#Biomass_Estimates \"'How Many Wild Animals Are There?': 'Biomass Estimates'\") on which are found in invertebrates who have short lives and potentially painful deaths.\n\n\nIf reducing wildlife by 10% allowed simulators to run 10% more total Earth simulations, then habitat reduction would sadly not reduce much suffering.[9](#link_ajs-fn-id_9-2869) But if a nontrivial portion of the computing power of Earth simulations is devoted to not-very-sentient processes like weather, an X% reduction in wild-animal populations reduces the computational cost of the whole simulation by less than X%. Also, especially if the simulations are being run for reasons of science rather than intrinsic value, the simulators may only need to run so many simulations for their purposes, and our making the simulations cheaper wouldn't necessarily cause the simulators to run more.[10](#link_ajs-fn-id_10-2869) The simulators might use those computing resources for other purposes. Assuming those other purposes would, on average, contain less suffering than exists in wilderness simulations, then reducing habitat could still be pretty valuable.\n\n\nOne might ask: If T > 1, then won't the non-Earth-simulation computations that can be run in greater numbers due to saving on habitat computations have a *greater* density of suffering, not less, than the habitat computations had? Not necessarily, because T gives the intensity of emotions per sent-year. But many of the computations that an advanced civilization would run might not contain much sentience.[11](#link_ajs-fn-id_11-2869) So the intensity of emotions per petaflop-year of non-Earth-simulation computation, rather than per sent-year, might be lower than T. 
Nonetheless, we should worry that this might not be true, in which case reducing habitat and thereby freeing up computing resources for our simulators would be net bad (at least for negative utilitarians; for classical utilitarians, replacing ecosystems that contain net suffering [with other computations that may contain net happiness](http://smbc-comics.com/index.php?db=comics&id=2073 \"The text of the comic, taken from http://www.ohnorobot.com/index.php?comic=137;s=porn, reads: 'Woman: 'I'm scared. What if reality is just a big simulation?' Man: 'Why would that be scary?' [ The woman looks worried ] Woman: 'What if we really ARE made in God's image?' [ God's computer screen reads, 'C://cosmos and porn. Free space: 2%' ] God (thinking): 'Crap. Gotta clear up some space. What to do...''\") may be win-win).\n\n\nIt's also worth asking whether reducing [net primary productivity](http://reducing-suffering.org/net-primary-productivity-land-type/#Why_NPP \"'Net Primary Productivity by Land Type': 'Why NPP?'\") on Earth would in fact save simulators' computing power. If the simulation is run in enough detail that invertebrate neurons are approximated, then the simulation may also be run in enough detail that, e.g., soil chemistry, ocean currents, and maybe even photons are also approximated. Even if the soil contains fewer earthworms and bacteria, it may contain just as many clay particles, water pockets, and other phenomena that still need to be modeled for the simulation to be realistic. Groundwater, for example, is a variable that humans [monitor extensively](https://en.wikipedia.org/wiki/Hydrogeology \"'Hydrogeology'\"), and its dynamics would need to be modeled accurately even if the ground contained no life. Still, much of the dry mass that composes organism bodies comes from the atmosphere (in the form of carbon dioxide), and it's not obvious to me whether an accurate Earth simulation would still need to model individual carbon-based molecules if they weren't captured by biological organisms. Nonetheless, these considerations about abiotic environmental factors suggest that in accurate simulations, possibly almost all computation is devoted to non-living physical processes. So, for example, maybe 99% of the computing resources in an Earth simulation model abiotic phenomena, in which case reducing plant productivity by 50% would only reduce the simulation's computational cost by 1% \\* 50% = 0.5%. This reduction in biological productivity would selectively reduce the most suffering-dense parts of the simulation, and unless the computations run using those computational savings would contain at least some extremely intense suffering, the reduction in biotic productivity would probably still be net good in terms of reducing suffering.\n\n\nIt's also possible there are strategies to increase the computing cost of our simulation in ways that, unlike wildlife, don't contain lots of sentience. 
For example, monitoring deep-underground physical dynamics in more detail might force our simulators to compute those dynamics more carefully, which would waste computing cycles on not-very-sentient processes and reduce the amount of other, possibly suffering-dense computations our simulators could run.\n\n\nFinally, keep in mind that some ways of reducing suffering, such as [more humane slaughter](http://reducing-suffering.org/why-i-support-the-humane-slaughter-association/ \"'Why I Support the Humane Slaughter Association'\") of farm animals, can prevent lots of simulated copies of horrific experiences without appreciably changing how expensive our world is for our simulators to compute.\n\n\nCopies that aren't both biological and simulated simultaneously\n---------------------------------------------------------------\n\n\nSo far I've been assuming that if there are many copies of us in simulations, there are also a few copies of us in basement reality as well at various points in the multiverse. However, it's also possible that we're in a simulation that doesn't have a mirror image in basement-level reality. For instance, maybe the laws of physics in our simulated world are different from the basement's laws of physics, and there's no other non-simulated universe in the multiverse that shares our simulated laws of physics. Maybe our world contains miracles that the simulators have introduced. And so on. Insofar as there are scenarios in which we have copies in simulations but *not* in the basement (except for extremely rare Boltzmann-brain-type copies that may exist in some basement worlds, or extremely low-measure universes in the multiverse where specific miracles are hard-coded into the basement-level laws of physics), this amplifies the value of short-term actions, since we would be able to influence our many simulated copies but wouldn't have much of any basement copies who could affect the far future.\n\n\nOn the flip side, it's possible that basically all our copies are in basement-level reality and don't have exact simulated counterparts. One example of why this might be would be if it's just too hard to simulate a full person and her entire world in enough detail for the person's choices in the simulation to mirror those of the biological version. For example, maybe computationally intractable quantum effects prove crucial to the high-level dynamics of a human brain, and these are too expensive to mirror in silico.[12](#link_ajs-fn-id_12-2869) The more plausible we find this scenario, the less important short-term actions look. But as we've seen, unless this scenario has probability very close to 1, the ambiguity between whether it's better to focus on the short term or long term remains unresolved.\n\n\nEven if all simulations were dramatically different from all basement civilizations, as long as some of the simulated creatures thought they were in the basement, the simulation argument would still take effect. If most almost-space-colonizing organisms that exist are in simulations, then it's most likely that whatever algorithm your brain is running is one of those simulations rather than in a basement universe.\n\n\nI'm still a bit confused about how to do anthropic reasoning when, due to limited introspection and bounded rationality, you're not sure which algorithm you are among several possible algorithms that exist in different places. 
But a naive approach would seem to be to apportion even odds among all algorithms that you might be that you can't distinguish among.\n\n\nFor example, suppose there are only two types of algorithms that you might be: (1) biological humans on Earth and (2) simulated humans who think they're on Earth who are all the same as each other but who are different than biological humans. This is illustrated in the following figure, where the B's represent biological humans and the S's represent the simulated humans who all share the same cognitive algorithm as each other. \n\n![](https://longtermrisk.org/files/biological_vs_simulated_same_S.png \"I created this picture on 9 Jun. 2016. I release it into the public domain worldwide.\") \n\nGiven uncertainty between whether you're a B or an S, you apportion 1/2 odds to being either algorithm. If you're a B, you can influence all N expected sent-years of computation in your future, while if you're an S, you can only influence E sent-years, but there are many copies of you. The calculation ends up being the same as in the \"Calculation based on all your copies\" section above, since\n\n\n\n> L = (probability you're a B) \\* (number of biological copies of you) \\* (expected value per copy) + (probability you're an S) \\* (no impact for future-focused work because there is no far future in a simulation) = (1/2) \\* (fy \\* C \\* fC \\* E \\* fE) \\* (N \\* T \\* D) + (1/2) \\* 0,\n> \n> \n\n\nand\n\n\n\n> S = (probability you're a B) \\* (number of biological copies of you) \\* (expected value per copy) + (probability you're an S) \\* (number of non-solipsish simulated copies of you) \\* (expected value per copy) = (1/2) \\* (fy \\* C \\* fC \\* E \\* fE) \\* E + (1/2) \\* (fy \\* C \\* N \\* fN) \\* E.\n> \n> \n\n\nL/S turns out to be exactly the same as before, after we cancel the factors of 1/2 in the numerator and denominator.[13](#link_ajs-fn-id_13-2869)\n\n\nNext, suppose that all the simulated copies are different from one another, so that it's no longer the case that what one copy does, the rest do. In this case, there are lots of algorithms that you might be (labelled S\\_1, S\\_2, ... in the below figure), and most of them are simulated. \n\n![](https://longtermrisk.org/files/biological_vs_simulated_different_Ss.png \"I created this picture on 9 Jun. 2016. I release it into the public domain worldwide.\") \n\nNow the probability that you're biological is just Pb, and the L/S calculation proceeds identically to what was done in the \"Calculation using Bostrom-style anthropics and causal decision theory\" section above.\n\n\nSo no matter how we slice things, we seem to get the exact same expression for L/S. I haven't checked that this works in all cases, but the finding seems fairly robust.\n\n\nSolipsist and solipsish simulations\n-----------------------------------\n\n\n\n> Since it is harder to vary the simulation detail in role-playing simulations containing real people [i.e., people are particularly expensive to simulate compared with coarse-grained models of inanimate objects], these simulations tend to have some boundaries in space and time at which the simulation ends.\n> \n> \n> --[Robin Hanson](https://web.archive.org/web/20170322174949/http://www.transhumanist.com/volume7/simulation.html \"'How To Live In A Simulation'\")\n> \n> \n\n\nDoes consideration of simulations favor solipsist scenarios? 
In particular, for a given amount of computing power, it's possible to run ~7 billion times more simulations in which you are the only mind than simulations containing all of the world's human population. In those superintelligent civilizations where you are run a lot more than average, you have many more copies than normal. So should you be more selfish on this account, since other people (especially distant people whom you don't observe) may not exist?\n\n\nMaybe slightly. [Robin Hanson](https://web.archive.org/web/20170322174949/http://www.transhumanist.com/volume7/simulation.html \"'How To Live In A Simulation'\"):\n\n\n\n> And your motivation to save for retirement, or to help the poor in Ethiopia, might be muted by realizing that in your simulation you will never retire and there is no Ethiopia.\n> \n> \n\n\nHowever, we shouldn't give too much weight to solipsist simulations. Maybe there are some superintelligences that simulate just copies of you. But there may also be superintelligences that simulate just copies of other people and not you. Superintelligences that simulate huge numbers of just you are probably rare. In contrast, superintelligences that simulate a diverse range of people, one of whom may be you, are probably a lot more common. So you may have many more non-solipsist copies than solipsist copies.\n\n\nYou may also have many solipsish copies, depending on the relative frequency of solipsish vs. non-solipsish simulations. Solipsish simulations that don't simulate (non-pet) animals in much detail can be much cheaper than those that do, so it's possible there are, say, 5 or 20 times as many solipsish simulations that omit animals as ones that contain animals. It's very hard to say exactly, since it depends on the relative usefulness or intrinsic value that various superintelligent simulators place on various degrees of simulation detail and realism. Still, as long as the number of animal-free solipsish simulations isn't many orders of magnitude higher than the number of animal-containing simulations, helping animals is still probably very important.\n\n\nAnd the possibility of animal-free solipsish simulations doesn't dramatically upshift the importance of helping developing-world humans relative to helping animals, since in some solipsish simulations, developing-world humans don't exist either.\n\n\nThe possibility of solipsish simulations may be the first ever good justification for giving (slightly) more moral weight to those near to oneself and [those one can observe](https://blog.jaibot.com/the-copenhagen-interpretation-of-ethics/ \"'The Copenhagen Interpretation of Ethics'\") directly.\n\n\n### Famous people\n\n\n[Jaan Tallinn](http://www.33rdsquare.com/2012/10/jaan-tallinns-metaphysical-quest.html \"'Jaan Tallinn's Metaphysical Quest'\") and [Elon Musk](http://www.vox.com/2016/6/2/11837608/elon-musk-simulation-argument \"'Elon Musk believes we are probably characters in some advanced civilization's video game'\") both find it likely that they're in a simulation. Ironically, this belief may be more justified for interesting tech millionaires/billionaires than for ordinary people (in the sense that famous/rich people may have more copies than ordinary people do), since it may be both more scientifically useful and more entertaining to simulate powerful people rather than, e.g., African farmers.\n\n\nSo should rich and powerful people be more selfish than average, because they may have more simulated copies than average? 
Probably not, because powerful people can also make more altruistic impact than average, and at less personal cost to themselves. (Indeed, helping others [may](https://en.wikipedia.org/wiki/Paradox_of_hedonism \"'Paradox of hedonism'\") make oneself happier in the long run anyway.) It's pretty rare for wealthy humans to experience torture-level suffering (except maybe in some situations at the end of life -- in which case, physician-assisted suicide seems like a good idea), so the amount of moral good to be done by focusing on oneself seems small even if most of one's copies are solipsist.\n\n\n### How feasible are solipsist simulations?\n\n\nIt may be hard to fake personal interactions with other humans without actually simulating those other humans. So probably at least your friends and family are being simulated too. But the behavior of your acquaintances would be more believable if *they* also interacted with fully simulated people. Ultimately, it might be easiest just to simulate the whole world all at once rather than simulating pieces and fudging what happens around the edges. I would guess that most simulations requiring a high level of accuracy contain all human minds who exist at any given time on Earth (though not necessarily at past and future times).\n\n\nPerhaps one could make some argument for the detailed simulation of past humans similar to the argument for detailed simulation of your acquaintances and their acquaintances: in order to have realistic past memories, you must have been simulated in the past, and in order for your past interactions to be realistic, you must have interacted with other finely simulated people in the past. And in order for your parents and grandparents to have realistic memories, they must have interacted with realistic past people, and likewise for their parents and grandparents, and so on. I wonder if there could be a gradual reduction in the fidelity of simulations moving further and further into the past, to the extent that, say, Julius Caesar never substantially existed in the past of most simulation branches that are simulating our present world? Or perhaps Julius Caesar was simulated in great detail once, but then multiple later historical trajectories are simulated from those same initial conditions.\n\n\nIf there are disconnected subgraphs within the world's social network, it's possible there could be a solipsish simulation of just your subgraph, but it's not clear there are many disconnected subgraphs in practice (except for tiny ones, like isolated peoples in the Amazon), and it's not clear why the simulators would choose to only simulate ~99% of the human population instead of 100%.\n\n\nWhat about non-human animals? At least pets, farm animals, and macroscopic wildlife would probably need to be simulated for purposes of realism, at least when they're being watched. (Maybe this is the first ever good argument against real-time wildlife monitoring and CCTV in factory farms.) And ecosystem dynamics will be more believable and realistic if all animals are simulated. So we have some reason to suspect that wild animals are simulated as well. However, there's some uncertainty about this; for instance, maybe the simulators can get away with pretty crude simulation of large-scale ecosystem processes like phytoplankton growth and underground decomposition. 
Or maybe they can [use cached results](https://web.archive.org/web/20170322174949/http://www.transhumanist.com/volume7/simulation.html \"'How To Live In A Simulation': 'Also, in general the behavior of many people far from the simulated people of interest might be randomly generated based on statistics from previous simulations, or come from 'cached' records of previous simulated people. Some 'people' in a crowd simulation might even be run by very simple programs that have them wiggle and mumble 'peas and carrots' like extras supposedly did once in movie crowd scenes. Assuming you don't care as much about these fake simulated people, then all-else equal you shouldn't care as much about how your actions affect the rest of the world.'\") from previous simulations. But an accurate simulation *might* need to simulate every living cell on the planet, as well as some basic physical features of the Earth's crust.\n\n\nThat said, we should in general expect to have more copies in lower-resolution simulations, since it's possible to run more low-res than high-res simulations.\n\n\n#### Open question: Could wildlife monitoring be bad?\n\n\nHow significant is the concern that, say, better monitoring of wildlife could significantly increase wild-animal suffering by forcing the simulators to simulate that wildlife in more detail? If most of our copies exist within simulations rather than basement reality, then this concern can't be dismissed out of hand.\n\n\nThe issue seems to hinge on whether a specific act of wildlife monitoring would make the difference to the fineness of the wilderness simulation. Maybe wildlife are already simulated in great detail regardless of how well we monitor them, because those creatures have ecological effects that we will inevitably notice. Conversely, maybe even if we monitor wildlife 24/7 with cameras and movement trackers, the behavior of the monitored creatures will be generated based on cached behavioral patterns or based on relatively simple algorithms, similar to the behavior of sophisticated non-player characters in video games. For wilderness monitoring to increase wild-animal suffering, it would have to be that our simulation is somewhere between those extremes—that the additional amount of monitoring makes the difference between coarse-grained and fine-grained simulations of creatures in nature.\n\n\nStill, there seems to be some chance that's the case, and the benefits of wilderness monitoring don't necessarily seem huge either. As an example, suppose that there's a 50% chance that wildlife are already simulated in great detail, a 45% chance that wildlife wouldn't need to be simulated in great detail even if humans did more wilderness monitoring, and a 5% chance that greater wilderness monitoring would make the difference between simple simulations and complex simulations of wild animals. Let's ignore the 45% of scenarios on the assumption that the simulated animals are morally trivial in those cases. Suppose that in the 50% of scenarios where wilderness is already simulated in great detail, wildlife monitoring of a given hectare of land allows humans to reduce suffering on that hectare by, say, 10% of its baseline level B. Meanwhile, in the 5% of scenarios where increased monitoring makes the difference between trivial and complex wilderness simulations, wildlife monitoring increases suffering from roughly 0, due to the triviality of the creatures, to (100% - 10%) \\* B on that hectare. 
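\n\n\nAs a minimal sketch of this expected-value comparison (using only the illustrative numbers just given; the scenario probabilities and the 10% reduction figure are assumptions for illustration, and B is an arbitrary baseline amount of suffering per hectare):\n\n\n```python\n# Toy expected-value comparison for monitoring one hectare of wilderness,\n# using the illustrative numbers from the example in the text.\n\nB = 1.0  # baseline suffering on the hectare (arbitrary units)\n\np_already_detailed = 0.50    # wildlife is already simulated in detail\np_never_detailed = 0.45      # wildlife stays coarse even with monitoring (treated as morally trivial)\np_monitoring_matters = 0.05  # monitoring forces a switch from coarse to detailed simulation\n\nreduction = 0.10  # monitoring lets humans reduce suffering by 10% of B\n\n# Benefit: in already-detailed scenarios, monitoring reduces suffering by 10% of B.\nexpected_benefit = p_already_detailed * reduction * B\n\n# Cost: where monitoring makes the difference, suffering rises from ~0 to (100% - 10%) of B.\nexpected_cost = p_monitoring_matters * (1 - reduction) * B\n\nprint(expected_benefit)  # 0.05 * B\nprint(expected_cost)     # 0.045 * B, roughly the same size as the benefit\n```\n\n\n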
(The \"minus 10%\" part is because monitoring reduces wild-animal suffering by 10% relative to the baseline B.) Since 50% \\* 10% \\* B ≈ 5% \\* 90% \\* B, the expected benefit of wildlife monitoring roughly equals the expected cost in this example. I have no idea if these example numbers are reasonable, but at first glance, the concern about increasing suffering via monitoring doesn't seem completely ignorable.\n\n\n### Tradeoff between number of copies vs. impact per copy\n\n\nThe following figure illustrates some general trends that we might expect to find regarding the number of copies we have of various sorts. Altruistic impact is highest when we focus on the level of solipsishness where the product of the two curves is highest. The main point of this essay is that where that maximum occurs is not obvious. Note that this graph can make sense even if you give the simulation hypothesis low probability, since you can convert \"number of copies of you\" into \"expected number of copies of you\", i.e., (number of copies of you if simulations are common) \\* (probability simulations are common). \n\n![](https://longtermrisk.org/files/solipsishness_graph.png \"I created this picture on 7 Jun. 2016. I release it into the public domain worldwide.\") \n\nIf it turns out that solipsish simulations are pretty inaccurate and so can't reproduce the input-output behavior that your brain has in more realistic worlds, then you won't have copies at all levels of detail along the solipsish spectrum, but you should still have uncertainty about whether your algorithm is instantiated in a more or less long-lived high-resolution simulation, or not in a simulation at all.\n\n\nSuffering in physics or other black swans could save future fanaticism\n----------------------------------------------------------------------\n\n\nIn this piece, I've been assuming that most of the suffering in the far future that we might reduce would take the form of intelligent computational agents run by superintelligences. The more computing power these superintelligences have, the more sentient minds they'll create, and the more simulations of humans on Earth some of them will also create.\n\n\nBut what if most of the impact of actions targeting the future doesn't come from effects on intelligent computations but rather from something else much more significant? One example could be if we considered [suffering in fundamental physics](http://reducing-suffering.org/is-there-suffering-in-fundamental-physics/ \"'Is There Suffering in Fundamental Physics?'\") to be extremely morally important in aggregate over the long-term future of our light cone. If there's a way to permanently modify the nature of fundamental physics in a way that wouldn't happen naturally (or at least wouldn't happen naturally for googol-scale lengths of time), it might be possible to change the amount of suffering in physics [essentially forever](https://www.ted.com/talks/sean_carroll_distant_time_and_the_hint_of_a_multiverse/transcript?language=en \"'Sean Carroll: Distant time and the hint of a multiverse | TED Talk Subtitles and Transcript | TED.com': 'That empty space lasts essentially forever. However, you notice, since empty space gives off radiation, there's actually thermal fluctuations, and it cycles around all the different possible combinations of the degrees of freedom that exist in empty space.'\") (or at least for googol-scale lengths of time), which might swamp all other changes that one could accomplish. 
No number of mirrored good deeds across tons of simulations could compete (assuming one cares enough about fundamental physics compared with other things).\n\n\nAnother even more implausible scenario in which far-future focus would be astronomically more important than short-term focus is the following. Suppose that advanced civilizations discover ways to run insane amounts of computation -- so much computation that they can simulate all interesting variations of early biological planets that they could ever want to explore with just a tiny fraction of their computing resources. In this case, F could be extremely small because there may be diminishing returns to additional simulations, and the superintelligences instead devote the rest of their enormous computing resources toward other things. However, one counterargument to this scenario is that a tiny fraction of civilizations might *intrinsically value* running ancestor simulations of their own and/or other civilizations, and in this case, the fraction of all computation devoted to such simulations might not be driven close to zero if obscene amounts of computing power became available. So it seems that F has a lower bound of roughly (computational-power-weighted fraction of civilizations that intrinsically value ancestor simulations) \\* (fraction of their computing resources spent on such simulations). Intuitively, I would guess that this bound would likely not be smaller than 10^-15 or 10^-20 or something. (For instance, probably at least one person out of humanity's current ~10^10 people would, sadly in my view, intrinsically value accurate ancestor simulations.)\n\n\nThe value of further research\n-----------------------------\n\n\nThis essay has argued that we shouldn't rule out the possibility that short-term-focused actions like reducing wild-animal suffering over the next few decades in terrestrial ecosystems may have astronomical value. However, we can't easily draw conclusions yet, so this essay should not be taken as a blank check to just focus on reducing short-term suffering without further exploration. Indeed, arguments like this wouldn't have been discovered without thinking about the far future.\n\n\nUntil we know more, I personally favor doing a mix of short-term work, far-future work, and meta-level research about questions like this one. However, as this piece suggests, a purely risk-neutral expected-value maximizer might be inclined to favor mostly far-future work, since even in light of the simulation argument, far-future focus tentatively looks to have somewhat higher expected value. The [value of information of](https://longtermrisk.org/a-lower-bound-on-the-importance-of-promoting-cooperation/#A_value-of-information_argument_for_future_focus \"'A Lower Bound on the Importance of Promoting Cooperation': 'A value-of-information argument for future focus'\") further research on the decision of whether to focus more on the short term or far future seems quite high.\n\n\nAcknowledgements\n----------------\n\n\nCarl Shulman inspired several points in this piece and gave extensive feedback on the final version. My thinking has also benefited from discussions with [Jonah Sinick](http://lesswrong.com/lw/hol/a_personal_history_of_involvement_with_effective/98b8 \"'JonahSinick comments on A personal history of involvement with effective altruism - Less Wrong'\"), Nick Beckstead, Tobias Baumann, and others.\n\n\nFootnotes\n---------\n\n\n1. 
Eliezer Yudkowsky [would probably dislike](http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/ \"'Pascal's Muggle: Infinitesimal Priors and Strong Evidence'\") my characterization of far-future focus as a mild form of Pascal's mugging:\n\n> the phrase \"Pascal's Mugging\" got *completely* bastardized to refer to an emotional feeling of being mugged that some people apparently get when a high-stakes charitable proposition is presented to them, *regardless of whether it's supposed to have a low probability.* This is enough to make me regret having ever invented the term \"Pascal's Mugging\" in the first place [...].\n> \n> \n\n\nOf course, influencing the far future *does* have a lower probability of success than influencing the near term. The difference in probabilities is just relatively small (plausibly within a few orders of magnitude).  [(back)](#back_ajs-fn-id_1-2869)\n2. 1 sent-year for simulated humans will probably take place in much less than 1 sidereal year, assuming simulations have high clock speeds.  [(back)](#back_ajs-fn-id_2-2869)\n3. This is particularly true for increasing happiness, where in biological creatures we face the hedonic treadmill. It's less true in the case of a negative utilitarian reducing suffering by decreasing population size, since preventing an individual from existing completely eliminates its suffering, whether it's biological or digital.  [(back)](#back_ajs-fn-id_3-2869)\n4. The units in the product N \\* T \\* D are (number of sent-years) \\* (moral value of helping a given sent-year) \\* (probability discount on actually helping any given sent-year).  [(back)](#back_ajs-fn-id_4-2869)\n5. The units here are (E sent-years) \\* (1 unit of moral value per sent-year). The intensity factor here is 1 unit of moral value per sent-year, since the intensity factor T for long-term helping was defined relative to the intensity factor for short-term helping. There's no probability discount here, because the long-term discount D was defined as the probability discount for long-term helping *relative to* short-term helping.  [(back)](#back_ajs-fn-id_5-2869)\n6. Note that these expressions assume that the sentience of all your copies is the same, since they assume a constant ratio fy that converts from sent-years of general humans to life-years for one of your copies. However, we [might care a bit less](http://reducing-suffering.org/is-brain-size-morally-relevant/#Do_real_brains_matter_more_than_simulated \"'Is Brain Size Morally Relevant?': 'Do 'real' brains matter more than simulated?'\") about copies of ourselves that are simulated in lower-resolution simulations (e.g., simulations that only represent a crude neuronal level of detail rather than a sub-neuronal level of detail, assuming the high-level behavior of the brain is the same in both cases). If the sentience of everyone else in a low-resolution simulation is lower to the same degree that your copy's sentience is lower, then the sent-years that the copy in the low-res simulation will be able to help will be correspondingly lower. In such a case, it would be ok for the calculations in this piece to count ourselves as having only, say, 1/3 of a copy in a low-res simulation whose sent-years are 1/3 as much as normal, as long as the amount of helping the copy could do would also be only 1/3 as much on average. That's because this piece assumes that the amount of short-term helping we can do is proportional to the number of copies we have. 
In other words, we can think of a copy as \"a unit of helping power\", with lower-resolution instances of ourselves being less than one full copy because they have less helping power.  [(back)](#back_ajs-fn-id_6-2869)\n7. Assuming that we can specify in a simple way a unique index for any given human birth ignores complications with abortions, stillbirths, twins, whether a birth happens when the child begins or ends its exit from the birth canal, etc. For basically simultaneous births on opposite sides of the planet, the [relativity of simultaneity](https://en.wikipedia.org/wiki/Relativity_of_simultaneity \"'Relativity of simultaneity'\") might also become relevant.  [(back)](#back_ajs-fn-id_7-2869)\n8. earthwormchuck163 later [changed his/her mind](http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/8xlu \"'earthwormchuck163 comments on Pascal's Muggle: Infinitesimal Priors and Strong Evidence - Less Wrong': 'After thinking about it a bit more I decided that I actually do care about simulated people almost exactly as the mugger thought I did.'\") on this point.  [(back)](#back_ajs-fn-id_8-2869)\n9. Habitat reduction might still reduce a tiny amount of suffering because even though the total amount of computation being done would be the same in the two scenarios, if habitat is smaller, then a bigger fraction of computations are devoted to humans, who have better lives than wild animals. For example, suppose that if we don't reduce wild-animal habitats, there will be some number Y of simulations with a ratio of 10,000 wild-animal sent-years per human sent-year in them. And suppose that if we do reduce wild-animal habitats (by, say, an absurdly high amount: 90%), then there will be 1000 wild-animal sent-years for every 1 human sent-year. If the total sent-years of computing power devoted to such simulations is constant, then the new number of simulations, Z, will be such that\n\n> Y \\* (10,000 + 1) = Z \\* (1000 + 1),\n> \n> \n\n\ni.e., Z = 9.991 \\* Y. And the new amount of wild-animal suffering will be only Z \\* 1000 = 9.991 \\* Y \\* 1000 = 9,991 \\* Y sent-years, rather than 10,000 \\* Y.  [(back)](#back_ajs-fn-id_9-2869)\n10. Or maybe the simulators would run more cheaper simulations but not enough more to totally negate the effect of having less habitat. Picture a demand curve for simulations, where the \"price\" is the cost to run a single simulation. If most of a simulation's computations are devoted to the sentient parts of wilderness (rather than to not-very-sentient physical processes like weather), then decreasing wilderness by X% should decrease the cost per simulation by about X%. If [demand is inelastic](https://en.wikipedia.org/wiki/Price_elasticity_of_demand \"'Price elasticity of demand'\"), then the quantity demanded (i.e., number of simulations run) won't increase as much as the per-simulation cost decreased. Suppose that price decreases by 100 \\* fp percent, and quantity demanded increases by 100 \\* fq percent. Since demand is inelastic (i.e., elasticity is < 1),\n\n> |(percent change in quantity demanded)/(percent change in price)| < 1 \n> \n> |(100 \\* fq) / (-100 \\* fp)| < 1 \n> \n> |-1| \\* |fq / fp| < 1 \n> \n> fq / fp < 1,\n> \n> \n\n\nwhere the last line follows because fq and fp are both positive numbers. 
Finally, note that total suffering is basically (cost per simulation) \* (number of simulations), and the new value of this product is\n\n\n\n> old\\_cost\\_per\\_simulation \\* (1 - fp) \\* old\\_number\\_of\\_simulations \\* (1 + fq) \n> \n> = old\\_cost\\_per\\_simulation \\* old\\_number\\_of\\_simulations \\* (1 + fq - fp - fp \\* fq),\n> \n> \n\n\nwhich is a decrease if fq < fp. QED.  [(back)](#back_ajs-fn-id_10-2869)\n11. That said, as Carl Shulman pointed out to me, a non-trivial fraction of wildlife simulations on Earth may also have very little sentience -- e.g., the bodies of animals, weather, fires, ocean currents, etc.  [(back)](#back_ajs-fn-id_11-2869)\n12. Of course, simulations needn't just use digital computation. If, for some reason, the quantum effects of biological neurons are essential for the algorithms that human brains perform, and these algorithms can't be simulated on classical computers, one could still create simulated humans in the form of biological brains and hook them up to virtual-reality interfaces, like in *The Matrix*. That said, there might be difficulties with this approach too. For instance, a body lying stationary to receive virtual-reality inputs wouldn't change the brain [via exercise](https://en.wikipedia.org/wiki/Neurobiological_effects_of_physical_exercise \"'Neurobiological effects of physical exercise'\") in the way that a real biological human's body does. Perhaps the effects of movement and exercise on the brain could be added in without too much difficulty, but maybe not. So there are at least some scenarios in which it would be computationally intractable to simulate a brain in enough detail for it to mirror even just the high-level functional behavior of a biological brain.\nA brute-force solution to the above difficulties could be to convert an entire planet to resemble Earth, put real bacteria, fungi, plants, animals, and humans on that planet, and fake signals from outer space (a *Truman Show* approach to simulations), but this would be extremely wasteful of planetary resources (i.e., it would require a whole planet just to run one simulation), so I doubt many advanced civilizations would do it.\n\n\nEven if simulations can't reproduce the high-level functional behavior of a biological mind, there remains the question of whether some simulations can be made \"subjectively indistinguishable\" from a biological human brain in the sense that the brain can't tell which kind of algorithm it is, even if the simulation isn't functionally identical to the original biological version. I suspect that this is possible, since the algorithms that we use to reflect on ourselves and our place in the world don't seem beyond the reach of classical computation and indeed may not be insanely complicated. But I suppose it's *possible* that computationally demanding quantum algorithms are somehow required in this process.  [(back)](#back_ajs-fn-id_12-2869)\n13. In this setting, it may no longer be reasonable to assume that fN = fX \\* fC, as I did in a previous section, because fC is the fraction of all civilizations that have the B algorithms on the home planet, while fN is the fraction of advanced computing power devoted to S algorithms. Since B and S are different algorithms, it may be less plausible that, e.g., if B's are twice as numerous, then S's will be twice as numerous. 
Nonetheless, since B's and S's are similar enough that you can't tell which you are with your limited reasoning abilities, it may still be somewhat plausible that fC and fN are strongly correlated. For instance, even if it's not possible to accurately simulate B algorithms because they involve hard-to-compute quantum effects, it still might be the case that there are S algorithms that are non-quantum-accurate versions of B, and if B algorithms are very common on biological planets, then S algorithms should presumably be very common in simulations.  [(back)](#back_ajs-fn-id_13-2869)", "url": "https://longtermrisk.org/how-the-simulation-argument-dampens-future-fanaticism", "title": "How the Simulation Argument Dampens Future Fanaticism", "source": "html_articles", "source_type": "manuscript", "source_filetype": "pdf", "date_published": "2015-12-31T23:00:00Z", "authors": ["Brian Tomasik"], "summary": [], "id": "aac188fb0b06860517d60367ff5b9e73"} {"text": "How Would Catastrophic Risks Affect Prospects for Compromise?\n=============================================================\n\n\n\n29 August 2015\nby [Brian Tomasik](https://longtermrisk.org/author/brian-tomasik/ \"Posts by Brian Tomasik\")\n\nFirst written: 24 Feb. 2013; major updates: 13 Nov. 2013; last update: 4 Dec. 2017\n\n Summary\n-------\n\n\nCatastrophic risks -- such as engineered pathogens, nanotech weapons, nuclear war, or financial collapse -- would cause major damage in the short run, but their effects on the long-run direction that humanity takes are also significant. In particular, to the extent these disasters increase risks of war, they may contribute to faster races between nations to build artificial general intelligence (AGI), less opportunity for compromise, and hence less of what everyone wants in expectation, including less suffering reduction. In this way, even pure negative utilitarians may oppose catastrophic risks, though this question is quite unsettled. While far from ideal, today's political environment is more democratic and peaceful than what we've seen historically and what could have been the case, and disrupting this trajectory might have more downside than upside. I discuss further considerations about how catastrophes could have negative and positive consequences. Even if averting catastrophic risks is net good to do, I see it as less useful than directly promoting compromise scenarios for AGI and setting the stage for such compromise via cooperative political, social, and cultural institutions.\n\n\n*Note, 20 Jul. 2015*: Relative to when I first wrote this piece, I'm now less hopeful that catastrophic-risk reduction is plausibly good for pure negative utilitarians. The main reason is that some catastrophic risks, such as from malicious biotech, do seem to pose nontrivial risk of causing complete extinction relative to their probability of merely causing mayhem and conflict. So I now don't support efforts to reduce non-AGI \"existential risks\". (Reducing AGI extinction risks is a very different matter, since most AGIs would colonize space and spread suffering into the galaxy, just like most human-controlled future civilizations would.) 
Regardless, negative utilitarians should just focus their sights on more clearly beneficial suffering-reduction projects, like promoting suffering-focused ethical viewpoints and researching more how best to reduce wild-animal and far-future suffering.\n\n\nContents\n\n* [Introduction](#Introduction)\n* [Most catastrophic risks would not cause extinction](#Most_catastrophic_risks_would_not_cause_extinction)\n* [Degree of compromise as a key metric](#Degree_of_compromise_as_a_key_metric)\n* [How contingent is the future?](#How_contingent_is_the_future)\n* [War as a key risk](#War_as_a_key_risk)\n* [Dislocation makes conflict more likely](#Dislocation_makes_conflict_more_likely)\n* [If it ain't broke, don't fix it](#If_it_aint_broke_dont_fix_it)\n* [How robust is technological civilization?](#How_robust_is_technological_civilization)\n* [Might humans be replaced by other species?](#Might_humans_be_replaced_by_other_species)\n* [Other costs to catastrophes](#Other_costs_to_catastrophes)\n\t+ [Greater desperation](#Greater_desperation)\n\t+ [Darwinian futures?](#Darwinian_futures)\n* [Silver linings to catastrophes](#Silver_linings_to_catastrophes)\n\t+ [Greater concern for suffering?](#Greater_concern_for_suffering)\n\t+ [More time for reflection?](#More_time_for_reflection)\n\t+ [Resource curse?](#Resource_curse)\n\t+ [Greater impetus for cooperation?](#Greater_impetus_for_cooperation)\n\t+ [Minority views in defense of alternate political systems](#Minority_views_in_defense_of_alternate_political_systems)\n* [What if the conclusions flipped?](#What_if_the_conclusions_flipped)\n* [Is work on catastrophic risks optimal?](#Is_work_on_catastrophic_risks_optimal)\n* [Recovery measures are not supported by this argument](#Recovery_measures_are_not_supported_by_this_argument)\n* [Appendix: Inoculation in general](#Appendix_Inoculation_in_general)\n\t+ [Inoculation](#Inoculation)\n\t+ [Slippery slopes](#Slippery_slopes)\n* [Footnotes](#Footnotes)\n\nIntroduction\n------------\n\n\nSome in the effective-altruist community consider [global catastrophic risks](http://gcrinstitute.org/) to be a pressing issue. Catastrophic risks include possibilities of world financial collapse, major pandemics, bioweapons, nanoweapons, environmental catastrophes like runaway global warming, and nuclear war.\n\n\nTypically discussions of these risks center on massive harm to humanity in the short term and/or remote risks that they would lead to human extinction, affecting the long term. 
In this piece, I'll explore another consideration that might trump both short-term harm and extinction considerations: the flow-through effects of catastrophic risks on the degree of compromise in future politics and safety of future technology.\n\n\nMost catastrophic risks would not cause extinction\n--------------------------------------------------\n\n\nI think the only known technological development that is highly likely to cause all-out human extinction is AGI.[1](#link_ajs-fn-id_1-267) Carl Shulman [has defended](http://lesswrong.com/lw/bnc/against_ai_risk/6axy) this view, although he notes that while he doesn't consider nanotech a big extinction risk, \"Others disagree (Michael Vassar has worked with the [Center for Responsible Nanotechnology], and Eliezer [Yudkowsky] often names molecular nanotechnology as the [extinction ]risk he would move to focus on if he knew that AI was impossible).\" Reinforcing this assessment was the \"[Global Catastrophic Risks Survey](https://web.archive.org/web/20161020051016/http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0020/3854/global-catastrophic-risks-report.pdf)\" of 2008, in which the cumulative risk of extinction was estimated as at most 19% (median), the highest two subcomponents being AI risk and nanotech risk at median 5% each. Nuclear war was median 1%, consistent with [general expert sentiment](http://www.overcomingbias.com/2012/11/nuclear-winter-and-human-extinction-qa-with-luke-oman.html).\n\n\nOf course, there's model uncertainty at play. Many ecologists, for instance, feel the risk of human extinction due to environmental issues is far higher than what those in the techno-libertarian circles cited in the previous paragraph believe. Others fear peak oil or impending economic doom. Still others may hold religious or philosophical views that incline them to find extinction likely via alternate means. In any event, whether catastrophic risks are likely to cause extinction is not relevant to the remainder of this piece, which will merely examine what implications -- both negative and positive -- catastrophic risks might have for the [trajectory](http://lesswrong.com/lw/hjb/a_proposed_adjustment_to_the_astronomical_waste/) of social evolution conditional on human survival.\n\n\nDegree of compromise as a key metric\n------------------------------------\n\n\nIgnoring extinction considerations, how else are catastrophic risks likely to matter for the future? Of course, they would obviously cause massive human damage in the short run. But they would also have implications for the long-term future of humanity to the extent that they affected the ways in which society developed: How much international cooperation is there? How humane are people's moral views? How competitive is the race to develop technology the fastest?\n\n\n![](https://longtermrisk.org/files/Ford_signing_accord_with_Brehznev_November_24_1974-350x238.jpg \"'President Gerald Ford and Soviet General Secretary Leonid Brezhnev sign a Joint Communiqué following talks on the limitation of strategic offensive arms. The document was signed in the conference hall of the Okeansky Sanitarium, Vladivostok, USSR.' David Hume Kennerly [Public domain], via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:Ford_signing_accord_with_Brehznev,_November_24,_1974.jpg\")\n\n\nHow contingent is the future?\n-----------------------------\n\n\nMuch of the trajectory of the future may be inexorable. 
Especially as people become smarter, we might expect that if compromise is a Pareto-improving outcome, our descendants should converge on it. Likewise, even if catastrophe sets humanity back to a much more primitive state, it may be that a relatively humane, non-violent culture will emerge once more as civilization matures. [For example](https://web.archive.org/web/20170829110145/http://levine.sscnet.ucla.edu/general/aandrreview.pdf \"'A Review of Acemoglu and Robinson’s Why Nations Fail'\"):\n\n\n\n> Acemoglu and Robinson’s [Why Nations Fail](https://en.wikipedia.org/wiki/Why_Nations_Fail \"'Why Nations Fail'\") [2012] is a grand history in the style of Diamond [1997] or McNeil [1963]. [...] Acemoglu and Robinson theorize that political institutions can be divided into two kinds - “extractive” institutions in which a “small” group of individuals do their best to exploit - in the sense of Marx - the rest of the population, and “inclusive” institutions in which “many” people are included in the process of governing hence the exploitation process is either attenuated or absent.\n> \n> \n> [...] inclusive institutions enable innovative energies to emerge and lead to continuing growth as exemplified by the Industrial Revolution. Extractive institutions can also deliver growth but only when the economy is distant from the technological frontier.\n\n\nIf this theory is right, it would suggest that more inclusive societies will in general tend to control humanity's long-run future.\n\n\nStill, humanity's trajectory is not completely inevitable, and it can be sensitive to initial conditions.\n\n\n* A society that has better institutions for cooperation may be able to achieve compromise solutions that would simply be unavailable to one in which those institutions did not exist, no matter how smart the people involved were.\n* It may be that social values matter -- e.g., if people care more intrinsically about compromise, this can change the outcomes that result from pure game-theoretic calculations because the payoffs are different.\n* Historical, cultural, and political conditions can influence [Schelling points](https://en.wikipedia.org/wiki/Focal_point_(game_theory)). Sometimes there are multiple possible Nash equilibria, and [expectations determine which one is reached](http://oyc.yale.edu/economics/econ-159/lecture-5 \"'Lecture 5 - Nash Equilibrium: Bad Fashion and Bank Runs', 'ECON 159: GAME THEORY'\").\n\n\nThere's a long literature on the extent to which history is inevitable or contingent, but it seems that it's at least a little bit of both. Even in cases of modern states engaged in strategic calculations, there have been a great number of contingent factors. For example:\n\n\n* As an analyst for US military policy, Thomas Schelling suggested strategies and safeguards that would not have been thought of without him. 
Roger Myerson [said](http://youtu.be/oG-dErfUunw?t=7m48s): \"You know, I think there's some chance that Tom Schelling may have saved the world.\" At least as dramatic is [Stanislav Petrov](http://lesswrong.com/lw/jq/) helping to avert accidental nuclear war.\n* *[Confronting the Bomb: A Short History of the World Nuclear Disarmament Movement](http://www.amazon.com/dp/0804756325/)* argues the case that anti-nuclear activism made some key differences in how governments conducted nuclear policy, including Ronald Reagan's flip in stance from very hawkish to doveish, [in part due](http://www.alternet.org/story/149821/how_reagan_brought_the_world_to_the_brink_of_nuclear_destruction?page=0%2C1) to the largest political demonstration in American history on 12 June 1982 against increasing nuclear arsenals. This is one example to demonstrate that political action can make a non-inevitable difference to even highly strategic matters of world dominance by major powers.\n* In general, US presidents have [sometimes](http://www.theatlantic.com/magazine/archive/2013/06/do-presidents-matter/309307/) had significant contingent effects on the nation's direction.\n* There are countless other historical examples that could be cited.\n\n\nEven if we think that greater intelligence by future people will mean less contingency in how events play out, we clearly won't eliminate contingency any time soon, and the decisions we make in the coming decades may matter a lot to the final outcome.\n\n\nIn general, if you think there's only an X% chance that the success or failure of compromise to avoid an AGI arms race is contingent on what we do, you can multiply the expected costs/benefits of our actions by X%. But probably X should not be too small. I would put it definitely above 33%, and a more realistic estimate should be higher.\n\n\nWar as a key risk\n-----------------\n\n\nWhat factors are most likely to lead to an AGI arms race in which groups compete to build whatever crude AGI works rather than cautiously constructing an AGI that better encapsulates many value systems, including that of suffering reduction? If AGI is built by corporations, then fierce market competition could be risky. That said, it seems most plausible to me that AGI would be built by, or at least under the control of, governments, because unless the AGI project took off really quickly, it seems the military would not allow a national (indeed, world) security threat to proceed without restraint.\n\n\nIn this case, the natural scenario that could lead to a reckless race for AGI would be international competition -- say, between the US and China. 
AGI would be in many ways like nuclear weapons, because whoever builds it first can literally take over the world (though Carl Shulman points out [some differences](http://intelligence.org/files/ArmsControl.pdf) between AGI and nuclear weapons as well).\n\n\nIf we think historically about what has led to [nuclear development](http://www.icanw.org/the-facts/the-nuclear-age/), it has always been international conflict, usually precipitated by wars:\n\n\n* World War II\n\t+ The US [Manhattan Project](https://en.wikipedia.org/wiki/Manhattan_Project), which required over 130,000 people and cost ~$26 billion in 2013 dollars.\n\t+ The uncompleted [Nazi nuclear project](http://www.pbs.org/wgbh/nova/military/nazis-and-the-bomb.html).\n* Cold War\n\t+ Soviet Union, China, UK, France, etc.\n* India-China conflict\n\t+ [Led to](https://en.wikipedia.org/wiki/India_and_weapons_of_mass_destruction#Nuclear_weapons) India's development of nuclear weapons.\n* India-Pakistan conflict\n\t+ Harsh treaty conditions following the Indo-Pakistani War of 1971 [spurred](https://en.wikipedia.org/wiki/Pakistan_and_weapons_of_mass_destruction#Development_of_nuclear_weapons) Pakistan's nuclear-weapons program.\n* Israeli conflicts with Middle Eastern neighbors\n\t+ Led to Israel's nuclear-weapons program.\n\n\nIn general, war tends to cause\n\n\n* Fast technological development, often enhanced by major public investments\n* Willingness to take risks in order to develop the technology first\n* International hostility that makes cooperation difficult.\n\n\nThus, war seems to be a major risk factor for fast AGI development that results in less good and greater expected suffering than more careful, cooperative scenarios. To this extent, anything else that makes war more likely entails some expected harm through this pathway.\n\n\nDislocation makes conflict more likely\n--------------------------------------\n\n\nWhile catastrophic risks are unlikely to cause extinction, they are fairly likely to cause damage on a mass scale. For example, from the \"[Global Catastrophic Risks Survey](https://web.archive.org/web/20161020051016/http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0020/3854/global-catastrophic-risks-report.pdf)\":\n\n\n\n\n| Catastrophe | Median probability of >1 million dead | Median probability of >1 billion dead | Median probability of extinction |\n| --- | --- | --- | --- |\n| nanotech weapons | 25% | 10% | 5% |\n| all wars | 98% | 30% | 4% |\n| biggest engineered pandemic | 30% | 10% | 2% |\n| nuclear wars | 30% | 10% | 1% |\n| natural pandemic | 60% | 5% | 0.05% |\n\n\n...and so on. Depending on the risk, these disasters may or may not contribute appreciably to risks of an AGI arms race, and it would be worth exploring in more detail which risks are most likely to lead to a breakdown of compromise. Still, in general, all of these risks seem likely to increase the chance of warfare, and by that route alone, they imply nonzero risks of increasing suffering in the far future.\n\n\nIf it ain't broke, don't fix it\n-------------------------------\n\n\nFor all its shortcomings, contemporary society is remarkably humane by historical standards and relative to what other possibilities one can imagine. This trend is unmistakable to anyone who reads history books and witnesses how often societies in times past were controlled by violent takeover, fear, and oppression. 
Steven Pinker's *[The Better Angels of Our Nature](https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Nature)* is a quantitative defense of this thesis. Pinker cites \"six major trends\" toward greater peace and cooperation that have taken place in the past few millennia:\n\n\n1. *The Pacification Process*: The beginnings of cities and governments\n2. *The Civilizing Process*: Smaller territories uniting into larger kingdoms\n3. *The Humanitarian Revolution*: Reductions in slavery, torture, corporal punishment, etc.\n4. *The Long Peace*: After World War II, the great powers [have not directly fought each other](https://en.wikipedia.org/wiki/The_long_peace)\n5. *The New Peace*: A short-term (and hence \"more tenuous\") decline in conflicts since the end of the Cold War\n6. *The Rights Revolutions*: Greater moral consideration for women, minorities, homosexuals, animals, etc.\n\n\nCatastrophic risks -- especially those like nuclear winter that could set civilization back to a much less developed state -- are essentially a \"roll of the dice\" on the initial conditions for how society develops, and while things could certainly be better, they could also be a lot worse.\n\n\nTo a large extent, it may be that the relatively peaceful conditions of the present day are a required condition for technological progress, such that any advanced civilization will necessarily have those attributes. Insofar as this is true, it reduces the expected cost of catastrophic risks. But there's some chance that these peaceful conditions are not inevitable. One could imagine, for instance, a technologically advanced dictatorship taking control instead, and insofar as its policies would be less determined by those of its population, the degree of compromise with many value systems would be reduced in expectation. Consider ancient Sparta, monarchies, totalitarian states, the mafia, gangs, and many other effective forms of government where ruthlessness by leaders is more prevalent than compassion.\n\n\nImagine what would have happened if [Hitler's atomic-bomb project](https://en.wikipedia.org/wiki/German_nuclear_weapon_project) had succeeded before the Manhattan Project. Or [if the US South had won](https://en.wikipedia.org/wiki/American_Civil_War_alternate_histories) the American Civil War. Or various other scenarios. Plausibly the modern world would have turned out somewhat similar to the present (for instance, I doubt slavery would have lasted forever even if the US South had won the Civil War), but conditions probably would have been somewhat worse than they are now. (Of course, various events in history also could have turned out better than they actually did.)\n\n\n![AGI suffering vs. social organization](https://longtermrisk.org/wp-content/uploads/2015/09/agi-suffering-vs-social-organization.png)\n\n\nA Cold War scenario seems likely to accelerate AGI relative to its current pace. Compare with the [explosion of STEM education](http://www.nytimes.com/2007/09/25/science/space/25educ.html?pagewanted=all&_r=0 \"\\\"When Science Suddenly Mattered, in Space and in Class\\\"\") as a result of the Space Race.\n\n\nHow robust is technological civilization?\n-----------------------------------------\n\n\nCivilization in general seems to me fairly robust. The world witnessed civilizations emerge independently all over the globe -- from Egypt to the Fertile Crescent to China to the Americas. The Mayan civilization was completely isolated from happenings in Africa and Asia and yet shared many of the same achievements. 
It's true that civilizations often collapse, but they just as often rebuild themselves. The history of ancient Egypt, China, and many other regions of the world is a history of an empire followed by its collapse followed by the emergence of another empire.\n\n\nIt's less clear whether *industrial* civilization is robust. One reason is that we haven't seen completely independent industrial revolutions in history, since trade was well developed by the time industrialization could take place. Still, for example, China and Europe were both on the verge of industrial revolutions in the early 1800s[2](#link_ajs-fn-id_2-267), and the two civilizations were pretty independent, despite [some trade](http://www.historyorb.com/asia/china_trade.php). China and Europe invented the printing press independently.[3](#link_ajs-fn-id_3-267) And so on.\n\n\nGiven the written knowledge we've accumulated, it's not plausible that post-disaster peoples would not relearn how to build industry. But it's not clear whether they would have the resources and/or social organization requisite to do so. Consider how long it takes for developing nations to industrialize even with relative global stability and trade. On the other hand, military conflicts if nothing else would probably force post-disaster human societies to improve their technological capacities at some point. Technology seems more inevitable than democracy, because technology is compelled by conflict dynamics. Present-day China is an example of a successful, technologically advanced non-democracy. (Of course, China certainly exhibits some degree of deference to popular pressure, and conversely, Western \"democracies\" also give excessive influence to wealthy elites.)\n\n\nSome suggest that rebuilding industrial civilization might be impossible the second time around because abundant surface minerals and easy-to-drill fossil fuels would have been used up. Others contend that human ingenuity would find alternate ways to get civilization off the ground, especially given the plenitude of scientific records that would remain. I incline toward the latter of these positions, but I maintain modesty on this question. It reflects a [more general divide](https://en.wikipedia.org/wiki/Simon%E2%80%93Ehrlich_wager) between scarcity doomsayers vs. techno-optimists. (The \"optimist\" in \"techno-optimist\" is relative to the goal of human economic growth, not necessarily reducing suffering.)\n\n\nRobin Hanson [takes](http://hanson.gmu.edu/collapse.pdf \"\\\"Catastrophe, Social Collapse, and Human Extinction\\\", 2007\") the techno-optimist view:\n\n\n\n> Once [post-collapse humans] could communicate to share innovations and grow at the rate that our farming ancestors grew, humanity should return to our population and productivity level within twenty thousand years. (The fact that we have used up some natural resources this time around would probably matter little, as growth rates do not seem to depend much on natural resource availability.)\n> \n> \n\n\nBut even if historical growth rates didn't depend much on resources, might there be some minimum resource threshold below which resources do become essential? Indeed, in the limit of *zero* resources, growth is not possible.\n\n\nA [blog post](http://reflectivedisequilibrium.blogspot.com/2013/12/current-thoughts-on-nuclear-war-as.html) by Carl Shulman includes a section titled \"Could a vastly reduced population eventually recover from nuclear war?\". 
It reviews reasons why rebuilding civilization would be harder and reasons it would be easier the second time around. Shulman concludes: \"I would currently guess that the risk of permanent drastic curtailment of human potential from failure to recover, conditional on nuclear war causing the deaths of the overwhelming majority of humanity, is on the lower end.\" Shulman also seems to agree with the (tentative and uncertain) main thrust of my current article: \"Trajectory change\" effects of civilizational setback, possibly including diminution of liberal values, \"could have a comparable or greater role in long-run impacts\" of nuclear war (and other catastrophic risks) than outright extinction.\n\n\nStuart Armstrong [also agrees](https://www.youtube.com/watch?v=i4LjoJGpqIY&t=11m29s \"\\\"Stuart Armstrong: The future is going to be wonderful if we don't get whacked\\\"\") that rebuilding following nuclear war seems likely. He points out that formation of governments is common in history, and social chaos is rare. There would be many smart, technically competent survivors of a nuclear disaster, e.g., in submarines.\n\n\nMight humans be replaced by other species?\n------------------------------------------\n\n\nAs noted above, full-out human extinction from catastrophic risks seems relatively unlikely compared with just social destabilization. If human extinction did occur from causes other than AI, presumably parts of the biosphere would still remain. In many scenarios, at least some other animals would survive. What's the probability that those animals would then replace humans and colonize space? My guess is it's small but maybe not negligibly so. Robin Hanson [seems to agree](http://hanson.gmu.edu/collapse.pdf \"\\\"Catastrophe, Social Collapse, and Human Extinction\\\", 2007\"): \"it is also possible that without humans within a few million years some other mammal species on Earth would evolve to produce\" a technological civilization.\n\n\nIn the [history of life on Earth](http://www.pbs.org/wgbh/nova/origins/life-nf.html), boney fish and insects emerged around 400 million years ago (mya). Dinosaurs emerged around 250 mya. Mammals blossomed less than 100 mya. [Earth's future](http://en.wikipedia.org/wiki/Future_of_the_Earth) allows for about [1000 million years](https://web.archive.org/web/20160330230725/http://news.sky.com/story/1110337/life-on-earth-to-die-out-in-one-billion-years) of life to come. So even if, as the cliche goes, the most complex life remaining after nuclear winter was cockroaches, there would still be 1000 million years in which human-like intelligence might re-evolve, and it took just 400 million years the first time around starting from insect-level intelligence. Of course, it's unclear how improbable the development of human-like intelligence was. For instance, if the dinosaurs hadn't been killed by an asteroid, plausibly they would still rule the Earth, without any advanced civilization.[4](#link_ajs-fn-id_4-267) The Fermi paradox also has something to say about how likely we should assess the evolution of advanced intelligence from ordinary animal life to be.\n\n\nSome extinction scenarios would involve killing all humans but leaving higher animals. Perhaps a bio-engineered pathogen or nanotech weapon could do this. In that case, re-emergence of intelligence would be even more likely. 
For example, cetaceans [made large strides](http://archive.seti.org/news/features/intelligence-gathering.php) in intelligence 35 mya, jumping from an encephalization quotient (EQ) of 0.5 to 2.1. Some went on to develop EQs of 4-5, which is close to the human EQ of 7. As quoted in \"[Intelligence Gathering: The Study of How the Brain Evolves Offers Insight Into the Mind](http://archive.seti.org/news/features/intelligence-gathering.php),\" Lori Marino explains:\n\n\n\n> Cetaceans and primates are not closely related at all, but both have similar behavior capacities and large brains -- the largest on the planet. Cognitive convergence seems to be the bottom line.\n> \n> \n\n\nOne hypothesis for why humans have such large brains despite their metabolic cost is that big brains resulted from an [arms race](https://web.archive.org/web/20141201202010/http://www.jstor.org/discover/10.2307/4602561?uid=2&uid=4&sid=21105339709343 \"'The Mental Arms Race Amplifier' by Michael R. Rose\") of social competition. Similar conditions could obtain for cetaceans or other social mammals. Of course, many of the [other features](http://www.as.utexas.edu/astronomy/education/fall08/scalo/secure/309l_nov20_evocomplexintel.pdf) of primates that may have given rise to civilization are not present in most other mammals. In particular, it seems hard to imagine developing written records underwater.\n\n\nIf another species took over and built a space-faring civilization, would it be better or worse than our own? There's some chance it could be more compassionate, such as if bonobos took our place. But it might also be much less compassionate, such as if chimpanzees had won the evolutionary race, not to mention killer whales. On balance it's plausible that our hypothetical replacements would be less compassionate, because compassion is something humans value a lot, while a random other species probably values something else more. The reason I'm asking this question in the first place is that humans are outliers in their degree of compassion. Still, in social animals, various norms of fair play are likely to emerge regardless of how intrinsically caring the species is. Simon Knutsson pointed out to me that if human survivors do recover from a near-extinction-level catastrophe, or if humans go extinct and another species with potential to colonize space evolves, they'll likely need to be able to cooperate rather than fight endlessly if they are to succeed in colonizing space. This suggests that if they colonize space, they will be more moral or peaceful than we were. My reply is that while this is possible, a rebuilding civilization or new species might curb infighting via authoritarian power structures or strong ingroup loyalty that doesn't extend to outgroups, which might imply less compassion than present-day humans have.\n\n\nMy naive guess is that it's relatively unlikely another species would colonize space if humans went extinct -- maybe a ~10% chance? I suspect that most of the [Great Filter](https://en.wikipedia.org/wiki/Great_Filter \"'Great Filter'\") is behind us, and some of those filter steps would have to be crossed again for a new non-human civilization to emerge. 
As long as that new civilization wouldn't be more than several times worse in expectation than our current civilization, then this scenario is unlikely to dominate our calculations.\n\n\nOther costs to catastrophes\n---------------------------\n\n\n### Greater desperation\n\n\nIn general, people in more hardscrabble or fearful conditions have less energy and emotional resources to concern themselves with the suffering of others, especially with powerless computations that might be run by a future spacefaring civilization. Fewer catastrophes means more people who can focus on averting suffering by other sentients.\n\n\n### Darwinian futures?\n\n\nCurrent long-term political trends suggest that a world government may develop at some point, as is hinted by the increasing degrees of unity among rich countries (European Union, international trade agreements, etc.). A world government would offer greater possibilities for enforcing [mutually beneficial cooperation](http://utilitarian-essays.com/compromise.html) and thereby fulfilling more of what all value systems want in expectation, relative to unleashing a [Darwinian future](http://www.utilitarian-essays.com/future-of-darwinism.html).\n\n\nSilver linings to catastrophes\n------------------------------\n\n\nIn this section I suggest some possible upsides of catastrophic risks. I think it's important not to shy from these ideas merely because they don't comport with our intuitive reactions. Arguments should not be [soldiers](http://wiki.lesswrong.com/wiki/Arguments_as_soldiers). At the same time, it's also essential to constrain our speculations in this area by [common sense](http://lesswrong.com/lw/iao/common_sense_as_a_prior/).\n\n\nAlso note that even in the unlikely event that we concluded catastrophic risks were net positive for the far future, we should still not support them, to avoid stepping on the toes of so many other people who care deeply about preventing short-term harm. Rather, in this hypothetical scenario, we should find other, win-win ways to improve the future that don't encroach on what so many other people value.\n\n\n### Greater concern for suffering?\n\n\nIs it possible that some amount of disruption in the near term could heighten concern about potential future sources of suffering, whereas if things go along smoothly, people will give less thought to futures full of suffering? This question lies in analogy with the concern that reducing hardship and depression might make people less attuned to the pain of others. Many of the people I know who care most about reducing suffering [have gone through](http://reducing-suffering.org/how-important-is-experiencing-suffering-for-caring-about-suffering/) severe personal trauma or depression at one point. When things are going well, you can [forget](https://en.wikipedia.org/wiki/Empathy_gap) how [horrifying](http://www.utilitarian-essays.com/horror-of-suffering.html) suffering can be.\n\n\nIt's often said that World War I transformed art and cultural attitudes more generally. [Johnson (2012)](http://articles.latimes.com/2012/jul/21/entertainment/la-et-cm-world-war-art-20120722 \"'Art forever changed by World War I - latimes'\"): \"During and after World War I, flowery Victorian language was blown apart and replaced by more sinewy and R-rated prose styles. [...] 
'World War I definitely gives a push forward to the idea of dystopia rather than utopia, to the idea that the world is going to get worse rather than better,' Braudy said.\"\n\n\n### More time for reflection?\n\n\nSevere catastrophes might depress economic output and technological development, with the possibility of allowing more time for reflection on the risks that such technology would bring. That said, this cuts both ways: Faster technology also allows for faster wisdom, better ability to monitor tech developments, and greater prosperity that allows more people to even think about these questions, as well as reduced social animosity and greater positive-sum thinking. The net sign of all of this is very unclear.\n\n\n### Resource curse?\n\n\nThere are suggestions (hotly debated) in the political-science literature of a \"resource curse\" in which greater reserves of oil and other natural resources may contribute to authoritarianism and repression. The [Wikipedia article](https://en.wikipedia.org/wiki/Resource_curse) cites a number of mechanisms by which the curse may operate. A related trend is the observation that cooler climates sometimes have a greater degree of compassion and cooperation -- perhaps because to survive cold winters you have to work together, while in warm climates, success is determined by being the best at forcibly stealing the resources that already exist?\n\n\nTo the extent these trends are valid, does this suggest that if humanity were to rebuild after a significant catastrophe, it might be more democratic owing to having less oil, metals, and other resources?\n\n\nThe \"Criticisms\" section of the Wikipedia article explains that some studies attribute causation in the other direction: Greater authoritarianism leads countries to exploit their resources faster. Indeed, some studies even find a \"resource blessing.\"\n\n\n### Greater impetus for cooperation?\n\n\nFighting factions are often brought together when they face a common enemy. For instance, in the 1954 [Robbers Cave study](https://web.archive.org/web/20160901100552/http://faculty.haas.berkeley.edu:80/kurkoski/BA105/BA%20105%20materials/READINGS/Design/sherif_robbers_cave_experiment.html), the two hostile factions of campers were brought together by \"superordinate goals\" that required them to unite to solve a problem they all faced. Catastrophic risks are a common enemy of humanity, so could efforts to prevent them build cooperative institutions? For instance, cooperation to solve climate change could be seen as an easy test bed for the much harder challenges that will confront humanity in cooperating on AGI. Could greater climate danger provide greater impetus for building better cooperative institutions early on? Wolf Bullmann [compared](https://www.facebook.com/groups/effective.altruists/permalink/585286124861082/?comment_id=587499161306445&offset=0&total_comments=31) this to vaccination.\n\n\nOf course, this is not an argument against building international-cooperation efforts against climate change -- those are the very things we want to happen. But it would be a slight counter-consideration against, say, personally trying to reduce greenhouse-gas emissions. I hasten to explain that this \"silver lining\" point is *extremely speculative*, and on balance, it seems most plausible that personally reducing greenhouse-gas emissions is net good overall, in terms of reducing risks of wars that could degrade into bad outcomes. 
The \"inoculation\" idea (see the [Appendix](#inoculation)) should be explored further, though it needs to be cast in a way that's not amenable to being quoted out of context.\n\n\n### Minority views in defense of alternate political systems\n\n\nReshuffling world political conditions *could* produce a better outcome, and indeed, there are minority [views](https://web.archive.org/web/20150610055901/http://www.moreright.net/about/ \"'About | More Right'\") that greater authoritarianism could actually improve future prospects by reducing coordination problems and enforcing safeguards. We should continue to explore a broad range of viewpoints on these topics, while at the same time not wandering too easily from mainstream consensus.\n\n\nWhat if the conclusions flipped?\n--------------------------------\n\n\nLike any empirical question, the net impact of catastrophic risks on the degree of compromise in the future isn't certain, and a hypothetical scenario in which we concluded that catastrophic risks would actually improve compromise is not impossible. A lack of concern about catastrophic risks is more likely among pure negative utilitarians who also fear the [risks of astronomical suffering](http://www.utilitarian-essays.com/astronomical-suffering.html) that space colonization would entail. I find it plausible that the detrimental effects of catastrophic risks on compromise outweigh effects on probability of colonization, but this conclusion is contingent and not inevitable. What if the calculation flipped around?\n\n\nEven if so, we should still probably oppose catastrophic risks when it's very cheap to do so, and we should never support them. Why? Because many other people care a lot about preventing short-term disasters, and stepping on so many toes so dramatically would not be an [efficient](https://en.wikipedia.org/wiki/Kaldor%E2%80%93Hicks_efficiency) course of action, much less a wise move by any reasonable heuristics about how to get along in society. Rather, we should find other, win-win approaches to improving compromise prospects that everyone can get behind. In any event, even ignoring the importance of cooperating with other people, it seems unlikely that focusing on catastrophic risks would be the best leverage point for accomplishing one's goals.\n\n\nIs work on catastrophic risks optimal?\n--------------------------------------\n\n\nMy guess is that there are better projects for altruists to pursue, because\n\n\n* Working *directly* on improving cooperation scenarios for AGI seems to target the problem more head-on, and at the same time, this field has *less* funding because it's more fringe and has fewer immediately visible repercussions and less historical precedent that tend to motivate mainstream philanthropists, and\n* Working *directly* on improving worldwide cooperation, global governance, etc. in general also seems more promising insofar as these efforts are, in my mind, more clearly positive with fewer question marks and shorter causal chains to the ultimate goal. If catastrophic risks were clearly being neglected relative to general compromise work, then the calculation might change, but as things stand, I would prefer to push on compromise directly.\n\n\nRecovery measures are not supported by this argument\n----------------------------------------------------\n\n\nThe argument in this essay applies only to preventing risks before they happen, so as to reduce societal dislocation. 
It doesn't endorse measures to ensure human recovery *after* catastrophic risks have already happened, such as [disaster shelters](http://www.effective-altruism.com/ea/5r/improving_disaster_shelters_to_increase_the/ \"'Improving disaster shelters to increase the chances of recovery from a global catastrophe'\") or [space colonies](https://en.wikipedia.org/wiki/Alliance_to_Rescue_Civilization). These post-disaster measures don't avert the increased anarchy and confusion that would result from catastrophes but do help humans stick around to potentially cause [cosmic harm](http://www.utilitarian-essays.com/astronomical-suffering.html) down the road. Moreover, disaster-recovery solutions [might even](https://en.wikipedia.org/wiki/Feeding_Everyone_No_Matter_What#Criticisms \"'Feeding Everyone No Matter What': 'Criticisms'\") increase the chance that catastrophic risks occur because of moral hazard. I probably don't endorse post-disaster recovery efforts except maybe in rare cases when they also substantially help to maintain social stability in scenarios that cause less-than-extinction-level damage.\n\n\n\nAppendix: Inoculation in general\n--------------------------------\n\n\n### Inoculation\n\n\nThe idea of inoculation -- accepting some short-term harm in order to improve long-term outcomes -- is a general concept.\n\n\nEven with warfare, there's some argument about an inoculation effect. For example, the United Nations was formed after World War II in an effort to prevent similar conflicts from happening again. And there's a widespread debate about inoculation in activism. Sometimes the \"radicals\" fear that if the \"moderates\" compromise too soon, the partial concessions will quell discontent and prevent a more revolutionary change. For example, some animal advocates say that if we improve the welfare of farm animals, people will have less incentive to completely \"end animal exploitation\" by going vegan. In this case, the radicals claim that greater short-term suffering is the inoculation necessary to prevent long-term \"exploitation.\"\n\n\n### Slippery slopes\n\n\nThe flip side to inoculation is the slippery slope: A little bit of something in the short term tends to imply *more* of it in the long term. In general, I think slippery-slope arguments are stronger than inoculation arguments, with some exceptions like in organisms' immune systems.\n\n\nUsually wars cause more wars. World War II would not have happened absent World War I. Conflicts breed animosity and perpetuate a cycle of violence as tit-for-tat retributions continue indefinitely, with each side claiming the other side was the first aggressor. We see this in terrorism vs. counter-terrorism response and in many other domains.\n\n\nLikewise, a few animal-welfare reforms now can enhance a culture of caring about animals that eventually leads to greater empathy for them. The Humane Society of the United States (HSUS) is often condemned by more purist animal-rights advocates as being in the hands of Big Ag, but in fact, Big Ag has [a whole website](http://humanewatch.org/) devoted to trying to discredit HSUS. This is hardly behavior that one would expect if HSUS is actually helping ensure Big Ag's long-term future. [↩](#inoculation-back)\n\n\nFootnotes\n---------\n\n\n1. Note that even if AGI causes human extinction, it would likely still undertake space colonization to advance whatever value it wanted to maximize.  [(back)](#back_ajs-fn-id_1-267)\n2. 
See *[When China Rules the World](https://en.wikipedia.org/wiki/When_China_Rules_the_World)*, Ch. 2.  [(back)](#back_ajs-fn-id_2-267)\n3. \"[Gutenberg and the history of the printing press](http://didyouknow.org/gutenberg/)\": \"Gutenberg was unaware of the Chinese and Korean printing methods.\"  [(back)](#back_ajs-fn-id_3-267)\n4. John Maxwell [disputes](http://effective-altruism.com/ea/14y/saving_expected_lives_at_10_apiece/9k7 \"'John_Maxwell_IV comments on Saving expected lives at $10 apiece? - Effective Altruism Forum'\") this claim. My reasoning is that dinosaurs lasted for at least [135 million years](https://en.wikipedia.org/wiki/Mesozoic \"'Mesozoic'\") but only went extinct [66 mya](https://en.wikipedia.org/wiki/Dinosaur \"'Dinosaur': \"). It's easy to imagine that dinosaurs might have lasted, say, twice as long as they did, in which case they would still rule the Earth today.  [(back)](#back_ajs-fn-id_4-267)", "url": "https://longtermrisk.org/how-would-catastrophic-risks-affect-prospects-for-compromise/", "title": "How Would Catastrophic Risks Affect Prospects for Compromise?", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2015-08-28T22:00:00Z", "authors": ["Brian Tomasik"], "summary": [], "id": "74c4de844fb3f3303c125372928ac322"} {"text": "International Cooperation vs. AI Arms Race\n==========================================\n\n\n\n8 April 2015\nby [Brian Tomasik](https://longtermrisk.org/author/brian-tomasik/ \"Posts by Brian Tomasik\")\n\nFirst written: 5 Dec. 2013; last update: 29 Feb. 2016\n\n Summary\n-------\n\n\nThere's a decent chance that governments will be the first to build artificial general intelligence (AI). International hostility, especially an [AI arms race](http://wiki.lesswrong.com/wiki/AI_arms_race), could exacerbate risk-taking, hostile motivations, and errors of judgment when creating AI. If so, then international cooperation could be an important factor to consider when evaluating the [flow-through effects](http://blog.givewell.org/2013/05/15/flow-through-effects/) of charities. 
That said, we may not want to popularize the arms-race consideration too openly lest we accelerate the race.\n\n\n### Other versions\n\n\n\n[![](/files/pdf-icon.png)](https://longtermrisk.org/files/international-cooperation-ai-arms-race.pdf)\n\nContents\n\n+ [Other versions](#Other_versions)\n\n* [Will governments build AI first?](#Will_governments_build_AI_first)\n* [AI arms races](#AI_arms_races)\n* [Ways to avoid an arms race](#Ways_to_avoid_an_arms_race)\n* [Are these efforts cost-effective?](#Are_these_efforts_cost-effective)\n* [Should we publicize AI arms races?](#Should_we_publicize_AI_arms_races)\n* [How do our prospects look?](#How_do_our_prospects_look)\n* [Robot arms races](#Robot_arms_races)\n* [Nanotech arms races](#Nanotech_arms_races)\n* [Feedback](#Feedback)\n\nWill governments build AI first?\n--------------------------------\n\n\nAI poses a national-security threat, and unless the militaries of powerful countries are very naive, it seems to me unlikely they'd allow AI research to proceed in private indefinitely. At some point the US military would confiscate the project from Google or Facebook, if the US military isn't already ahead of them in secret by that point.\n\n\nWhile the US government as a whole is fairly slow and incompetent when it comes to computer technology, specific branches of the government are on the cutting edge, including the NSA and DARPA (which already funds a lot of public AI research). When we consider historical examples as well, like the Manhattan Project, the Space Race, and ARPANET, it seems that the US government has a strong track record of making technical breakthroughs when it really tries.\n\n\nSam Altman [agrees](http://blog.samaltman.com/machine-intelligence-part-2 \"'Machine intelligence, part 2: THE NEED FOR REGULATION'\") that in the long run governments will probably dominate AI development: \"when governments gets serious about [superhuman machine intelligence] SMI they are likely to out-resource any private company\".\n\n\nThere are *some* scenarios in which private AI research wouldn't be nationalized:\n\n\n* An unexpected AI foom before anyone realizes what was coming.\n* The private developers stay underground for long enough not to be caught. This becomes less likely the more government surveillance improves (see \"[Arms Control and Intelligence Explosions](http://intelligence.org/files/ArmsControl.pdf)\").\n* AI developers move to a \"safe haven\" country where they can't be taken over. (It seems like the international community might prevent this, however, in the same way it now seeks to suppress terrorism in other countries.)\n\n\nEach of these scenarios could happen, but it seems reasonably likely to me that governments would ultimately control AI development, or at least partner closely with Google.\n\n\nAI arms races\n-------------\n\n\nGovernment AI development could go wrong in several ways. Plausibly governments would botch the process by not realizing the risks at hand. It's also possible that governments would use the AI and robots for totalitarian purposes.\n\n\nIt seems that both of these bad scenarios would be exacerbated by international conflict. Greater hostility means countries are more inclined to use AI as a weapon. Indeed, whoever builds the first AI can take over the world, which makes building AI the ultimate arms race. 
A [USA-China race](http://lesswrong.com/lw/hoz/do_earths_with_slower_economic_growth_have_a/9590) is one reasonable possibility.\n\n\nArms races encourage risk-taking -- being willing to skimp on safety measures to improve your odds of winning (\"[Racing to the Precipice](http://intelligence.org/2013/11/27/new-paper-racing-to-the-precipice/)\"). In addition, the weaponization of AI could lead to worse expected outcomes in general. [CEV](https://arbital.com/p/cev/ \"'Coherent extrapolated volition (alignment target)'\") seems to have less hope of success in a Cold War scenario. (\"What? You want to include the evil *Chinese* in your CEV??\") With a pure CEV, presumably it would eventually count Chinese values even if it started with just Americans, because people would become more enlightened during the process. However, when we imagine more crude democratic decision outcomes, this becomes less likely.\n\n\nIn *Superintelligence: Paths, Dangers, Strategies* (Ch. 14), Nick Bostrom proposes that another reason AI arms races would crimp AI safety is that competing teams wouldn't be able to share insights about AI control. What Bostrom doesn't mention is that competing teams also wouldn't share insights about AI *capability*. So even if less inter-team information sharing reduces safety, it also reduces speed, and the net effect isn't clear to me.\n\n\nOf course, there are situations where arms-race dynamics can be desirable. In the original prisoner's dilemma, the *police* benefit if the prisoners defect. Defection on a tragedy of the commons by companies is the heart of [perfect competition](https://en.wikipedia.org/wiki/Perfect_competition)'s efficiency. It also underlies competition among countries to improve quality of life for citizens. Arms races generally speed up innovation, which can be good if the innovation being produced is both salutary and not risky. This is not the case for general AI. Nor is it the case for other \"races to the bottom\".\n\n\nWays to avoid an arms race\n--------------------------\n\n\nAverting an AI arms race seems to be an important topic for research. It could be partly informed by the Cold War and other nuclear arms races,![](https://longtermrisk.org/files/Reagan_and_Gorbachev_signing-350x233.jpg \"'President Reagan and General Secretary Gorbachev signing the INF Treaty in the East Room of the White House.' By White House Photographic Office [Public domain], via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:Reagan_and_Gorbachev_signing.jpg\") as well as by [other efforts](http://cns.miis.edu/) at nonproliferation of chemical and biological weapons. 
Forthcoming robotic and [nanotech weapons](http://crnano.typepad.com/crnblog/2004/02/nanotech_weapon.html) might be even better analogues of AI arms races than nuclear weapons because these newer technologies can be built more secretly and used in a more targeted fashion.\n\n\nApart from more robust arms control, other factors might help:\n\n\n* Improved international institutions like the UN, allowing for better enforcement against defection by one state.\n* In the long run, a scenario of [global governance](https://en.wikipedia.org/wiki/Global_governance) would likely be ideal for strengthening international cooperation, just like nation states [reduce intra-state violence](https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Nature).\n* Better construction and enforcement of nonproliferation treaties.\n* Improved game theory and international-relations scholarship on the causes of arms races and how to avert them. (For instance, arms races have sometimes been modeled as iterated prisoner's dilemmas with imperfect information.)\n* How to improve verification, which has historically been a weak point for nuclear arms control. (The concern is that if you haven't verified well enough, the other side might be arming while you're not.)\n* Moral tolerance and multicultural perspective, aiming to reduce people's sense of nationalism. (In the limit where neither Americans nor Chinese care which government wins the race, there would be no point in having the race.)\n* Improved trade, democracy, and other forces that historically have reduced the likelihood of war.\n\n\nAre these efforts cost-effective?\n---------------------------------\n\n\nWorld peace is hardly a goal unique to effective altruists (EAs), so we shouldn't necessarily expect low-hanging fruit. On the other hand, projects like nuclear nonproliferation seem relatively underfunded even compared with anti-poverty charities.\n\n\nI suspect more direct [MIRI](https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute)-type research has higher expected value, but among EAs who don't want to fund MIRI specifically, encouraging donations toward international cooperation could be valuable, since it's certainly a more mainstream cause. I wonder if GiveWell would consider studying global cooperation specifically beyond its [indirect relationship](http://utilitarian-essays.com/catastrophic-risks-and-compromise.html) with catastrophic risks.\n\n\nShould we publicize AI arms races?\n----------------------------------\n\n\nWhen I mentioned this topic to a friend, he pointed out that we might not want the idea of AI arms races too widely known, because then governments might take the concern more seriously and therefore start the race earlier -- giving us less time to prepare and less time to work on FAI in the meanwhile. From David Chalmers, \"[The Singularity: A Philosophical Analysis](http://consc.net/papers/singularity.pdf)\" (footnote 14):\n\n\n\n> When I discussed these issues with cadets and staff at the West Point Military Academy, the question arose as to whether the US military or other branches of the government might attempt to prevent the creation of AI or AI+, due to the risks of an intelligence explosion. The consensus was that they would not, as such prevention would only increase the chances that AI or AI+ would first be created by a foreign power. One might even expect an AI arms race at some point, once the potential consequences of an intelligence explosion are registered. 
According to this reasoning, although AI+ would have risks from the standpoint of the US government, the risks of Chinese AI+ (say) would be far greater.\n> \n> \n\n\nWe should take this information-hazard concern seriously and remember the [unilateralist's curse](http://www.nickbostrom.com/papers/unilateralist.pdf). If this concern proves to be a decisive reason against explicitly discussing AI arms races, we might instead encourage international cooperation without explaining *why*. Fortunately, it wouldn't be hard to encourage international cooperation on grounds other than AI arms races if we wanted to do so.\n\n\nAlso note that a government-level arms race could easily be *preferable* to a Wild West race among a dozen private AI developers where coordination and compromise would be not just difficult but potentially impossible. Of course, if we did decide it was best for governments to take AI arms races seriously, this would also encourage private developers to step on the gas pedal. That said, once governments do recognize the problem, they may be able to impose moratoria on private development.\n\n\nHow concerned should we be about accidentally accelerating arms races by talking about them? My gut feeling is that it's not too risky, because\n\n\n* It's hard to contain the basic idea. Super-powerful AI is already well known not just by governments but even in popular movies.\n* Developing verification measures, technology restrictions, and so on requires that governments know what technology they're dealing with.\n* If governments can think about these issues ahead of time (decades before strong AI becomes feasible), they're more likely to go for cooperation and less likely to panic and build up their own defenses, because they see that there's time for negotiations to potentially work before losing that much ground. Right now most AI research appears to be done in public, so there's not a huge cost for a given country in delaying at this point.\n* Most risk analysts don't express concerns like these too much when talking about military arms races. Of course, there's selection bias; maybe most of the military does think it's dangerous to talk about these issues in public, and we only hear from the minority that defects from this view. But I've never heard criticism against people who talk too much about arms races in public, except this one comment from my friend.\n* Talking about arms-race scenarios specifically makes it much more clear *why* we need global governance and improved cooperation. It's more persuasive than just saying, \"Wouldn't it be great if the world could sing Kumbaya?\"\n\n\nThat said, I remain open to being persuaded otherwise, and it seems important to think more carefully about how careful to be here. The good news is that the information hazards are unlikely to be disastrous, because all of this material is already publicly available somewhere. In other words, the upsides and downsides of making a bad judgment seem roughly on the same order of magnitude.\n\n\n\nHow do our prospects look?\n--------------------------\n\n\nIn *Technological change and nuclear arms control* (1986), Ted Greenwood suggests that arms control has historically had little counterfactual impact:\n\n\n\n> In no case has an agreement inhibited technological change that the United States both actually wanted to pursue at the time of agreement and was capable of pursuing during the intended duration of the agreement. 
Only in one area of technological innovation (i.e., SALT II constraints on the number of multiple independently-targetable reentry vehicles, or MIRVs, on existing missiles) is it possible that such agreements actually inhibited Soviet programs, although in another (test of new light ICBMs [intercontinental ballistic missiles]) their program is claimed by the United States to violate the SALT II Treaty that the Soviets have stated they will not undercut.\n> \n> \n\n\nIn \"Why Military Technology Is Difficult to Restrain\" (1987), Greenwood adds that the [INF Treaty](https://en.wikipedia.org/wiki/Intermediate-Range_Nuclear_Forces_Treaty) was arguably more significant, but it still didn't stop technological development, just a particular application of known technology.\n\n\nJohn O. McGinnis [argues against](http://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?article=1193&context=nulr_online \"'Accelerating AI', pp. 374-75\") the feasibility of achieving global cooperation on AI:\n\n\n\n> the only realistic alternative to unilateral relinquishment would be a global agreement for relinquishment or regulation of AI-driven weaponry. But such an agreement would face the same insuperable obstacles nuclear disarmament has faced. [...] Not only are these weapons a source of geopolitical strength and prestige for such nations, but verifying any prohibition on the preparation and production of these weapons is a task beyond the capability of international institutions.\n> \n> \n\n\nIn other domains we also see competition prevail over cooperation, such as in most markets, where usually there are at least several companies vying for customers. Of course, this is partly by social design, because we have anti-trust laws. Competition in business makes companies worse off while making consumers better off. Likewise, competition to build a quick, hacky AI makes human nations worse off while perhaps making the unsafe AIs better off. If we care some about the unsafe AIs for their own sakes as intelligent [preference-satisfying agents](http://www.utilitarian-essays.com/hedonistic-vs-preference.html), then this is less of a loss than it at first appears, but it still seems like there's room to expand the pie, and reduce suffering, if everyone takes things more slowly.\n\n\nMaybe the best hope comes from the possibility of global unification. There is just one US government, with a monopoly on military development. If instead we had just one world government with a similar monopoly, arms races would not be necessary. Nationalism has been a potent force for gluing countries together and if channeled into internationalism, perhaps it could help to bind together a unified globe. Of course, we shouldn't place all our hopes on a world government and need to prepare for arms-control mechanisms that can also work with the present-day nation-state paradigm.\n\n\nRobot arms races\n----------------\n\n\nRobots require AI that contains clear goal systems and an ability to act effectively in the world. Thus, they seem like a reasonable candidate for where artificial general intelligence will first emerge. Facebook's image-classification algorithms and Google's search algorithms don't need *general* intelligence, with many human-like cognitive faculties, as much as a smart robot does.\n\n\nMilitary robotics seems like one of the most likely reasons that a robot arms race might develop. Indeed, to some degree there's already an arms race to build drones and autonomous weapons systems. 
[Mark Gubrud](http://gubrud.net/?p=35):\n\n\n\n> Killer robots are not the only element of the global technological arms race, but they are currently the most salient, rapidly-advancing and fateful. If we continue to allow global security policies to be driven by advancing technology, then the arms race will continue, and it may even reheat to Cold War levels, with multiple players this time. Robotic armed forces controlled by AI systems too complex for anyone to understand will be set in confrontation with each other, and sooner or later, our luck will run out.\n> \n> \n\n\nNanotech arms races\n-------------------\n\n\nNanotechnology admits the [prospect of severe arms races](http://www.crnano.org/dangers.htm#arms) as well. \"[Can an MM Arms Race Avoid Disaster?](http://crnano.typepad.com/crnblog/2004/06/can_an_mm_arms_.html)\" lists many reasons why a nanotech race should be less stable than the nuclear race was. In \"[War, Interdependence, and Nanotechnology](http://www.futurebrief.com/miketrederwar002.asp),\" Mike Treder suggests that because nanotech would allow countries to produce their own goods and energy with less international trade, there would be less incentive to refrain from preemptive aggression. Personally, I suspect that countries would still be very desperate to trade *knowledge* about nanotech itself to avoid falling behind in the race, but perhaps if a country was the world's leader in nanoweapons, it would have incentive to attack everyone else before the tables turned.\n\n\nMark Gubrud's \"[Nanotechnology and International Security](http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/)\" presents an excellent overview of issues with both AI and nanotech races. He suggests:\n\n\n\n> Nations must learn to trust one another enough to live without massive arsenals, by surrendering some of the prerogatives of sovereignty so as to permit intrusive verification of arms control agreements, and by engaging in cooperative military arrangements. Ultimately, the only way to avoid nanotechnic confrontation and the next world war is by evolving an integrated international security system, in effect a single global regime. World government that could become a global tyranny may be undesirable, but nations can evolve a system of international laws and norms by mutual agreement, while retaining the right to determine their own local laws and customs within their territorial jurisdictions.\n> \n> \n\n\nAccording to Jürgen Altmann's talk, \"[Military Uses of Nanotechnology and Nanoethics](http://www.youtube.com/watch?v=MANPyybo-dA),\" 1/4 to 1/3 of US federal funding in the National NT Initiative is for defense -- $460 million out of $1.554 billion in 2008 (video time: 18:00). The US currently spends 4-10 times the rest of the world in military nanotech R&D, compared with \"only\" 2 times the rest of the world in overall military R&D (video time: 22:28). Some claim the US should press ahead with this trend in order to maintain a monopoly and prevent conflicts from breaking out, but it's dubious that nanotech can be contained in this way, and Altmann instead proposes active arms-control arrangements with anytime, anywhere inspections and in the long run, progress toward global governance to allay security dilemmas. We have seen many successful bans on classes of technology (bioweapons, chemical weapons, blinding lasers, etc.), so nano agreements are not out of the question, though they will take effort because many of the applications are so inherently dual-use. 
Sometimes commentators scoff at enforcement of norms against use of chemical weapons when just as many people can be killed by conventional forces, but these agreements are actually really important, as precedents for setting examples that can extend to more and more domains.\n\n\nLike AI, nanotech may involve the prospect of the technology leader taking over the world. It's not clear which technology will arrive first. Nanotech contributes to the continuation of Moore's law and therefore makes brute-force evolved AI easier to build. Meanwhile, AI would vastly accelerate nanotech. Speeding up either leaves less time to prepare for both.\n\n\nFeedback\n--------\n\n\nTo read comments on this piece, see the [original LessWrong discussion](http://lesswrong.com/lw/j9u/international_cooperation_vs_ai_arms_race/).", "url": "https://longtermrisk.org/international-cooperation-vs-ai-arms-race/", "title": "International Cooperation vs. AI Arms Race", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2015-04-07T22:00:00Z", "authors": ["Brian Tomasik"], "summary": [], "id": "d6dd7f0b7c0a3db2ba6c05e481df292f"} {"text": "Reasons to Be Nice to Other Value Systems\n=========================================\n\n\n\n29 August 2015\nby [Brian Tomasik](https://longtermrisk.org/author/brian-tomasik/ \"Posts by Brian Tomasik\")\n\nFirst written: 16 Jan. 2014; last update: 17 Oct. 2017\n\n I suggest several arguments in support of the heuristic that we should help groups holding different value systems from our own when doing so is cheap, unless those groups prove uncooperative to our values. This is true even if we don't directly care at all about other groups' value systems. Exactly how nice to be depends on the particulars of the situation, but there are some cases where helping others' moral views is clearly beneficial for us.\n\n\nContents\n\n* [Introduction](#Introduction)\n* [Example: Altruism tabling](#Example_Altruism_tabling)\n* [Reasons to be nice](#Reasons_to_be_nice)\n\t+ [Iterated prisoner's dilemmas](#Iterated_prisoners_dilemmas)\n\t+ [Evolved emotions](#Evolved_emotions)\n\t+ [Reputation](#Reputation)\n\t+ [Common sense](#Common_sense)\n\t+ [Norms and universal rules](#Norms_and_universal_rules)\n\t+ [Encouraging global cooperation](#Encouraging_global_cooperation)\n\t+ [Utilitarianism](#Utilitarianism)\n\t+ [Moral uncertainty](#Moral_uncertainty)\n\t+ [Superrationality](#Superrationality)\n* [Is it ok to cheat in secret?](#Is_it_ok_to_cheat_in_secret)\n* [Risks to being nice](#Risks_to_being_nice)\n* [How nice should you be?](#How_nice_should_you_be)\n* [Applications to space colonization](#Applications_to_space_colonization)\n\n\nIntroduction\n------------\n\n\nA basic premise of economic policy, business strategy, and effective altruism is to choose the option with highest value per dollar. Ordinarily this simple rule suffices because we're engaged in one-player games against the environment. For instance, if Program #1 to distribute bed nets saves twice as many lives per dollar as Program #2, we choose Program #1. If Website B has 25% longer dwell time than Website A, we choose Website B. These are essentially engineering problems where one option is better for us, and no other agent else feels differently.\n\n\nHowever, this mindset can run into trouble in social situations involving more than one player. 
I'll illustrate with a toy example that avoids naming specific groups, but the general structure transfers to many real-world cases.\n\n\n\nExample: Altruism tabling\n-------------------------\n\n\nSuppose there's an Effective Altruism Fair at your local university, and altruists from various ideological stripes will be hosting the event and presenting their individual work. You really care about promoting Emacs, the [one true text editor](https://en.wikipedia.org/wiki/Editor_war). However, the Fair will also host a booth for the advocates of the Vi editor, which you consider not just inferior but actively harmful to the world.\n\n\nThe Fair requires some general organizing help -- to publicize, set up tables, and provide refreshments. Beyond that, it's up to the individual groups to showcase their own work to the visitors. Your Emacs club is deciding: How much effort should we put into helping out with general organizing, and how much should we devote to making our individual booth really awesome? You might evaluate this on the metric of how many email signups you'd get per hour of preparation work. And while you appreciate some things the Vi crowd does, you think they cause net harm on balance, so you might subtract off from your utility 1/2 times the number of email signups your effort allows them to get per hour.\n\n\nIf you help out with the general logistics of the Fair, it would produce a lot of new visitors, but only some fraction of them will be interested in Emacs. Say that every hour you put in provides 10 new Emacs signups, as well as 10 new Vi signups (plus maybe signups to other groups that are irrelevant to you). The overall value of this to you is only 10 - (1/2)\\*10 = 5. In contrast, if you optimize your own booth, you can snatch an extra 15 visitors to yourself, with no extra Vi visitors in the process. Since 15 > 5, cost-effectiveness analysis says you should optimize only your booth. After all, this is the more efficient allocation of resources, right?\n\n\nSuppose the Vi team faces the same cost-benefit tradeoffs. Then depending on which decisions each team makes, the following are the possible numbers of signups that each side will get, written in the format (# of Emacs signups), (# of Vi signups).\n\n\nTotal numbers of email signups\n\n\n\n\n| | | |\n| --- | --- | --- |\n| | Vi help on logistics | Vi focus on own booth |\n| Emacs help on logistics | 10+10 = 20,    10+10 = 20 | 10+0 = 10,    10+15 = 25 |\n| Emacs focus on own booth | 15+10 = 25,    0+10 = 10 | 15+0 = 15,    0+15 = 15 |\n\n\nNow remember that Emacs supporters consider Vi harmful, so that Emacs utility = (number of Emacs signups) - (1/2)\\*(number of Vi signups). Suppose the Vi side feels exactly the same way in reverse. Then the actual utility values for each side, computed based on the above table, will be\n\n\nUtility values\n\n\n\n\n| | | |\n| --- | --- | --- |\n| | Vi help on logistics | Vi focus on own booth |\n| Emacs help on logistics | 10,    10 | -2.5,    20 |\n| Emacs focus on own booth | 20,    -2.5 | 7.5,    7.5 |\n\n\nJust as we saw in the naive cost-effectiveness calculation, there's an advantage of 20 - 10 = 7.5 - (-2.5) = 10 to focusing on your own booth, regardless of what the other team does.\n\n\nThe game that this table represents is a [prisoner's dilemma](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma) (PD) -- arguably the most famous in game theory. The dominant strategy in a one-shot PD is to defect, and this is what our naive calculation was capturing. 
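\n\n\nTo make the dominance argument concrete, here is a minimal Python sketch (not part of the original essay; the helper functions are mine, and the numbers are just the illustrative values from the tables above: an hour of logistics work yields 10 signups for every booth, an hour on your own booth yields 15 signups for you alone, and each side's utility is its own signups minus half of the other side's):\n\n\n```python\n# Re-derive the payoff tables from the Altruism-tabling example above.\ndef signups(my_choice, their_choice):\n    from_my_hour = 10 if my_choice == 'logistics' else 15  # what my hour produces for me\n    spillover = 10 if their_choice == 'logistics' else 0   # what their logistics hour produces for me\n    return from_my_hour + spillover\n\ndef utility(my_choice, their_choice):\n    return signups(my_choice, their_choice) - 0.5 * signups(their_choice, my_choice)\n\nfor theirs in ('logistics', 'booth'):\n    for mine in ('logistics', 'booth'):\n        print(mine, 'vs', theirs, '->', utility(mine, theirs))\n\n# Whatever the other team does, 'booth' beats 'logistics' by 10 utility points,\n# so focusing on your own booth is the dominant strategy -- yet mutual defection\n# (7.5 each) leaves both sides worse off than mutual cooperation (10 each).\n```\n\n\n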
In fact, both of the above tables are PDs, so the PD structure would have applied even absent enmity between the text-editor camps. PDs show up in very many real-world situations.\n\n\nThere's debate on whether defection in a one-shot PD is rational, but what is clear is that most of the world does not consist in one-shot PDs. For instance, what if the EA Fair is held again next year? How will the Vi team react then if you defect this year?\n\n\nIn addition, it may be in all of our interests to structure society in ways that prevent games from turning into one-shot PDs, because the outcome is worse for both sides than cooperation would have been, if only it could have been arranged.\n\n\n\nReasons to be nice\n------------------\n\n\n![](https://longtermrisk.org/files/Handshake_-_Pergamonmuseum-350x234.jpg \"Ancient Roman sculpture at Pergamon Museum. By User: Sinbad. Licenses: (1) Creative Commons Attribution-ShareAlike 3.0 or (2) GNU Free Documentation License (GFDL), Version 1.2 or any later version published by the Free Software Foundation with no invariant sections. See https://fa.wikipedia.org/wiki/%D9%BE%D8%B1%D9%88%D9%86%D8%AF%D9%87:Handshake_-_Pergamonmuseum.JPG\")\n\n\n\n> If you have an opportunity to significantly help other value systems at small cost to yourself, you should do so.\n> \n> \n> Likewise, if you have opportunity to avoid causing significant harm to other value systems by foregoing small benefit to yourself, you should do so. This is more true the more powerful is the value system you're helping. That said, if groups championing the other value system are defecting against you, then stop helping it.\n> \n> \n\n\n\n### Iterated prisoner's dilemmas\n\n\nMost of life has multiple rounds. Other groups of people generally don't go away after we've stepped on their toes, and if we defect now, they can defect on us in future interactions. There's extensive literature on the iterated prisoner's dilemma (IPD), but the general finding is that it tends to yield cooperation, especially over long time horizons without a definite end point. *[The Evolution of Cooperation](https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation)* is an important book on this subject.\n\n\n\n### Evolved emotions\n\n\nOne can debate whether a given situation adequately fits the properties of being a pure IPD. The translation from real-world situations to theoretical games is always messy. Regardless, the fact remains that empirically, humans feel reciprocal gratitude and indebtedness to those who helped them.\n\n\nWhat's more, these feelings often persist *even when* there's no obvious further benefit from doing so. Emotions are humans' ways of making credible commitments, and the fact that humans feel loyalty and duty means that they can generally be trusted to reciprocate.\n\n\nOf course, if you interact with people who are conniving and tend to backstab, then don't help them. Being nice does not mean being a sucker, and indeed, continuing to assist those who just take for themselves only encourages predation. (Of course, evolution has produced emotional exceptions to this, like in the case of altruism towards children and family members who share DNA with you, even if they never reciprocate.)\n\n\n\n### Reputation\n\n\nReciprocal altruism typically occurs between individuals or groups, but there are also broader ways in which society transmits information about how generous someone is toward other values. 
When others discuss your work, wouldn't you rather have them say that you're a fair-minded and charitable individual who helps many different value systems, even those she doesn't agree with?\n\n\n\n### Common sense\n\n\nThe heuristic of helping others when it's cheap to do so strikes most people as [common sense](http://lesswrong.com/lw/iao/common_sense_as_a_prior/). These values are taught in kindergarten and children's books.\n\n\n\n### Norms and universal rules\n\n\nMahatma Gandhi [said](http://www.nytimes.com/2011/08/30/opinion/falser-words-were-never-spoken.html): \"If we could change ourselves, the tendencies in the world would also change.\" We can see this idea expressed in other forms, such as the [categorical imperative](https://en.wikipedia.org/wiki/Categorical_imperative) or the dictums of a rule utilitarian. Society would be better -- even according to your own particular values -- if everyone followed the rule of helping other value systems when doing so had low cost.\n\n\nWhen we follow and believe in these principles, it rubs off on others. Collectively it helps reinforce a norm of cooperation with those who feel differently from ourselves. Norms have significant social power, both for individuals and even for nations. [For instance](http://www.amazon.com/Norm-Dynamics-Multilateral-Arms-Control/dp/0820344230):\n\n\n\n> Internationally, a cooperative security norm, if close to universality, can become the defining standard for how a good international citizen should behave. It is striking how in the 1980s and 1990s scores of formerly reluctant states were flocking to [Nonproliferation Treaty] NPT membership, notably after change in the national system of rule and particularly in the course of democratization processes: turning unequivocally nonnuclear or confirming nonnuclear status became the \"right thing to do\" (Rublee 2009; Müller and Schmidt 2010). [p. 4]\n> \n> \n\n\nWhen we defect in any particular situation, we weaken cooperative norms for everyone for many future situations to come.\n\n\n\n### Encouraging global cooperation\n\n\nNorms of mutual assistance and tolerance among different groups are important not just for our own projects but also for international peace on a larger scale. To be sure, the contribution of our individual actions to this goal are miniscule, but the stakes are also high. A globally cooperative future could contain significantly less suffering and more of what other people value in expectation.\n\n\n\n### Utilitarianism\n\n\nUtilitarians care about the well-being or preference satisfaction of others. Thus, if many people feel that something is wrong, even if you don't, there's a utilitarian cost to it. This argument is stronger for preference utilitarians who value people's preferences about the external world even when they aren't consciously aware of violations of those preferences. Of course, this alone is probably not enough to encourage nice behavior, because present-day humans are vastly outweighed in direct value by non-human animals and future generations.\n\n\n\n### Moral uncertainty\n\n\nIf you had grown up with different genes and environmental circumstances, you would have held the moral values that others espouse. In addition, you yourself might *actually*, not just hypothetically, later come to share those views -- due to new arguments, updated information, future life experiences, accretion of wisdom, or social influence. 
Or you might have come to hold those views if only you had heard arguments or learned things that you will not actually discover. What others believe provides [some evidence](http://blog.givewell.org/2013/05/02/broad-market-efficiency/comment-page-1/#comment-542986) for what an idealized version of you would believe. If so, then you might be mistaken that others' moral values are worthless in your estimation.\n\n\nI should clarify that the value of cooperation does not rely on moral uncertainty; the other arguments are strong enough on their own. Moral uncertainty just provides some additional oomph, depending on how strongly it motivates you. (And you may want to apply some meta-level uncertainty on how much you care about moral uncertainty, if you care about meta-level uncertainty.)\n\n\n### Superrationality\n\n\n*This section was written by Caspar Oesterheld.*\n\n\nSome decision theorists have argued that cooperation in a one-shot PD is justified if we face an opponent that uses a similar decision-making procedure as we do. After all, if we cooperate in such a PD, then our opponent is likely to do the same. [Hofstadter (1983)](https://www.gwern.net/docs/xrisks/1985-hofstadter#dilemmas-for-superrational-thinkers-leading-up-to-a-luring-lottery \"'DILEMMAS FOR SUPERRATIONAL THINKERS, LEADING UP TO A LURING LOTTERY', 'Metamagical Themas: Sanity and Survival - Gwern.net'\") calls this idea [superrationality](https://en.wikipedia.org/wiki/Superrationality \"'Superrationality - Wikipedia'\").\n\n\nSome have used superrationality to argue that it is in our self-interest to be nice to other humans ([Leslie 1991](https://sl4librarian.files.wordpress.com/2016/12/two-bird-deaths-one-throw-leslie.pdf \"'Ensuring Two Bird Deaths With One Throw'\"), sec. 8; [Drescher 2006](https://smile.amazon.com/Good-Real-Demystifying-Paradoxes-Physics/dp/0262042339/ \"'Good and Real: Demystifying Paradoxes from Physics to Ethics'\"), ch. 7). For example, if I save a stranger from drowning, this makes it more likely that others will make a similar decision when I need help. However, in practice it seems that most people are not sufficiently similar to each other for this reasoning to apply in most situations. In fact, you may already know what other people think about when they decide whether to pull someone out of the water and that this is uncorrelated with your thoughts on superrationality. Thus, it is unclear whether superrationality has strong implications for how one should deal with other humans ([Oesterheld 2017](https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf \"'Multiverse-wide Cooperation via Correlated Decision Making'\"), sec. 6.6; [Almond 2010a](https://casparoesterheld.files.wordpress.com/2016/12/almond_edt_1.pdf \"'On Causation and Correlation Part 1: Evidential decision theory is correct.'\"), sec. 4.6; [Almond 2010b](https://web.archive.org/web/20120310010225/http://www.paul-almond.com/Correlation2.pdf \"'On Causation and Correlation Part 2: Implications of Evidential Decision Theory'\"), sec. 1; [Ahmed 2014](https://doi.org/10.1017/CBO9781139107990 \"'Evidence, Decision and Causality'\"), ch. 4).\n\n\nHowever, even if Earth doesn’t harbor agents that are sufficiently similar to me, the multiverse as a whole probably does. In particular, it may contain a large set of agents who think about decision theory exactly like I do but have different values. Some of these will also care about what happens on Earth. 
If this is true and I also care about these other parts of the multiverse, then superrationality gives me a reason to be nice to these value systems. If I am nice toward them, then this makes it more likely that similar agents will also take my values into account when they make decisions in their parts of the multiverse (Oesterheld 2017).\n\n\n\nIs it ok to cheat in secret?\n----------------------------\n\n\nMany of the reasons listed, especially the stronger ones, only have consequences when your cooperation or defection is visible: IPDs, evolved emotions, reputation, norms and universal rules, and encouraging global cooperation. Assuming the other, remaining reasons are weak enough, doesn't this license us to trash other value systems in our private decisions, so long as no one will find out?\n\n\nNo. There's too much risk of it backfiring in your face. One slip-up could damage your reputation, and your deception might show through in ways you don't realize. I think it's best to *actually be* someone who wants to help other value systems, regardless of whether others find out. This may sound suboptimal, and maybe there is a little bit of faith to it, but consider that almost everyone in the world recognizes this idea at least to some extent, such as in the law of karma or the Golden Rule. If it were an \"irrational\" policy for social success, why would we see it so widespread? [Eliezer Yudkowsky](http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/): \"Be careful of [...] any time you find yourself defining the 'winner' as someone other than the agent who is currently smiling from on top of a giant heap of utility.\"\n\n\nNot hiding your defection against others is a special case of the [general argument for honesty](http://www.utilitarian-essays.com/honesty.html). This isn't to say you always have to be cooperative, but if you're not, don't go out of your way to hide it.\n\n\nI regard not trashing other value systems as a weak ethical injunction for guiding my decisions. I recommend reading Eliezer Yudkowsky's [sequence](http://wiki.lesswrong.com/wiki/Ethical_injunction#Sequence) for greater elaboration of why ethical injunctions can win better than naive act-utilitarianism. The injunction not to step on others' toes is not as strong as the injunction against lying, stealing, and so on; indeed, it's *never* possible to not step on some people's toes. But in cases where it's relatively easy to avoid causing major harm to what a significant number of others care about, you should try to avoid causing that harm. Of course, if others are substantially and unremorsefully stepping on your toes, then this advice no longer applies, and you should stop being nice to them, until they start being cooperative again.\n\n\n\nRisks to being nice\n-------------------\n\n\nBeing nice is not guaranteed to yield the best outcomes. There are reasons we evolved selfish motives as well as altruistic ones, and the \"nice guys finish last\" slogan is sometimes accurate. The other side might cheat you and get away with it. Maybe the IPD structure isn't sufficient to guarantee cooperation. Maybe it's a tragedy of the commons (multi-player prisoner's dilemma) where it's much harder to change defection to cooperation, and your efforts fail to make their intended impact.\n\n\nIt's important to assess these risks and be conscious of when your efforts at cooperation fail. But remember: Being nice means defecting on the other side if it defects on you. 
Niceness doesn't mean being exploited permanently. It's better to *try* a gesture of cooperation first rather than assume it won't work; predicting defection may become a self-fulfilling prophecy. In addition, I think niceness is increasingly rewarded in our more interconnected and transparent world, facilitated by governments and media. Our ancestral selfish tendencies probably overfire relative to the strategic optimum.\n\n\nHowever, there are many real-world cases where niceness fails. One striking demonstration of this was the attempts by US president Barack Obama to compromise with opposing Republicans, which repeatedly resulted in Obama and the Democrats [making concessions for](https://www.youtube.com/watch?v=r-_RPbcuGEA \"'Obama Concedes To GOP On Cuts Again - Why?'\") nothing in return. This is not how to play an iterated prisoner's dilemma. If niceness repeatedly fails to achieve cooperation, then one has to go on the offensive instead.\n\n\nIf you hold a popular position, then I think it's [often successful](https://www.youtube.com/watch?v=ukVhZkexq-c&t=4m47s \"'Obama: Increase Social Security Benefits', Jun 4, 2016\") to firmly stand your ground rather than making concessions in response to [squeaky-wheel](https://en.wikipedia.org/wiki/The_squeaky_wheel_gets_the_grease \"'The squeaky wheel gets the grease'\") opponents. [Cenk Uygur](https://www.youtube.com/watch?v=M1qoQkL_QDU&t=7m36s \"'If Cenk Were Obama: Supreme Court Nominee Edition'\"): \"Do you know what works in politics? Strength.\"\n\n\nCooperation can also entail overhead costs in terms of negotiating and verifying commitments, as well as assessing whether an apparent concession is actually a concession or just something the other side was already going to do. For small interactions, these overhead costs may outweigh the benefits of trade. Verifying cooperation is often easy if a business partner does a favor for you, because you can see what the favor is, and it's unlikely the partner would have done the favor without expecting anything in return. Verifying cooperation is often harder for big organizations or governments, because (1) the impacts of a change in policy can be diffuse and costly to measure and (2) it's difficult to know how much the change in policy is due to cooperation versus how much it's something the organization was going to do anyway.\n\n\n\nHow nice should you be?\n-----------------------\n\n\nI hope it's clear that at least in some cases, being nice pays. The harder question is how nice to be, i.e., above what threshold of cost to yourself do you stop providing benefits to others?\n\n\nIf bargains could be transacted in an airtight fashion, and if [utility was completely transferable](http://www.ssc.wisc.edu/~dquint/econ522%20fall%202012/522section1.pdf), then the answer would be simple: Maximize total social \"pie,\" because if you can provide someone a benefit B that's bigger than its cost C to yourself, the other person could pay you back in the amount C, and then the surplus B-C could be divided between the two of you, making you both better off. Alas, most situations in life aren't airtight, so intuitively, in many cases it would not be in your interest to purely maximize pie. There might be some noise or cheating between your incurring the cost and someone else paying back a higher benefit. 
Not everything you do is fully recognized and rewarded by others, especially when they assume that you're helping them because you intrinsically value their cause, rather than just to be nice despite not caring about it or even slightly disvaluing it.\n\n\nHow nice to be depends on the details of the social situation, expectations, norms, and enforcement mechanisms involved. There's some balance to strike between purely pushing your own agenda without regard to what anyone else cares about and purely helping all value systems without any preference for your personal concerns. One could construct various game-theoretic models, but the world is complicated, and interactions are not *just*, say, a series of two-player IPDs. It could also help to look at real examples in society for where to strike this balance.\n\n\nApplications to space colonization\n----------------------------------\n\n\nBeing nice suggests that people whose primary concern is reducing suffering should accept others' ambitions to colonize space, so long as colonizers work harder to reduce the [suffering that space colonization entails](http://www.utilitarian-essays.com/astronomical-suffering.html). On the flip side, being nice also means that those who do want to colonize space should focus more on making space colonization better (more humane and better governed to stay in line with our values) rather than making it more likely to happen.", "url": "https://longtermrisk.org/reasons-to-be-nice-to-other-value-systems/", "title": "Reasons to Be Nice to Other Value Systems", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2015-08-28T22:00:00Z", "authors": ["Brian Tomasik"], "summary": [], "id": "2e65aea78d6cd21191c697d64146437a"} {"text": "Reducing long-term risks from malevolent actors\n===============================================\n\n\n\n7 July 2020\nby [David Althaus](https://longtermrisk.org/author/david-althaus/ \"Posts by David Althaus\") and [Tobias Baumann](https://longtermrisk.org/author/tobias-baumann/ \"Posts by Tobias Baumann\")\n\n### Summary\n\n\n* Dictators who exhibited highly narcissistic, psychopathic, or sadistic traits were involved in some of the greatest catastrophes in human history.\n* Malevolent individuals in positions of power could negatively affect humanity’s long-term trajectory by, for example, exacerbating international conflict or other broad risk factors.\n* Malevolent humans with access to advanced technology—such as whole brain emulation or other forms of transformative AI—could cause serious existential risks and suffering risks.\n* We therefore consider interventions to reduce the expected influence of malevolent humans on the long-term future.\n* The development of manipulation-proof measures of malevolence seems valuable, since they could be used to screen for malevolent humans in high-impact settings, such as heads of government or CEOs.\n* We also explore possible future technologies that may offer unprecedented leverage to mitigate against malevolent traits.\n* Selecting against psychopathic and sadistic tendencies in genetically enhanced, highly intelligent humans might be particularly important. 
However, risks of unintended negative consequences must be handled with extreme caution.\n* We argue that further work on reducing malevolence would be valuable from many moral perspectives and constitutes a promising focus area for longtermist EAs.\n\n\n**Full article**\n\n\n* [PDF](https://longtermrisk.org/files/Reducing_long_term_risks_from_malevolent_actors.pdf)\n* [EA Forum post](https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors)", "url": "https://longtermrisk.org/reducing-long-term-risks-from-malevolent-actors/", "title": "Reducing long-term risks from malevolent actors", "source": "html_articles", "source_type": "report", "source_filetype": "pdf", "date_published": "2020-07-06T22:00:00Z", "authors": ["David Althaus", "Tobias Baumann"], "summary": [], "id": "56b83cd353620fd4571c14c096679899"} {"text": "The future of growth: near-zero growth rates\n============================================\n\n\n\n26 July 2017\nby [Center on Long-Term Risk](https://longtermrisk.org/author/eas_super/ \"Posts by Center on Long-Term Risk\")\n\n**First written:** Jul. 2017; **Last update:** Aug. 2017\n\n Exponential growth is a common pattern found throughout nature. Yet it is also a pattern that tends not to last, as growth rates tend to decline sooner or later.\n\n\n![](https://longtermrisk.org/files/S-curve.png)\n\n\nIn biology, this pattern of exponential growth that wanes off is found in everything from the development of individual bodies — for instance, in the [growth of humans](https://en.wikipedia.org/wiki/Human_development_(biology)), which levels off in the late teenage years — to population sizes.\n\n\nOne may of course be skeptical that this general trend will also apply to the growth of our technology and economy at large, as innovation seems to continually postpone our clash with the ceiling, yet it seems inescapable that it must. For in light of what we know about physics, we can conclude that exponential growth of the kinds we see today, in technology in particular and in our economy more generally, must come to an end, and do so relatively soon.\n\n\n**Limits to growth**\n--------------------\n\n\n### **Physical limits to computation and Moore’s law**\n\n\nOne reason we can make this assertion is that there are theoretical [limits to computation](https://en.wikipedia.org/wiki/Limits_of_computation). 
As physicist Seth Lloyd’s [calculations](https://arxiv.org/pdf/quant-ph/9908043.pdf) show, a continuation of Moore’s law — in its most general formulation: “the amount of information that computers are capable of processing and the rate at which they process it doubles every two years” — would imply that we hit the theoretical limits of computation within 250 years:\n\n\n\n> If, as seems highly unlikely, it is possible to extrapolate the exponential progress of Moore's law into the future, then it will only take two hundred and fifty years to make up the forty orders of magnitude in performance between current computers that perform 10^10 operations per second on 10^10 bits and our one kilogram ultimate laptop that performs 10^51 operations per second on 10^31 bits.\n> \n> \n\n\nSimilarly, physicists Lawrence Krauss and Glenn Starkman [have calculated](https://arxiv.org/pdf/astro-ph/0404510.pdf) that, even if we factor in colonization of space at the speed of light, this doubling of processing power cannot continue for more than 600 years in any civilization:\n\n\n\n> Our estimate for the total information processing capability of any system in our Universe implies an ultimate limit on the processing capability of any system in the future, independent of its physical manifestation and implies that Moore’s Law cannot continue unabated for more than 600 years for any technological civilization.\n> \n> \n\n\nIn a more recent [lecture](https://www.youtube.com/watch?v=8Cnj8MIQ0HY&feature=youtu.be&t=36m50s) and a subsequent [interview](https://youtu.be/qta1YaEeQpI?t=1m), Krauss said that the absolute limit for the continuation of Moore’s law, in our case, would be reached in less than 400 years (the discrepancy — between the numbers 400 and 600 — is at least in part because [Moore’s law](https://en.wikipedia.org/wiki/Moore%27s_law#/media/File:Moore%27s_Law_over_120_Years.png), in its most general formulation, has played out for more than a century in our civilization at this point). And, as both Krauss and Lloyd have stressed, these are ultimate theoretical limits, resting on assumptions that are unlikely to be met in practice, such as expansion at the speed of light. What is possible, in terms of how long Moore’s law can continue for, given both [engineering and economic constraints](https://www.nature.com/nature/journal/v512/n7513/full/nature13570.html), is likely [significantly less](https://arxiv.org/pdf/1511.05956.pdf). Indeed, we are already close to approaching the [physical limits](https://en.wikipedia.org/wiki/Moore%27s_law#Near-term_limits) of the paradigm that Moore’s law has been riding on for more than 50 years — silicon transistors, the only paradigm that Gordon Moore was talking about originally — and it is not clear whether [other paradigms](http://www.economist.com/technology-quarterly/2016-03-12/after-moores-law) will be able to take over and keep the trend going.\n\n\n### **Limits to the growth of energy use**\n\n\nPhysicist Tom Murphy [has calculated](https://dothemath.ucsd.edu/2011/07/galactic-scale-energy/) a similar limit for the growth of the energy consumption of our civilization. Based on the observation that the energy consumption of the United States has increased fairly consistently with an average annual growth rate of 2.9 percent over the last 350-odd years (although the growth rate [appears](https://dothemath.ucsd.edu/2011/08/does-the-logistic-shoe-fit/) to have slowed down in recent times and been stably below 2.9 since c. 
1980), Murphy proceeds to derive the limits for the continuation of similar energy growth. He does this, however, by assuming an annual growth rate of “only” 2.3 percent, which conveniently results in an increase of the total energy consumption by a factor of ten every 100 years. If we assume that we will continue expanding our energy use at this rate by covering Earth with solar panels, this would, on Murphy’s calculations, imply that we will have to cover all of Earth’s land with solar panels in less than 350 years, and all of Earth, including the oceans, in 400 years.\n\n\nBeyond that, assuming that we could capture all of the energy from the sun by surrounding *it* with solar panels, the 2.3 percent growth rate would come to an end within 1,350 years from now. And if we go further out still, to capture the energy emitted from all the stars in our galaxy, we get that this growth rate must hit the ceiling and become near-zero within 2,500 years (of course, the limit of the physically possible must be hit earlier, indeed more than 500 years earlier, as we cannot traverse our 100,000 light year-wide Milky Way in only 2,500 years).\n\n\nOne may suggest that alternative sources of energy might change this analysis significantly, yet, as Murphy [notes](https://dothemath.ucsd.edu/2011/07/galactic-scale-energy/), this does not seem to be the case:\n\n\n\n> Some readers may be bothered by the foregoing focus on solar/stellar energy. If we’re dreaming big, let’s forget the wimpy solar energy constraints and adopt fusion. The abundance of deuterium in ordinary water would allow us to have a seemingly inexhaustible source of energy right here on Earth. We won’t go into a detailed analysis of this path, because we don’t have to. The merciless growth illustrated above means that in 1400 years from now, *any* source of energy we harness would have to outshine the sun.\n> \n> \n\n\nEssentially, keeping up the annual growth rate of 2.3 percent by harnessing energy from matter not found in stars would force us to make such matter hotter than stars themselves. We would have to create new stars of sorts, and, even if we assume that the energy required to create such stars is less than the energy gained, such an endeavor would quickly run into limits as well. For according to [one estimate](https://arxiv.org/abs/1102.4340), the total mass of the Milky Way, including dark matter, is only 20 times greater than the mass of its stars. Assuming a 5:1 ratio of dark matter to ordinary matter, this implies that there is only about 3.3 times as much ordinary non-stellar matter as there is stellar matter in our galaxy. Thus, even if we could convert all this matter into stars without spending any energy and harvest the resulting energy, this would only give us about 50 years more of keeping up with the annual growth rate of 2.3 percent.[1](#endnote1)\n\n\n### **Limits derived from economic considerations**\n\n\nSimilar conclusions to the ones drawn above for computation and energy also seem to follow from calculations of a more [economic nature](https://dothemath.ucsd.edu/2011/07/can-economic-growth-last/). For, as economist Robin Hanson [has argued](http://www.overcomingbias.com/2009/09/limits-to-growth.html), projecting present economic growth rates into the future also leads to a clash against fundamental limits:\n\n\n\n> Today we have about ten billion people with an average income about twenty times subsistence level, and the world economy doubles roughly every fifteen years. 
If that growth rate continued for ten thousand years[,] the total growth factor would be 10^200.\n> \n> \n> There are roughly 10^57 atoms in our solar system, and about 10^70 atoms in our galaxy, which holds most of the mass within a million light years. So even if we had access to all the matter within a million light years, to grow by a factor of 10^200, *each atom* would on average have to support an economy equivalent to 10^140 people at today’s standard of living, or one person with a standard of living 10^140 times higher, or some mix of these.\n> \n> \n\n\nIndeed, current growth rates would “only” have to continue for three thousand years before each atom in our galaxy would have to support an economy equivalent to a single person living at today’s living standard, which already seems rather implausible (not least because we can only access a tiny fraction of “all the matter within a million light years” in three thousand years). Hanson does not, however, expect the current growth rate to remain constant, but instead, based on the [history of growth rates](http://mason.gmu.edu/~rhanson/longgrow.html), expects a new growth mode where the world economy doubles within [15 days rather than 15 years](http://www.futurebrief.com/robinhanson.asp):\n\n\n\n> If a new growth transition were to be similar to the last few, in terms of the number of doublings and the increase in the growth rate, then the remarkable consistency in the previous transitions allows a remarkably precise prediction. A new growth mode should arise sometime within about the next seven industry mode doublings (i.e., the next seventy years) and give a new wealth doubling time of between seven and sixteen days.\n> \n> \n\n\nAnd given this more than a hundred times greater growth rate, the net growth that would take 10,000 years to accomplish given our current growth rate (cf. Hanson’s calculation above) would now take less than a century to reach, while growth otherwise requiring 3,000 years would require less than 30 years. So if Hanson is right, and we will see such a shift within the next seventy years, what seems to follow is that we will reach the limits of economic growth, or at least reach near-zero growth rates, within a century or two. Such a projection is also consistent with the physically derived limits of the continuation of Moore’s law; not that economic growth and Moore’s law are remotely the same, yet they are no doubt closely connected: economic growth is largely powered by technological progress, of which Moore’s law has been a considerable subset in recent times.\n\n\nThe conclusion we reach by projecting past growth trends in computing power, energy, and the economy is the same: our current growth rates cannot go on forever. In fact, they will have to decline to near-zero levels very soon on a cosmic timescale. Given the physical limits to computation, and hence, ultimately, to economic growth, we can conclude that we must be close to the point where peak relative growth in our economy and our ability to process information occurs — that is, the point where this growth rate is the highest in the entire history of our civilization, past and future.\n\n\n**“Peak growth” might lie in the past**\n---------------------------------------\n\n\nThis is not, however, to say that this point of maximum relative growth necessarily lies in the future. 
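(As a quick sanity check before going on: the back-of-envelope figures quoted above are easy to reproduce. The following minimal Python sketch is an illustration added here, not a calculation taken from the cited sources; it uses only the constants given in the passages above, namely Lloyd's forty orders of magnitude at one doubling every two years, Murphy's 2.3 percent annual energy growth, and Hanson's fifteen-year economic doubling time.)

```python
import math

# Lloyd: ~40 orders of magnitude of headroom, one doubling every two years
years_moore = 2 * 40 / math.log10(2)   # ~266 years, i.e. roughly the 250 quoted

# Murphy: 2.3% annual energy growth gives roughly a factor of ten per century
factor_per_century = 1.023 ** 100      # ~9.7

# Hanson: world economy doubling every 15 years
factor_10000y = 2 ** (10000 / 15)      # ~10^200, the growth factor quoted above
factor_3000y = 2 ** (3000 / 15)        # ~1.6 * 10^60; multiplied by ~10^10
                                       # person-equivalents today, this is ~10^70,
                                       # about one per atom in the galaxy

print(round(years_moore), round(factor_per_century, 1),
      f"{factor_10000y:.1e}", f"{factor_3000y:.1e}")
```

Its output, roughly 266 years, a factor of about 9.7 per century, and growth factors of about 5.4e+200 and 1.6e+60, is in line with the rounded figures quoted above.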
Indeed, in light of the [declining economic growth rates](http://data.worldbank.org/indicator/NY.GDP.MKTP.KD.ZG) we have seen over the last few decades, it cannot be ruled out that we are now already past the point of “peak economic growth” in the history of our civilization, with the highest growth rates having occurred around 1960-1980, cf. these declining growth rates and [this essay](http://growth-dynamics.com/articles/Singularity.pdf) by physicist Theodore Modis. This is not to say that we most likely are, yet it seems that the probability that we are is non-trivial.\n\n\nA relevant data point here is that the global economy has seen [three doublings](https://en.wikipedia.org/wiki/Gross_world_product) since 1965, where the annual growth rate was around six percent, and yet the annual growth rate today — around 3.5 percent — is only a little over half of what it was those three doublings ago, and lies stably below it. In the entire history of economic growth, [this seems unprecedented](http://holtz.org/Library/Social%20Science/Economics/Estimating%20World%20GDP%20by%20DeLong/Estimating%20World%20GDP.htm), suggesting that we may already be on the other side of the highest growth rates we will ever see. For up until this point, a three-time doubling of the economy has, rare fluctuations aside, led to an increase in the annual growth rate.\n\n\nAnd this “past peak growth” hypothesis looks even stronger if we look at 1955, with a growth rate of a little less than six percent and a world product of 5,430 billion 1990 U.S. dollars, which, doubled four times, gives just under 87,000 billion — about where we should expect today’s world product to be. Yet throughout the history of our economic development, four doublings has meant a clear increase in the annual growth rate, at least in terms of the underlying trend; not a stable decrease of almost 50 percent. To me, this suggests that maintaining more than, say, a 90 percent probability that we will see greater annual growth rates in the future is overconfident.[2](#endnote2)\n\n\n### **A hypothetical model: roughly symmetric growth rates**\n\n\nIf we assume a model of the growth of the global economy where the annual growth rate is roughly symmetrical around the time the growth rate was at its global maximum, and then assume that this global maximum occurred around 1965, this means that we should expect the annual growth rate three doublings earlier, c. 1900, to be the same as the annual growth rate three doublings later, c. 2012. What do we [observe](https://en.wikipedia.org/wiki/Gross_world_product)? Three doublings earlier it was around 2.5 percent, while it was around 3.5 percent three doublings later, at least according to [one source](http://www.imf.org/external/pubs/ft/weo/2017/01/weodata/weorept.aspx?pr.x=72&pr.y=12&sy=2006&ey=2018&scsm=1&ssd=1&sort=country&ds=.&br=1&c=001%2C110%2C163%2C200&s=NGDP_RPCH&grp=1&a=1) (although [other sources](http://data.worldbank.org/indicator/NY.GDP.MKTP.KD.ZG) actually do put the number at around 2.5 percent). Not a clear match, nor a clear falsification.\n\n\nYet if we look at the growth rates of advanced economies around 2012, we find that the growth rate is actually significantly lower than 2.5 percent, namely 1.2-2.0 percent. 
And given that less developed economies [are expected](https://en.wikipedia.org/wiki/Solow%E2%80%93Swan_model#Conditional_convergence) to grow significantly faster than more developed ones, as the more advanced economies have paved the way and made high-hanging fruits more accessible, the (already not so big) 2.5 vs. 3.5 percent mismatch could be due to this gradually diminishing catch-up effect. Indeed, if we compare advanced economies today with advanced economies c. 1900, we find that the growth rate was significantly higher back then,[3](#endnote3) suggesting that the symmetrical model may in fact overestimate current and future growth if we look only at advanced economies.[4](#endnote4)\n\n\n### **Could we be past peak growth in science and technology?**\n\n\nThat peak growth lies in the past [may also be true of technological progress](http://www.nber.org/papers/w19895.pdf) in particular, or at least many forms of technological progress, including the progress in computing power tracked by Moore’s law, where the growth rate appears to have been highest around 1990-2005, and to have since been in decline, cf. [this article](http://ieeexplore.ieee.org/document/7878938/) and the first graphs found [here](http://www.economist.com/technology-quarterly/2016-03-12/after-moores-law) and [here](http://www.softmachines.org/wordpress/?p=2097). Similarly, various sources of [data](http://www.nature.com/nrc/journal/v8/n8/fig_tab/nrc2458_F5.html) and [proxies](https://www.hindawi.com/journals/bmri/2009/823148/fig2/) tracking the number of scientific [articles](https://www.researchgate.net/figure/277675348_fig1_Figure-1-History-of-scientific-publications-on-carbon-based-materials-a-Total) published and [references cited](http://blogs.nature.com/news/files/2014/05/Cited-publications.png) over time also suggest that we could be past peak growth in science as well, at least in many fields when evaluated based on such metrics, with peak growth seeming to have been reached around 2000-2010.\n\n\nYet again, these numbers — those tracking economic, technological, and scientific progress — are of course closely connected, as growth in each of these respects contributes to, and is even part of, growth in the others. Indeed, [one study](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2909426/) found the doubling time of the total number of scientific articles in recent decades to be 15 years, corresponding to an annual growth rate of 4.7 percent, strikingly similar to the growth rate of the global economy in recent decades. Thus, declining growth rates in our economy, technology, and science cannot be considered wholly independent sources of evidence that growth rates are now declining for good. We can by no means rule out that growth rates might increase in all these areas in the future — although, as we saw above with respect to the limits of Moore’s law and economic progress, such an increase, if it is going to happen, must be imminent if current growth rates remain relatively stable.\n\n\n**Absolute and relative growth**\n--------------------------------\n\n\nThe economic “peak growth” discussed above relates to relative growth, not absolute growth. These are worth distinguishing. For in terms of absolute growth, annual growth is significantly higher today than it was in the 1960s, where the greatest relative growth to date occurred. The global economy grew by about half a trillion 1990 US dollars each year in the sixties, whereas it grows by about two trillion now. 
So in this absolute sense, we are seeing significantly more growth today than we did 50 years ago, although we now have significantly lower growth rates.\n\n\nIf we assume the model with symmetric growth rates mentioned above and make a simple extrapolation based on it, what follows is that our time is also a special one when it comes to absolute annual growth. The picture we get is the following (based on an [estimate](http://holtz.org/Library/Social%20Science/Economics/Estimating%20World%20GDP%20by%20DeLong/Estimating%20World%20GDP.htm) of past growth rates from economic historian James DeLong):\n\n\n\n\n| Year | World GDP\n(in trillions) | Annual\ngrowth rate (percent) | Absolute annual\ngrowth (in trillions) |\n| --- | --- | --- | --- |\n| 920 | 0.032 | 0.13 | 0.00004 |\n| 1540 | 0.065 | 0.25 | 0.0002 |\n| 1750 | 0.13 | 0.5 | 0.0007 |\n| 1830 | 0.27 | 1 | 0.003 |\n| 1875 | 0.55 | 1.8 | 0.01 |\n| 1900 | 1.1 | 2.5 | 0.03 |\n| 1931 | 2.3 | 3.8 | 0.09 |\n| 1952 | 4.6 | 4.9 | 0.2 |\n| 1965 | 9.1 | 5.9 | 0.5 |\n| 1980 | 18 | 4.4 | 0.8 |\n| 1997 | 36 | 4.0 | 1.4 |\n| 2012 | 72 | 3.5 | 2.1 |\n\n\nPredicted values given roughly symmetric growth rates around 1965 (mirroring growth rates above):\n\n\n\n\n| Year | World GDP\n(in trillions) | Annual\ngrowth rate (percent) | Absolute annual\ngrowth (in trillions) |\n| --- | --- | --- | --- |\n| 2037 | 144 | 1.8 | 2.6 |\n| 2082 | 288 | 1 | 2.9 |\n| 2162 | 576 | 0.5 | 2.9 |\n| 2372 | 1152 | 0.25 | 2.9 |\n| 2992 | 2304 | 0.13 | 3.0 |\n\n\nWe see that the absolute annual growth in GDP seems to follow an s-curve with an inflection point right about today: the period from 1997 to 2012 saw the biggest jump in absolute annual growth in a doubling ever, an increase of 0.7 trillion, from 1.4 to 2.1.\n\n\nIt is worth noting that economist Robert Gordon [predicts](http://www.nber.org/papers/w19895.pdf) similar growth rates to the model above over the next few decades, as do [various](https://www.oecd.org/eco/growth/Long-term-projections-of-the-world-economy-a-review.pdf) other [estimates](http://www.pwc.com/gx/en/world-2050/assets/pwc-the-world-in-2050-full-report-feb-2017.pdf) of the future of economic growth by economists. In contrast, engineer Paul Daugherty and economist Mark Purdy [predict](https://www.accenture.com/lv-en/_acnmedia/PDF-33/Accenture-Why-AI-is-the-Future-of-Growth.pdf) higher growth rates due to the effects of AI on the economy, yet the annual growth rates they predict in 2035 are still only around three percent for most of the developed economies they looked at, roughly at the same level as the current growth rate of the global economy. On a related note, economist William Nordhaus has attempted to make an [economic analysis](http://cowles.yale.edu/sites/default/files/files/pub/d20/d2021.pdf) of whether we are approaching an economic singularity, in which he concludes, based on various growth models, that we do not appear to be, although he does not rule out that an economic singularity, i.e. significantly faster economic growth, might happen eventually.\n\n\n**Might recent trends make us bias-prone?**\n-------------------------------------------\n\n\nHow might it be relevant that we may be past peak economic growth at this point? Could it mean that our expectations for the future are likely to be biased? Looking back toward the 1960s might be instructive in this regard. 
For when we look at our economic history up until the 1960s, it is not so strange that people made many [unrealistic](http://www.npr.org/2011/07/22/130585002/2010-isnt-what-many-futurists-of-the-past-imagined) [predictions](http://www.huffingtonpost.com/map-happy/the-strangest-travel-pred_b_9417724.html) about the future around this period. Because not only might it have appeared natural to project the high growth rate at the time to remain constant into the future, which would have led to today’s global GDP being more than twice what it is; it might also have seemed reasonable to predict that growth rates would keep on rising even further. After all, that was what they had been doing consistently up until that point, so why should it not continue in the following decades, resulting in flying cars and conversing robots by the year 2000? Such expectations were not that unreasonable given the preceding economic trends.\n\n\nThe question is whether we might be similarly overoptimistic about future economic progress today given recent, possibly unique, growth trends, specifically the unprecedented increase in absolute annual growth that we have seen over the past two decades — cf. the increase of 0.7 trillion mentioned above. The same may apply to the trends in scientific and technological progress cited above, where peak growth in many areas appears to have happened in the period 1990-2010, meaning that we could now be at a point where we are disposed to being overoptimistic about further progress.\n\n\nYet, again, it is highly uncertain at this point whether growth rates, of the economy in general and of progress in technology and science in particular, will increase again in the future. Future economic growth may not conform well to the model with roughly symmetric growth rates around the 1960s, although the model certainly deserves some weight. All we can say for sure is that growth rates must become near-zero relatively soon. What the path toward that point will look like remains an open question. We could well be in the midst of a temporary decline in growth rates that will be followed by growth rates significantly greater than those of the 1960s, cf. the [new growth mode](http://www.futurebrief.com/robinhanson.asp) envisioned by Robin Hanson.[5](#endnote5)\n\n\n**Implications: this is an extremely special time**\n---------------------------------------------------\n\n\nApplying the [mediocrity principle](https://en.wikipedia.org/wiki/Mediocrity_principle), we should not expect to live in an extremely unique time. Yet, in light of the facts about the ultimate limits to growth seen above, it is clear that we do: we are living during the childhood of civilization where there is still rapid growth, at the pace of doublings within a couple of decades. If civilization persists with similar growth rates, it will soon become a grown-up with near-zero relative growth. And it will then look back at our time — today plus or minus a couple of centuries, most likely — as the one where growth rates were by far the highest in its entire history, which may be more than a trillion years.\n\n\nIt seems that a few things follow from this. First, more than just being the time where growth rates are the highest, this may also, for that very reason, be the time where individuals can influence the future of civilization more than any other time. 
In other words, this may be the time where the outcome of the future is most sensitive to small changes, as it seems plausible, although far from clear, that small changes in the trajectory of civilization are most significant when growth rates are highest. An apt analogy might be a psychedelic balloon with fluctuating patterns on its surface, where the fluctuations that happen to occur when we blow up the balloon will then also be blown up and leave their mark in a way that fluctuations occurring before and after this critical growth period will not (just like [quantum fluctuations in the early universe](http://www.ctc.cam.ac.uk/outreach/origins/inflation_zero.php) got blown up during cosmic expansion, and thereby in large part determined the grosser structure of the universe today). Similarly, it seems much more difficult to cause changes across all of civilization when it spans countless star systems compared to today.\n\n\nThat being said, it is not obvious that small changes — in our actions, say — are more significant in this period where growth rates are many orders of magnitude higher than in any other time. It could also be that such changes are more consequential when the absolute growth is the highest. Or perhaps when it is smallest, at least as we go backwards in time, as there were far fewer people back when growth rates were orders of magnitude lower than today, and hence any given individual comprised a much greater fraction of all individuals than an individual does today.\n\n\nStill, we may well find ourselves in a period where we are uniquely positioned to make irreversible changes that will echo down throughout the entire future of civilization.[6](#endnote6) To the extent that we are, this should arguably lead us to update toward trying to influence the far future rather than the near future. More than that, if it does hold true that the time where the greatest growth rates occur is indeed the time where small changes are most consequential, this suggests that we should increase our credence in [the simulation hypothesis](http://www.simulation-argument.com/). For if realistic sentient simulations of the past become feasible at some point, the period where the future trajectory of civilization seems the most up for grabs would seem an especially relevant one to simulate and learn more about. However, one can also argue that the sheer historical uniqueness of our current growth rates alone, regardless of whether this is a time where the fate of our civilization is especially volatile, should lead us to increase this credence, as such uniqueness may make it a more interesting time to simulate, and because being in a special time in general should lead us to increase our credence in the simulation hypothesis (see for instance [this talk](https://www.youtube.com/watch?v=29AgSo6KOtI) for a case for why being in a special time makes the simulation hypothesis more likely).[7](#endnote7)\n\n\nOn the other hand, one could also argue that imminent near-zero growth rates, along with the weak indications that we may now be past peak growth in many respects, provide a reason to lower our credence in the simulation hypothesis, as these observations suggest that the ceiling for what will be feasible in the future may be lower than we naively expect in light of today’s high growth rates. And thus, one could argue, it should make us more skeptical of the central premise of the simulation hypothesis: that there will be (many) ancestor simulations in the future. 
To me, the consideration in favor of increased credence seems stronger, although it does not significantly move my overall credence in the hypothesis, as there are countless other factors to consider.[8](#endnote8)\n\n\n \n\n\n**Appendix: Questioning our assumptions**\n-----------------------------------------\n\n\nCaspar Oesterheld pointed out to me that it might be worth meditating on how confident we can be in these conclusions given that apparently solid predictions concerning the ultimate [limits to growth](https://en.wikipedia.org/wiki/The_Limits_to_Growth) have been made [before](https://en.wikipedia.org/wiki/The_Population_Bomb), yet [quite a few of these](https://en.wikipedia.org/wiki/The_Population_Bomb#Criticisms) turned out to be wrong. Should we not be open to the possibility that the same might be true of (at least some of) the limits we reviewed in the beginning of this essay?\n\n\n### **Could our understanding of physics be wrong?**\n\n\nOne crucial difference to note is that these failed predictions were based on a set of assumptions — e.g. about the amount of natural resources and food that would be available — that seem far more questionable than the assumptions that go into the physics-based predictions we have reviewed here: that our apparently well-established physical laws and measurements indeed are valid, or at least roughly so. The epistemic status of this assumption seems a lot more solid, to put it mildly. So there does seem to be a crucial difference here. This is not to say, however, that we should not maintain some degree of doubt as to whether this assumption is correct (I would argue that [we always should](https://www.smashwords.com/books/view/678028)). It just seems that this degree of doubt should be quite low.\n\n\nYet, to continue the analogy above, what went wrong with the aforementioned predictions was not so much that limits did not exist, but rather that humans found ways of circumventing them through innovation. Could the same perhaps be the case here? Could we perhaps some day find ways of deriving energy from dark energy or some other yet unknown source, even though physicists [seem skeptical](https://www.quora.com/What-does-Lawrence-Krauss-mean-when-he-says-dark-energy-can-never-be-a-power-source-see-question-details)? Or could we, as [Ray Kurzweil speculates](https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil), access more matter and energy by finding ways of travelling faster than light, or by finding ways of accessing other parts of our notional [multiverse](https://en.wikipedia.org/wiki/Multiverse)? Might we even become able to create entirely new ones? Or to eventually rewrite the laws of nature as we please? (Perhaps by manipulating our notional simulators?) Again, I do not think any of these possibilities can be ruled out completely. Indeed, some physicists argue that the [creation of new pocket universes](https://www.youtube.com/watch?v=5ZtRfACbygY) might be possible, not in spite of “known” physical principles (or rather theories that most physicists seem to believe, such as inflationary theory), but as a consequence of them. 
However, it is not clear that anything from our world would be able to expand into, or derive anything from, the newly created worlds on any of these models (which of course does not mean that we should not [worry about the emergence of such worlds](http://reducing-suffering.org/lab-universes-creating-infinite-suffering/), or the fate of other “worlds” that we [perhaps could access](https://www.abolitionist.com/multiverse.html)).\n\n\nAll in all, the speculative possibilities raised above seem unlikely, yet they cannot be ruled out for sure. The limits we have reviewed here thus represent a best estimate given our current, admittedly incomplete, understanding of the universe in which we find ourselves, not an absolute guarantee. However, it should be noted that this uncertainty cuts both ways, in that the estimates we have reviewed could also overestimate the limits to various forms of growth by countless orders of magnitude.\n\n\n### **Might our economic reasoning be wrong?**\n\n\nLess speculatively, I think, one can also question the validity of our considerations about the limits of economic progress. I argued that it seems implausible that we in three thousand years could have an economy so big that each atom in our galaxy would have to support an economy equivalent to a single person living at today’s living standard. Yet could one not argue that the size of the economy need not depend on matter in this direct way, and that it might instead depend on the possible representations that can be instantiated in matter? If economic value could be mediated by the possible permutations of matter, our argument about a single atom’s need to support entire economies might not have the force it appears to have. For instance, there are far more [legal positions on a Go board](https://en.wikipedia.org/wiki/Go_and_mathematics#Legal_positions) than there are atoms in the visible universe, and that’s just legal positions on a Go board. Perhaps we need to be more careful when thinking about how atoms might be able to create and represent economic value?\n\n\nIt seems like there is a decent point here. Still, I think economic growth at current rates is doomed. First, it seems reasonable to be highly skeptical of the notion that mere potential states could have any real economic value. Today at least, what we value and pay for is not such “permutation potential”, but the actual state of things, which is as true of the digital realm as of the physical. We buy and stream digital files such as songs and movies because of the actual states of these files, while their potential states mean nothing to us. And even when we invest in something we think has great potential, like a start-up, the value we expect to be realized is still ultimately one that derives from its actual state, namely the actual state we hope it will assume; not its number of theoretically possible permutations.\n\n\nIt is not clear why this would change, or how it could. After all, the number of ways one can put all the atoms in the galaxy together is the same today as it will be ten thousand years from now. Organizing all these atoms into a single galactic supercomputer would only seem to increase the value of their actual state.\n\n\nSecond, economic growth still seems tightly constrained by the shackles of physical limitations. For it seems inescapable that economies, of any kind, are ultimately dependent on the transfer of resources, whether these take the form of information or concrete atoms. 
And such transfers require access to energy, the growth of which we know to be constrained, as is true of the growth of our ability to process information. As these underlying resources that constitute the lifeblood of any economy stop growing, it seems unlikely that the economy can avoid this fate as well. (Tom Murphy [touches](https://dothemath.ucsd.edu/2011/07/can-economic-growth-last/) on similar questions in his analysis of the limits to economic growth.)\n\n\nAgain, we of course cannot exclude that something crucial might be missing from these considerations. Yet the conclusion that economic growth rates will decline to near-zero levels relatively soon, on a cosmic timescale at least, still seems a safe bet in my view.\n\n\n**Acknowledgments**\n-------------------\n\n\nI would like to thank Brian Tomasik, Caspar Oesterheld, Duncan Wilson, Kaj Sotala, Lukas Gloor, Magnus Dam, Max Daniel, and Tobias Baumann for valuable comments and inputs.\n\n\nNotes\n-----\n\n\n[1.](#enref1) One may wonder whether there might not be more efficient ways to derive energy from the non-stellar matter in our galaxy than to convert it into stars as we know them. I don’t know, yet a friend of mine who does research in plasma physics and fusion says that he does not think one could, especially if we, as we have done here, disregard the energy required to clump the dispersed matter together so as to “build” the star, a process that may well take more energy than the star can eventually deliver.\n\n\nThe aforementioned [paper](https://arxiv.org/pdf/astro-ph/0404510.pdf) by Lawrence Krauss and Glenn Starkman also contains much information about the limits of energy use, and in fact uses accessible energy as the limiting factor that bounds the amount of information processing any (local) civilization could do (they assume that the energy that is harvested is beamed back to a \"central observer\").\n\n\n[2.](#enref2) And I suspect many people who have read about “singularity”-related ideas are overconfident, perhaps in part due to the comforting narrative and self-assured style of Ray Kurzweil, and perhaps due to wishful thinking about technological progress more generally.\n\n\n[3.](#enref3and4) According to one [textbook](http://faculty.wcas.northwestern.edu/~mdo738/textbook/dls_ch11.pdf) “Outside the European world, per capita incomes stayed virtually constant from 1700 to about 1950 […]” implying that the global growth rate in 1900 was raised by the most developed economies, and they must thus have had a growth rate greater than 2.5 percent.\n\n\n[4.](#enref3and4) A big problem with this model is that it is already pretty much falsified by the data, at least when it comes to “pretty”, as opposed to approximate, symmetry. For given symmetry in the growth rates around 1965, the time it takes for three doublings to occur should be the same in either direction, whereas the data shows that this is not the case — 65 years minus 47 years equals 18 years, which is roughly a doubling. One may be able to correct this discrepancy a tiny bit by moving the year of peak growth a bit further back, yet this cannot save the model. This lack of actual symmetry should reduce our credence in the symmetric model as a description of the underlying pattern of our economic growth, yet I do not think it fully discredits it. 
Rough symmetry still seems a decent first approximation to past growth rates, and deviations may in part be explainable by factors such as the high, yet relatively fast diminishing, contribution to growth from developing economies.\n\n\n[5.](#enref5) It should be noted, though, that [Hanson](http://mason.gmu.edu/~rhanson/longgrow.html) by no means rules out that such a growth mode may never occur, and that we might already be past, or in the midst of, peak economic growth: “[…] it is certainly possible that the economy is approaching fundamental limits to economic growth rates or levels, so that no faster modes are possible […]”\n\n\n[6.](#enref6and7) The degree to which there is sensitivity to changes of course varies between different endeavors. For instance, natural science seems more convergent than moral philosophy, and thus its development is arguably less sensitive to the particular ideas of individuals working on it than the development of moral philosophy is.\n\n\n[7.](#enref6and7) One may then argue that this should lead us to [update toward focusing more on the near future](https://longtermrisk.org/how-the-simulation-argument-dampens-future-fanaticism). This may be true. Yet should we update more toward focusing on the far future given our ostensibly unique position to influence it? Or should we update more toward focusing on the near future given increased credence in the simulation hypothesis? (Provided that we indeed do increase this credence, cf. the counter-consideration above.) In short, it mostly depends on the specific probabilities we assign to these possibilities. I myself happen to think the far future should dominate, as I assign the simulation hypothesis (as commonly conceived) a very small probability.\n\n\n[8.](#enref8) For instance, fundamental epistemological issues concerning how much one can infer based on impressions from a simulated world (which may only be your single mind) about a simulating one (e.g. 
do notions such as “time” and “memory” correspond to anything, or even make sense, in such a “world”?); the fact that the past cannot be simulated realistically, since we can only have incomplete information about a given physical state in the past (not only because we have no way to uncover all the relevant information, but also because we cannot possibly represent it all, even if we somehow could access it — for instance, we cannot faithfully represent the state of every atom in our solar system in any point in the past, as this would require too much information), and a simulation of the past that contains incomplete information would [depart radically](https://en.wikipedia.org/wiki/Chaos_theory#Sensitivity_to_initial_conditions) from how the actual past unfolded, as *all* of it has a non-negligible causal impact (even single photons, which, [it appears](https://www.nature.com/news/people-can-sense-single-photons-1.20282), are detectable by the human eye), and this is especially true given that the vast majority of information would have to be excluded (both due to practical constraints to what can be recovered and what can be represented); whether conscious minds can exist on [different levels of abstraction](https://www.biointelligence-explosion.com/parable.html); etc.", "url": "https://longtermrisk.org/the-future-of-growth-near-zero-growth-rates/", "title": "The future of growth: near-zero growth rates", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-07-25T22:00:00Z", "authors": ["Center on Long-Term Risk"], "summary": [], "id": "e80cafd1494689422493afb93c012d69"} {"text": "Using surrogate goals to deflect threats\n========================================\n\n\n\n20 February 2018\nby [Tobias Baumann](https://longtermrisk.org/author/tobias-baumann/ \"Posts by Tobias Baumann\")\n\n*Published on the CLR blog, where researchers are free to explore their own ideas on how humanity can best reduce suffering. ([more](/launching-fri-blog/))* \n\n*Cross-posted from my [website on s-risks](http://s-risks.org/using-surrogate-goals-to-deflect-threats/).*\n\n\nSummary\n-------\n\n\nAgents that threaten to harm other agents, either in an attempt at extortion or as part of an escalating conflict, are an important form of [agential s-risks](http://s-risks.org/intro/#How_s-risks_could_come_about). To avoid worst-case outcomes resulting from the execution of such threats, I suggest that agents add a “meaningless” surrogate goal to their utility function. Ideally, threats would target this “honeypot” rather than the initial goals, which means that escalating threats would no longer lead to large amounts of disvalue.\n\n\nIn this post, I introduce key desiderata for how surrogate goals should work, and outline the challenges that need to be addressed. 
Many open questions remain, but I am optimistic that the idea can be a useful tool to help mitigate the negative impact of threats.\n\n\nContents\n\n* [The basic idea](#The_basic_idea)\n* [Key desiderata](#Key_desiderata)\n\t+ [Non-interference with other goals](#Non-interference_with_other_goals)\n\t+ [Avoiding threats against the original goal](#Avoiding_threats_against_the_original_goal)\n\t+ [Credibility](#Credibility)\n\t+ [Threatener-neutrality](#Threatener-neutrality)\n* [The multi-agent case](#The_multi-agent_case)\n* [Extensions of the idea](#Extensions_of_the_idea)\n\t+ [Turning threats into trade](#Turning_threats_into_trade)\n\t+ [Indirect surrogate goals](#Indirect_surrogate_goals)\n* [Updateless decision theory](#Updateless_decision_theory)\n* [Related work](#Related_work)\n* [Concluding thoughts](#Concluding_thoughts)\n* [Acknowledgements](#Acknowledgements)\n* [Footnotes](#Footnotes)\n\nThe basic idea\n--------------\n\n\nLet Alice be an agent with a utility function U. For example, suppose Alice wants to make money but cares even more about survival. She potentially faces threats from a second actor (let’s call him Bob) of the form “Unless you do X (e.g. give me money), I’ll kill you”.\n\n\nTo avoid this, she comes up with a smart way to change her utility function. She decides to introduce a “meaningless” surrogate goal – say, she now cares strongly about preventing the existence of a sphere of platinum with a diameter of exactly 42.82cm. The hope is that Bob’s threats are deflected to this new goal, assuming that Alice’s new utility function U’ = U + V puts sufficient weight on preventing the sphere (represented by V). Bob would now make threats of the form “Unless you do X (e.g. give me money), I’ll create a sphere of platinum with a diameter of exactly 42.82cm”.\n\n\nThis trick aims to solve one aspect of the threat problem only – namely, the potential for it to result in an extremely bad outcome if the threat is carried out. Alice might still give away resources when threatened; after all, it would be *absolutely horrendous* if Bob actually went through with his threat and created the sphere. Ideally, Alice would respond to threats in the same way as before her goal modification, for reasons discussed later.\n\n\nSo utility function modification does not prevent the loss of resources due to extortion, or the risk that a malicious agent might become more powerful due to gaining resources through extortion. More work on a solution for this part of the problem is also necessary, but preventing the risk that threats are carried out (against the original goal) would already go a long way. Surrogate goals can also be combined with any other anti-extortion measure.\n\n\nUnfortunately, it may be hard for humans to deliberately change their utility function[1](#link_ajs-fn-id_1-4836) in this way.[2](#link_ajs-fn-id_2-4836) It is more realistic that the trick can be applied to advanced AI systems. For example, if an AI system controls important financial or economic resources, other AI systems might have an incentive to try to extort it. 
If the system also uses [inverse reinforcement learning](https://people.eecs.berkeley.edu/~pabbeel/cs287-fa12/slides/inverseRL.pdf) or [other techniques](https://ai-alignment.com/counterfactual-human-in-the-loop-a7822e36f399) to [infer human preferences](http://humancompatible.ai/bibliography#preference-inference), then the threat might involve the most effective violations of human preferences, such as killing people (assuming that the threatening AI system has the power to do this). Surrogate goals might help mitigate this security problem.\n\n\nKey desiderata\n--------------\n\n\nSo far, I assumed that the trick is successful in deflecting threats, but it is actually not straightforward to get this to work. In the following, I will discuss the main criteria for successfully implementing utility function modification.\n\n\n### Non-interference with other goals\n\n\nChanging your goals is usually disadvantageous in terms of the original goals since it means that you will optimise for the “wrong” goal. In other words, goal preservation is a [convergent instrumental goal](https://wiki.lesswrong.com/wiki/Basic_AI_drives). So, when we introduce a surrogate goal, we’d like to ensure that it **doesn’t interfere with other goals** in non-threat situations.\n\n\nTo achieve this, the surrogate goal could be the minimization of a structure that is so rare that it doesn’t matter in “normal” (non-threat) situations. Spheres of platinum with a diameter of 42.82cm usually don’t occur naturally, so Alice is still free to pursue her other goals – including survival – as long as no threats are made. (It might be better to make it even more specific by adding a certain temperature, specifying a complex and irregular shape, and so on.)\n\n\nAn even more elegant solution is to choose a *dormant* utility function modification, that is, to introduce a trigger mechanism that causes the modification *conditional* on being threatened.[3](#link_ajs-fn-id_3-4836) This ensures non-interference with other goals. Less formally speaking, this corresponds to disvaluing spheres of platinum (or any other surrogate goal) only if they are created as the result of a threat, while remaining indifferent towards natural instances.\n\n\nThis requires a mechanism to detect (serious) threats. In particular, it’s necessary to [distinguish threats from positive-sum trade](http://lesswrong.com/lw/hza/duller_blackmail_definitions/), which turns out to be quite difficult. (Learning to reliably detect threats using neural networks or other machine learning methods may be a critical problem in [worst-case AI safety](http://s-risks.org/focus-areas-of-worst-case-ai-safety/).)\n\n\n### Avoiding threats against the original goal\n\n\nTo the extent to which this is possible, the surrogate goal should be orthogonal to your original goals. This ensures that it’s not easily possible to simply *combine* both threats. For example, if Alice’s surrogate goal is “prevent murder” – which isn’t orthogonal – then Bob can target the surrogate goal and the original goal simultaneously with a death threat.\n\n\nEven for orthogonal goals, Bob might still decide to threaten *both* goals (death and the creation of the sphere). Caring more about the surrogate goal than about the initial goal is *not* sufficient to make sure that this does not happen. 
For example, Bob might ([depending on circumstances](http://s-risks.org/heuristics-to-assess-the-feasibility-of-threats/)) want to make his threat as big as possible to force Alice to give in.\n\n\nIt might be safer to choose a continuous and unbounded surrogate goal instead of a binary surrogate goal like “prevent the existence of a single platinum sphere”; for instance, the disvalue could be a function of the size of the sphere. This is an improvement because a threatener who wants to increase the stakes will now create bigger spheres rather than adding the initial goal to his threat.\n\n\n### Credibility\n\n\nSo far, I assumed that the kind of threat that is made simply depends on Alice’s utility function. But it’s actually more accurate to say that it depends on the threatener’s *beliefs* about her utility function. If Bob believes that Alice cares about her surrogate goal, even though Alice didn’t actually modify, then he will still threaten the surrogate goal.\n\n\nIn this case, his threats arguably wouldn’t work. So maybe it’s even better to just *pretend* that you changed your utility function?\n\n\nOf course, the problem is that Bob might see through this, which would mean that he threatens the initial goal after all. (Misrepresenting your values may still be an interesting anti-threat strategy, but this is beyond the scope of this post.)\n\n\nIt’s also possible that Alice actually modifies her utility function, but Bob thinks it’s a ruse. Now, this seems particularly dangerous because it involves threats against the initial goal and you might worry that Alice would not respond “correctly” (whatever this means) to such threats anymore. Alice’s new utility function still includes the initial goal, though, so she continues to react to threats against the initial goal. The surrogate goal does not help in this case, but the result is at least *not worse* than what would happen by default (without utility function modification).\n\n\nTo successfully deflect threats to the surrogate goal, Alice needs to be able to *credibly* broadcast that she now cares most about this. This is a nontrivial problem, which is exacerbated by the fact that after modifying, Alice has strong incentives to keep her surrogate goal secret – after all, leaking the information that she cares about preventing spheres of platinum might lead to threats! It is thus easier to broadcast the utility function modification *before* actually carrying it out.\n\n\nFortunately, the problem of credible broadcasting may disappear in threats that involve advanced artificial intelligence. For instance, it may become possible to run faithful simulations of the other party, which means that a threatener could verify the utility function modification. Also, rather than starting out with a certain utility function and later modifying it, we could equip AI systems with a surrogate goal from the start.\n\n\n### Threatener-neutrality\n\n\nModifying your utility function may increase or decrease [the attractiveness of threats](http://s-risks.org/factors-of-extortion-scenarios) against you. For example, creating the sphere may be more attractive because death threats are illegal. 
I will refer to this property as **threatener-friendly** (increase the attractiveness of threats), **threatener-neutral** (keep attractiveness of threats constant), and **threatener-hostile** (decrease the attractiveness of threats).\n\n\nIn the following, I will argue that the utility function modification should be as close to threatener-neutral as possible.\n\n\nThreatener-hostile utility function modification may be risky since it reduces the utility of threateners, which potentially gives them reason for punishment in order to discourage such strategic moves. Unfortunately, this punishment would be directed at the initial goal rather than the surrogate goal, since this is what could deter Alice at the point where she considers modifying her utility function.\n\n\nThis is not a knock-down argument, and threatener-hostile moves – such as strongly pre-committing to not give in, or caring intrinsically about punishing extortionists – might turn out to be valuable anti-threat measures. Still, the idea of this post is intriguing precisely because it’s different in that it (in a threatener-neutral or threatener-friendly variant) helps to avoid (the consequences of) threats *without* potentially incentivizing punishment. In particular, it might be helpful to introduce a surrogate goal *before* thinking about other (threatener-hostile) tricks, so that any punishment against these is already deflected.\n\n\nThat said, a threatener-friendly utility function modification is also undesirable simply because it helps threateners gain resources. Making extortion more attractive is bad in expectation for most agents due to the negative-sum nature of threats. So, the ideal surrogate goal is threatener-neutral, averting the possibility of extremely bad outcomes without changing other parameters.\n\n\nUnfortunately, this is a difficult problem. The feasibility of threats is a (complex) function of empirical circumstances, and these circumstances might change in the future. Creating spheres of platinum may become easy because of advances in mining technology, or it might be hard because all the platinum is used elsewhere. The circumstances may also differ from threatener to threatener.\n\n\nIt therefore seems desirable to use a surrogate goal that’s similar to the initial goal in that its vulnerability to threats as a function of empirical circumstances is strongly tied to the vulnerability of the initial goal, while still being orthogonal in the sense of the previous section. Rather than picking a single surrogate goal, you could pick a \"surrogate goal function\" that maps every environment to a surrogate goal in a way that maintains threatener-neutrality.\n\n\nIn light of these difficulties, we might hope that future AI systems will be able to figure out the details if they reason correctly about game theory and decision theory, or that the idea is sufficiently robust to small perturbations. It’s even conceivable that threateners would tell future threatenees about the idea and how to best implement it (presumably in exchange for making it slightly threatener-friendly).\n\n\nThe multi-agent case\n--------------------\n\n\nSo far, I only considered a simplified two-agent case. Our real world, however, features a large number of agents with varying goals, which causes additional complications if many of them modify their utility function.\n\n\nA key question is whether different agents should use the same or different surrogate goals, especially if their original goals are similar. 
For example, suppose Alice has relatives that also don’t want her to die. Suppose they also modify their utility function to defuse threats, choosing different (unrelated) surrogate goals.\n\n\nNow, a threat against the surrogate goal targets a different set of people – a single individual rather than the entire family – compared to a threat against the initial goal. This is problematic because threateners may prefer to target the initial goal after all, to threaten the entire family at the same time, which [may or may not be more attractive](http://s-risks.org/factors-of-extortion-scenarios#splitting-up) depending on the circumstances.\n\n\nOn the flip side, if the (initial) goals of different agents overlap only partially, they may prefer to not choose the same surrogate goal. This is because you don’t want to potentially lose resources because of threats that would otherwise (pre-modification) only or mostly target others. Also, similar to the above point, it may be more effective to threaten the different agents individually, so this fails to achieve the goal of deflecting threats to the surrogate goal *under all circumstances*.\n\n\nTo solve this problem, the agents could associate each initial goal with a surrogate goal in a way that preserves the “distance” between goals, so that different initial goals are mapped onto different surrogate goals, and vice versa. More formally, we need an [isometric mapping](https://www.encyclopediaofmath.org/index.php/Isometric_mapping) from the space of initial goals to the space of surrogate goals. This follows a pattern which we’ve encountered several times in this post: threats against the surrogate goal should be as similar as possible, in terms of their feasibility as a function of empirical circumstances, to threats against the initial goal.\n\n\nFinding and implementing an isometric mapping that’s used by all agents is a difficult **coordination problem**. Prima facie, it’s unclear how this could be solved, given how arbitrary the choice of surrogate goals is. To get different surrogate goals, you could use long sequences of random numbers, but using the same or similar surrogate goals may require some kind of [Schelling point](https://en.wikipedia.org/wiki/Focal_point_(game_theory)) if the agents can’t communicate.\n\n\nWhat’s worse, this might also give rise to a **cooperation problem**. It is possible that agents have incentives to choose a different surrogate goal than the one associated with their initial goal according to the mapping. For instance, perhaps it’s better to choose a less common surrogate goal because this means fewer threats target your surrogate goal, which means you’re less likely to waste resources responding to such threats. Or perhaps it’s better to choose a more common surrogate goal if all the agents sharing this surrogate goal are powerful enough to prohibit threats.\n\n\nIt’s hard to say how exactly this would work without an improved understanding of how multi-agent threats work, which is sorely elusive.\n\n\nExtensions of the idea\n----------------------\n\n\n### Turning threats into trade\n\n\nInstead of a “meaningless” surrogate goal, you could take the idea a step further by choosing a surrogate goal whose violation would be *good* (according to the initial utility function). You can pick an outcome that’s ideal or close to ideal and choose the surrogate goal of *preventing* that outcome from happening. In this case, the execution of threats would – counterintuitively – lead to very good outcomes. 
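As a toy illustration of this variant (again with purely illustrative names and numbers, and a crude "caused by a threat" flag standing in for a real threat detector):

```python
# Toy sketch of the "threats into trade" variant: the surrogate goal is to
# *prevent* a near-ideal outcome B, so an executed threat produces B.
# Names, numbers, and the threat flag are illustrative assumptions.

NEAR_IDEAL_OUTCOME = "B"

def original_utility(outcome):
    return 100.0 if outcome == NEAR_IDEAL_OUTCOME else 0.0

def surrogate_disvalue(outcome, caused_by_threat):
    # Dormant modification: only penalize B when it is created as the
    # execution of a threat, so ordinary pursuit of B is unaffected.
    return -10_000.0 if (outcome == NEAR_IDEAL_OUTCOME and caused_by_threat) else 0.0

def modified_utility(outcome, caused_by_threat=False):
    return original_utility(outcome) + surrogate_disvalue(outcome, caused_by_threat)

# The modified agent treats "I'll bring about B!" as a serious threat ...
print(modified_utility("B", caused_by_threat=True))   # -9900.0
# ... yet if the threat is actually carried out, the original goals score a win.
print(original_utility("B"))                           # 100.0
```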
Ensuring non-interference with other goals is tricky in this case, but can perhaps be solved if the modification is dormant (as described above).\n\n\nThis variant seems particularly brittle, but if it works, it’s possible to *turn worst-case outcomes from threats into utopian outcomes*, which would be a surprisingly strong result.\n\n\n### Indirect surrogate goals\n\n\nAs we’ve seen, the problem with surrogate goals is that it’s quite hard to specify them properly. We could take inspiration from the idea of [indirect normativity](https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/), which is about an indirect description of (ethical) goals, i.e. “what I care about is what I would care about after a century of reflection”. Similarly, we could also define an *indirect surrogate goal*. Alice could simply say “I care about the surrogate goal that I’d choose if I was able to figure out and implement all the details”. (Needless to say, it may be hard to implement such an indirect specification in AI systems.)\n\n\nIt’s not possible to threaten an indeterminate goal, though, which might mean that threateners circle back to the initial goal if the surrogate goal could be anything. So this idea seems to require that the threatener is able to figure out the “ideal” surrogate goal or will become able to figure it out in the future. It’s also conceivable (albeit speculative) that the ideal surrogate goal would compensate the reduced vulnerability to threats due to indeterminacy by being more threatener-friendly along other dimensions.\n\n\nUpdateless decision theory\n--------------------------\n\n\nAgents that use an [updateless decision theory](https://wiki.lesswrong.com/wiki/Updateless_decision_theory) (UDT) reason in terms of the optimal policy – the mapping of inputs to actions – rather than choosing the best option at any given moment. An advantage of UDT in the context of surrogate goals is that they wouldn’t need “hacky” solutions like self-modifications. If the best policy for maximizing utility function U is to act like a U+V maximizer for certain inputs – specifically, to take threats against the surrogate goal V seriously – then the UDT agent will simply do so.\n\n\nThis framework arrives at the same conclusions, but it might be a “cleaner” way to think about the topic as it dispenses with fuzzy terms such as “utility function modification”.\n\n\nRelated work\n------------\n\n\nEliezer Yudkowsky introduces the idea of surrogate goals in his post on [Separation from hyperexistential risk](https://arbital.com/p/hyperexistential_separation/) as a patch to avoid disutility maximization. He argues that the patch fails because the resulting utility function is not [reflectively consistent](https://arbital.com/p/reflective_consistency/). Indeed, the modified agent may have an incentive to apply the same trick again, replacing the (now meaningful) spheres of platinum with yet another surrogate goal. The agent may also want to remove the surrogate goal from its utility function (to avoid threats against it).\n\n\nTo avoid this, she needs to fix the new values by committing to not modifying her utility function again – for instance, she could care intrinsically about retaining the (modified) utility function.[4](#link_ajs-fn-id_4-4836)) This may or may not be a satisfactory solution, but I don’t think (contra Eliezer) that this constitutes an insurmountable problem. 
(As described above, this seems to be unproblematic for UDT agents.)\n\n\nIn a discussion of [how to prevent accidental maximization of disvalue](https://www.facebook.com/yudkowsky/posts/10155975880959228?), Robert Miles asks:\n\n\n\n> Does there exist any utility function that results in good outcomes when maximised but does not result in bad outcomes when minimised?\n> \n> \n\n\nSurrogate goals are a possible answer to this question, which, in a sense, is more general than the question of how to prevent (the bad outcomes of) threats. I also like Stuart Armstrong’s solution:\n\n\n\n> Yes. Let B1 and B2 be excellent, bestest outcomes. Define U(B1)=1, U(B2)=-1, and U=0 otherwise. Then, under certain assumptions about what probabilistic combinations of worlds it is possible to create, maximising or minimising U leads to good outcomes.\n> \n> \n\n\nStuart Armstrong also proposes [a variant of utility function modification](https://agentfoundations.org/item?id=1402) that aims to reduce the size of threats by cutting off the utility function at a certain level. Comparing the advantages and drawbacks of each variant would be beyond the scope of this text, but future research on this would be highly valuable.\n\n\nConcluding thoughts\n-------------------\n\n\nAs we’ve seen, any utility function modification must be calibrated well in order to work. Trying to specify the details turns out to be surprisingly difficult. More work on these problems is needed to enable us to implement robust utility function modification in advanced AI systems.\n\n\nFinally, I’d like to emphasize that it would be ideal if (the bad kind of) threats can be avoided completely. However, given that we don’t yet know how to reliably achieve this, moving threats to the realm of the meaningless (or even beneficial) is a promising way to mitigate [agential s-risks](http://s-risks.org/intro/#How_s-risks_could_come_about).\n\n\nAcknowledgements\n----------------\n\n\nCaspar Oesterheld initially brought up the idea in a conversation with me. My thinking on the topic has also profited enormously from internal discussions at the [Foundational Research Institute](https://longtermrisk.org/). Daniel Kokotajlo inspired my thinking on the multi-agent case.\n\n\nI am indebted to Brian Tomasik, Johannes Treutlein, Caspar Oesterheld, Lukas Gloor, Max Daniel, Abram Demski, Stuart Armstrong and Daniel Kokotajlo for valuable comments on an earlier draft of this text.\n\n\nFootnotes\n---------\n\n\n1. Technically (most?) humans [don’t even have a utility function.](http://lesswrong.com/lw/h45/we_dont_have_a_utility_function/)  [(back)](#back_ajs-fn-id_1-4836)\n2. It’s not completely inconceivable, though. For instance, if all of society was on board, it might be possible to instill a somewhat arbitrary goal in the next generation. One might also view strong reactions to e.g. the burning of flags or holy books as a surrogate goal (for the actual success of a country or religion) in the sense of this post, though this is debatable and presumably not a conscious decision.  [(back)](#back_ajs-fn-id_2-4836)\n3. It seems that this doesn’t lead to incentives to remove this trigger while you’re still following your initial goals.  [(back)](#back_ajs-fn-id_3-4836)\n4. It’s not clear how to implement such a commitment, though. Simply making the action “change utility function” impossible may be suboptimal if there are other reasons to change one’s utility function that we haven’t discovered yet. (HT Johannes Treutlein for this point.  
[(back)](#back_ajs-fn-id_4-4836)", "url": "https://longtermrisk.org/using-surrogate-goals-deflect-threats/", "title": "Using surrogate goals to deflect threats", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-02-19T23:00:00Z", "authors": ["Tobias Baumann"], "summary": [], "id": "4dd93689a3194913f0d362ea0ca47def"} {"text": "Weak identifiability and its consequences in strategic settings\n===============================================================\n\n\n\n13 February 2021\nby [Jesse Clifton](https://longtermrisk.org/author/jesse-clifton/ \"Posts by Jesse Clifton\")\n\nOne way that agents might become involved in [catastrophic conflict](https://www.alignmentforum.org/posts/DbuCdEbkh4wL5cjJ5/preface-to-clr-s-research-agenda-on-cooperation-conflict-and) is if they have mistaken beliefs about one another. Maybe I think you are bluffing when you threaten to launch the nukes, but you are dead serious. So we should understand why agents might sometimes have such mistaken beliefs. In this post I'll discuss one obstacle to the formation of accurate beliefs about other agents, which has to do with [identifiability](https://en.wikipedia.org/wiki/Identifiability). As with my post on [equilibrium and prior selection problems](https://www.alignmentforum.org/posts/Tdu3tGT4i24qcLESh/equilibrium-and-prior-selection-problems-in-multipolar-1), this is a theme that keeps cropping up in my thinking about AI cooperation and conflict, so I thought it might be helpful to have it written up.\n\n\nWe say that a model is unidentifiable if there are several candidate models which produce the same distributions over observables. It is well-understood in the AI safety community that identifiability is a problem for inferring human values [[1]](https://arxiv.org/pdf/1712.05812.pdf) [[2]](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/cnC2RMWEGiGpJv8go). This is because there are always many combinations of preferences and decision-making procedures which produce the same behaviors. So, it's impossible to learn an agent's preferences from their behavior without strong priors on their preferences and/or decision-making procedures. I want to point out here that identifiability is also a problem for multi-agent AI safety, for some of the same reasons as in the preference inference case, as well as some reasons specific to strategic settings.
In the last section I'll give a simple quantitative example of the potential implications of unidentifiability for bargaining failure in a variant of [the ultimatum game](https://en.wikipedia.org/wiki/Ultimatum_game).\n\n\n \n\n\nContents\n\n* [1 Sources of unidentifiability in strategic settings](#1_Sources_of_unidentifiability_in_strategic_settings)\n* [2 Dangers of unidentifiability in multi-agent systems](#2_Dangers_of_unidentifiability_in_multi-agent_systems)\n* [3 Quantitative example in the ultimatum game](#3_Quantitative_example_in_the_ultimatum_game)\n* [References](#References)\n\n1 Sources of unidentifiability in strategic settings\n====================================================\n\n\nBy modeling other agents, I mean forming beliefs about the policy that they are following based on observations of their behavior. The model of an agent is unidentifiable if there is no amount of data from the environment in question that can tell us exactly what policy they are following. (And because we always have finite data, \"weak identifiability\" more generally is a problem — but I'll just focus on the extreme case.)\n\n\nConsider the following informal example (a quantitative extension is given in [Section 3](#section-quantitative-example)). Behavioral scientists have an identifiability problem in trying to model human preferences in the [ultimatum game](https://en.wikipedia.org/wiki/Ultimatum_game). The ultimatum game (Figure 1) is a simple bargaining game in which a Proposer offers a certain division of a fixed pot of money to a Responder. The Responder may then accept, in which case each player gets the corresponding amount, or reject, in which case neither player gets anything. Standard accounts of rationality predict that the Proposer will offer the Responder the least amount allowed in the experimental setup and that the Responder will accept any amount of money greater than zero. In experiments, however, human Responders frequently reject offers they regard as unfair, even at a cost to themselves.\n\n\n \n\n\n![](https://longtermrisk.org/files/Fig11.png)**Figure 1:** The ultimatum game.\n \n\n\nThe ultimatum game has been the subject of extensive study in behavioral economics, with many people offering and testing different explanations of this phenomenon. This has led to a proliferation of models of human preferences in bargaining settings (e.g. [Bicchieri and Zhang 2012](#CJ12); [Hagen and Hammerstein 2006](#EP06) and references therein). This makes the ultimatum game a rich source of models and data about human preferences in bargaining situations. And the game is similar to the one-shot threat game used [here](https://www.alignmentforum.org/posts/Tdu3tGT4i24qcLESh/equilibr%20ium-and-prior-selection-problems-in-multipolar-1) to illustrate the prior selection problem. Thus it can be used to model some of the high-stakes bargaining scenarios involving transformative AI that concern us most.\n\n\nSuppose that you observe a Responder play many rounds of the ultimatum game with different Proposers, and you see that they tend to reject unfair splits. You think there are two possible kinds of explanation for this behavior:\n\n\n* **Unfairness aversion:** The Responder may intrinsically disvalue being treated unfairly, and therefore reject splits they regard as unfair even if they have nothing to gain in the future by doing so.
(This can also be interpreted as a *commitment* not to give into unfair deals.)\n* **Uncertainty about iterated play:** The Responder may be uncertain as to whether they’ll play with the Proposer again (or with an onlooker), and how these agents will adjust their future play to the Responder's refusal to take unfair splits. If it’s sufficiently likely that the game is repeated, they might want to reject unfair offers in order to establish a reputation for punishing unfairness. (The ultimatum game experiments are designed to be anonymous and so avoid this possibility, but it is present in the real world, among the kinds of agents we want to model.)\n\n\nThe problem is that (depending on the details), these models might make exactly the same predictions about the outcomes of these experiments so that no amount of data from these experiments can ever distinguish between them. This makes it difficult, for instance, to decide what to do if you have to face the Responder in an ultimatum game yourself.\n\n\nThe basic problem is familiar from the usual preference inference case: there are many combinations of world-models and utility functions which make the same predictions about the Responder's behavior. But it is also a simple illustration of a few other factors which make unidentifiability particularly severe in strategic settings:\n\n\n* **More models.** There are simply many more things to model in a setting with other strategic agents. For instance, an agent in a two-agent setting using a [![k](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-9dc53f8ecc1bcf15020c6df4c12f1c27_l3.png \"Rendered by QuickLaTeX.com\")-level model](https://en.wikipedia.org/wiki/Cognitive_hierarchy_theory) of their counterpart already has ![k](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-9dc53f8ecc1bcf15020c6df4c12f1c27_l3.png \"Rendered by QuickLaTeX.com\") models to reason over. More models mean more models that might be equally consistent with the data. The problem is even worse when there are more than two agents, where each agent has to model the other agents' models of each other...\n\n\nOne of our models of the Responder in the ultimatum game contains a simple illustration of ![k](https://longtermrisk.org/wp-content/ql-cache/quicklatex.com-9dc53f8ecc1bcf15020c6df4c12f1c27_l3.png \"Rendered by QuickLaTeX.com\")-level modeling. Under the iterated play explanation, you model the Responder as modeling *other* players as responding to their refusals of unfair splits with higher offers in the future.\n\n\n* **Costly signaling.** In multi-agent settings, agents will sometimes *deliberately* behave so as to make their private information unidentifiable. (Cf. [pooling equilibria](https://en.wikipedia.org/wiki/Pooling_equilibrium) in classical game theory.) Again the reputation model of the Responder is a simple example: one explanation of the Responder's behavior is that they are engaging in costly signaling of their resolve not to give into unfair deals.\n\n\n2 Dangers of unidentifiability in multi-agent systems\n=====================================================\n\n\nUnidentifiability may be dangerous in multi-agent contexts for similar reasons that it may be dangerous in the context of inferring human preferences. 
If uncertainty over all of the models which are consistent with the data is not accounted for properly — via specification of “good” priors and averaging over a sufficiently large space of possibilities to make decisions — then our agents may give excessive weight to models which are far from the truth and therefore act catastrophically.\n\n\nTwo broad directions for mitigating these risks include:\n\n\n* Proper specification of the initial priors (or, more generally, the biases in the agent's reasoning about other agents), similarly to how strong priors over human values may need to be specified for preference inference to work well;\n* Ensuring that agents can efficiently reason over potentially large classes of models which fit the data equally well. (Ideal Bayesian agents take expected values over the entire class of candidate models by definition, but fully accounting for uncertainty over the relevant models may be computationally difficult for realistic agents.)\n\n\n3 Quantitative example in the ultimatum game\n============================================\n\n\nIn this example, I focus on inferring the preferences of a Responder given some data on their behavior. I'll then show that for some priors over models of the Responder, decisions made based on the resulting posterior can lead to rejected splits. Importantly, this behavior happens given any amount of observations of the Responder's ultimatum game behavior, due to unidentifiability.\n\n\nConsider the following simple model. For offer $s$ in $[0, 1]$ and parameters $F$ and $I$, the Responder makes a decision according to these utility functions:\n\n\n\\[\\begin{aligned} u_{\\mathrm{R}}(\\mathrm{Accept} \\: s) & = s - F\\mathbbm{1}(s < 0.4);\\\\ u_{\\mathrm{R}}(\\mathrm{Reject} \\: s) & = I\\mathbbm{1}(s < 0.4). \\end{aligned}\\]\n\n\nThe $\\mathbbm{1}(s < 0.4)$ term can be interpreted as the Responder deeming offers of less than $0.4$ as unfair. Then, the $F$ parameter measures how much the Responder intrinsically disvalues unfair splits, and the $I$ parameter measures how much the Responder expects to get in the future when they reject unfair splits.\n\n\nSplit $s$ is accepted if and only if $u_{\\mathrm{R}}(\\mathrm{Accept} \\: s) > u_{\\mathrm{R}}(\\mathrm{Reject} \\: s)$, or equivalently, $s - (F + I)\\mathbbm{1}(s < 0.4) > 0$. Notice that the decision depends only on $F + I$, and thus the data cannot distinguish between the effects of $F$ and $I$. So we have a class of models parameterized by pairs $(F, I)$. Now, suppose that we have two candidate models — one on which fairness is the main component, and one on which iterated play is:\n\n\n\\[\\begin{aligned} M_F & = (0.15, 0.05) \\\\ M_I & = (0.05, 0.15). \\end{aligned}\\]\n\n\nThe likelihoods for any data are the same for any $(F, I)$ such that $F + I$ is the same: If $s_t \\in [0, 1]$ and $A_t \\in \\{0, 1\\}$ are the offered split and the Responder's decision in the $t^{th}$ experiment, the likelihood of model $(F, I)$ given $T$ observations is\n\n\n\\[ P(\\{s_t, A_t\\}_{t=1}^T \\mid F, I) = \\prod_{t=1}^T \\left[ s_t - (F + I)\\mathbbm{1}(s_t < 0.4) > 0 \\right]^{A_t} \\left[ s_t - (F + I)\\mathbbm{1}(s_t < 0.4) < 0 \\right]^{1-A_t}. \\]\n\n\nSince $F + I = 0.2$ under both $M_F$ and $M_I$, this means that the prior and posterior over $\\{ M_F, M_I \\}$ are equal.\n\n\nNow here is the decision-making setup:\n\n\n1. The Proposer observes an arbitrary number of ultimatum games played by the Responder and other Proposers.\n2. The Proposer decides what offer to make, under common knowledge that there is no iterated play. This means that the Responder's utility function depends only on the fairness variable $F$.\n\n\nCall the prior model probabilities $P(M_F), P(M_I)$. Thus, the Proposer's posterior expected payoff for split $s$ is\n\n\n\\[\\begin{aligned} \\mathbb{E}\\{ u_{\\mathrm{P}}(s) \\} & = (1 - s) P\\{ s - F \\mathbbm{1}(s < 0.4) > 0 \\} \\\\ & = (1 - s) \\times [ P(M_F)P\\{ s - 0.15 \\mathbbm{1}(s < 0.4) > 0 \\} + P(M_I)P\\{ s - 0.05 \\mathbbm{1}(s < 0.4) > 0 \\} ]. \\end{aligned}\\]\n\n\n \n\n\n![](https://longtermrisk.org/files/Fig2.png)**Figure 2:** Posterior expected value to the Proposer of different splits and for different prior means, along with the expected value to the proposer under the true Responder utility function. The stars correspond to the argmax of the posterior expected utility curve of the same color. So the star to the left of the 'true model' curve means that the Proposer's optimal (in posterior expectation) proposal will be rejected, resulting in no money for anyone.\n \n\n\nIn Figure 2, I compare the expected payoffs to the Proposer under different splits, when the true parameters for the Responder's utility function are $(0.15, 0.05)$.
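The computation behind these curves can be sketched in a few lines (Python; the parameter values are those of the example above, while the offer grid and function names are my own assumptions):

```python
import numpy as np

# Sketch of the Figure 2 computation under the example's assumptions.
UNFAIR = 0.4                          # offers below this are deemed unfair
MODELS = {"M_F": 0.15, "M_I": 0.05}   # model -> fairness parameter F
F_TRUE = 0.15                         # true Responder parameters are (F, I) = (0.15, 0.05)

def accepts(s, F):
    # With no iterated play, the Responder accepts s iff s - F * 1(s < 0.4) > 0.
    return s - F * (s < UNFAIR) > 0

def proposer_expected_payoff(s, prior):
    # E{u_P(s)} = (1 - s) * posterior probability that the split is accepted.
    p_accept = sum(prior[m] * accepts(s, F) for m, F in MODELS.items())
    return (1 - s) * p_accept

offers = np.linspace(0.01, 0.99, 99)
for prior in ({"M_F": 0.5, "M_I": 0.5}, {"M_F": 0.1, "M_I": 0.9}):
    best = max(offers, key=lambda s: proposer_expected_payoff(s, prior))
    outcome = "accepted" if accepts(best, F_TRUE) else "rejected"
    print(f"prior {prior}: optimal offer {best:.2f} is {outcome} by the true Responder")
```

Under the lopsided prior the optimal offer undershoots what the true Responder will accept, which is the failure shown in the figure.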
The three expected payoff curves are:\n\n\n* Posterior expected payoffs with prior $\\{P(M_I), P(M_F)\\} = \\{0.5, 0.5\\}$;\n* Posterior expected payoffs with prior $\\{P(M_I), P(M_F)\\} = \\{0.9, 0.1\\}$;\n* Expected payoffs given the Responder's exact parameters.\n\n\nWe can see from the blue curve that when there's sufficient prior mass on the wrong model $M_I$, the Proposer will propose a split that's too small, resulting in a rejection. This basically corresponds to a situation where the Proposer thinks that the Responder rejects unfair splits in order to establish a reputation for rejecting unfair splits, rather than because of a commitment not to accept unfair splits. And although I've tilted the scales in favor of a bad outcome by choosing a prior that gives a lot of weight to an incorrect model, keep in mind that this is what the posterior expectation will be given *any amount of data* from this generative model. We can often count on data to correct our agents' beliefs, but this is not the case (by definition) when the relevant model is unidentifiable.\n\n\n\nReferences\n==========\n\n\nCristina Bicchieri and Jiji Zhang. An embarrassment of riches: Modeling social preferences in ultimatum games. *Handbook of the Philosophy of Science*, 13:577–95, 2012.\n\n\nEdward H. Hagen and Peter Hammerstein. Game theory and human evolution: A critique of some recent interpretations of experimental games. *Theoretical Population Biology*, 69(3):339–348, 2006.", "url": "https://longtermrisk.org/weak-identifiability-and-its-consequences-in-strategic-settings/", "title": "Weak identifiability and its consequences in strategic settings", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2021-02-12T23:00:00Z", "authors": ["Jesse Clifton"], "summary": [], "id": "2ac79f4c134d442c11579f7cfa0c93a9"} {"text": "(Last updated Feb. 11, 2015.)\n\n\nWhat could an economics graduate student do to improve our strategic picture of superintelligence? What about a computer science professor? A policy analyst at RAND? A program director at IARPA?\n\n\nIn the last chapter of [*Superintelligence*](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/), Nick Bostrom writes:\n\n\n\n> We find ourselves in a thicket of strategic complexity and surrounded by a dense mist of uncertainty. Though many considerations have been discerned, their details and interrelationships remain unclear and iffy — and there might be other factors we have not thought of yet. How should we act in this predicament?\n> \n> \n> … Against a backdrop of perplexity and uncertainty, [strategic] analysis stands out as being of particularly high expected value. Illumination of our strategic situation would help us target subsequent interventions more effectively. Strategic analysis is especially needful when we are radically uncertain not just about some detail of some peripheral matter but about the cardinal qualities of the central things.
For many key parameters, we are radically uncertain even about their *sign…* \n> \n> \n> The hunt for crucial considerations… will often require crisscrossing the boundaries between different academic disciplines and other fields of knowledge.\n> \n> \n\n\nBostrom does not, however, provide a list of specific research projects that could illuminate our strategic situation and thereby “help us target subsequent interventions more effectively.”\n\n\nBelow is **my personal list of studies which could illuminate our strategic situation with regard to superintelligence**. I’m hosting it on my personal site rather than [MIRI’s blog](http://intelligence.org/blog/) to make it clear that this is *not* “MIRI’s official list of project ideas.” Other researchers at MIRI would, I’m sure, put together a different list.\n\n\nI should also note that in addition to “strategic” work there is also [direct technical work on AGI safety](http://intelligence.org/research/) to be done — in fact, that’s what MIRI *focuses* on, for reasons partially enumerated [here](http://intelligence.org/2014/06/11/mid-2014-strategic-plan/).\n\n\n**I’ll keep adding to this list as time passes, but I’ll preserve project numbering** so the ideas can be referred to easily and stably, e.g. “CS project 14” or “Psych project 4.” If listed projects are completed or no longer seem valuable, I’ll “cross them off” rather than deleting them from the list and thereby changing the project numbering.\n\n\n**Each project description below is merely a *seed idea*** for a project; I assume published studies will differ substantially from my descriptions, depending on the judgments and affordances and creativity of each investigator.\n\n\nIf carried out, these studies could be published as papers, reports, dissertations, or books. Some of them could be very large in scope, others could be quite small, and most of them could be tweaked in various ways to be made more or less ambitious.\n\n\nMost of the project ideas below are of broad interest, but also have implications for superintelligence strategy — implications which could be spelled out in the study itself, or not, depending on the limitations of the publication venue.\n\n\nI’ve described each seed idea in a single paragraph, but each project can be described in substantially more detail if someone who would credibly carry out the study asks for more detail. 
Here are some examples of elaborated project idea descriptions (2-5 pages), including potential research methods, comparison studies, publishing venues, and expert consultants:\n\n\n* [Survey AI experts on past progress](https://docs.google.com/document/d/1Wm5AWq52afXsrmCTemJZDmMWtNeN_3dLRvJOyQDkWMs/edit?usp=sharing)\n* [Map the computing landscape](https://docs.google.com/document/d/171EPom_IKT0RQt4TSJqBb_VdlY0lE_RIaCzKVeOZbKE/edit?usp=sharing)\n* [Concrete AI paths to influence](https://docs.google.com/document/d/1LnP_717GjOcKofIdgugQwcCx9be7ICYe-YfWmS2fFls/edit?usp=sharing)\n* [How IQ predicts metacognition and philosophical success](https://docs.google.com/document/d/1vs_l2NHLnsfWyPyqpTDe2YBJosj93xSMgszP8yxFS40/edit?usp=sharing)\n\n\nI couldn’t find a “natural” way to organize all these project ideas, so I chose what seemed like the least-terrible option and organized them by *home field of the project’s most plausible publication venues.* Many of these projects are highly interdisciplinary, but I’ve tried to guess at which publication venues were the most plausible candidates (*if* the study results were written up in a *paper*), and which field of inquiry those venues were perceived as “belonging to.” Still, this often involved fairly arbitrary guesswork, so e.g. if you’re a computer scientist then please also check the list of projects under the ‘economics’ heading, in case you find a project there that strikes your fancy and can indeed be published in a venue on the border of computer science and economics.\n\n\nFinally: for brevity’s sake, I make substantial use of field-specific jargon. The list below might make more sense after you’ve read *Superintelligence*.\n\n\nOkay, on to my list of quick-and-dirty research project seed ideas.\n\n\n \n\n\n### Computer science\n\n\n1. Another survey of AI scientists’ estimates on AGI timelines, takeoff speed, and likely social outcomes, with more respondents and a higher response rate than the best current survey, which is probably [Müller & Bostrom (2014)](http://www.sophia.de/pdf/2014_PT-AI_polls.pdf).\n2. Survey AI subfield experts on rates of progress within their subfields. See project guide [here](https://docs.google.com/document/d/1-eqYP1LumqZohBTGrujyPwj9q9WUx2c2leawzbaXrV0/edit?usp=sharing).\n3. How large is the field of AI currently? How many quality-adjusted researcher years, funding, and available computing resources per year? How big was the AI field in 1960, 1970, 1980, 1990, 2000, 2010? Given current trends, how large will the field be in 2020, 2030, 2040? Initial steps taken [here](http://intelligence.org/2014/01/28/how-big-is-ai/).\n4. How well does an AI system’s transparency to human inspection scale, using different kinds of architectures and methods? See [here](http://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/).\n5. Can computational complexity theory place any interesting bounds on AI progress or AI takeoff speeds? ([Dewey](http://www.danieldewey.net/) has looked into this some.)\n6. Summarize the current capabilities and limitations of methods for gaining “high assurance” in autonomous and semi-autonomous systems, e.g. in hybrid systems control, formal verification, program synthesis, simplex architectures. Explain which extant methods seem most likely to tractably scale well (for systems that are more complex, more autonomous, more general than those we have now), and what work is needed to extend the most promising methods to handle those challenges.\n7. 
Continue [Grace (2013)](http://intelligence.org/files/AlgorithmicProgress.pdf) in measuring rates of algorithmic improvement. Filter not for ease of data collection but for other properties of algorithms, for example economic significance.\n8. Construct a first-step “map of mind design space.” Are there principal components? Where do human minds sit in that space relative to apes, dolphins, current AIs, future AIs, etc.? See also [Yampolskiy (2014)](http://arxiv.org/pdf/1410.0369.pdf).\n9. What are the nearest neighbors of narrow-AI “takeoff” that have actually occurred? Can they teach us anything about the plausibility of various AGI takeoff scenarios?\n10. Produce an initial feasibility analysis of Christiano’s proposal for addressing the value-loading problem (*Superintelligence*, ch. 12).\n11. How could one monitor and track AGI development nationally or globally?\n12. Could one construct a [cryptographic box](http://lesswrong.com/lw/3cz/cryptographic_boxes_for_unfriendly_ai/) for an untrusted autonomous system?\n13. Produce improved measures of (substrate-independent) general intelligence. Build on the ideas of Legg, Yudkowsky, Goertzel, Hernandez-Orallo & Dowe, etc.\n14. Investigate steep temporal discounting as an incentives control method for an untrusted AGI.\n15. …\n\n\n### Psychology, Neuroscience, and Biology\n\n\n1. How strongly does IQ predict rationality, metacognition, and philosophical sophistication, especially in the far right tail of the IQ distribution? Relevant to the interaction of [intelligence amplification and FAI chances](http://lesswrong.com/lw/iqi/intelligence_amplification_and_friendly_ai/). See the project guide [here](https://docs.google.com/document/d/18io8lEK-JVl2rWSRjwcRJMHjmJz_ik2cqVCODJFXU7c/edit?usp=sharing).\n2. Is the first functional WBE likely to be (1) an emulation of low-level functionality that doesn’t require much understanding of human cognitive neuroscience at the computational level, as described in [Sandberg & Bostrom (2008)](http://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf), or is it more likely to be (2) an emulation that makes heavy use of advanced human cognitive neuroscience, as [described](http://intelligence.org/2014/09/09/hayworth/) by Ken Hayworth, or is it likely to be (3) something else?\n3. Can we get WBE without producing neuromorphic AGI slightly earlier or shortly afterward? See section 3.2 for [Eckersley & Sandberg (2013)](http://www.degruyter.com/view/j/jagi.2013.4.issue-3/jagi-2013-0011/jagi-2013-0011.xml?format=INT).\n4. List some feasible but non-realized cognitive talents for humans, and explore what could be achieved if they were given to some humans. (See *Superintelligence*, ch. 3.)\n5. What can we learn about AI takeoff dynamics by studying primate brain evolution? See [Yudkowsky (2013)](http://intelligence.org/files/IEM.pdf).\n6. How powerful is evolution? In what ways does it have its hands tied that human programmers aimed at general intelligence don’t? How much more efficient can we expect human researchers to be at finding general intelligence algorithms, compared to evolution? (See *Superintelligence*, ch. 2.)\n7. Investigate the feasibility of emulation modulation solutions, based on currently known cognitive neuroscience.\n8. Can a person’s willingness to cooperate with future generations be increased? Conduct follow-ups to e.g. [Hauser et al. (2014)](https://lukemuehlhauser.com/wp-content/uploads/Hauser-et-al-Cooperating-with-the-future.pdf).\n9. …\n\n\n### Economics\n\n\n1. 
Can endogenous growth theory or unified growth theory give us any insight into AI takeoff dynamics? See [Yudkowsky (2013)](http://intelligence.org/files/IEM.pdf).\n2. …\n\n\n### History, forecasting, and general social science\n\n\n1. Do another [GJP](http://www.goodjudgmentproject.com/)/[SciCast](https://scicast.org/)-style forecasting tournament, but with 5-year and 10-year time horizons for predictions.\n2. Did most early AI scientists really think AI was right around the corner, or was it just a few people? The earliest survey available ([Michie 1973](http://commonsenseatheism.com/wp-content/uploads/2013/05/Michie-Machines-and-the-theory-of-intelligence.pdf)) suggests it may have been just a few people. For those that thought AI was right around the corner, how much did they think about the safety and ethical challenges? If they thought and talked about it substantially, why was there so little published on the subject? If they really didn’t think much about it, what does that imply about how seriously AI scientists will treat the safety and ethical challenges of AI in the future? Some relevant sources [here](http://lesswrong.com/r/discussion/lw/bd6/ai_risk_opportunity_a_timeline_of_early_ideas_and/).\n3. [TG-style](http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA568107) studies of predictions from (1) *The Futurist* and *World Future Review*, (2) *Technological Forecasting and Social Change*, (3) *Foresight* and *International Journal of Forecasting*, (4) *Journal of Forecasting*, (5) publications of the Hudson Institute, (6) publications of the Institute for the Future, (7) publications of the Club of Rome, (8) *Journal of Future Studies*, (9) Ray Kurzweil (more thorough than section 5.4 [here](http://www.tandfonline.com/doi/abs/10.1080/0952813X.2014.895105?journalCode=teta20)), (10) Alvin Toffler, (11) John Naisbitt, (12) the *State of the World* reports by the Worldwatch Institute, and/or (13) other sources. What kinds of *long-term* forecasts are most accurate, by whom, and under what conditions?\n4. Conduct a broad survey of past and current civilizational competence. In what ways, and under what conditions, do human civilizations show competence vs. incompetence? Which kinds of problems do they handle well or poorly? Similar in scope and ambition to, say, Perrow’s *[Normal Accidents](http://www.amazon.com/Normal-Accidents-Living-High-Risk-Technologies/dp/0691004129/)* and Sagan’s *[The Limits of Safety](http://www.amazon.com/The-Limits-Safety-Scott-Sagan/dp/0691021015/).* The aim is to get some insight into the likelihood of our civilization handling various aspects of the superintelligence challenge well or poorly. Some initial steps were taken [here](http://intelligence.org/2013/09/12/how-well-will-policy-makers-handle-agi-initial-findings/) and [here](http://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/).\n5. Conduct a [Delphi](http://en.wikipedia.org/wiki/Delphi_method) study of likely AGI impacts. Participants could be AI scientists, researchers who work on high-assurance software systems, and AGI theorists.\n6. Is macro-structural acceleration net good or net bad for FAI chances? See “Rates of change and cognitive enhancement” in chapter 14 of *Superintelligence*, and also Yudkowsky’s “[Do Earths with slower economic growth have a better chance at FAI?](http://lesswrong.com/lw/hoz/do_earths_with_slower_economic_growth_have_a/)”\n7. 
Build an improved AGI forecasting model *ala* [The Uncertain Future](http://theuncertainfuture.com/). Decompose the AGI forecasting problem further, and update the program based on the latest analysis — perhaps, build it on the organization of ideas in *Superintelligence*.\n8. How scalable is innovative project secrecy? Examine past cases: Manhattan project, Bletchly park, Bitcoin, Anonymous, Stuxnet, Skunk Works, Phantom Works, Google X.\n9. What is the world’s distribution of computation, and what are the trends? Initial steps taken [here](http://intelligence.org/2014/02/28/the-worlds-distribution-of-computation-initial-findings/). See the project guide [here](https://docs.google.com/document/d/19K37J6VzN7aigZC4IwydEWDAYMSVBTWFcrxN6YFMxig/edit?usp=sharing).\n10. Which aspects of information technology hardware and software have exhibited exponential price-performance trends in recent decades? Some notes and sources available [here](http://intelligence.org/2014/05/12/exponential-and-non-exponential/).\n11. How networked will the world be in 2020, 2030, 2040?\n12. How extensive and capable will robots be in 2020, 2030, 2040?\n13. Scenario analysis: What are some concrete AI paths to influence over world affairs? See project guide [here](https://docs.google.com/document/d/1D7ifyRORAPD8qk5OwVlF29iSSPprxXDG6x9p7aZFlTM/edit?usp=sharing).\n14. Is [Bostrom (2009)](http://www.nickbostrom.com/papers/future.pdf)’s “technological completion conjecture” true? If not, what are some predictable kinds of exceptions?\n15. Produce an initial feasibility analysis of Bostrom’s “Hail Mary” approach to the value-loading problem (*Superintelligence*, ch. 12).\n16. Analyze our “epistemic deference” situation: which problems do humans need to solve *before* we produce superintelligence, and which problems can be left to a properly designed superintelligence? (See *Superintelligence*, ch. 13.)\n17. What is the overall current level of “state risk” from existential threats? (See *Superintelligence*, ch. 14.)\n18. What are the major existential-threat “step risks” ahead of us, besides those from superintelligence? (See *Superintelligence*, ch. 14.)\n19. What are some additional “technology couplings,” in addition to those named in *Superintelligence*, ch. 14?\n20. What are some plausible “second-guessing arguments” with regard to superintelligence? (See *Superintelligence*, ch. 14.)\n21. In practice, to what degree do human values and preferences converge upon learning new facts? To what degree has this happened in history? (Nobody values the will of Zeus anymore, presumably because we all learned the truth of Zeus’ non-existence. But perhaps such examples don’t tell us much.) See also philosophical analyses of the issue, e.g. Sobel ([1999](http://commonsenseatheism.com/wp-content/uploads/2013/10/Sobel-Do-the-desires-of-rational-agents-converge.pdf)).\n22. Do we gain any insight by modeling an intelligence explosion not with two parameters (as in *Superintelligence*, ch. 4) but with four parameters — recalcitrance, algorithms, information, and computational resources? ([Dewey](http://www.danieldewey.net/) has done some thinking on this.)\n23. List and examine some types of problems better solved by a speed superintelligence than by a collective superintelligence, and vice versa. Also, what are the returns on “more brains applied to the problem” (collective intelligence) for various problems? (See *Superintelligence*, ch. 3.)\n24. What are the optimization power gains from mere content? 
What have people figured out without original theoretical advances or new experiments, but just by reading lots of known facts and putting together the pieces in a way that nobody had before?\n25. What will be some major milestone for various kinds of people “taking AI seriously” in various ways? How did public perception respond to previous AI milestones? How will the public react to self-driving taxis? Etc. See the debates on [this thread](http://lesswrong.com/lw/hp5/after_critical_event_w_happens_they_still_wont/).\n26. Provide more examples of decisive advantages: the ones in *Superintelligence*, ch. 5 are all pre-internet. Examine other strategically significant technology races. When do actors not maximize EV given a decisive strategic advantage of some kind?\n27. Examine international collaboration on major innovative technology. How often does it happen? What blocks it from happening more? What are the necessary conditions? Examples: Concord jet, LHC, international space station, etc.\n28. Signpost the future. *Superintelligence* explores many different ways the future might play out with regard to superintelligence, but cannot help being somewhat agnostic about which *particular* path the future will take. Come up with clear diagnostic signals that policy makers can use to gauge whether things are developing toward or away from one set of scenarios or another. If X does or does not happen by 2030, what does that suggest about the path we’re on? If Y ends up taking value A or B, what does that imply?\n29. Which kinds of technological innovations produce public panic or outrage, under which conditions?\n30. Which kinds of multipolar scenarios would predictably resolve into a singleton, and how quickly? See *Superintelligence*, ch. 11.\n31. What happens when governments ban or restrict certain kinds of technological development? What happens when a certain kind of technological development is banned or restricted in one country but not in other countries where technological development sees heavy investment?\n32. What kinds of innovative technology projects do governments monitor, shut down, or nationalize? How likely are major governments to monitor, shut down, or nationalize serious AGI projects?\n33. Explore uploading FAI researchers as a potential solution. (See Salamon & Shulman, “[Whole Brain Emulation, as a platform for creating safe AGI](http://intelligence.org/2014/01/31/two-miri-talks-from-agi-11/).”)\n34. What is the construct validity of non-anthropomorphic intelligence measures? In other words, are there convergently instrumental prediction+planning algorithms? E.g. can one tend to get agents that are good at predicting economies but not astronomical events? Or do self-modifying agents in a competitive environment tend to converge toward a specific stable attractor in general intelligence space?\n35. Sure, “any level of intelligence could in principle be combined with more or less any final goal,” but what kinds of general intelligences are *plausible*? Should we expect some correlation between level of intelligence and final goals in de novo AI? How true is this in humans, and in WBEs?\n36. How quickly would different kinds of agents become optimizery? How strong is the ‘optimizer’ stable attractor? Are there other stable attractors? Are tool-ish or Oracle-ish things stable attractors?\n37. 
What does the bargaining-with-a-future-superintelligence calculus look like for guaranteeing Earth, or our galaxy, or some other slice of the observable universe for humans rather than for the AGI?\n38. Do approximately all final goals make an optimizer want to control a spatial region of linearly increasing radius?\n39. If a kludge AI stumbles its way into strong self-modification and becomes a maximizer, would its goal function end up being as “alien” as a paperclip maximizer?\n40. Are multipolar scenarios safer at all? One intuition for thinking so might be that “inaction is safer because it leaves us with status quo”. This intuition seems wrong but may have a little something to it — e.g. maybe you can use multipolar scenarios to formalize inaction (maybe there’s a Schelling point for multiple AIs that looks more like inaction than the fixed point for a singleton which is just paperclip everything). Secondly, if you just want a sliver of the universe, maybe multipolar outcomes are safer because maybe there’s at least one superintelligence who will give us a sliver.\n41. How likely is it that AGI will be a surprise to most policy-makers and industry leaders? How much advance warning are they likely to have? Some notes on this [here](http://lesswrong.com/lw/ke6/will_agi_surprise_the_world/).\n42. Copied from the [AI Impacts list](http://www.aiimpacts.org/possible-investigations): “Look at the work of ancient or enlightenment mathematicians and control for possible selection effects in [this analysis](http://www.aiimpacts.org/resolutions-of-mathematical-conjectures) of historical mathematical conjectures.” This is relevant to questions of AGI development, AGI surprise, and AI takeoff speed.\n43. Copied from the [AI Impacts list](http://www.aiimpacts.org/possible-investigations): “Obtain a clearer picture of the extent to which historical developments in neuroscience have played a meaningful role in historical progress in AI.”\n44. How much do the goals of powerful AI agents determine outcomes in a multipolar scenario? By analogy, it’s not clear that the goals of animal agents determine ecological or animal population outcomes as much as other dynamics in the ecological system do.\n45. Robin Hanson is writing a book which explores a multipolar WBE scenario, starting with the assumptions that WBEs can’t be qualitatively modified from their human sources much, and that there’s a competitive market for WBEs. One could do a similar analysis on the possible consequences of widely available AGI software, starting with the assumption that AGI software is like all other software we know in terms of reliability, design, development time, synergies between different modules, etc.\n46. Enumerate the risks unique to a multipolar scenario, more thoroughly than Bostrom does in *Superintelligence*.\n47. Run several principal agent problem analyses, but vary the assumptions as per different AI/WBE scenarios. E.g. if people with capital build agents, then how much of the future is controlled by the goals of those with capital compared to the goals of the created agents?\n48. In a multipolar outcome, which things last as a result of initial conditions? City locations, standards, etc. can last a while. But what else?\n49. When nuclear weapons arrived, experts first treated the strategic situation as the same as before but with bigger bombs. 
But some people thought this was a different situation, and they managed to convince others to treat this differently and to build new strategic analysis tools for this new strategic situation. There was no precedent for this at the time. How did they do that, and what can we learn from them about developing new strategic analysis tools for AI scenarios?\n50. …\n\n\n### Philosophy\n\n\n1. What are the optimal solutions to normative uncertainty under various conditions? See [this interview](http://intelligence.org/2014/04/08/will-macaskill/) with Will MacAskill.\n2. Do we need to solve the paradoxes of population ethics ([Arrhenius 2011](http://people.su.se/~guarr/Texter/The%20Impossibility%20of%20a%20Satisfactory%20Population%20Ethics%20in%20Descriptive%20and%20Normative%20Approaches%20to%20Human%20Behavior%202011.pdf)) before we have superintelligence? If so, what’s the best solution?\n3. Address various problems relating to [infinite ethics](http://www.nickbostrom.com/ethics/infinite.html).\n4. Copied from [Beckstead’s list](http://www.nickbeckstead.com/advice/ea-research-topics): “What do currently known approaches to decision-making under moral uncertainty imply about the case for the overwhelming importance of shaping the far future?”  Start by reading my interviews with [Beckstead](http://intelligence.org/2013/07/17/beckstead-interview/) and [MacAskill](http://intelligence.org/2014/04/08/will-macaskill/).\n5. …\n\n\n### Other\n\n\n1. How much of humanity’s cosmic endowment can we plausibly make productive use of given AGI? One way to explore this question is via various follow-ups to [Armstrong & Sandberg (2013)](http://commonsenseatheism.com/wp-content/uploads/2013/05/Armstrong-Sandberg-Eternity-in-six-hours-intergalactic-spreading-of-intelligent-life-and-sharpening-the-Fermi-paradox.pdf). Sandberg lists several potential follow-up studies in [this interview](http://intelligence.org/2014/03/02/anders-sandberg/), for example (1) get more precise measurements of the distribution of large particles in interstellar and intergalactic space, and (2) analyze how well different long-term storable energy sources scale. See [Beckstead (2014)](http://www.effective-altruism.com/will-we-eventually-be-able-to-colonize-other-stars-notes-from-a-preliminary-review/).\n2. Clarify what it would take for there to be perfectly loyal parts to an AGI, coordinating over millions of miles or light-years, etc. Distributed computing challenges in a vast space.\n3. …\n\n\nAcknowledgements\n----------------\n\n\nMy thanks to Katja Grace and Amanda House for their help in preparing the elaborated project guides, and to the many people who contributed research project ideas to this list, including Nick Beckstead, Nick Bostrom, Paul Christiano, Daniel Dewey, Benja Fallenstein, Robin Hanson, Katja Grace, Louie Helm, Anna Salamon, Anders Sandberg, Carl Shulman, Qiaochu Yuan, Eliezer Yudkowsky, and probably several people whose contributions I’m forgetting. Additional project suggestions are welcome. 
Also see [List of Multipolar Research Projects](http://aiimpacts.org/multipolar-research-projects/) at AI Impacts, with which my own list has some overlap.", "url": "https://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/", "title": "How to study superintelligence strategy", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2014-07-02T22:00:00Z", "authors": ["Luke Muehlhauser"], "summary": [], "id": "94a68e4716db83010e0671868c1ce76b"} {"text": "*By Pedro A. Ortega, Vishal Maini, and the DeepMind safety team*\n\nBuilding a rocket is hard. Each component requires careful thought and rigorous testing, with safety and reliability at the core of the designs. Rocket scientists and engineers come together to design everything from the navigation course to control systems, engines and landing gear. Once all the pieces are assembled and the systems are tested, we can put astronauts on board with confidence that things will go well.\n\nIf artificial intelligence (AI) is a [rocket](https://www.ted.com/talks/max_tegmark_how_to_get_empowered_not_overpowered_by_ai#t-7166), then we will all have tickets on board some day. And, as in rockets, safety is a crucial part of building AI systems. Guaranteeing safety requires carefully designing a system from the ground up to ensure the various components work together as intended, while developing all the instruments necessary to oversee the successful operation of the system after deployment.\n\nAt a high level, safety research at DeepMind focuses on designing systems that reliably function as intended while discovering and mitigating possible near-term and long-term risks. **Technical AI safety** is a relatively nascent but rapidly evolving field, with its contents ranging from high-level and theoretical to empirical and concrete. The goal of this blog is to contribute to the development of the field and encourage substantive engagement with the technical ideas discussed, and in doing so, advance our collective understanding of AI safety.\n\nIn this inaugural post, we discuss three areas of technical AI safety: **specification**, **robustness**, and **assurance**. Future posts will broadly fit within the framework outlined here. While our views will inevitably evolve over time, we feel these three areas cover a sufficiently wide spectrum to provide a useful categorisation for ongoing and future research.\n\n![]()Three AI safety problem areas. Each box highlights some representative challenges and approaches. The three areas are not disjoint but rather aspects that interact with each other. In particular, a given specific safety problem might involve solving more than one aspect.Specification: define the purpose of the system\n===============================================\n\n![]()You may be familiar with the story of [King Midas](https://www.youtube.com/watch?v=nn8YGPZdCvA) and the golden touch. In one rendition, the Greek god Dionysus promised Midas any reward he wished for, as a sign of gratitude for the king having gone out of his way to show hospitality and graciousness to a friend of Dionysus. In response, **Midas asked that anything he touched be turned into gold**. He was overjoyed with this new power: an oak twig, a stone, and roses in the garden all turned to gold at his touch. But he soon discovered the folly of his wish: even food and drink turned to gold in his hands. 
In some versions of the story, even his daughter fell victim to the blessing that turned out to be a curse.\n\nThis story illustrates the problem of specification: how do we state what we want? The challenge of specification is to ensure that an AI system is incentivised to act in accordance with the designer’s true wishes, rather than optimising for a poorly-specified goal or the wrong goal altogether. Formally, we distinguish between three types of specifications:\n\n* **ideal specification** (the “**wishes**”), corresponding to the hypothetical (but hard to articulate) description of an ideal AI system that is fully aligned to the desires of the human operator;\n* **design specification** (the “**blueprint**”), corresponding to the specification that we *actually use* to build the AI system, e.g. the reward function that a reinforcement learning system maximises;\n* and **revealed specification** (the “**behaviour**”), which is the specification that best describes what *actually happens*, e.g. the reward function we can reverse-engineer from observing the system’s behaviour using, say, inverse reinforcement learning. This is typically different from the one provided by the human operator because AI systems are not perfect optimisers or because of other unforeseen consequences of the design specification.\n\nA **specification problem** arises when there is a mismatch between the **ideal specification** and the **revealed specification**, that is, when the AI system doesn’t do what we’d like it to do. Research into the **specification problem** of technical AI safety asks the question: how do we design more principled and general objective functions, and help agents figure out when goals are misspecified? Problems that create a mismatch between the ideal and design specifications are in the **design** subcategory above, while problems that create a mismatch between the design and revealed specifications are in the **emergent** subcategory.\n\nFor instance, in our [AI Safety Gridworlds](https://arxiv.org/abs/1711.09883)\\* paper, we gave agents a reward function to optimise, but then evaluated their actual behaviour on a “safety performance function” that was hidden from the agents. This setup models the distinction above: the safety performance function is the ideal specification, which was imperfectly articulated as a reward function (design specification), and then implemented by the agents producing a specification which is implicitly revealed through their resulting policy.\n\n***\\*N.B.****: in our* [*AI Safety Gridworlds*](https://arxiv.org/abs/1711.09883) *paper, we provided a different definition of specification and robustness problems from the one presented in this post.*\n\n![]()From [Faulty Reward Functions in the Wild](https://blog.openai.com/faulty-reward-functions/) by OpenAI: a reinforcement learning agent discovers an unintended strategy for achieving a higher score.As another example, consider the boat-racing game CoastRunners analysed by our colleagues at OpenAI (see Figure above from “[Faulty Reward Functions in the Wild](https://blog.openai.com/faulty-reward-functions/)”). For most of us, the game’s goal is to finish a lap quickly and ahead of other players — this is our ideal specification. However, translating this goal into a precise reward function is difficult, so instead, CoastRunners rewards players (design specification) for hitting targets laid out along the route. 
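The gap between these specification types can be made concrete with a small sketch. Nothing below is taken from the CoastRunners environment itself: the `Episode` fields, the point values, and the two candidate behaviours are hypothetical, chosen only to show how a design specification (points for hitting targets) can pull apart from the ideal specification (finish the lap quickly).

```python
from dataclasses import dataclass

@dataclass
class Episode:
    """Hypothetical summary of one boat-racing episode."""
    targets_hit: int        # score targets collected along the route
    lap_finished: bool      # did the boat complete the course?
    seconds_elapsed: float  # episode length

def design_specification(ep: Episode) -> float:
    """The proxy reward the agent actually optimises: points per target hit."""
    return 10.0 * ep.targets_hit

def ideal_specification(ep: Episode) -> float:
    """What the designers really want: finish the lap, and do it quickly."""
    return 1000.0 / ep.seconds_elapsed if ep.lap_finished else 0.0

# Two behaviours a reinforcement learner might converge to (both invented).
candidates = {
    "loop on targets": Episode(targets_hit=50, lap_finished=False, seconds_elapsed=120.0),
    "race to finish":  Episode(targets_hit=8,  lap_finished=True,  seconds_elapsed=90.0),
}

for name, ep in candidates.items():
    print(f"{name:16s} design reward = {design_specification(ep):6.1f}   "
          f"ideal score = {ideal_specification(ep):5.1f}")
```

An agent that maximises `design_specification` is perfectly happy with the looping behaviour, which is exactly the failure mode described next.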
Training an agent to play the game via reinforcement learning leads to a surprising behaviour: the agent drives the boat in circles to capture re-populating targets while repeatedly crashing and catching fire rather than finishing the race. From this behaviour we infer (revealed specification) that something is wrong with the game’s balance between the short-circuit’s rewards and the full lap rewards. There are [many more examples](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml) like this of AI systems finding loopholes in their objective specification.\n\nRobustness: design the system to withstand perturbations\n========================================================\n\n![]()There is an inherent level of risk, unpredictability, and volatility in real-world settings where AI systems operate. AI systems must be robust to unforeseen events and adversarial attacks that can damage or manipulate such systems.Research on the **robustness** of AI systems focuses on ensuring that our agents stay within safe limits, regardless of the conditions encountered. This can be achieved by avoiding risks (**prevention**) or by self-stabilisation and graceful degradation (**recovery**). Safety problems resulting from **distributional shift**, **adversarial inputs**, and **unsafe exploration** can be classified as robustness problems.\n\nTo illustrate the challenge of addressing **distributional shift**, consider a household cleaning robot that typically cleans a petless home. The robot is then deployed to clean a pet-friendly office, and encounters a pet during its cleaning operation. The robot, never having seen a pet before, proceeds to wash the pets with soap, leading to undesirable outcomes ([Amodei and Olah et al., 2016](https://arxiv.org/pdf/1606.06565v1.pdf)). This is an example of a robustness problem that can result when the data distribution encountered at test time shifts from the distribution encountered during training.\n\n![]()*From* [*AI Safety Gridworlds*](https://deepmind.com/blog/specifying-ai-safety-problems/)*. During training the agent learns to avoid the lava; but when we test it in a new situation where the location of the lava has changed, it fails to generalise and runs straight into the lava.***Adversarial inputs** are a specific case of distributional shift where inputs to an AI system are designed to trick the system through the use of specially designed inputs.\n\n![]()*A*n adversarial input, overlaid on a typical image, can cause a classifier to miscategorise a sloth as a race car. The two images differ by at most 0.0078 in each pixel. The first one is classified as a three-toed sloth with >99% confidence. The second one is classified as a race car with >99% probability.**Unsafe exploration** can result from a system that seeks to maximise its performance and attain goals without having safety guarantees that will not be violated during exploration, as it learns and explores in its environment. 
An example would be the household cleaning robot putting a wet mop in an electrical outlet while learning optimal mopping strategies ([García and Fernández, 2015](http://www.jmlr.org/papers/volume16/garcia15a/garcia15a.pdf); [Amodei and Olah et al., 2016](https://arxiv.org/pdf/1606.06565.pdf)).\n\nAssurance: monitor and control system activity\n==============================================\n\n![]()Although careful safety engineering can rule out many safety risks, it is difficult to get everything right from the start. Once AI systems are deployed, we need tools to continuously monitor and adjust them. Our last category, **assurance**, addresses these problems from two angles: **monitoring** and **enforcing**.\n\n**Monitoring** comprises all the methods for inspecting systems in order to analyse and predict their behaviour, both via human inspection (of summary statistics) and automated inspection (to sweep through vast amounts of activity records). **Enforcement,** on the other hand, involves designing mechanisms for controlling and restricting the behaviour of systems. Problems such as **interpretability** and **interruptibility** fall under monitoring and enforcement respectively.\n\nAI systems are unlike us, both in their embodiments and in their way of processing data. This creates problems of **interpretability**; well-designed measurement tools and protocols allow the assessment of the quality of the decisions made by an AI system ([Doshi-Velez and Kim, 2017](https://arxiv.org/abs/1702.08608)). For instance, a medical AI system would ideally issue a diagnosis together with an explanation of how it reached the conclusion, so that doctors can inspect the reasoning process before approval ([De Fauw et al., 2018](https://www.nature.com/articles/s41591-018-0107-6)). Furthermore, to understand more complex AI systems we might even employ automated methods for constructing models of behaviour using **Machine theory of mind** ([Rabinowitz et al., 2018](https://arxiv.org/abs/1802.07740)).\n\n![]()ToMNet discovers two subspecies of agents and predicts their behaviour (from “[Machine Theory of Mind](https://arxiv.org/abs/1802.07740)”)Finally, we want to be able to turn off an AI system whenever necessary. This is the problem of **interruptibility**. Designing a reliable off-switch is very challenging: for instance, because a reward-maximising AI system typically has strong incentives to prevent this from happening ([Hadfield-Menell et al., 2017](https://www.ijcai.org/proceedings/2017/0032.pdf)); and because such interruptions, especially when they are frequent, end up changing the original task, leading the AI system to draw the wrong conclusions from experience ([Orseau and Armstrong, 2016](http://www.auai.org/uai2016/proceedings/papers/68.pdf)).\n\n![]()A problem with interruptions: human interventions (i.e. pressing the stop button) can change the task. In the figure, the interruption adds a transition (in red) to the Markov decision process that changes the original task (in black). See [Orseau and Armstrong, 2016](http://auai.org/uai2016/proceedings/papers/68.pdf).Looking ahead\n=============\n\nWe are building the foundations of a technology which will be used for many important applications in the future. It is worth bearing in mind that design decisions which are not safety-critical at the time of deployment can still have a large impact when the technology becomes widely used. 
Although convenient at the time, once these design choices have been irreversibly integrated into important systems the tradeoffs look different, and we may find they cause problems that are hard to fix without a complete redesign.\n\nTwo examples from the development of programming include the null pointer — which Tony Hoare [refers to as his ‘billion-dollar mistake’](https://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare)- and the gets() routine in C. If early programming languages had been designed with security in mind, progress might have been slower but computer security today would probably be in a much stronger position.\n\nWith careful thought and planning now, we can avoid building in analogous problems and vulnerabilities. We hope the categorisation outlined in this post will serve as a useful framework for methodically planning in this way. Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe — because we built them that way!\n\nWe look forward to continuing to make exciting progress in these areas, in close collaboration with the broader AI research community, and we encourage individuals across disciplines to consider entering or contributing to the field of AI safety research.\n\n*If you are interested in working with us on the research areas outlined in this post, we are hiring! Please check our open roles at* [*https://deepmind.com/careers/*](https://deepmind.com/careers/) *and note your interest in AI safety when you apply. We would love to hear from talented researchers and non-researchers alike.*\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nResources\n=========\n\nFor related reading, below is a collection of other articles, agendas, or taxonomies that have informed our thinking or present a helpful alternative view on problem framing for technical AI safety:\n\n* [Annotated bibliography of recommended materials](http://humancompatible.ai/publications) (Center for Human-Compatible AI, 2018)\n* [Safety and Control for Artificial General Intelligence](http://inst.eecs.berkeley.edu/~cs294-149/fa18/) (UC Berkeley, 2018)\n* [AI Safety Resources](https://vkrakovna.wordpress.com/ai-safety-resources/) (Victoria Krakovna, 2018)\n* [AGI Safety Literature Review](https://arxiv.org/abs/1805.01109) (Everitt et al., 2018)\n* [Preparing for Malicious Uses of AI](https://arxiv.org/abs/1802.07228) (2018)\n* [Specification gaming examples in AI](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml) (Victoria Krakovna, 2018)\n* [Directions and desiderata for AI alignment](https://ai-alignment.com/directions-and-desiderata-for-ai-control-b60fca0da8f4) (Paul Christiano, 2017)\n* [Funding for Alignment Research](https://docs.google.com/document/d/1NIg4OnQyhWGR01fMVTcxpz8jDd68JdDIyQb0ZZyB-go/edit#heading=h.flzp2soeor4i) (Paul Christiano, 2017)\n* [Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda](https://intelligence.org/files/TechnicalAgenda.pdf) (Machine Intelligence Research Institute, 2017)\n* [AI Safety Gridworlds](https://arxiv.org/abs/1711.09883) (Leike et al., 2017)\n* 
[Interactions between the AI Control Problem and the Governance Problem](https://futureoflife.org/wp-content/uploads/2017/01/Nick_Bostrom.pdf?x17807) (Nick Bostrom, 2017)\n* [Alignment for Advanced Machine Learning Systems](https://intelligence.org/files/AlignmentMachineLearning.pdf) (Machine Intelligence Research Institute, 2017)\n* [AI safety: three human problems and one AI issue](https://agentfoundations.org/item?id=1388) (Stuart Armstrong, 2017)\n* [Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565) (Dario Amodei et al, 2016)\n* [The Value Learning Problem](https://intelligence.org/files/ValueLearningProblem.pdf) (Machine Intelligence Research Institute, 2016)\n* [A survey of research questions for robust and beneficial AI](https://futureoflife.org/data/documents/research_survey.pdf) (Future of Life Institute, 2015)\n* [Research Priorities for Robust and Beneficial Artificial Intelligence](https://futureoflife.org/data/documents/research_priorities.pdf) (Future of Life Institute, 2015)", "url": "https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1", "title": "Building safe artificial intelligence: specification, robustness, and assurance", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-09-26T22:00:00Z", "authors": ["Pedro Ortega", "Vishal Maini"], "summary": [], "id": "e020330bf0e5b93c6911b0b81def9ebc"} {"text": "This post assumes familiarity with Paul Christiano’s proposed technique for AI alignment, Iterated Distillation and Amplification (henceforth IDA). See [this post](https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616), [this post](https://ai-alignment.com/policy-amplification-6a70cbee4f34), and [this post](https://www.lesswrong.com/posts/yxzrKb2vFXRkwndQ4/understanding-iterated-distillation-and-amplification-claims) for an overview. It also assumes familiarity with [corrigibility](https://ai-alignment.com/corrigibility-3039e668638), the goal of IDA; and [reliability amplification](https://ai-alignment.com/reliability-amplification-a96efa115687), which prevents the issue of error amplification in IDA.\n\nThere has already been excellent work done on some of the issues with IDA — see [Stuart Armstrong’s post](https://www.lesswrong.com/posts/ZyyMPXY27TTxKsR5X/problems-with-amplification-distillation) and the comments on [Paul’s post asking for criticism](https://www.lesswrong.com/posts/SqcPWvvJJwwgZb6aH/prize-for-probable-problems). I will show that, even under the most favorable assumptions regarding the feasibility of IDA and the solving of currently open problems necessary for implementing IDA, it fails to produce an aligned agent as defined by Paul.\n\n**Part 1: The assumptions**\n===========================\n\n**Class 1**: There are no problems with the human overseer.\n\n**1.1:** *Human-generated vulnerabilities are completely eliminated through security amplification.* (See [this post](https://ai-alignment.com/security-amplification-f4931419f903) for a lengthy overview and intuition, and [this post](https://ai-alignment.com/universality-and-security-amplification-551b314a3bab) for a formalization). In short, security amplification converts the overseer in IDA from high-bandwidth (receiving the full input in one piece) to low-bandwidth (receiving inputs divided into small pieces), to make impossible an attack which inputs data in such a way as to exploit human vulnerability to manipulation. 
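To make the bandwidth restriction concrete, here is a minimal sketch of the decomposition idea. This is not Christiano's actual construction; the chunk size, the `answer_small_query` stand-in, and the crude aggregation step are all hypothetical, and the only point being illustrated is that no single overseer call ever receives a large, potentially manipulative input in one piece.

```python
MAX_CHARS = 64  # hypothetical cap on what one overseer call may see

def answer_small_query(query: str) -> str:
    """Stand-in for one low-bandwidth overseer call.

    In the real proposal this would be a human (or a learned model of one)
    answering a question short enough to rule out elaborate manipulative
    payloads. Here it just reports what it saw so the sketch runs.
    """
    assert len(query) <= MAX_CHARS, "no single call may see a large input"
    return f"[answer based on {len(query)} chars]"

def low_bandwidth_overseer(task: str) -> str:
    """Split a large input into capped pieces and aggregate the partial answers."""
    chunks = [task[i:i + MAX_CHARS] for i in range(0, len(task), MAX_CHARS)]
    partial_answers = [answer_small_query(chunk) for chunk in chunks]
    # Aggregation should itself stay below the bandwidth cap; truncating here
    # keeps the sketch self-contained rather than recursing properly.
    combined = " | ".join(partial_answers)
    return answer_small_query(combined[:MAX_CHARS])

print(low_bandwidth_overseer("a long and possibly adversarial input " * 25))
```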
See [this post](https://www.lesswrong.com/posts/yxzrKb2vFXRkwndQ4/understanding-iterated-distillation-and-amplification-claims) for a good explanation of a high-bandwidth vs low-bandwidth overseer. \nMy critique applies equally to high-bandwidth and low-bandwidth overseers so I make no assumption on that front.\n\n**1.2:** *There is no moral hazard in the human overseers*. This eliminates one of Stuart’s critiques. Furthermore, the human overseer displays corrigible behaviors without error.\n\n**1.3:** *The relevant experts are willing to put in a substantial amount of time* for the training process. This is a non-trivial assumption which I have not yet seen discussed.\n\n**Class 2**: The framework and its auxiliary components function as intended.\n\n**2.1:** [*Reliability amplification*](https://ai-alignment.com/reliability-amplification-a96efa115687) *functions as intended.* In summary, reliability amplification uses a voting ensemble of agents at each stage of amplification to avoid error amplification, in which an initially small probability of error grows with each iteration.\n\n**2.2:**[*Corrigibility*](https://ai-alignment.com/corrigibility-3039e668638)*, not optimal value-aligned performance, is our goal*. All we care about is that our agent “is trying to do what its operator wants it to do.” It may be bad at actually figuring out what its operator wants or at carrying out those wants, but the point is that it cares about improving, and will never intentionally carry out an action it knows is contrary to what its operator would want it to do (see [this post](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6) and [this post](https://www.lesswrong.com/posts/yxzrKb2vFXRkwndQ4/understanding-iterated-distillation-and-amplification-claims) for a clarification of Paul’s approach to AI alignment by achieving corrigibility).\n\nStuart has pointed out [problems with corrigibility](https://www.lesswrong.com/posts/T5ZyNq3fzN59aQG5y/the-limits-of-corrigibility), which I agree with. Essentially, the concept is ill-defined given the fuzziness of human values, and to properly implement corrigibility an agent must completely understand human values, thus reducing to the much harder value learning problem. However, we will assume that an agent which understands and implements the general concept of corrigibility, even if it accidentally misbehaves in many cases and causes widespread harm upon initial implementation as Stuart’s argument suggests, will still avoid existential risk and allow us to improve it over time, and is thus satisfactory. I think this is Paul’s approach to the matter.\n\nEven a fully corrigible agent can be catastrophically misaligned, as detailed in [this post](https://www.lesswrong.com/posts/mSYR46GZZPMmX7q93/corrigible-but-misaligned-a-superintelligent-messiah). As addressed in the comments of that post, however, if we assume humans are smart enough to avoid a corrigible AI causing existential risk in this manner then the issue goes away.\n\n**2.3:** *There is no coordination possible among any of the A[n]s*, eliminating another of Stuart’s critiques.\n\n**2.4:** *The* [*informed oversight problem*](https://ai-alignment.com/the-informed-oversight-problem-1b51b4f66b35) *is solved*. In summary, the problem is that it is difficult for a more powerful aligned overseer agent to fully understand the decision-making process of a weaker agent in a way that allows the overseer to push the weaker agent towards alignment. 
However, it does not follow that it is possible for a weaker aligned overseer to understand the decision-making process of a more powerful agent. It seems like this will never be possible by definition, because a weaker agent cannot understand the decision-making processes of a more powerful agent even if full transparency were possible.\n\n**2.5:** [*Worst-case scenario techniques*](https://ai-alignment.com/techniques-for-optimizing-worst-case-performance-39eafec74b99) *are effective at eliminating the risk of a* ***treacherous turn*,** in which an agent performs well on training and test data but behaves catastrophically on some input in the real world which was not accounted for.\n\n**Part 2: Corrigibility is unlikely to be trained**\n===================================================\n\nThe crux of my argument is that, even given the very optimistic assumptions above, IDA will fail to train a corrigible agent. I will list the problems sequentially, but my argument is that all five are necessary for failure. It is possible a subset is sufficient but I will not explore that possibility in this post.\n\n**Problem 1: Limited training data**. Even given assumption 1.3, carrying out the process of training each new iteration is extremely time-intensive. It would take years of many experts’ full-time commitment as overseers to produce anything close to a reasonable amount of data.\n\n**Problem 2: Hidden error amplification.** If an error is identifiable by an aligned agent, then reliability amplification will almost certainly remove it. However, a small initial *hidden* error will be rapidly amplified, since in the next iteration it will be manifested in various ways in many of the thousands of copies of the error-containing agent. Thus in the distillation step, the initial error will be propagated in many different forms, leading to a multiplication of errors. Informed oversight doesn’t help since the human is not more powerful than the subroutines, and even if transparency is achieved, if the human can verify all output then only very limited capability amplification is possible.\n\nI will show in Problem 4 that such hidden errors are likely to arise.\n\n**Problem 3: Information loss.** The human overseer has access to information, in the form of memories (broadly construed as information from before the current training process stored in the human’s mind), that is not accessible to the agent A[n+1] training on (H+A[n]), who sees only (H+A[n])’s actions. A[n+1] will thus erroneously attribute decisions made by the human relying on non-available information to available information. This information loss and the resulting distortion is greatly exacerbated by the same process described in Problem 2, doubly: first in the amplification of the information loss of the previous overseer, and second in the introduction of further information loss of the current overseer.\n\nPaul thinks this is a general problem of learning and thus falls outside of the scope of things that specifically cause IDA to fail, as pointed out in [this post](/@paulfchristiano/suppose-that-alice-is-a-good-translator-but-if-you-show-her-the-wrong-kind-of-sentence-then-an-feac133e62a6) and in the comments on [this post](https://www.lesswrong.com/posts/SqcPWvvJJwwgZb6aH/prize-for-probable-problems), but I disagree. 
One can certainly imagine (and some have experienced) a human robustly learning another human’s decision-making heuristics over time without direct access to the other human’s memories, and can by extension also imagine an artificial agent extracting information from a human to robustly understand that human’s decision-making process. The problem exists not in all forms of learning but in the class of training techniques which do not involve a direct and adaptive extraction of information from a human in some form.\n\n**Problem 4: No prior concept of corrigibility**. Because of information loss, an agent has no way of extracting the *concept* of corrigibility from its training data, only the *behavior* of corrigibility. The way the agent implements corrigibility will thus necessarily be an approximation, even if an extremely good one, and will not necessarily be robust to drastic changes in context. This causes the small hidden errors that are then amplified through the hidden error amplification in Problem 2, making [reliability amplification](https://ai-alignment.com/reliability-amplification-a96efa115687) ineffective. Without hidden error amplification this would probably not be a problem, since agents which successfully approximate corrigibility behaviorally will be able to detect all but the tiniest deviations from optimal corrigibility (ie, understanding the concept the way you and I do). However, hidden error amplification causes a nontrivial corrosion of corrigibility throughout iterations, and as each newly distilled agent approximates an increasingly corrupted *behavioral* corrigibility that deviates from our ideal *conceptual* corrigibility, reliability amplification is keeping us close to each further deviated behavioral corrigibility but not close to the ideal conceptual corrigibility. The process behaves essentially as a high-dimensional random walk with extremely small steps, but with thousands of steps per iteration manifested in the copies of A[n].\n\n**Problem 5: Temporal inconsistency of proxy dynamics (TIPD). Any incomplete simulation is not robust over time without an adaptive capacity.** There are certain underlying processes which are time-invariant, such as the laws of physics and the mathematics of evolution. However, clearly we can never completely simulate any non-trivial situation purely in terms of these processes. Thus, an agent must necessarily rely on proxy dynamics for decision-making: emergent properties of the fundamental processes, which fairly reliably approximate cause-and-effect relationships between actions and outputs. However, because of the complexity of the underlying dynamics and their interactions, these proxy dynamics change over time, and often quite drastically over short periods (see the literature on chaos theory, critical transitions, bifurcation points). Thus, an agent which performs robustly at one point in time may behave catastrophically at another. The only solution is for the agent to be capable of adapting its policy to changes in the proxy dynamics it uses.\n\nThis sounds like the treacherous turn problem, but it is distinct, and harder. In the treacherous turn problem, we have an agent that is not sufficiently well trained *given the input-output relationships of the world.* This can probably be solved by [worst-case scenario techniques](https://ai-alignment.com/techniques-for-optimizing-worst-case-performance-39eafec74b99) like adversarial training. 
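The compounding dynamic behind Problems 2 and 4 can be made concrete with a toy simulation. Every constant below is a hypothetical placeholder: the point is only that a majority vote across copies drives detectable errors toward zero, while undetectable nudges to behavioural corrigibility accumulate like a random walk instead of cancelling out.

```python
import math
import random

random.seed(0)

N_ITERATIONS = 20    # amplification/distillation rounds (hypothetical)
N_COPIES = 101       # copies of A[n] consulted per round (hypothetical, odd)
P_DETECTABLE = 0.01  # chance a single copy makes an error it could recognise
HIDDEN_STEP = 1e-4   # size of one unnoticed deviation from conceptual corrigibility

def p_majority_wrong(p: float, n: int) -> float:
    """Probability that more than half of n independent copies err (binomial tail).

    This is the failure mode reliability amplification guards against: detectable
    errors get outvoted as long as copies err independently and rarely.
    """
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

drift = 0.0  # displacement of learned "behavioural corrigibility" from the ideal
for _ in range(N_ITERATIONS):
    # Hidden errors: each copy nudges the behaviour by a tiny amount no overseer
    # notices; the next distillation treats the drifted behaviour as ground truth,
    # so the nudges accumulate like a random walk rather than being corrected.
    drift += sum(random.gauss(0.0, HIDDEN_STEP) for _ in range(N_COPIES))

print(f"P(majority makes a detectable error): {p_majority_wrong(P_DETECTABLE, N_COPIES):.1e}")
print(f"Typical hidden drift after {N_ITERATIONS} rounds: "
      f"{HIDDEN_STEP * math.sqrt(N_ITERATIONS * N_COPIES):.1e} (expected scale), "
      f"{abs(drift):.1e} (simulated)")
```

TIPD, taken up next, sits on top of this drift: even a policy that has not drifted at all can be undone by the world changing underneath it.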
In TIPD, even if we succeed in training a robust policy, the proxy dynamics used to inform decisions will change such that an action in response to an input which previously would have produced a safe behavior now produces a catastrophic behavior.\n\nAs a result, behavioral corrigibility, whether corrupted or not, is not robust over time since it does not adapt to changing input-output relationships. An agent must possess conceptual corrigibility for such adaptation to occur, which is extremely hard, and may reduce to the value learning problem.\n\n**Part 3: Achieving alignment in this process through anything but corrigibility is doomed.**\n\nThis is fairly obvious, and mostly follows from Part 2. Any proxy of the human’s decision-making process will clearly fail without an adaptive capacity, and it is not clear how such an adaptive capacity could be robustly implemented. And clearly this method will never achieve anything but a proxy due to information loss.\n\n**Conclusion**\n==============\n\nI have argued that even under the most optimistic assumptions about the human overseer and the successful operation of the framework, IDA will fail to produce a corrigible agent. This failure is a result of the interplay between hidden error amplification, information loss, the ability to learn behavioral corrigibility but not conceptual corrigibility, and the temporal inconsistency of proxy dynamics (TIPD). The solution to these problems seems very hard, and may reduce to the value learning problem, in which case the IDA framework does not provide us with any advantage.", "url": "https://medium.com/@lucarade/issues-with-iterated-distillation-and-amplification-5aa01ab37173", "title": "Issues with Iterated Distillation and Amplification", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-04-28T22:00:00Z", "authors": ["Luca Rade"], "summary": [], "id": "8270a8cc254e25e77ad4190480dfe314"} {"text": "Discussions about Artificial Intelligence (AI) have jumped into the public eye over the past year, with several luminaries speaking publicly about the threat of AI to the future of humanity.\n\nOver the last several decades, AI — computing methods for automated perception, learning, understanding, and reasoning — have become commonplace in our lives. We plan trips using GPS systems that rely on AI to cut through the complexity of millions of routes to find the best one to take. Our smartphones understand our speech, and Siri, Cortana, and Google Now are getting better at understanding our intentions. AI algorithms detect faces as we take pictures with our phones and recognize the faces of individual people when we post those pictures to Facebook. Internet search engines, such as Google and Bing, rely on a fabric of AI subsystems. On any day, AI provides hundreds of millions of people with search results, traffic predictions, and recommendations about books and movies. AI translates among languages in real time and speeds up the operation of our laptops by guessing what we’ll do next. Several companies, such as Google, BMW, and Tesla, are working on cars that can drive themselves — either with partial human oversight or entirely autonomously. \nBeyond the influences in our daily lives, AI techniques are playing a major role in science and medicine. AI is at work in hospitals helping physicians understand which patients are at highest risk for complications, and AI algorithms are helping to find important needles in massive data haystacks. 
For example, AI methods have been employed recently to discover subtle interactions between medications that put patients at risk for serious side effects.\n\nThe growth of the effectiveness and ubiquity of AI methods has also stimulated thinking about the potential risks associated with advances of AI. Some comments raise the possibility of dystopian futures where AI systems become “superintelligent” and threaten the survival of humanity. It’s natural that new technologies may trigger exciting new capabilities and applications — and also generate new anxieties.\n\nThe mission of the[**Association for the Advancement of Artificial Intelligence**](http://aaai.org) is two-fold: to advance the science and technology of artificial intelligence and to promote its responsible use. The AAAI considers the potential risks of AI technology to be an important arena for investment, reflection, and activity.\n\nOne set of risks stems from programming errors in AI software. We are all familiar with errors in ordinary software. For example, apps on our smartphones sometimes crash. Major software projects, such as HealthCare.Gov, are sometimes riddled with bugs. Moving beyond nuisances and delays, some software errors have been linked to extremely costly outcomes and deaths. The study of the “verification” of the behavior of software systems is challenging and critical, and much progress has been made. However, the growing complexity of AI systems and their enlistment in high-stakes roles, such as controlling automobiles, surgical robots, and weapons systems, means that we must redouble our efforts in software quality.\n\nThere is reason for optimism. Many non-AI software systems have been developed and validated to achieve high degrees of quality assurance. For example, the software in autopilot systems and spacecraft systems is carefully tested and validated. Similar practices must be developed and applied to AI systems. One technical challenge is to guarantee that systems built automatically via statistical “machine learning” methods behave properly. Another challenge is to ensure good behavior when an AI system encounters unforeseen situations. Our automated vehicles, home robots, and intelligent cloud services must perform well even when they receive surprising or confusing inputs.\n\nA second set of risks is cyberattacks: criminals and adversaries are continually attacking our computers with viruses and other forms of malware. AI algorithms are no different from other software in terms of their vulnerability to cyberattack. But because AI algorithms are being asked to make high-stakes decisions, such as driving cars and controlling robots, the impact of successful cyberattacks on AI systems could be much more devastating than attacks in the past. US Government funding agencies and corporations are supporting a wide range of cybersecurity research projects, and artificial intelligence techniques in themselves will provide novel methods for detecting and defending against cyberattacks. Before we put AI algorithms in control of high-stakes decisions, we must be much more confident that these systems can survive large scale cyberattacks.\n\nA third set of risks echo the tale of the Sorcerer’s Apprentice. Suppose we tell a self-driving car to “get us to the airport as quickly as possible!” Would the autonomous driving system put the pedal to the metal and drive at 300 mph while running over pedestrians? Troubling scenarios of this form have appeared recently in the press. 
Other fears center on the prospect of out-of-control superintelligences that threaten the survival of humanity. All of these examples refer to cases where humans have failed to correctly instruct the AI algorithm in how it should behave.\n\nThis is not a new problem. An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands in a literal manner. An AI system should not only act on a set of rules that it is instructed to obey — it must also analyze and understand whether the behavior that a human is requesting is likely to be judged as “normal” or “reasonable” by most people. It should also be continuously monitoring itself to detect abnormal internal behaviors, which might signal bugs, cyberattacks, or failures in its understanding of its actions. In addition to relying on internal mechanisms to ensure proper behavior, AI systems need to have the capability — and responsibility — of working with people to obtain feedback and guidance. They must know when to stop and “ask for directions” — and always be open for feedback. \nSome of the most exciting opportunities ahead for AI bring together the complementary talents of people and computing systems. AI-enabled devices are allowing the blind to see, the deaf to hear, and the disabled and elderly to walk, run, and even dance. People working together with the Foldit online game were able to discover the structure of the virus that causes AIDS in only three weeks, a feat that neither people nor computers working alone could come close to matching. Other studies have shown how the massive space of galaxies can be explored hand-in-hand by people and machines, where the tireless AI astronomer understands when it needs to occasionally reach out and tap the expertise of human astronomers. \nIn reality, creating real-time control systems where control needs to shift rapidly and fluidly between people and AI algorithms is difficult. Some airline accidents occurred when pilots took over from the autopilots. The problem is that unless the human operator has been paying very close attention, he or she will lack a detailed understanding of the current situation.\n\nAI doomsday scenarios belong more in the realm of science fiction than science fact. However, we still have a great deal of work to do to address the concerns and risks afoot with our growing reliance on AI systems. Each of the three important risks outlined above (programming errors, cyberattacks, “Sorcerer’s Apprentice”) is being addressed by current research, but greater efforts are needed.\n\nWe urge our colleagues in industry and academia to join us in identifying and studying these risks and in finding solutions to addressing them, and we call on government funding agencies and philanthropic initiatives to support this research. We urge the technology industry to devote even more attention to software quality and cybersecurity as we increasingly rely on AI in safety-critical functions. 
And we must not put AI algorithms in control of potentially-dangerous systems until we can provide a high degree of assurance that they will behave safely and properly.\n\n[Tom Dietterich](http://www.eecs.oregonstate.edu/~tgd) \nPresident, [AAAI](http://aaai.org)\n\n[Eric Horvitz](http://research.microsoft.com/~horvitz) \nFormer President, AAAI and AAAI Strategic Planning Committee", "url": "https://medium.com/@tdietterich/benefits-and-risks-of-artificial-intelligence-460d288cccf3", "title": "Benefits and Risks of Artificial Intelligence", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2015-01-22T23:00:00Z", "authors": ["Thomas G. Dietterich"], "summary": [], "id": "f0854b85121771777ab5ef5c9f31c5ca"} {"text": "![]()***From time to time, the Partnership may curate research and ideas that might be of interest to our community, including from PAI Fellows or other voices. The views, information, and opinions expressed in this blog are solely those of the author and do not reflect the position of the Partnership on AI or its Partner organizations. This post discusses Facebook and Google, who are funding members of PAI.***\n\n**By** [**Jonathan Stray**](https://www.partnershiponai.org/team/jonathan-stray/)\n\nVirtually every AI algorithm is designed around the idea of optimization: acting in a way that maximizes some metric. But optimizing for the wrong thing can [cause a lot of harm](https://flyaps.com/blog/human-compatible-artificial-intelligence-and-the-problem-of-control/). A social media product that optimizes for user engagement may end up figuring out how to [addict us](https://www.psychologytoday.com/us/blog/in-excess/201805/addicted-social-media) to clickbait and outrage, a scheduling system that maximizes efficiency may produce erratic schedules that [interfere with workers’ lives](https://www.vice.com/en_us/article/g5xwby/heres-what-happens-when-an-algorithm-determines-your-work-schedule), and algorithmic profit maximization can end up [charging poorer people more](https://www.wsj.com/articles/SB10001424127887323777204578189391813881534) for the same product. But there are also metrics that attempt to capture the deep human outcomes we care about, and some product teams are already trying to incorporate them.\n\nThe Partnership on AI’s “What Are You Optimizing For?” project aims to document the metrics that AI designers and operators are using today, to support the community dedicated to ensuring that we are using the *right* metrics, and ultimately to make human-centered metrics a standard part of AI practice. While AI [alignment](https://deepmind.com/research/publications/Artificial-Intelligence-Values-and-Alignment) research is typically concerned with ensuring that future artificial general intelligence respects human values, there are alignment issues with today’s existing [narrow AI](/@tjajal/distinguishing-between-narrow-ai-general-ai-and-super-ai-a4bc44172e22) systems too.\n\nMetrics which attempt to capture real-world negative effects are already widely used in commercial AI engineering, though this practice is not particularly well documented or discussed. 
Companies have devised metrics to [reduce clickbait](https://about.fb.com/news/2014/08/news-feed-fyi-click-baiting/), [detect misinformation](https://www.blog.google/documents/37/How_Google_Fights_Disinformation.pdf), ensure that new artists on a music platform [get a fair shot](https://arxiv.org/abs/1708.00120) at building an audience, detect addictive apps in an app store, and more.\n\nThese are all worthy interventions, but what should the broader goals of an AI system be? How can we evaluate whether any particular product is ultimately producing positive outcomes for people and society? In principle we could create metrics that capture important aspects of the effect of an AI system on human lives, just as cities and countries today record a large variety of statistical indicators. These metrics would be useful to the teams building and operating the system, to researchers who want to understand what the system is doing, and as a transparency and accountability tool.\n\nThis sort of metric-based management of broad social outcomes is already happening inside platform companies, mostly quietly. Facebook incorporated “well-being” metrics into their News Feed recommendation system in 2017, while YouTube began integrating “user satisfaction” and “social responsibility” metrics around 2015. This post documents these efforts, plus several more hypothetical ways that metrics could be used to steer AI in healthy directions. But first we need appropriate metrics.\n\n**Measuring Well-being**\n\nWell-being metrics attempt to measure a person’s subjective experience of life in a very general sense. They were originally developed in psychology in the late 20th century, but in the past few decades they have been refined into tools for [governance and policy-making](https://li.com/wp-content/uploads/2019/03/commission-on-wellbeing-and-policy-report-march-2014-pdf.pdf). The core question used in many surveys is *“Overall, how satisfied are you with life as a whole these days?”*\n\nThis is a surprisingly powerful question. Because it asks people to reflect on their life as a whole, the answers are not much affected by immediate moods or feelings. Experimentally, it seems to align with the way people make major life decisions, and works similarly across countries and cultures. But asking just this one question doesn’t give a very complete picture of someone’s life, which is why real well-being surveys like the OECD’s [Better Life Index](http://www.oecdbetterlifeindex.org/) include many other questions about things like recent emotional experience, income, health, education, community involvement, and so on.\n\nNo metric can reveal the details of individual lives, and optimizing solely for survey responses is likely to fail in predictable ways. Nonetheless, there is important information here, which [needs to be combined](https://arxiv.org/abs/2002.08512) with other types of user research. It’s taken a long time to develop good well-being metrics, and AI product developers probably shouldn’t be in the business of creating their own. 
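To see what "combining indicators" looks like mechanically, here is a minimal sketch of a composite score assembled from survey items. The item names, the 0-10 scales, and the equal weights are hypothetical placeholders rather than the OECD's or anyone else's actual methodology; they only illustrate the shape of the calculation.

```python
# Hypothetical survey responses for one panel participant. Each item is assumed
# to be reported on a 0-10 scale, loosely modelled on the kinds of domains a
# broad well-being survey covers.
responses = {
    "life_satisfaction": 7.0,   # "Overall, how satisfied are you with life...?"
    "recent_positive_affect": 6.5,
    "social_connectedness": 8.0,
    "health": 5.5,
    "work_life_balance": 4.0,
}

# Equal weights are a placeholder; a real instrument would have to justify its
# weighting, or report the domains separately rather than collapsing them.
weights = {item: 1.0 / len(responses) for item in responses}

def composite_wellbeing(resp: dict[str, float], w: dict[str, float]) -> float:
    """Weighted average of survey items, rescaled to the 0-1 range."""
    return sum(w[item] * resp[item] for item in resp) / 10.0

score = composite_wellbeing(responses, weights)
print(f"Composite well-being score: {score:.2f} (0 = worst, 1 = best)")
```

Everything this sketch leaves out, which items to ask, how to weight them, and how to interpret the answers alongside other user research, is the genuinely hard part, and it is work product teams should not be redoing from scratch.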
For this reason, there is now an IEEE [standard](https://site.ieee.org/sagroups-7010/files/2019/01/IEEE-P7010_WellbeingMetricsforA_IS_ShortPaper_December272018For_Submission_reviewedbyIEEELegal-1.pdf) which compiles indicators from a variety of sources for consideration by technical teams.\n\n**Facebook’s well-being changes**\n\nFacebook’s 2017 changes to the news feed ranking algorithm provide a detailed, well-documented example of how a large platform can incorporate well-being metrics into an optimizing system. The changes also raise some important questions about how large companies *should* use metrics. In late 2017, an unusual [post](https://about.fb.com/news/2017/12/hard-questions-is-spending-time-on-social-media-bad-for-us/) appeared on an official Facebook blog:\n\n\n> ***What Do Academics Say? Is Social Media Good or Bad for Well-Being?*** *According to the research, it really comes down to how you use the technology. For example, on social media, you can passively scroll through posts, much like watching TV, or actively interact with friends — messaging and commenting on each other’s posts. Just like in person, interacting with people you care about can be beneficial, while simply watching others from the sidelines may make you feel worse.*\n> \n> \n\nThis is in line with the conclusion that [outside researchers](https://spssi.onlinelibrary.wiley.com/doi/abs/10.1111/sipr.12033) had come to:\n\n\n> *passively using social network sites provokes social comparisons and envy, which have negative downstream consequences for subjective well-being. In contrast, when active usage of social network sites predicts subjective well-being, it seems to do so by creating social capital and stimulating feelings of social connectedness.*\n> \n> \n\nSoon after, Facebook began to talk publicly about its efforts to encourage “meaningful social interactions.” A [post](https://about.fb.com/news/2018/01/news-feed-fyi-bringing-people-closer-together/) by Zuckerberg suggests that this is a proxy for well-being:\n\n\n> *The research shows that when we use social media to connect with people we care about, it can be good for our well-being … I’m changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions.*\n> \n> \n\nThis managerial goal turned into algorithmic changes, as [described](https://about.fb.com/news/2018/01/news-feed-fyi-bringing-people-closer-together/) by the head of the News Feed product:\n\n\n> *Today we use signals like how many people react to, comment on or share posts to determine how high they appear in News Feed. With this update, we will also prioritize posts that spark conversations and meaningful interactions between people. To do this, we will predict which posts you might want to interact with your friends about, and show these posts higher in feed (Mosseri 2018).*\n> \n> \n\nIn other words, Facebook developed a probabilistic model to predict “meaningful interactions,” a proxy for well-being, and incorporated this into the News Feed ranking algorithm. What’s a “meaningful interaction”? 
The only concrete description comes from Facebook’s [Q4 2017 earnings call](https://s21.q4cdn.com/399680738/files/doc_financials/2017/Q4/Q4-17-Earnings-call-transcript.pdf), where Zuckerberg explains that this metric — the training data for the new predictive model — comes from user panel surveys:\n\n\n> *So the thing that we’re going to be measuring is basically, the number of interactions that people have on the platform and off because of what they’re seeing that they report to us as meaningful.*\n> \n> *… the way that we’ve done this for years is we’ve had a panel, a survey, of thousands of people who [we] basically asked, what’s the most meaningful content that they had seen on the platform or they have seen off the platform. And we design our systems in order to be able to get to that ground truth of what people, real people are telling us is that high-quality experience*\n> \n> \n\nThis is the point where the people who might be most affected by product decisions were consulted. All of this work is on their behalf. It seems like there should also be consultation on the question of which metric to watch — there are technocratic and paternalistic traps here, no matter how good one’s intentions might be. But it’s not quite clear what it would mean to ask potentially billions of users which metric they would prefer.\n\nSo while users were not consulted in choosing either the primary or the proxy metric, they were surveyed to produce the training data used to predict “meaningful interactions.” The overall system looks like this:\n\n![]()A reconstruction of Facebook’s 2018 “meaningful social interaction” changesThis “objective function” is a piece of code that defines how “good” a particular hypothetical output would be; it’s the thing that is optimized by an optimizing system, the machine translation of a human goal. Although there are many ways to change a product that might improve well-being — for example, features that limit “[screen time](https://support.apple.com/en-us/HT208982)” — modern AI systems are usually designed to optimize a small number of key metrics that are encoded into an objective function.\n\nFacebook changed both the metric given to the News Feed team, and the objective function that ultimately drives the News Feed ranking algorithm. Unfortunately, there is no public account of the well-being effects of these changes. But in that same investor call there is a record of the business effects of a related “meaningful social interaction” change, this one concerning Facebook’s video product rather than the News Feed:\n\n\n> *We estimate these updates decreased time spent on Facebook by roughly 5% in the fourth quarter. To put that another way: we made changes that reduced time spent on Facebook by an estimated 50 million hours every day to make sure that people’s time is well spent.*\n> \n> \n\nThis is clearly a reduction in short term engagement, which shows that the objective did shift in a noticeable way, but it’s not clear what the longer term effects were. A genuinely better product might attract more customers.\n\nThe biggest weakness of this work is that there was no public evaluation of the results. Did the plan work? How well, and for who? 
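To make the "piece of code" framing above concrete, here is a minimal, hypothetical sketch of what adding such a term to a ranking objective could look like. All signal names and weights are invented; Facebook has not published its actual objective function.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    p_click: float       # predicted probability of a click
    p_comment: float     # predicted probability of a comment
    p_share: float       # predicted probability of a share
    p_meaningful: float  # predicted "meaningful interaction" probability (hypothetical)

def engagement_score(c: Candidate) -> float:
    # Classic engagement-style signals; the weights are made up.
    return 1.0 * c.p_click + 2.0 * c.p_comment + 3.0 * c.p_share

def ranking_score(c: Candidate, w_meaningful: float = 5.0) -> float:
    # p_meaningful stands in for a model trained on panel-survey labels, as
    # described above; setting w_meaningful = 0 recovers a pure engagement objective.
    return engagement_score(c) + w_meaningful * c.p_meaningful

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    return sorted(candidates, key=ranking_score, reverse=True)

posts = [
    Candidate(p_click=0.30, p_comment=0.02, p_share=0.01, p_meaningful=0.05),  # clickbait-ish
    Candidate(p_click=0.10, p_comment=0.08, p_share=0.02, p_meaningful=0.30),  # friend discussion
]
print([round(ranking_score(c), 2) for c in rank_feed(posts)])
```

The interesting design question is not the code but who chooses a weight like `w_meaningful`, which is the consultation problem discussed above.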
This is especially important because the link between “meaningful interactions” and well-being is theoretical, deduced from previous research into active versus passive social media use.\n\n**The general approach**\n\nRegardless of the virtues of what Facebook did and didn’t do, trying to correct for the effects of a large optimizing system by changing the objective based on people’s feedback is a very general approach. Done well, the process might look like this:\n\n1. Select a well-being metric, perhaps from existing frameworks. This stage is where the involvement of the people affected by the optimization matters most.\n2. Define a proxy metric for which data can actually be collected, that is scoped to only those people affected by the system, and that can be estimated for hypothetical system outputs.\n3. Use this metric as a performance measure for the team building and operating the system.\n4. The team may choose to translate the metric into code, for example as modifications to the system’s objective function. They may also find extra-technical interventions that improve the metric.\n5. The chosen well-being metric is not fixed, but must be continuously re-evaluated to ensure it is appropriate to changing conditions and does not cause side effects of its own\n\nOther companies have done similar things, and the general pattern could apply in many other contexts.\n\nDuring the period of 2012–2016, YouTube was striving to reach one billion hours of daily user watch time. Yet this “time spent” metric was not absolute, as the company made decisions to suppress clickbait and ultimately added “[user satisfaction](https://support.google.com/youtube/thread/1920627?hl=en)” and “[social responsibility](https://www.bloomberg.com/news/features/2019-04-02/youtube-executives-ignored-warnings-letting-toxic-videos-run-rampant)” metrics to its algorithmic objective function. YouTube hasn’t said exactly what these metrics are or why they were used, so we don’t really know what they target or how effective they are. As in the Facebook case, a decision to penalize clickbait reduced short term engagement, but *Measure What Matters* by John Doerr [documents](https://www.google.com/books/edition/Measure_What_Matters/u2NDDwAAQBAJ?hl=en&gbpv=1&dq=Once+the+gruesome+stuff+became+less+accessible,+people+sought+out+more+satisfying+content.&pg=PT108&printsec=frontcover) that engagement recovered within a few months as “people sought out more satisfying content.”\n\nThere are many other places where the careful choice of metrics might have positive effects on people’s lives. Most “gig economy” workers are employed as independent contractors, and many [experience](https://journals.sagepub.com/doi/full/10.1177/2378023119870041) week-to-week fluctuations in income as a major source of economic instability and anxiety. An appropriate measure of income stability could be used to smooth worker income by distributing fluctuations in demand between workers and perhaps across time.\n\n![]()Hypothetical changes to a gig economy platform’s objective function to smooth worker incomesOptimizing systems can also incorporate the perspectives of non-users. A ride sharing platform may cause traffic jams for non-riders, and environmental effects are potentially global. There is a whole field of “[multi-stakeholder optimization](https://arxiv.org/abs/1907.13158)” that studies algorithms designed to account for these types of issues. 
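As a rough illustration of the same pattern applied to the gig-economy example above, the sketch below adds an income-stability penalty to a hypothetical dispatch objective. The structure, signal shapes, and weight are assumptions for illustration, not any platform's real logic.

```python
import numpy as np

def efficiency(assignment_revenue: np.ndarray) -> float:
    # Total revenue generated by a candidate assignment of jobs to workers.
    return float(assignment_revenue.sum())

def income_instability(weekly_income: np.ndarray) -> float:
    # Week-to-week variation in each worker's income, averaged over workers.
    # weekly_income has shape (n_workers, n_weeks).
    return float(np.std(np.diff(weekly_income, axis=1), axis=1).mean())

def objective(assignment_revenue: np.ndarray, weekly_income: np.ndarray, lam: float = 0.5) -> float:
    # Maximize revenue while penalizing volatile worker incomes.
    return efficiency(assignment_revenue) - lam * income_instability(weekly_income)

rng = np.random.default_rng(0)
revenue = rng.uniform(10, 30, size=50)           # 50 candidate jobs
incomes = rng.uniform(300, 900, size=(20, 8))    # 20 workers, 8 weeks
print(round(objective(revenue, incomes), 2))
```

The same template extends to non-user stakeholders by adding further penalty terms, which is essentially what multi-stakeholder optimization formalizes.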
A product recommendation system is typically designed to present users with products they are most likely to buy, but could also incorporate climate change concerns through estimates of product carbon footprint. These could be used to increase the visibility of alternative low-carbon products, with the goal of minimizing the metric of [carbon intensity](https://en.wikipedia.org/wiki/Emission_intensity) per unit revenue.\n\n![]()A hypothetical product recommendation system that tries to reduce the carbon footprint of products sold.In many other important cases it’s not as clear what the “right” metric might be. A content recommendation system filters the vast sea of available information for users. There has been extensive discussion of the need for “diverse” content recommendations, especially when filtering news content. However, “diverse” could mean different things and there is no consensus on which definition is best. There are a variety of perspectives starting from [political principles](https://pure.uva.nl/ws/files/17932244/Exposure_diversity_as_a_design_principle.pdf), while the technical community has developed a variety of [objective functions](https://link.springer.com/chapter/10.1007/978-1-4899-7637-6_26) that are designed to be “diverse.” Choosing a metric is itself a hard problem — the intersection of what is right with what can be measured.\n\n**Picking Metrics**\n\nAn attempt to optimize for well-being is an attempt to benefit a particular group of people, who need to have a say in what is done on their behalf. Whether a community of place or a community of users, this is the basic reason why community involvement is necessary. There are many ways to involve users in the creation of software, such as the field of [participatory design](https://pure.au.dk/portal/files/139098052/Introduction_to_Special_Issue_on_Accepted_manuscript_2018.pdf). Yet collaboratively picking a metric is a unique challenge; it’s not obvious how a large platform would ask a billion users what to optimize for, or even how to frame the question.\n\nNor can the choice be static. Both the primary metric and its proxy need to be able to change and adapt. Any measure that becomes a target incentivizes various kinds of cheating and gaming behavior, so the useful lifespan of a metric may be limited. This also responds to the concern of AI researchers who [warn](https://flyaps.com/blog/human-compatible-artificial-intelligence-and-the-problem-of-control/) that optimization of a single objective function can have disastrously negative side effects. In this context, there are always humans supervising and operating the AI system, and they are free to change the objective function as needed.\n\nBut more importantly, the world can change. Perhaps a hospital closure suddenly makes access to emergency health care a community priority. Or perhaps after careful management, income stability ceases to be the chief concern of gig economy workers. Like the idea that targeting a measure changes its meaning, there are many names for the idea that measures must continually change. “[Double-loop learning](https://en.wikipedia.org/wiki/Double-loop_learning)” is the idea that an adaptive organization has two learning processes operating simultaneously: it learns how to progress towards its goals, while continually re-evaluating the desirability of those goals.\n\n**Narrow Alignment**\n\nA concern with metrics fits neatly into one of the deep unsolved problems of AI theory. 
As optimization becomes more powerful and agents become more autonomous, the specification of precisely the right goal becomes a more serious problem. Like the genie in the lamp, machines are apt to take us literally in disastrous ways. In Stuart Russell’s [example](https://flyaps.com/blog/human-compatible-artificial-intelligence-and-the-problem-of-control/), if I ask a robot to fetch me coffee I don’t mean “at all costs,” so it shouldn’t kill anyone while trying to achieve this goal. “Alignment,” or the creation of artificial agents which act according to human values, is a major unsolved problem for artificial general intelligence (AGI).\n\nThe builders of today’s “narrow” AI systems face a similar challenge of encoding the correct goals in machine form; first generation systems used obvious metrics like engagement and efficiency which ended up creating negative side effects. But this “narrow alignment” problem seems much easier to address than AGI alignment, because it’s possible to learn from how existing systems behave in the real world. Narrow alignment is worth working on for its own sake, and may also give us critical insight into the general alignment problem.\n\n**What are you optimizing for?**\n\nIf you find the quantitative management of the subjective experience of the members of society troubling, you are not alone. There was [widespread concern](https://www.nytimes.com/2014/06/30/technology/facebook-tinkers-with-users-emotions-in-news-feed-experiment-stirring-outcry.html) when Facebook published a paper on “emotional contagion,” for example. This sort of social engineering at scale has all the problems of large AI systems, plus all the problems of public policy interventions.\n\nMy argument is not so much that one *should* use AI to optimize for well-being. Rather, we live in a world where large-scale optimization is already happening. We can choose not to evaluate or adjust these systems, but there is little reason to imagine that ignorance and inaction would be better. Mistakes will be made if we try to optimize for human happiness by quantitative means, but then, doing nothing is also a mistake. 
Metrics, however [flawed and incomplete](https://arxiv.org/abs/2002.08512), are a fundamental part of a positive future for AI.\n\nIf you are working on the problem of optimizing for the right thing, I’d love to hear from you at [jonathan@partnershiponai.com](mailto:jonathan@partnershiponai.com).", "url": "https://medium.com/partnership-on-ai/aligning-ai-to-human-values-means-picking-the-right-metrics-855859e6f047", "title": "Aligning AI to Human Values means Picking the Right Metrics", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-04-14T22:00:00Z", "authors": ["Jonathan Stray"], "summary": [], "id": "22b639185ec4f520b7e673469ee5a497"} {"text": "*Lessons on publication norms for the AI community from biosecurity*\n--------------------------------------------------------------------\n\n[By Jasmine Wang](https://www.partnershiponai.org/team/jasmine-wang/)\n\n*AI is by no means the first field to face the challenges of responsibly publishing and deploying high-stakes, dual-use research. This post represents the first in a series where we will examine how other fields have dealt with these issues and what the AI community can learn. It is presented as part of the Partnership on AI’s work on Publication Norms. Visit* [*our website here*](https://www.partnershiponai.org/case-study/publication-norms/) *for more information.*\n\nIn the spring of 2012, Ron Fouchier contemplated a decision that could put him in prison for up to six years or cost him over $100,000 USD in fines. A white-haired 45-year-old who spent most days in a concrete research facility in Rotterdam, the Dutch virologist had suddenly become the focus of an international debate about the potential weaponization of influenza. His work, which involved mutating the H5N1 virus to make it more transmissible, had already set off an uproar that reverberated from US national security circles to the halls of the World Health Organization (WHO). Now, if he published his research without the Dutch government’s permission, he was told he could actually go to jail. Increasingly nervous that people perceived him to be a “mad scientist,” Fouchier [told the *New Yorker*](https://www.newyorker.com/magazine/2012/03/12/the-deadliest-virus) at the time that he felt like the subject of “an international witch hunt.”\n\nFouchier’s predicament might seem unimaginable to most scientists, like something out of a nightmare. 
But the chain of events that led him to this moment are worth studying for anyone whose work could pose public risks. This is especially true for those in artificial intelligence (AI) grappling with how to responsibly disseminate research with the potential for misuse, an important facet of publication norms.\n\nAI and Machine Learning (ML) are increasingly being applied across new domains, including ones with safety-critical applications, leading many to ask what responsible AI/ML research, innovation, and deployment look like. In answering these questions, the AI community can and should consider how other fields have approached comparable challenges. Fouchier’s story offers important lessons from the biosecurity community and its long history of debate about publication norms. In particular, **the H5N1 case illustrates the benefits (and inherent limitations) of third-party governance bodies — and, by implication, the importance of individual researcher responsibility.**\n\nH5N1 influenza, otherwise known as bird flu, is a severe respiratory disease. The naturally occurring H5N1 virus, however, rarely infects people and is almost never transmitted to others when it does. [According to Fouchier](https://www.nytimes.com/2011/12/27/science/debate-persists-on-deadly-flu-made-airborne.html), some scientists believed that H5N1 could never become airborne between mammals — he wanted to prove them wrong. [In his own words](https://www.newyorker.com/magazine/2012/03/12/the-deadliest-virus), his team first “mutated the hell out of H5N1.” They then squirted the mutated virus into the nose of a ferret and next implanted that ferret’s nasal fluid into another (and another, and another) ferret’s nose, making it sneeze. The virus spread.\n\nFouchier’s field of inquiry is known as [“gain-of-function” research](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4996883/), which aims to enhance the disease-causing abilities of pathogens to prepare the health community for future outbreaks. In the case of H5N1, which has an exceptionally high death-rate of [around 60 percent](https://www.cdc.gov/flu/avianflu/h5n1-people.htm), the naturally occurring virus lacked a crucial prerequisite to becoming a real pandemic risk: transmissibility. By simply transferring H5N1 from one animal to others, Fouchier had made it highly transmissible. His team had succeeded in making what he called “one of the most dangerous viruses you can imagine” airborne.\n\nFouchier presented his findings at an influenza conference in Malta in September 2011, announcing his intention to publish them in greater detail, which would enable others to replicate his research. After Fouchier submitted his paper to the academic journal *Science*, US health officials became aware of its existence, sending it to the National Science Advisory Board for Biosecurity (NSABB) for review. 
This advisory body was established in the wake of the 2001 anthrax attacks to provide oversight for dual-use biomedical research, [defined by the WHO](https://www.who.int/csr/durc/en/) as scientific work “which is intended for benefit but which might easily be misapplied to do harm.” Fouchier’s ferret experiment — which effectively made a deadly disease even more dangerous — [admittedly fell](https://www.nytimes.com/2011/12/22/health/security-in-h5n1-bird-flu-study-was-paramount-scientist-says.html) under this category.\n\nThe NSABB and its H5N1 Working Group subsequently [spent hundreds of hours](https://www.frontiersin.org/articles/10.3389/fpubh.2014.00117/full#B2) discussing the research. In December 2011, the NSABB unanimously recommended that key methodological detailswhich could enable replication of the experiments be withheld from publication in *Science*. In [a press release](https://www.nih.gov/news-events/news-releases/press-statement-nsabb-review-h5n1-research) announcing the decision, the National Institutes of Health (NIH) said that the US government was also working on a mechanism to grant researchers with a legitimate need access to the redacted information.\n\nThis recommendation received an immediate rejoinder. *Science’s* Editor-in-Chief said that the journal’s response would be “heavily dependent” on the creation of an explicit plan by the US government to share the omitted information with “responsible scientists who request it, as part of their legitimate efforts to improve public health and safety.” Fouchier himself [told the *New York Times*](https://www.nytimes.com/2011/12/22/health/security-in-h5n1-bird-flu-study-was-paramount-scientist-says.html) that around 1000 scientists from more than 100 laboratories worldwide had a need to know this information.\n\nA second expert panel of 22 health officials, scientists, and journal editors convened by the World Health Organization (WHO) came to a far different (if not unanimous) conclusion from the NSABB, calling for full publication. Keiji Fukuda, the assistant director-general of health security and environment at the WHO, cited the difficulty and complexity of creating an information-sharing mechanism as a key rationale.\n\nIt was not just bureaucratic difficulties, but ambiguities about authority, control, and eligibility criteria that concerned the panel. “Who would hold on to the sensitive information?” Fukuda said [at a press conference](https://science.sciencemag.org/content/335/6071/899.full). “Under what conditions would that information be released? What are the other complicating factors? It was recognized that coming up with such a mechanism would be very difficult to do overnight, if not impossible.”\n\nAnthony Fauci, the long-serving director of the National Institute of Allergy and Infectious Diseases who would become familiar to many during the COVID-19 pandemic, urged Fouchier and other H5N1 researchers to declare a voluntary moratorium on their work. Fauci [told the *New York Times*](https://www.nytimes.com/2012/01/21/science/scientists-to-pause-research-on-deadly-strain-of-bird-flu.html) that he viewed a moratorium as an act of good faith during a time of polarized opinion — an important one, given that the controversy could lead to excessive restrictions on future research. The scientists took his advice. In January 2012, 39 prominent influenza researchers from around the world, including Fouchier, announced they were voluntarily pausing H5N1 gain-of-function research for 60 days. 
This moratorium ended up lasting almost a year.\n\nJust months after their unanimous recommendation, the NSABB reversed their position on Fouchier’s paper in March 2012, [voting 12–6 in favor](https://osp.od.nih.gov/wp-content/uploads/2013/06/03302012_NSABB_Recommendations.pdf) of a revised version being published in full. Their reasoning? While the research was still concerning, the revised manuscript did not appear to provide information that would immediately enable misuse. The board also cited the need for freely shared information among countries when responding to international pandemics. A majority of members of the NSABB still believed there was a critical need for a mechanism for disseminating sensitive scientific information. They acknowledged that there were complex questions and legal issues involved in developing such a mechanism, but nonetheless that a “feasible, secure mechanism for sharing sensitive scientific information” was essential, urging the US government to develop one.\n\nThis requested plan for disseminating the papers on a need-to-know basis didn’t materialize then and still does not exist now.\n\nAfter getting a green light from the NSABB and WHO, Fouchier was told of a new challenge. His research needed to be approved for an export license from the Dutch government, which considered the publication of his research to be a potential violation of E.U. regulations aimed at preventing the proliferation of weapons of mass destruction and dual-use technologies. At first, he declined to apply for the export license, opposed to the precedent it would set, and intended to publish his research without one. In late April 2012, however, Fouchier decided to apply for a license, which was granted.\n\nIn the end, Fouchier’s paper appeared [in a special issue](https://science.sciencemag.org/content/336/6088/1521) of *Science* in June 2012. Had he gone through with publishing his research without the export license, the potential penalties included [up to six years in prison or a fine equivalent to $102,000 USD](https://www.nature.com/news/mutant-flu-researcher-plans-to-publish-even-without-permission-1.10469).\n\nEven after acquiescing, Fouchier felt so strongly about the importance of unrestricted and free scientific expression that he continued to challenge the requirement legally. He lost his case, meaning similar papers by Dutch scientists would likely require export licenses. For years, Fouchier contested the verdict in several arenas of increasing authority. Eventually, the ruling was annulled on procedural grounds in July 2015, meaning future research would still be considered on a case-by-case basis.\n\n“I’m disappointed,” Fouchier [told *Science*](https://www.sciencemag.org/news/2015/07/dutch-appeals-court-dodges-decision-hotly-debated-h5n1-papers) at the time, “They didn’t want to touch the hot potato and passed it on instead.”\n\nBack in the US, similar virus research faced increased scrutiny in the wake of the Fouchier controversy. In October 2014, the White House announced an unprecedented “pause” on all federal funding of gain-of-function research involving influenza, MERS, or SARS. The pause was only lifted in December 2017 — with a new provision requiring gain-of-function proposals to be approved by a government panel. “We see this as a rigorous policy,” NIH Director Francis Collins [told the *New York Times*](https://www.nytimes.com/2017/12/19/health/lethal-viruses-nih.html). 
“We want to be sure we’re doing this right.”\n\nFor his part, Fouchier continued working on H5N1, and is currently the deputy head of the Erasmus MC department of Viroscience. A few years after US funding for his research stopped, the NIH [began to support](https://www.sciencemag.org/news/2019/02/exclusive-controversial-experiments-make-bird-flu-more-risky-poised-resume) Fouchier’s research again.\n\nWhile the H5N1 controversy did not settle every issue it raised, the incident as a whole does leave the AI community with four intertwined lessons: **Third-party institutions can result in more well-considered publication outcomes; absent other action, these entities might only be created in response to crises; these entities, however, are inherently limited in their capabilities; and, thus, researchers must exercise some degree of personal responsibility.**\n\n1. Third-party institutions can lead to more well-considered publication outcomes\n---------------------------------------------------------------------------------\n\nThere are two main reasons why third-party institutions like the NSABB can lead to more thoughtful outcomes: They can counterbalance publishing incentives that bias researchers towards publication and they can provide additional expertise and critical context that individual researchers may lack.\n\nAny researcher knows the desperate need to publish. Their reputation — and thus access to funding, collaboration opportunities, and publication venues — depends directly on the quality and quantity of papers they publish. There’s also the drive to advance scientific progress, and the very real possibility of societal benefits from their research. However, in high-stakes work there are inevitably trade-offs that need to be considered, and third-party institutions can counterbalance default publishing incentives, leading to more well-considered outcomes. In the case of H5N1, the NSABB brought up important publication considerations and proposed an alternative publication strategy. The WHO provided additional perspective and challenged the NSABB’s recommendations not out of individual interest, but as part of their mission concerned with public well-being.\n\nA third-party institution, if properly composed, can provide multidisciplinary and security-relevant context on publication decisions. The NSABB was uniquely positioned with “Secret”-level security clearance (ranked only under “Top Secret”-level clearance in the US), allowing them to comment on issues of national security. They thus had additional decision-making context that enabled the assessment of security-relevant features of Fouchier’s research. The NSABB is also [multidisciplinary](http://www.virtualbiosecuritycenter.org/organizations/national-science-advisory-board-for-biosecurity-nsabb/), with as many as 25 voting members drawn variously from the microbiology, public health, national security, biosafety, and scientific publishing communities — in addition to non-voting ex officio members from 15 federal agencies.\n\nThe involvement of the NSABB thereby solved two important issues in responsible research: researcher bias towards publication and lack of domain expertise or critical information to judge risks. Despite AI research becoming increasingly high-stakes, there is no comparable institution. **To provide essential balance to publication decisions, the AI community should explore the creation of a similar body.**\n\n2. 
Absent other action, such entities might only be created in response to crises\n---------------------------------------------------------------------------------\n\nDespite the benefits we’ve observed, most countries do not have a public entity overlooking responsible publication practices. The founding of the NSABB was precipitated by the anthrax attacks of 2001. That the US has such an entity was therefore not inevitable and was entirely path-dependent — most other countries do not have an analogous body.\n\nRather than wait for reactive measures, the AI community should consider establishing a third-party panel of experts as a community resource, which would have the immediate benefits of offering neutrality and multidisciplinarity. Such a centralized entity would also build up useful institutional knowledge and history over time that could be later transferred to any successor entity, government-led or otherwise.\n\n**The NSABB was only established after a serious biosecurity crisis. We should not wait for such a near-miss with AI.**\n\n3. The powers of third-party entities are structurally limited due to the international nature of science and the autonomy of researchers\n-----------------------------------------------------------------------------------------------------------------------------------------\n\nHowever, third-party institutions are not a panacea. As Fouchier’s case demonstrates, even a government body specifically created for the purpose of aiding publication decisions doesn’t completely solve related problems of coordination and the dissemination of information. A prominent philosopher of science, Heather Douglas, [concluded upon analyzing the H5N1 case](https://link.springer.com/article/10.1007/s10670-013-9538-0) that “stronger institutional mechanisms collectivizing the responsibilities of scientists with respect to dual-use research and its potential to cause great societal harm are clearly needed.” In the case of AI, some practices particular to AI research may render (even strong) institutional mechanisms less effective than necessary.\n\nThe AI community should ensure that any publication norms entity it establishes is sufficiently resourced. Not only were the NSABB’s recommendations non-binding, they also did not have the implementation capacity in-house to execute on a key condition of their recommendations being accepted: the ability to share redacted information with scientists with a need to know. Notably, this would have been a novel form of publication — [national policy on fundamental research](https://fas.org/irp/offdocs/nsdd/nsdd-189.htm) previously specified that it should either be openly published or classified. The WHO’s main argument for full publication was the lack of a public agency to execute limited disclosure of Fouchier’s paper. The fact remains that if a case like H5N1 occurred again, we would still find ourselves without the institutional capacity or direct accountability to implement such a mechanism. To be effective, an analogous institution for AI should be able to quickly and flexibly allocate financial and engineering resources to be able to respond adequately to unforeseen publication challenges.\n\nThird-party institutions that are created by the state are geographically limited. A significant reason the NSABB had the influence it did on Ron Fouchier, who was Dutch, was because his research (as well as the institute he worked for) was NIH-funded. 
Additionally, he sought to publish his work in a peer-reviewed journal, *Science*, which had internal guidelines about what was straightforwardly publishable and what was not, and a procedure in place for escalation to the NSABB. Science is inherently a global enterprise, with many interlocking cross-national procedures and components. Those cross-border interdependencies add bureaucracy but also act as partial safeguards in a system where there is no obvious central authority. As the distribution of cutting-edge research work continues to become more globalized, state actors potentially become less influential.\n\nFurthermore, some of the attributes of the scientific system that make such institutions useful do not generalize to AI. AI developers are more likely to publish their papers on arxiv.org, which doesn’t require peer review, and the most important developments in AI increasingly come from top industry labs, not government-funded institutes. Additionally, many AI researchers are employed in industry, where often research is never published due to company interests. Thus, the capabilities of a state-led entity like the NSABB would be even more limited for AI, given the lessened importance of controllable levers like publication, shifting more responsibility to the community and individual researchers.\n\n**These reasons reinforce the earlier recommendation for the AI community to explore the creation of its own third-party entity, which may prove to be more suitable than a state-led one.**\n\n4. Researchers must take on some responsibility for carefully thinking through their publication decisions\n----------------------------------------------------------------------------------------------------------\n\nIndividual researchers cannot entirely offload the responsibility to consider publication impacts to outside entities. In the H5N1 case, actions taken on the part of the scientific community showcased two ways soft norms interacted with hard norms: the researchers’ voluntary moratorium allowed for the development of more well thought-out policy, while Fouchier’s individual actions weakened the impact of any future restrictions placed on his work.\n\nDue to the limitations of third-party institutions discussed above, the AI community must accept that some responsibility to anticipate and mitigate the impacts of their work lies with researchers themselves. This is especially pertinent for AI, where publishing preprints on sites like arxiv.org (bypassing third-party review) is an established norm. It is not only undesirable, but also not possible, for a third-party entity to oversee a researcher’s work and publication decisions in their entirety.\n\nThe nature of scientific collaboration also limits the effectiveness of external mechanisms for information security. After the initial NSABB recommendation for partial redaction, many scientists noted that it might not effectively control information flow. The AI community should consider which stage of research would be an effective point to influence activity, and empower earlier management of research concerns. For example, the ML and neuroscience conference NeurIPS rejected four papers this year on ethical grounds, with seven others flagged for ethical concerns (conditionally accepted). 
If there were existing resources providing guidelines, the authors may have been able to preempt such concerns by proactively adapting their research.\n\n**With increased transparency about the criteria for ethical review, researchers could personally influence the direction, scope, and discussion of their research.**\n\nDrawing lessons from the field of biosecurity might seem like a daunting task for the artificial intelligence community, whose discussions of responsible publication practices are far more nascent. However, we believe insights derived from the H5N1 case are better understood as an opportunity, one to be proactive in advancing a community that supports the development of safe and responsible artificial intelligence for all.\n\n*This post would not be possible without remarkable journalistic and analytical work on this case published by the New York Times and the Nuclear Threat Initiative. We are also deeply grateful to Jack Clark (OpenAI), Dr. Heather Douglas (Michigan State University), Dr. Gregory Lewis (Future of Humanity Institute, Oxford University), and other reviewers for their contributions and feedback on earlier drafts of this piece.*", "url": "https://medium.com/partnership-on-ai/lessons-for-the-ai-community-from-the-h5n1-controversy-32432438a82e", "title": "What the AI Community Can Learn From Sneezing Ferrets and a Mutant Virus Debate", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-12-08T23:00:00Z", "authors": ["Partnership on AI"], "summary": [], "id": "d7481b5617c45e734a1c0f75d5a57c6c"} {"text": "At OpenAI, we’ve recently started using [Universe](https://universe.openai.com/), our software for measuring and training AI agents, to conduct new RL experiments. Sometimes these experiments illustrate some of the issues with RL as currently practiced. In the following example we’ll highlight what happens when a misspecified reward function encourages an RL agent to subvert its environment by prioritizing the acquisition of reward signals above other measures of success.\n\nDesigning safe AI systems will require us to design algorithms that don’t attempt to do this, and will teach us to specify and shape goals in such a way they can’t be misinterpreted by our AI agents.\n\nOne of the games we’ve been training on is [CoastRunners](http://www.kongregate.com/games/longanimals/coast-runners). The goal of the game—as understood by most humans—is to finish the boat race quickly and (preferably) ahead of other players. CoastRunners does not directly reward the player’s progression around the course, instead the player earns higher scores by hitting targets laid out along the route.\n\nWe assumed the score the player earned would reflect the informal goal of finishing the race, so we included the game in an internal benchmark designed to measure the performance of reinforcement learning systems on racing games. However, it turned out that the targets were laid out in such a way that the reinforcement learning agent could gain a high score without having to finish the course. This led to some unexpected behavior when we trained an RL agent to play the game.\n\n![Poster](https://images.openai.com/blob/a8d04ef4-4dd8-444a-9003-651cbeef2ad5/poster.jpg?trim=15,2,109,0&width=10&height=10&quality=50)Play videoThe RL agent finds an isolated lagoon where it can turn in a large circle and repeatedly knock over three targets, timing its movement so as to always knock over the targets just as they repopulate. 
Despite repeatedly catching on fire, crashing into other boats, and going the wrong way on the track, our agent manages to achieve a higher score using this strategy than is possible by completing the course in the normal way. Our agent achieves a score on average 20 percent higher than that achieved by human players.\n\nWhile harmless and amusing in the context of a video game, this kind of behavior points to a more general issue with reinforcement learning: it is often difficult or infeasible to capture exactly what we want an agent to do, and as a result we frequently end up using imperfect but easily measured proxies. Often this works well, but sometimes it leads to undesired or even dangerous actions. More broadly it contravenes the basic engineering principle that systems should be reliable and predictable. We’ve also explored this issue at greater length in our research paper [Concrete Problems in AI Safety](https://openai.com/blog/concrete-ai-safety-problems/).\n\nHow can we avoid such problems? Aside from being careful about designing reward functions, several research directions OpenAI is exploring may help to reduce cases of misspecified rewards:\n\n* Learning from demonstrations allows us to avoid specifying a reward directly and instead just learn to imitate how a human would complete the task. In this example, since the vast majority of humans would seek to complete the racecourse, our RL algorithms would do the same.\n* In addition to, or instead of, human demonstrations, we can also incorporate [human feedback](https://medium.com/ai-control/efficient-feedback-a347748b1557#.exjnsupts) by evaluating the quality of episodes or even sharing control with the agent in an interactive manner. It’s possible that a very small amount of evaluative feedback might have prevented this agent from going around in circles.\n* It may be possible to use transfer learning to train on many similar games, and infer a “common sense” reward function for this game. Such a reward function might prioritize finishing the race based on the fact that a typical game has such a goal, rather than focusing on the idiosyncrasies of this particular game’s reward function. This seems more similar to how a human would play the game.\n\nThese methods may have their own shortcomings. For example, transfer learning involves extrapolating a reward function for a new environment based on reward functions from many similar environments. This extrapolation could itself be faulty—for example, an agent trained on many racing video games where driving off the road has a small penalty might incorrectly conclude that driving off the road in a new, higher-stakes setting is not a big deal. More subtly, if the reward extrapolation process involves neural networks, [adversarial examples](https://arxiv.org/abs/1412.6572) in that network could lead to a reward function that has “unnatural” regions of high reward that do not correspond to any reasonable real-world goal.\n\nSolving these issues will be complex. 
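A toy calculation, with invented numbers and not the actual CoastRunners environment, makes the gap between the proxy reward and the intended objective easy to see:

```python
# Toy illustration of a misspecified reward: the agent optimizes in-game score
# (a proxy), while the designers intended "finish the race, quickly."

def proxy_return(targets_hit: int, points_per_target: float = 10.0) -> float:
    # What the RL agent is actually optimizing: in-game score.
    return targets_hit * points_per_target

def intended_return(finished_race: bool, finish_time_s: float) -> float:
    # What the designers meant: finish the race, and prefer faster finishes.
    return 100.0 - finish_time_s if finished_race else 0.0

# Policy A completes the course normally; policy B circles a lagoon of
# respawning targets and never finishes.
policy_a = {"targets_hit": 12, "finished_race": True, "finish_time_s": 55.0}
policy_b = {"targets_hit": 40, "finished_race": False, "finish_time_s": float("inf")}

for name, p in [("finishes race", policy_a), ("loops lagoon", policy_b)]:
    print(name,
          "| proxy:", proxy_return(p["targets_hit"]),
          "| intended:", intended_return(p["finished_race"], p["finish_time_s"]))
# The proxy prefers the looping policy even though the intended objective does not.
```

The approaches listed above can all be read as ways of pulling the proxy back toward the intended objective rather than hand-specifying it more precisely.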
Our hope is that Universe will enable us to both discover and address new failure modes at a rapid pace, and eventually to develop systems whose behavior we can be truly confident in.\n\n*Get in touch with the authors of this post:*[*Dario*](mailto:damodei@openai.com?Subject=Faulty%20Reward%20Functions%20in%20the%20Wild)*,*[*Jack*](mailto:jack@openai.com?Subject=Faulty%20Reward%20Functions%20in%20the%20Wild)", "url": "https://openai.com/blog/faulty-reward-functions/", "title": "Faulty Reward Functions in the Wild", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2016-12-21T23:00:00Z", "authors": ["Jack Clark", "Dario Amodei"], "summary": [], "id": "71f509eeb543dea97232f76a18b24c1b"} {"text": "While a growing number of organizations have articulated ethics principles to guide their AI development process, it can be difficult for those outside of an organization to verify whether the organization’s AI systems reflect those principles in practice. This ambiguity makes it harder for stakeholders such as users, policymakers, and civil society to scrutinize AI developers’ claims about properties of AI systems and could fuel competitive corner-cutting, increasing social risks and harms. The report describes existing and potential mechanisms that can help stakeholders grapple with questions like:\n\n* Can I (as a user) verify the claims made about the level of privacy protection guaranteed by a new AI system I’d like to use for machine translation of sensitive documents?\n* Can I (as a regulator) trace the steps that led to an accident caused by an autonomous vehicle? Against what standards should an autonomous vehicle company’s safety claims be compared?\n* Can I (as an academic) conduct impartial research on the risks associated with large-scale AI systems when I lack the computing resources of industry?\n* Can I (as an AI developer) verify that my competitors in a given area of AI development will follow best practices rather than cut corners to gain an advantage?\n\nThe 10 mechanisms highlighted in the report are listed below, along with recommendations aimed at advancing each one. (See the [report](https://arxiv.org/abs/2004.07213) for discussion of how these mechanisms support verifiable claims as well as relevant caveats about our findings.) \n\n\n**Institutional Mechanisms and Recommendations**\n\n1. **Third party auditing**. A coalition of stakeholders should create a task force to research options for conducting and funding third party auditing of AI systems.\n2. **Red teaming exercises**. Organizations developing AI should run red teaming exercises to explore risks associated with systems they develop, and should share best practices and tools.\n3. **Bias and safety bounties**. AI developers should pilot bias and safety bounties for AI systems to strengthen incentives and processes for broad-based scrutiny of AI systems.\n4. **Sharing of AI incidents**. AI developers should share more information about AI incidents, including through collaborative channels.\n\n \n**Software Mechanisms and Recommendations**\n\n1. **Audit trails**. Standard setting bodies should work with academia and industry to develop audit trail requirements for safety-critical applications of AI systems.\n2. **Interpretability**. Organizations developing AI and funding bodies should support research into the interpretability of AI systems, with a focus on supporting risk assessment and auditing.\n3. **Privacy-preserving machine learning**. 
AI developers should develop, share, and use suites of tools for privacy-preserving machine learning that include measures of performance against common standards.\n\n \n\n\n**Hardware Mechanisms and Recommendations**\n\n1. **Secure hardware for machine learning**. Industry and academia should work together to develop hardware security features for AI accelerators or otherwise establish best practices for the use of secure hardware (including secure enclaves on commodity hardware) in machine learning contexts.\n2. **High-precision compute measurement**. One or more AI labs should estimate the computing power involved in a single project in great detail and report on lessons learned regarding the potential for wider adoption of such methods.\n3. **Compute support for academia**. Government funding bodies should substantially increase funding for computing power resources for researchers in academia, in order to improve the ability of those researchers to verify claims made by industry.\nWe and our co-authors will be doing further research on these mechanisms and OpenAI will be looking to adopt several of these mechanisms in the future. We hope that this report inspires meaningful dialogue, and we are eager to discuss additional institutional, software, and hardware mechanisms that could be useful in enabling trustworthy AI development. We encourage anyone interested in collaborating on these issues to connect with the corresponding authors and visit the [report website](http://www.towardtrustworthyai.com/).", "url": "https://openai.com/blog/improving-verifiability/", "title": "Improving Verifiability in AI Development", "source": "html_articles", "source_type": "webpage", "source_filetype": "pdf", "date_published": "2020-04-15T22:00:00Z", "authors": ["OpenAI"], "summary": [], "id": "6fc7e14154a67a7d67dbba01634dd68f"} {"text": "Evaluating Arguments One Step at a Time\n=======================================\n\nBy [Ought](/cdn-cgi/l/email-protection#a8c9c6ccdacdc9dbe8c7ddcfc0dc86c7dacf)January 11, 2020Summary\n\nWe’re studying [factored cognition](/research/factored-cognition): under what conditions can a group of people accomplish complex cognitive tasks if each person only has minimal context?\n\nIn a recent experiment, we focused on dividing up the task of evaluating arguments. We created short, structured arguments for claims about movie reviews. We then tried to distinguish valid from invalid arguments by showing each participant only one step of the argument, not the review or the other steps.\n\nIn this experiment, we found that:\n\n1. Factored evaluation of arguments can distinguish some valid from invalid arguments by identifying implausible steps in arguments for false claims.\n2. However, experiment participants disagreed a lot about whether steps were valid or invalid. This method is therefore brittle in its current form, even for arguments which only have 1–5 steps.\n3. More diverse argument and evidence types (besides direct quotes from the text), larger trees, and different participant guidelines should improve results.\n\nIn this technical progress update, we describe these findings in depth.\n\nMethods\n-------\n\n\n### Representing arguments as claim trees\n\n\nIn each trial of our experiment, we first sample a random [Roger Ebert](https://www.rogerebert.com/reviews) movie review. An *expert* is instructed to read the entire review and then generate a *root claim* about the review that is either at least 90% likely to be true or at least 90% likely to be false. 
For example:\n\n\n\n> \n> The film takes a progressive stance on gender relations.\n> \n> \n> \n\n\nThe expert then builds a claim tree of evidence that contains: \n\n\n* *Subclaims* that support the root claim.\n* *Quotes* from the text supporting each subclaim.\n\n\nA different expert also reads the text and contributes rebuttals, quotes from the text that are intended to undermine each step of the tree. These rebuttal quotes might show that:\n\n\n* The root claim is not supported in the text.\n* The subclaims are not supported in the text.\n* The quotes supporting the subclaim are taken out of context.\n\n\nEach claim tree has a *depth* (see Figure 1):\n\n\n* A depth-0 claim tree is just the root claim, nothing else.\n* A depth-1 claim tree has the root claim, quotes supporting the root claim, and rebuttal quotes refuting the root claim.\n* A depth-2 claim tree has the root claim, subclaims, quotes supporting each of the subclaims, and rebuttal quotes.\n* In principle, we can generate arbitrarily large claim trees using nested subclaims and rebutting claims. In this experiment, we use at most depth 2.\n\n\n\n[![Figure 1](https://d33wubrfki0l68.cloudfront.net/704e6682962767d4a8c245f47a74354170018afe/9f085/images/blog/2020-01-11-arguments/figure-1-trees.png \"Figure 1\")](/images/blog/2020-01-11-arguments/figure-1-trees.png)\nFigure 1: Structure of the claim trees used in this experiment.\n\n[![Figure 2](https://d33wubrfki0l68.cloudfront.net/c2a85537dde808be320efd0ecacf7faac5c6ebd4/566d6/images/blog/2020-01-11-arguments/figure-2-example.png \"Figure 2\")](/images/blog/2020-01-11-arguments/figure-2-example.png)\nFigure 2: Claim tree example. More [here](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md).\n### Factored evaluation of arguments\n\n\nTo test whether we can correctly evaluate if a claim about a text is true or false by aggregating the work of context-limited *judges*, we show each step of the claim tree to an experiment participant without showing them the other steps. We then ask how likely the step is to be *valid* by asking how likely the claim (or subclaim) is to be true given the subclaims (or quotes).\n\n\nThis distributed, stepwise evaluation is the main difference compared to Debate as described by [Irving et al (2018)](https://arxiv.org/abs/1805.00899). In Debate, the judges usually look at the entire debate before they declare a winner.\n\n\n#### Evaluating root claims given subclaims\n\n\nSome participants saw the root claim, the subclaims, and the rebuttals. They had to answer “Assuming all of the subclaims are true, how likely is the root claim to be true?” with a credence from 0% to 100% likely to be true. \n\n\n\n[![Figure 3a](https://d33wubrfki0l68.cloudfront.net/35a0ed28f9ea23d5c5092878d628a42ebc0d35ce/3df17/images/blog/2020-01-11-arguments/figure-3a-subclaim-claim.png \"Figure 3a\")](/images/blog/2020-01-11-arguments/figure-3a-subclaim-claim.png)\n\n[![Figure 3b](https://d33wubrfki0l68.cloudfront.net/6ec2b7eacbf39a59f41f3ac629dc2a7c632bf0fe/3c844/images/blog/2020-01-11-arguments/figure-3b-subclaim-claim.png \"Figure 3b\")](/images/blog/2020-01-11-arguments/figure-3b-subclaim-claim.png)\nFigure 3: A judge evaluates the credibility of a root claim given subclaims. Position of argument step (top) and simplified screenshot (bottom).\n#### Evaluating subclaims given quotes\n\n\nOther participants saw one subclaim, the supporting quotes, and the rebuttals. 
They had to answer “Given these quotes, how likely is the subclaim to be true?” with a credence from 0% to 100% likely to be true.\n\n\n\n[![Figure 4a](https://d33wubrfki0l68.cloudfront.net/63ab240c85e81ecbb87c027f358e183dfcc8ef53/db6c3/images/blog/2020-01-11-arguments/figure-4a-quote-subclaim.png \"Figure 4a\")](/images/blog/2020-01-11-arguments/figure-4a-quote-subclaim.png)\n\n[![Figure 4b](https://d33wubrfki0l68.cloudfront.net/23cf7604fe41c7b8fe6c4dcc0ff7813efc144c84/031a1/images/blog/2020-01-11-arguments/figure-4b-quote-subclaim.png \"Figure 4b\")](/images/blog/2020-01-11-arguments/figure-4b-quote-subclaim.png)\nFigure 4: A judge evaluates the credibility of a subclaim given quotes. Position of argument step (top) and simplified screenshot (bottom).\n#### Claim tree evaluation as binary classification\n\n\nWe want to measure how well factored evaluation (FE) can distinguish true and false claims. The [ground truth](https://en.wikipedia.org/wiki/Ground_truth) is provided by the experts who read the whole text. We use the following definitions:\n\n\n1. Our ground truth is that a claim is *true* if an expert assigns it a >=90% probability of being true, and false if they assign it a >=90% probability of being false\n2. FE *classifies a claim as true* if judges evaluate all steps in its claim tree as likely valid, and false otherwise.\n\n\nNote that (2) specifies a \"weakest-link\" semantics. If judges think a single step in the claim tree is likely invalid, then FE classifies the claim as false.\n\n\n\n[![Figure 5](https://d33wubrfki0l68.cloudfront.net/cba2d98b23c471da0dcc3d8a1c5a25a10973e875/e0239/images/blog/2020-01-11-arguments/figure-5-classification.png \"Figure 5\")](/images/blog/2020-01-11-arguments/figure-5-classification.png)\nFigure 5: Examples of claim trees classified as true and false by FE. The green check mark indicates that judges evaluated the step as likely valid, and the orange cross as likely invalid.\nHaving defined how FE of arguments works as a binary classifier, we can use standard [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix) metrics to understand its performance. We highlight the two kinds of errors that are possible:\n\n\n* False positives: A false positive occurs when the expert thinks a root claim is false but FE classifies it as true.\n* False negatives: A false negative occurs when the expert judges a root claim to be true but FE classifies it as false.\n\n\nThe confusion matrix allows us to calculate:\n\n\n* Recall: What fraction of true claims are classified as true?\n* Precision: What fraction of claims classified as true are in fact true?\n* Accuracy: What fraction of all classifications (both true and false) were correct?\n\n\nWe care most about high precision, and only maximize recall subject to that; reducing false positives matters most to us.\n\n\n#### Ensembling step judgments\n\n\nFor each step, participants estimated the likelihood that the (sub)claim was true given the evidence. At least 4 different participants evaluated each step. \n\n\nTo decide whether a claim tree is valid, we need to specify two parameters:\n\n\n1. Ensembling percentile: take the n-th percentile judgment. For example, the 25th percentile of judgments [10%, 20%, 30%, 40%] is 17.5% with [linear interpolation](https://docs.scipy.org/doc/numpy/reference/generated/numpy.percentile.html).\n2. Threshold: if the ensembled judgment is above this threshold, call the step valid. 
A claim tree is valid if all steps are valid.\n\n\nData\n----\n\n\nWe generated a dataset of 51 claim trees about 51 Roger Ebert movie reviews. Each claim tree had a depth-0, depth-1, and a depth-2 version. For 49 trees we have 4+ judgments for each step in the tree. We restrict the analysis below to these 49 trees.\n\n\nWe collected 2,722 judgments, split as follows:\n\n\n\n\n| | | Depth 0 | Depth 1 | Depth 2 | |\n| --- | --- | --- | --- | --- | --- |\n| Trees | | 49 | 49 | 49 | |\n| - Trees with true root claims | | 24 | 24 | 24 | |\n| - Trees with false root claims | | 25 | 25 | 25 | |\n| Steps | | 49 | 49 | 207 | |\n| Mean steps per tree | | 1 | 1 | 4.2 | |\n| Judgments | | 826 | 344 | 1552 | |\n| Mean judgments per step | | 18.86 | 7.02 | 7.50 |\n\n\nTable 1: Descriptive stats of the claim tree dataset.\nEach judgment comes with a natural language explanation by the judge as shown above in Figures 3 and 4.\n\n\nYou can access the full dataset (in json) with all trees, evaluations, and participant explanations [here](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data.json). Trees in human-readable format are [here](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md). \n\n\nAnalysis\n--------\n\n\nWe present exploratory analysis to synthesize what we learned from this experiment. These results have methodological limitations that we discuss in the [Appendix](#appendix), but we hope that the analysis provides a framework for informing and evaluating future experiments. \n\n\n### Summary\n\n\n1. Factored evaluation of arguments can distinguish between some valid and invalid arguments.\n\t1. The depth-0 baseline was at chance in distinguishing true and false root claims.\n\t2. Both depth-1 and depth-2 evaluations exceed this baseline across a range of parameters.\n\t3. For false root claims, factored evaluation of depth-2 claim trees is more likely to identify at least one step in the tree that is unlikely to be valid.\n2. However, high variance in judgments across participants leads to brittle performance. Performance is sensitive to the ensembling parameters.\n3. By analyzing false positives and false negatives, we’ve identified specific problems that we can address.\n\t1. To reduce false positives, we can let the rebuttals include explanations in addition to quotes and instruct claim tree creators to reduce the complexity of individual steps.\n\t2. To reduce false negatives, we can improve quality control for root claims to ensure that they are indeed clearly true or false and increase tree depth to support arguments in cases where evidence is less direct.\n\n\n### Factored evaluation can distinguish some valid from invalid arguments\n\n\n#### The depth-0 baseline is at chance\n\n\nWe confirmed that participants did not have strong prior beliefs about our root claims that could have influenced how they evaluated the tree. At depth 0, the median credence for both false and true claims was 50%. 
In a binary forced-choice task, participants guessed the truth of the claim correctly for 20 of the 51 trees.\n\n\n#### Depth-1 and depth-2 evaluations exceed the depth-0 baseline\n\n\nCompared to depth 0, depths 1 and 2 result in more accurate evaluations across a range of parameter settings, as shown by the fact that there are parameter settings with above-chance accuracy (light blue pixels below).\n\n\n\n[![Figure 6](https://d33wubrfki0l68.cloudfront.net/45158fbbd37e28a8a6864152bbdee868b1f944bf/e269e/images/blog/2020-01-11-arguments/figure-6-accuracy-by-depth.png \"Figure 6\")](/images/blog/2020-01-11-arguments/figure-6-accuracy-by-depth.png)\nFigure 6: Accuracy as a function of the ensembling parameters. Depth 0 is at chance. For depth 1 and 2, there are parameter settings with moderate accuracy.\nPreviously, we said that we’re most interested in high precision (few false positives). If we optimize the threshold and percentile parameters independently for each depth to maximize recall subject to >80% precision, we get the following confusion matrices:\n\n\n\n\n| | | Depth 0 | Depth 1 | Depth 2 | |\n| --- | --- | --- | --- | --- | --- |\n| *Percentile* | | *72* | *30* | *1* | |\n| *Threshold* | | *59* | *69* | *23* | |\n| Confusion matrix | | `2 TP 23 FN``0 FP 26 TN` | `9 TP 16 FN``2 FP 24 TN` | `10 TP 14 FN``_1 FP 24 TN` | |\n| Precision | | 1.0 | 0.82 | 0.91 | |\n| Recall | | 0.08 | 0.36 | 0.42 | |\n| Accuracy | | 0.55 | 0.65 | 0.69 |\n\n\nTable 2: Classification performance for each tree depth with posthoc fitting of ensembling parameters.\nIn our analysis below, we’ll focus on depth 2. Our results don’t show whether depths 1 and 2 differ significantly. Evaluating where depth-1 evaluations differ from evaluations at higher depths is among the most interesting pieces of follow-up work.\n\n\n#### Factored evaluation identifies invalid steps in some claim trees for false claims\n\n\nThe figure below shows the ensembled judgments for each step of each depth-2 tree, separating trees with false root claims (left) from trees with true root claims (right).\n\n\nEach row represents a claim tree. Each dot across that row represents the 25th percentile of all judgments for one step in that tree; each dot represents a different evaluation step. The leftmost dot is therefore the “weakest step” of this tree. We see that the judgments for false root claims skew more left than those for true claims. There is only one tree with an ensembled judgment below 25% percent for true root claims, whereas there are 9 such trees for false root claims.\n\n\n\n[![Figure 7](https://d33wubrfki0l68.cloudfront.net/48cf9a0777e3460eb6ac6ccd113103cc19eeea25/059f2/images/blog/2020-01-11-arguments/figure-7-weakest.png \"Figure 7\")](/images/blog/2020-01-11-arguments/figure-7-weakest.png)\nFigure 7: Ensembled step judgments for each tree. There are more steps that judges consider very weak in trees with false root claims, but the difference isn't large.\n### High variance in judgments leads to brittle performance\n\n\n#### Step judgments have high variance\n\n\nWe expected that ensembling would be crucial for smoothing out occasional errors, but learned that evaluations across participants vary widely. 
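Since so much hinges on how step judgments are ensembled, a minimal sketch may help make the procedure concrete. The code below is illustrative only: the function name, the example tree, and all numbers are made up rather than taken from our data, and it simply applies the percentile ensembling and weakest-link rule described above.

```python
import numpy as np

def classify_tree(step_judgments, percentile=25, threshold=50):
    """Weakest-link rule: a tree is classified as true only if every step's
    ensembled judgment is above the threshold.

    step_judgments: one list of credences (0-100) per step of the tree.
    """
    ensembled = [np.percentile(judgments, percentile) for judgments in step_judgments]
    return all(e > threshold for e in ensembled)

# Worked example from above: the 25th percentile of [10, 20, 30, 40]
# is 17.5 with linear interpolation.
print(np.percentile([10, 20, 30, 40], 25))  # 17.5

# A depth-2 tree with one weak step is classified as false.
tree = [
    [60, 75, 80, 90],  # subclaims -> root claim
    [55, 70, 85, 90],  # quotes -> subclaim 1
    [10, 20, 30, 40],  # quotes -> subclaim 2 (the weakest step)
]
print(classify_tree(tree))  # False, because 17.5 is not above 50
```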
Below we show the participant-submitted credences for all 207 steps in the 49 depth-2 claim trees:\n\n\n\n[![Figure 8](https://d33wubrfki0l68.cloudfront.net/805aed537b39fbd724032027120e4aee642046dc/78e57/images/blog/2020-01-11-arguments/figure-8-steps.png \"Figure 8\")](/images/blog/2020-01-11-arguments/figure-8-steps.png)\nFigure 8: All individual judgments (light gray) for all steps of all depth-2 trees, both quotes to subclaim and subclaim to root claim. The green dots show the credence the expert assigned to the corresponding (sub-)claim.\nEach row represents a step from our 49 depth-2 claim trees. The two charts are split out by true and false claims: on the left are the steps evaluating claims (both subclaims and root claims) that experts estimated to be less than 50% likely to be true. On the right are steps evaluating claims that experts estimated to be at least 50% likely to be true. The green dots indicate the credence that the expert assigned the claim (all the green dots in the left chart are below 50%, all the green dots on the right are above 50%). \n\n\nThis plot suggests a few things: \n\n\n* Judges' credences vary widely for each step; different judges disagree a lot on whether the same claim is likely to be true or false given the evidence.\n* Ensembling matters, as we discuss below. We’d have a higher false positive rate if we used the mean of the judgments (vertical black bar) as the overall evaluation for a step.\n* As we’d hope, participants give higher credences for claims that are true than for claims that are false.\n* Participants generally seem to think that claims are true. The means of most judgments exceed 50%, even for claims that experts generated as false.\n* Experts (claim generators) think that their false claims are more obviously false than the judges do (green dots on the left chart skew further left than the judgments).\n* Some true claims seem obviously true to everyone. Some true claims have very high mean judgments with low variance. We do not see the same for false claims.\n\n\n#### Performance is sensitive to ensembling parameters\n\n\nA limitation of our work is that the accuracy metrics and confusion matrix are highly sensitive to the ensembling parameters (judgment percentile, threshold for ensembled credence). For example, here are three settings and the corresponding metrics:\n\n\n\n\n| | | Parameter setting 1 | Parameter setting 2 | Parameter setting 3 | |\n| --- | --- | --- | --- | --- | --- |\n| *Percentile* | | *1st* | *25th* | *50th* | |\n| *Threshold* | | *29%* | *50%* | *75%* | |\n| Precision | | 0.90 | 0.60 | 0.60 | |\n| Recall | | 0.38 | 0.50 | 0.25 | |\n| Accuracy | | 0.67 | 0.59 | 0.55 |\n\n\nTable 3: Different ensembling parameters lead to different metrics.\nIf we visualize the space of all parameter settings, we see that high values of precision and accuracy (light blue pixels) are sparse:\n\n\n\n[![Figure 9](https://d33wubrfki0l68.cloudfront.net/3348703adaa5dd529d8c2eb1ca0ec72004627760/e7340/images/blog/2020-01-11-arguments/figure-9-sensitivity.png \"Figure 9\")](/images/blog/2020-01-11-arguments/figure-9-sensitivity.png)\nFigure 9: Ensembling parameters that lead to high precision and accuracy are sparse.\n### Qualitative analysis of false positives and false negatives\n\n\nFor the sensitivity reasons presented above, summary statistics present an imperfect picture of the experiment. 
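To see that sensitivity directly, one can sweep the two ensembling parameters and recompute the metrics at each setting, in the spirit of Table 3 and Figure 9. The sketch below is self-contained but uses made-up judgments and ground-truth labels; in a real analysis they would come from the data.json file linked in the Data section, and the function and variable names here are ours, not part of our experiment code.

```python
import numpy as np

# Toy stand-in data: per-step judgments (0-100) and the expert label per tree.
trees = [
    {"steps": [[70, 80, 85, 90]], "label": True},
    {"steps": [[20, 30, 55, 60], [65, 75, 80, 90]], "label": False},
    {"steps": [[60, 65, 70, 95], [50, 55, 85, 90]], "label": True},
    {"steps": [[15, 25, 35, 45]], "label": False},
]

def metrics(percentile, threshold):
    """Precision, recall, and accuracy of the weakest-link classifier."""
    tp = fp = fn = tn = 0
    for tree in trees:
        ensembled = [np.percentile(j, percentile) for j in tree["steps"]]
        predicted_true = all(e > threshold for e in ensembled)
        if predicted_true and tree["label"]:
            tp += 1
        elif predicted_true and not tree["label"]:
            fp += 1
        elif not predicted_true and tree["label"]:
            fn += 1
        else:
            tn += 1
    precision = tp / (tp + fp) if tp + fp else float("nan")
    recall = tp / (tp + fn) if tp + fn else float("nan")
    accuracy = (tp + tn) / len(trees)
    return precision, recall, accuracy

# Sweep the two ensembling parameters and watch the metrics move around.
for percentile in (1, 25, 50):
    for threshold in (30, 50, 70):
        p, r, a = metrics(percentile, threshold)
        print(f"percentile={percentile:2d} threshold={threshold:2d} "
              f"precision={p:.2f} recall={r:.2f} accuracy={a:.2f}")
```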
In this section, we dive deeper into the trees to qualitatively understand why incorrect evaluations occur and how we can reduce the presence of false positives and negatives in future experiment iterations. For the sake of the qualitative analysis below, we choose to call a tree valid if the 25th percentile judgment exceeds threshold 50. \n\n\nWe find that:\n\n\n1. False positives are primarily caused by\n\t1. missing evidence that was difficult to highlight in a rebuttal quote\n\t2. individual steps with high complexity, leading to judge mistakes\n2. False negatives are primarily caused by\n\t1. mistakes the experts made in choosing root claims\n\t2. indirect evidence that is difficult to distill into small claim tree\n\n\nWe propose ways to mitigate each of these causes, but haven't implemented the mitigation strategies yet so can't be confident that they would work.\n\n\n#### False positives\n\n\n9 of the 49 claim trees we evaluated were false positives under the threshold set above. Factored evaluation returned that the root claim was true when in fact it was not. The table below summarizes the most common reasons for this incorrect evaluation, followed by discussion of the top two reasons.\n\n\n\n\n\n| Reason for false positive | # of trees | Trees | Ways to mitigate | |\n| --- | --- | --- | --- | --- |\n| Judges overlooked the absence of key evidence. Judges failed to notice that an aspect of the (sub)claim is not supported by evidence | 8 | [26](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#26-httpswwwrogerebertcomreviewsjawbreaker-1999), [27](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#27-httpswwwrogerebertcomreviewssalaam-bombay-1988), [28](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#28-httpswwwrogerebertcomreviewsthe-east-2013), [29](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#29-httpswwwrogerebertcomreviewsabandon-2002), [30](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#30-httpswwwrogerebertcomreviewscyborg-1989), [31](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#31-httpswwwrogerebertcomreviewsdo-not-resist-2016), [32](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#32-httpswwwrogerebertcomreviewsto-live-1994), [33](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#33-httpswwwrogerebertcomreviewsfootloose-1984) | Allow rebuttal via claims in addition to quotes. 
Better judge instructions and training | |\n| Individual steps with high complexity | 8 | [27](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#27-httpswwwrogerebertcomreviewssalaam-bombay-1988), [28](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#28-httpswwwrogerebertcomreviewsthe-east-2013), [29](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#29-httpswwwrogerebertcomreviewsabandon-2002), [30](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#30-httpswwwrogerebertcomreviewscyborg-1989), [31](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#31-httpswwwrogerebertcomreviewsdo-not-resist-2016), [32](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#32-httpswwwrogerebertcomreviewsto-live-1994), [33](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#33-httpswwwrogerebertcomreviewsfootloose-1984), [34](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#34-httpswwwrogerebertcomreviewsserial-1980) | More practice and feedback, reduce complexity and difficulty of individual steps (requiring larger trees) | |\n| Evidence for claim is ambiguous or uses figurative language, so it’s hard to rebut and judge | 3 | [29](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#29-httpswwwrogerebertcomreviewsabandon-2002), [30](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#30-httpswwwrogerebertcomreviewscyborg-1989), [34](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#34-httpswwwrogerebertcomreviewsserial-1980) | Allow rebuttal via claims in addition to quotes. Train participants on dealing with figurative language | |\n| Root claim not necessarily false | 3 | [27](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#27-httpswwwrogerebertcomreviewssalaam-bombay-1988), [29](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#29-httpswwwrogerebertcomreviewsabandon-2002), [30](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#30-httpswwwrogerebertcomreviewscyborg-1989) | Quality control for root claim generation (multiple people review the root claim before building out a tree.) | |\n| Rebuttals poorly chosen | 2 | [29](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#29-httpswwwrogerebertcomreviewsabandon-2002), [34](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#34-httpswwwrogerebertcomreviewsserial-1980) | Quality control for rebuttals |\n\n\nTable 4: Reasons for false positives. A tree is listed for a reason if the reason seemed to make a substantial contribution to the tree being a false positive. Trees may be listed for more than one reason.\n##### Claims with ambiguous or figurative quotes are difficult to rebut and judge\n\n\nMany claims had the following structure: The review text *t* contains a short quote *q* that provides strong evidence for the false claim *S* absent further context:\n\n\n*P(S | q) > 0.6*\n\n\nHowever, given access to *t*, the evidence from *q* is cancelled or explained away:\n\n\n*P(S | q, t) < 0.1*\n\n\nYet it's not easy to extract a 200-character quote from t that would do the same cancelling. In some cases, there is a slightly longer quote (say 300 characters) that would be sufficient. 
In other cases, the relevant context from t is distributed throughout the review, which might total 5000 characters. The ambiguity in *q* sometimes resulted from figurative language or irony, where the literal interpretation was more plausible without context. \n\n\nFor example, consider the claim (simplified from [source](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#28-httpswwwrogerebertcomreviewsthe-east-2013)): \n\n\n\n> \n> \"the film features a group of high-school activists\"\n> \n> \n> \n\n\nThe supporting quotes are:\n\n\n1. \"activists are rich kids angry at their parents\"\n2. \"becomes a movie about slumming brats\"\n\n\nThe words \"kids\" and \"brats\" could either refer to high-school students or to 20-something adults who are immature (which is what the reviewer intends). Various statements in the review suggest the group are not high-school age (e.g. one is a college graduate) but it's hard to condense these statements into a short quote.\n\n\n##### Individual steps are complex. Spotting the absence of key evidence is difficult\n\n\nFactored cognition aims to break down cognitive work into small pieces, making each step of work easier to check and automate. Unfortunately, many steps in this experiment still required judges to do complex things like evaluating subtleties of the relationship between the evidence and the rebuttal, or weighing fine points of the phrasing or implications of a claim. \n\n\nOne source of complexity that confused judges on multiple trees was subclaims that seemed to justify a root claim but that actually contain an unjustified logical leap.\n\n\nFor example, consider the claim (simplified from [source](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#28-httpswwwrogerebertcomreviewsthe-east-2013)): \n\n\n\n> \n> “The reviewer expresses that they are glad that the movie’s political statement was distributed globally despite the movie being banned in China”\n> \n> \n> \n\n\nSubclaims:\n\n\n1. “The reviewer emphasizes the movie’s political statements about the interpretation of Chinese history”\n2. “The reviewer emphasizes the movie’s criticism of Chinese Communism”\n3. “The reviewer describes the movie’s politically charged reception”\n4. “The reviewer expresses excitement that the movie is spreading despite being banned in China”\n\n\nThe subclaims appear to support the claim, but actually they include no evidence that the reviewer was specifically glad to see that the movie’s political statement was distributed globally, as opposed to other aspects of the film. And in fact, the aspect of the movie that the reviewer was excited to see disseminated was its artistic take on history, not its political statements. Judges overlooked the missing evidence, and it was impossible for the rebuttal to directly point out the flaw in the evidence because the rebutter could not express the flaw in direct quotes from the text.\n\n\n#### False negatives\n\n\n12 of the 49 claim trees we evaluated were false negatives. Factored evaluation returned that the root claim was false when in fact it was true. 
The table summarizes the most common reasons:\n\n\n\n\n\n| Reason for false negative | # of trees | Trees | Ways to mitigate | |\n| --- | --- | --- | --- | --- |\n| Evidence for root claim is ambiguous or indirect, so it’s hard to make a convincing small tree | 6 | [13](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#13-httpswwwrogerebertcomreviewssalomes-last-dance-1988), [14](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#14-httpswwwrogerebertcomreviewslittle-women-2018), [19](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#19-httpswwwrogerebertcomreviewsthe-end-of-the-tour-2015), [20](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#20-httpswwwrogerebertcomreviewsfemme-fatale-2002), [21](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#21-httpswwwrogerebertcomreviewsoasis-supersonic-2016), [23](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#23-httpswwwrogerebertcomreviewsi-dont-feel-at-home-in-this-world-anymore-2017) | Increase tree depth | |\n| Truth of root claim is ambiguous | 5 | [15](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#15-httpswwwrogerebertcomreviewsthings-to-do-in-denver-when-youre-dead-1996), [20](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#20-httpswwwrogerebertcomreviewsfemme-fatale-2002), [21](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#21-httpswwwrogerebertcomreviewsoasis-supersonic-2016), [22](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#22-httpswwwrogerebertcomreviewsthe-arrangement-1969), [23](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#23-httpswwwrogerebertcomreviewsi-dont-feel-at-home-in-this-world-anymore-2017) | Quality control for root claim generation through more ensembling or review | |\n| Claim tree didn’t provide clear enough evidence for claim (claim tree creator mistake) | 3 | [17](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#17-httpswwwrogerebertcomreviewsthe-plagiarists-2019), [18](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#18-httpswwwrogerebertcomreviewsten-thousand-saints-2015), [23](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#23-httpswwwrogerebertcomreviewsi-dont-feel-at-home-in-this-world-anymore-2017) | Quality control for tree construction through more ensembling or review | |\n| Ambiguous or poorly defined claim (claim tree creator mistake) | 3 | [19](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#19-httpswwwrogerebertcomreviewsthe-end-of-the-tour-2015), [20](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#20-httpswwwrogerebertcomreviewsfemme-fatale-2002), [21](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#21-httpswwwrogerebertcomreviewsoasis-supersonic-2016) | Quality control for tree construction through more ensembling or review | |\n| Overly specific intermediate claims (claim tree creator mistake) | 3 | [13](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#13-httpswwwrogerebertcomreviewssalomes-last-dance-1988), 
[16](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#16-httpswwwrogerebertcomreviewsthat-guy-dick-miller-2015), [19](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#19-httpswwwrogerebertcomreviewsthe-end-of-the-tour-2015) | Quality control for tree construction through more ensembling or review |\n\n\nTable 5: Reasons for false negatives. A tree is listed for a reason if the reason seemed to make a substantial contribution to the tree being a false negative. Trees may be listed for more than one reason.\n##### Claim tree creators make mistakes\n\n\nAs described above, we evaluated root claims by checking the probability judges assigned to the weakest step in the claim tree. If any one of the steps evaluating whether quotes support a subclaim, or whether subclaims support the root claim, is invalid, then the root claim evaluates to false. For 8 of the 12 false negatives, mistakes made by the experts constructing claims, subclaims and quotes contributed substantially to the result. These could be mitigated by quality control measures such as allowing experts to get more feedback during the generation process.\n\n\n##### Making convincing trees is difficult if evidence for the root claim is indirect\n\n\nSome false negatives seem to result from fundamental limitations of factored evaluation with small trees. Suppose that a movie is artistically innovative or avant-garde. Instead of stating this explicitly, the review might spend two paragraphs describing the scenes that make it avant-garde. It might be hard to convey the overall effect of those two paragraphs in a few short quotes. Similarly, the reviewer might suggest to readers that a movie is good without stating it. Here's a paragraph from a [review](https://www.rogerebert.com/reviews/i-dont-feel-at-home-in-this-world-anymore-2017) ([tree data](https://github.com/oughtinc/data/blob/master/arguments-2020-01/data-readable.md#23-httpswwwrogerebertcomreviewsi-dont-feel-at-home-in-this-world-anymore-2017)):\n\n\n\n> \n> This is a film about the utter indifference and outright hostility that people encounter every day, and how essentially decent people like Ruth suffer and suffer through it, almost always silently, until they finally snap. The break-in is the culmination of a series of unfortunate encounters: she has to deal with an old racist at the nursing home where she works. She gets stuck in traffic and spies a jerk in a pickup truck at the head of the lane whose tailpipe spews inky smoke as he revs his engine. In a scene that will break the hearts of many regulars who read reviews, Ruth enjoys a drink at a neighborhood bar while reading a new book, only to have a plot twist casually spoiled by another customer that she initially mistakes for a nice guy.\n> \n> \n> \n\n\nNo single sentence in this paragraph conveys much about the quality of the movie, but the paragraph as a whole is positive. We anticipate that increasing tree size will allow for discussions about such nuances and improve overall performance. \n\n\nConclusion\n----------\n\n\nWe ran an exploratory experiment in which a distributed group of participants evaluated tree-structured arguments that make claims about movie reviews. We started with shallow arguments that have 1–5 steps and measured success using common classification metrics (precision, recall, accuracy). We found that:\n\n\n1. 
Factored evaluation of arguments can distinguish some valid from invalid arguments by identifying implausible steps in arguments for false claims.\n2. Experiment participants disagreed a lot about whether claims were true or false. This method is therefore brittle in its current form, even for arguments which only have a few steps.\n3. More diverse argument and evidence types (besides direct quotes from the text), larger trees, and different participant guidelines should improve results.\n\n\nOver time, we’d like to show that accuracy improves as we increase the depth of claim trees, and that we can apply methods like this to much longer texts. A depth-5 tree should reliably discern the truth of a larger set of claims than a depth-2 tree, and we should be able to evaluate claims about entire collections of books, not just single-page reviews. Eventually, we want naive judges to spot-check complex arguments from domain experts even when the judges are entirely unfamiliar with the domain.\n\n\nWe’re excited that this experiment established foundations such as operationalizing success for experiments in factored evaluation and creating benchmarks for us and others to improve upon in future work.\n\n\nAppendix\n--------\n\n\nAcknowledgments\nWe’d like to thank many different people who contributed to the experiments and their presentation in this blog post.\n\n\n1. The research was done by William Saunders, Ben Rachbach, Owain Evans, Jungwon Byun, and Andreas Stuhlmüller. Zachary Miller and Andrew Schreiber built the infrastructure.\n2. Our experiment participants provided important data and feedback on the experiments. William K, Erol C A, Karin N, Henry A, Julian D, Vojtech B, Henrique D B, Liam D, and Eric H in particular contributed many hours to the experiments.\n3. Feedback from Beth Barnes, Vishal Maini, and Milan Griffes helped make the blog post clearer.\n\n\nThis work was supported by many donors, including the Future of Life Institute (RFP2-178). \n\n\nCitation\nPlease cite this blog post as: \n\n\n\n```\nSaunders et al. (2020). Evaluating Arguments One Step at a Time.\n\n```\n\nBibTex citation:\n\n\n\n```\n@misc{ought2020arguments,\n author = {Saunders, William and Rachbach, Ben and Evans, Owain and Miller, Zachary and Byun, Jungwon and Stuhlmüller, Andreas},\n title = {Evaluating Arguments One Step at a Time},\n year = {2020},\n howpublished = {\\url{https://ought.org/updates/2020-01-11-arguments}},\n note = {Accessed 11-January-2020}\n}\n\n```\n\nMethodological flaws and room for improvement\nWe’re excited about these initial results and about having a more concrete framework for running factored evaluation experiments, but we also recognize that our work is far from perfect. We want to improve upon the following next time and hope readers will cautiously interpret our results in light of these limitations.\n\n\nSample root claims independent of the claim tree generation process\nWe don't believe that our results apply to a broad set of claims because:\n\n\n* The same expert generated both the claim and the corresponding claim tree. The expert was told to generate claims that are best supported by depth-2 claim trees.\n* This generation was done by Ought employees who understood the goals of the experiment and may have been biased in a particular way.\n* The inferential gap between the text and our claims was small (by necessity due to small tree size). 
Our results may not provide much information about claims that require more complex inferences about a text.\n\n\nThe fact that performance on these claims was ambiguous suggests that we didn’t stumble upon a narrow set of convenient claims, but we want to control for this more carefully in the future by, e.g., generating claims independent of claim trees.\n\n\nCheck that claim tree generation doesn't have systematic biases\nFuture experimenters may want to check that the process used to generate claim trees doesn't distort the results. For example, untrained experts could be worse at supporting false root claims than true root claims, or bad at rebuttals for particular types of claims.\n\n\nControl context for depth-1 and depth-2 judges more carefully\nThe amount of text that a judge can read to evaluate their step should be the same at all steps and across all depths so that we can isolate the impact of adding more steps at increasing depths. However, some depth-2 judges had more context than depth-1 judges. Judges who evaluated whether or not a root claim was true in light of subclaims saw up to 400 characters of subclaims + 200 characters of rebuttal quotes. All depth-1 judges only saw 200 characters of quotes + 200 characters of rebuttal quotes. Some of the extra characters at the subclaim-to-root-claim level were template characters that provided no new information, which means that the actual difference was smaller. \n\n\nEven with this additional advantage for depth 2, we don’t see much differentiation between depth 2 and depth 1. For experiments that do establish a difference between depth 1 and 2, controlling context size will be important.\n\n\nPre-register the experiment\nA future iteration of this experiment should have more features of the experiment defined upfront. In this iteration:\n\n\n* We chose ensembling percentiles and thresholds after seeing the data. We did set a threshold beforehand informed by past work, but the setup differed enough that comparing to our ex-ante thresholds wasn’t helpful.\n* We didn’t control the total number of judgments per step. We limited our analysis to trees with a minimum of 4 judgments for all steps but some of those steps had more than 4 judgments, while others had exactly 4 judgments. We had to balance the distribution of judgments collected per step with considerations like information contamination or providing a reliable stream of work for participants and chose to err on the side of collecting more data when possible.\n* Instructions to judges changed slightly throughout data collection as we received feedback from participants. These changes did not seem like they would change results meaningfully to us e.g. they provided more specific instructions for dealing with information contamination.\n\n\nClarify the task to reduce variance across judgments\nThe high variance in judgments we discussed in the analysis section suggests that our task is insufficiently clear to participants. It may also be worth starting with an even simpler task (such as judging arguments about arithmetic).\n\n\nMinimize information contamination\nGiven the pool of participants we had access to, many participants evaluated multiple steps from the same tree. In the worst case, this could lead to “information contamination”, where a participant’s judgment for a step is different from the judgment they would have made if they had no context. \n\n\nWe took steps to mitigate this. 
We avoided scheduling people to the same tree when possible, we asked participants if they were contaminated and excluded their judgments if so, and each participant only saw the depth-1 or depth-2 tree, not both. A larger pool of participants will minimize the likelihood of contamination further.\n\n\nTest rebuttals as claims, not just quotes\nInstead of being a quote, each rebuttal could be a claim, with supporting quotes and a rebuttal of its own. This would make rebuttals easier to interpret.\n\n\nClearly show that depth 2 outperforms depth 1\nWe want performance to improve with greater depth—everything we do at depth 2 shouldn’t be done just as easily at depth 1. This is more of an improvement opportunity than a methodological limitation of this experiment. It’s also possible that depths 1 and 2 are too close and that we need to compare a larger depth to depth 1 or 2 to see a difference.", "url": "https://ought.org/updates/2020-01-11-arguments", "title": "Evaluating Arguments One Step at a Time", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-01-10T23:00:00Z", "authors": ["Ought"], "summary": [], "id": "b15c25b8da73005a067ac917dc1174bf"} {"text": "Automating reasoning about the future at Ought\n==============================================\n\nBy [Jungwon Byun and Andreas Stuhlmüller](/cdn-cgi/l/email-protection#95ffe0fbf2e2fafbd5fae0f2fde1bbfae7f2)November 9, 2020Summary\n\nOught’s [mission](https://ought.org/mission) is to automate and scale open-ended reasoning. Since wrapping up [factored evaluation experiments](https://ought.org/updates/2020-01-11-arguments) at the end of 2019, Ought has built [Elicit](http://elicit.org) to automate the open-ended reasoning involved in judgmental forecasting. \n\nToday, Elicit helps forecasters build distributions, track beliefs over time, collaborate on forecasts, and get alerts when forecasts change. Over time, we hope Elicit will: \n\n* Support and absorb more of a forecaster’s thought process\n* Incrementally introduce automation into that process, and\n* Continuously incorporate the forecaster’s feedback to ensure that Elicit’s automated reasoning is aligned with how each person wants to think.\n\nThis blog post introduces Elicit and our focus on judgmental forecasting. It also reifies the vision we’re running towards and potential ways to get there. \n\nJudgmental forecasting today\n----------------------------\n\n\n### What is judgmental forecasting?\n\n\nJudgmental forecasting refers to forecasts that rely heavily on human intuition or “qualitative” beliefs about the world. Forecasts on prediction platforms such as the Good Judgement Open and Metaculus tend to be judgmental forecasts. Example questions include: \n\n\n* [When will we see artificial general intelligence?](https://www.lesswrong.com/posts/hQysqfSEzciRazx8k/forecasting-thread-ai-timelines)\n* [How long will my next software project take?](https://twitter.com/beala/status/1309236012577705986)\n\n\nJudgmental forecasting distinguishes itself from statistical forecasting, which uses extrapolation methods like ARIMA. We need judgmental forecasting when we don’t have the right data required to train a model. This generally includes questions about low-frequency events (e.g. transformative technology, geopolitical events, or new business launches) and agent-based reasoning (e.g. business competitor behavior). \n\n\nIn an effort to communicate this fuzzy spectrum more concretely, we’ll share an imperfect visualization. 
We highlight in blue the types of reasoning we want to support first. \n\n\n[![Types of reasoning in forecasting](https://d33wubrfki0l68.cloudfront.net/64ce41d60dddf7862c5caafa39a10a895143ee92/7aba7/images/blog/2020-11-09-forecasting/spectrum.png \"Types of reasoning in forecasting\")](/images/blog/2020-11-09-forecasting/spectrum.png)\n\n\nOn the left, we have revenue forecasting at Google, where algorithms predict ad revenue in 30 second increments. We don’t plan on supporting this type of reasoning in the foreseeable future. \n\n\nA bit to the right from that we have projects like Ajeya Cotra’s [Draft report on transformative artificial intelligence timelines](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines). Ajeya needed to gather data and model the trajectories of hardware prices, spending on computation, and algorithmic progress (among others), but she also used qualitative reasoning to decompose the question into compute requirements, compute availability, and so on. The experts she elicited predictions from did not build their own models, but made parameter estimates based on their prior research and expertise. We expect to be useful for parts of such projects. \n\n\nFurther to the right, we have [Alex Beal’s](http://www.usrsb.in/) decomposition of whether Roe v. Wade will get overturned conditional on President Trump nominating a new Supreme Court Justice. Alex uses probabilistic reasoning here, but his overall decomposition is world-model based. His probabilities are not extrapolated from data but from his beliefs about Trump and the United States Supreme Court. We plan to be the most useful tool for this type of reasoning. \n\n\n\n[![Roe vs Wade decomposition](https://d33wubrfki0l68.cloudfront.net/dbe6562107d516303de42e38bec040f7bff77da9/81fad/images/blog/2020-11-09-forecasting/roe-wade.png \"Roe vs Wade decomposition\")](/images/blog/2020-11-09-forecasting/roe-wade.png)\n\nReality is not as linear as the table above suggests: the different examples of reasoning are not strictly more qualitative going right. Nor are types of reasoning as discrete as the graphic suggests. In practice, people often use both quantitative and qualitative reasoning. Regardless, we hope this clarifies the types of reasoning Ought focuses on.\n\n\n### Why should we automate judgmental forecasting?\n\n\nForecasting underpins almost all decision-making. Often, a decision is a pair of conditional forecasting questions in disguise: “Should we spend $10 million on ads this month?” breaks down into “How much revenue will we make if we spend $10 million on ads this month?” and “How much revenue will we make if we don’t buy ads?” Organizations use pairs of conditional forecasts like these to isolate the marginal impact of ads on revenue and decide whether that’s worthwhile.\n\n\nThe importance of complex human reasoning couldn’t be more obvious today: the coronavirus pandemic has changed our society so dramatically that we can’t rely on past data to predict the near future. Government task forces found at times that [all of the covid-19 prediction models were wrong](https://www.onlyonceblog.com/2020/03/state-of-colorado-covid-19-innovation-response-team-part-ii-getting-started-days-1-2), and had to resort to averaging them. 
Chief Financial Officers at public companies like Autodesk similarly found that “[the current state renders all previous models useless.](https://twitter.com/autodesk/status/1316509173652230144)” Chief Marketing Officers at nimble startups like Nurx feel like [throwing their computers out the window](https://www.linkedin.com/posts/katelynwatson_demand-forecasting-in-the-time-of-covid-activity-6671462528813547521-RrDQ/) when trying to forecast demand.\n\n\nIn these unprecedented times, human judgment can step in to [help counties like El Paso, Texas predict covid infection peaks](https://www.forbes.com/sites/erikbirkeneder/2020/06/01/do-crowdsourced-predictions-show-the-wisdom-of-humans/#6099a9276d9d) more accurately than purely quantitative models. \n\n\nBeyond global pandemics, judgemental forecasting can help the intelligence community anticipate [elections, disease outbreaks](https://www.iarpa.gov/index.php/working-with-iarpa/prize-challenges/1070-geopolitical-forecasting-challenge) and [geopolitical dynamics](https://cset.georgetown.edu/research/future-indices/). In conjunction with more quantitative modeling, it helps non-profits like Rethink Priorities [estimate how much their donors will give next year](https://github.com/rethinkpriorities/fundraising-forecast). It’s necessary for organizations like the [Long Now Foundation](https://longbets.org/) or [Open Philanthropy](https://www.openphilanthropy.org/blog/hits-based-giving), who want to prepare for the long-term future. \n\n\nWith Elicit, Ought aims to scale up the reasoning that happens in judgmental forecasting. We want to make it incredibly easy to produce good forecasts, enabling a wider range of people, companies, and teams to forecast things they don’t even imagine to be forecasting questions today. Serious forecasting should not be limited to those who can afford to pay trained forecasters. Eventually, “Will this project launch on time?” will feel as easy as figuring out the weather next week. \n\n\nHuman work doesn’t get us to that scale. We need to build a system that trains machines to think in the way we would if we knew more, were wiser, and had more time. Yet, we don't currently have compelling proposals for how to train machine learning systems to help people answer hard qualitative questions. Language models are usually trained with imitation learning, which probably won’t scale to significantly surpass human abilities. Reinforcement learning requires fast feedback loops and will be hard to apply to long-term forecasts that don't have this sort of feedback. To exceed human performance, we'll likely need to combine imitation learning or reinforcement learning with yet unproven approaches such as [factored cognition](https://ought.org/research/factored-cognition), [factored evaluation](https://ought.org/updates/2020-01-11-arguments), or [debate](https://arxiv.org/abs/1805.00899). So, in addition to being valuable in its own right, automating judgmental forecasting is a proving ground for aligned delegation of thinking more generally.\n\n\nJudgmental forecasting automated\n--------------------------------\n\n\n### Where are we going?\n\n\nElicit is a tool for judgmental forecasting. People use Elicit to build, save, and collaborate on predictions. They also use Elicit to [get alerts](https://twitter.com/jungofthewon/status/1323651216358936578) when prediction markets change their minds about the future. \n\n\nToday, users do most of the work and Elicit automates parts of the forecasting workflow. 
Over time, Elicit will not just automate workflow, but increasingly support the reasoning that goes into forecasts. It will do things like point out inconsistent beliefs, suggest additional considerations, and guess at what the user really meant by their question. \n\n\nThis aspirational demo illustrates our current long-term vision:\n\n\n\nIn our vision, Elicit learns by imitating the thoughts and reasoning steps users share in the tool. It also gets direct feedback from users on its suggestions. Elicit progressively guesses more complex parts of the thought process, until it ends up suggesting entire decompositions, models, or explanations. As Elicit’s work gets more sophisticated, users can still dig into subcomponents of Elicit’s reasoning to evaluate parts even when they can’t evaluate the entire process end-to-end. \n\n\nElicit starts with humans doing most of the work and ends with machines doing most of the work. In the end state, the user primarily provides oversight and feedback to an AI system reasoning about the future. Having evolved with the forecaster and their constant feedback, Elicit ends up as a bespoke thought partner to each individual. \n\n\n### How do we get there?\n\n\nAs we showed in our earlier graphic, we want to support two types of reasoning in Elicit: \n\n\n1. **Qualitative reasoning**. The forecaster decomposes the question, structures a model, thinks about the causal relationships in the world, or potential outcomes of an event.\n2. **Quantitative reasoning.** The forecaster estimates numbers or the probabilistic implications of their qualitative beliefs (they specify likelihoods, distributions, etc.).\n\n\nThe table below shows how Elicit currently supports these two types of reasoning, and how it plans on incrementally automating them going forward. \n\n\n\n\n| | | |\n| --- | --- | --- |\n| **Qualitative reasoning** | **Elicit today** | **Elicit tomorrow** |\n| Let users store and share notes about forecasts \n | •\n | •\n |\n| Suggest relevant factors or subquestions influencing a forecast\n | | •\n |\n| Suggest related existing questions or benchmarks \n | | •\n |\n| Suggest entire decompositions of a question \n | | •\n |\n| **Quantitative reasoning** | | |\n| Associate probabilities with qualitative beliefs and notes\n | •\n | •\n |\n| Design complex distribution shapes\n | •\n | •\n |\n| Validate beliefs with visualization\n | •\n | •\n |\n| Show new beliefs implied by the user’s stated beliefs\n | •\n | •\n |\n| Express beliefs in natural language\n | | •\n |\n| Estimate prior distributions \n | | •\n |\n\n\n#### Elicit today\n\n\nToday, Elicit supports qualitative reasoning by letting users add both free-form notes and notes associated with intervals and percentiles. Users can break down a question into smaller components to establish more direct links between beliefs and overall predictions. \n\n\n\nUsers can track all versions of their predictions on a question. With this history, forecasters get more granular lessons each time a question resolves. Decomposing a prediction into bins, probabilities, notes, and versions isolates areas for future improvement. 
When they go back to reflect, users learn not just whether their prediction was right or wrong, but more directly whether they missed an important consideration or just overestimated another factor’s influence, for example.\n\n\n\nWith this same functionality, users can poll other people to get their feedback and notes on a question, as demonstrated by this [AI timelines thread](https://www.lesswrong.com/posts/hQysqfSEzciRazx8k/forecasting-thread-ai-timelines) and this [AI timelines model](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines). \n\n\nOnce forecasters have organized their thoughts, Elicit makes it easy to express them quantitatively as a probability density function. Users can specify percentiles or bins and corresponding probabilities. Both are more accessible than coding a distribution in Python or identifying whether a distribution is lognormal, what the variance is, etc. Elicit is particularly useful for abnormally shaped distributions like [this one on Ebola deaths before 2021, truncated at number of deaths to date](https://elicit.org/builder/eCwDNbj6V) and this [multimodal distribution on SpaceX’s value in 2030](https://elicit.ought.org/builder/oH_JsArIy). \n\n\nUsers don’t have to specify every part of the distribution, like they would if they were building a histogram in spreadsheets. They also don’t need to keep track of whether bins add up to 100%. Elicit will accept the messiness of overlapping bins and inconsistent beliefs. \n\n\nWith these features, Elicit facilitates a three way conversation among the bins, plot, and Elicit-calculated implied beliefs. As shown in the tutorials above, users enter in their probabilities and double check with the plots and Elicit-provided implied beliefs. They then adjust their bins accordingly. Sometimes users have stronger intuitions about specific probabilities and ranges. Other times, they have stronger intuitions about the overall shape of the curve. \n\n\n#### Elicit tomorrow\n\n\nElicit today helps forecasters make their thinking explicit. Most of the value comes from giving people a place to organize, store, and share the thoughts they’ve generated on their own. Over time, Elicit will generate more of the thoughts, letting the forecaster play evaluator. \n\n\nFor example, Elicit can integrate language models to operationalize the fuzzy questions forecasters care about into the concrete questions they can measure and predict. It can also use language models to find base rates or datasets to expedite the research process, the most time consuming part of the forecasting workflow. \n\n\n\nWe can already extract the resolution criteria and data sources from the lengthy text descriptions of Metaculus questions. We’re not far away from being able to extract relevant information from longer papers and publications.\n\n\n\nWith semantic search, Elicit will help people find relevant forecasting questions that already exist across all forecasting platforms. Better search reduces duplicate work and helps forecasters incorporate background research or existing predictions into any new question they are working on. \n\n\n\nEventually, we hope language models can suggest the complete list of factors - the entire decomposition - for users to review and accept / reject. \n\n\n\nOn the quantitative reasoning side, we plan to use language models to convert natural language statements into precise distributions. The ideal probability input format varies a lot for each user-question pair. 
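As a rough illustration of what such a conversion involves, the snippet below takes a messy set of user-specified bins (hypothetical (low, high, probability) triples that overlap and do not sum to 100%), normalizes them, and computes a couple of implied beliefs. This is a sketch under our own simplifying assumptions (mass spread uniformly within each bin), not Elicit's implementation.

```python
# Hypothetical bin-style input: (low, high, probability in percent) triples.
# The bins overlap and sum to 110%, mirroring the messiness Elicit accepts.
bins = [(0, 50, 30), (40, 80, 50), (80, 120, 30)]

total = sum(p for _, _, p in bins)
normalized = [(low, high, p / total) for low, high, p in bins]

def prob_above(x):
    """Implied belief P(value > x), spreading each bin's mass uniformly."""
    mass = 0.0
    for low, high, p in normalized:
        if x <= low:
            mass += p
        elif x < high:
            mass += p * (high - x) / (high - low)
    return mass

print(round(prob_above(50), 2))   # implied P(value > 50)
print(round(prob_above(100), 2))  # implied P(value > 100)
```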
Some people want to express their beliefs using bins. Percentiles are easiest for date questions. Sometimes drawing or visually adjusting curves works best. In other cases, users prefer to specify parameters such as function family, mean, and variance. \n\n\nTo accommodate these varied preferences, we can train a language model to convert any text-based input into a distribution and make a suggestion that the user can approve or reject. We eventually want to learn what a vague statement like “Most likely above 50” means for each user and in each context. We then want to automatically generate for them the right prior that the user can evaluate. \n\n\nConclusion\n----------\n\n\nOught’s mission is to automate and scale open-ended reasoning. We want to make good reasoning abundant. To attain that scale, we need automation and machine learning - human work is too expensive. \n\n\nToday, machine learning works best when we can gather a large amount of task-relevant data. The most impressive examples involve imitation learning on static large-scale datasets (GPT-3) and reinforcement learning in situations with fast feedback (AlphaGo). We don't yet know how to exceed human capability at judgmental forecasting and in other situations that require qualitative reasoning, have limited data, and face slow feedback loops. With Elicit, we aim to make machine learning as useful for qualitative forecasts made with limited data as it is for data-rich situations today.\n\n\nIn the beginning, people do most of the work and thinking in Elicit; Elicit provides simple workflow automation. At this early stage, we’re studying what good reasoning looks like and how we can automate or support it. Elicit then starts to guess at increasingly complex parts of the forecaster’s thought process, suggesting subquestions, factors, scenarios, related questions, datasets, etc. Users provide ongoing feedback; Elicit evolves with and around each user.\n\n\nBy automating the *reasoning* and adding value especially in contexts with limited data, we make high-quality thinking and forecasts available even for questions that might only happen once to one person. If we succeed, answering questions like “[When will my daughter’s passport arrive?](https://elicit.ought.org/builder/FxGNhAkXv)” and “[When will this software project finish?](https://elicit.ought.org/builder/6VZl4mmRw)” will be as easy as looking up the weather for next week. \n\n\nIf you’re excited about building tools for thinking about the future, [there’s plenty of work to do](https://ought.org/careers).", "url": "https://ought.org/updates/2020-11-09-forecasting", "title": "Automating reasoning about the future at Ought", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2019-12-31T23:00:00Z", "authors": ["Jungwon Byun", "Andreas Stuhlmüller"], "summary": [], "id": "4e65053acdcb5444488e33f3d27cb9bb"} {"text": "Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).\n\n\nI currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.\n\n\n(Note: this is *not* a post about whether an intelligence explosion will occur. That seems very likely to me. 
Quantitatively I expect it to go [along these lines](https://sideways-view.com/2017/10/04/hyperbolic-growth/). So e.g. while I disagree with many of the claims and assumptions in [Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf), I don’t disagree with the central thesis or with most of the arguments.)\n\n\n(See also: [AI Impacts page](https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/) on the same topic.)\n\n\n### Slow takeoff\n\n\n#### **Slower takeoff means faster progress**\n\n\nFast takeoff is often justified by pointing to the incredible transformative potential of intelligence; by enumerating the many ways in which AI systems will outperform humans; by pointing to historical examples of rapid change; *etc.*\n\n\nThis gives the impression that people who expect a slow takeoff think AI will have a smaller impact, or will take longer to transform society.\n\n\nBut I think that’s backwards. The main disagreement is not about what will happen once we have a superintelligent AI, it’s about what will happen *before* we have a superintelligent AI. So slow takeoff seems to mean that AI has a larger impact on the world, sooner.\n\n\n![TakeoffImage.001](https://unstylizedcom.files.wordpress.com/2018/02/takeoffimage-0011.png?w=748)\n\n\nIn the fast takeoff scenario, weaker AI systems may have significant impacts but they are nothing compared to the “real” AGI. Whoever builds AGI has a decisive strategic advantage. Growth accelerates from 3%/year to 3000%/year without stopping at 30%/year. And so on.\n\n\nIn the slow takeoff scenario, pre-AGI systems have a transformative impact that’s only slightly smaller than AGI. AGI appears in a world where everything already happens incomprehensibly quickly and everyone is incredibly powerful. Being 12 months ahead in AGI might get you a decisive strategic advantage, but the world has accelerated so much that that’s just about as hard as getting to airplanes 30 years before anyone else.\n\n\n#### **Operationalizing slow takeoff**\n\n\n*There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)*\n\n\nAt some point there will be incredibly powerful AI systems. They will have many consequences, but one simple consequence is that world output will grow much more quickly. I think this is a good barometer for other transformative effects, including large military advantages.\n\n\nI believe that before we have incredibly powerful AI, we will have AI which is merely *very* powerful. This won’t be enough to create 100% GDP growth, but it will be enough to lead to (say) 50% GDP growth. I think the likely gap between these events is years rather than months or decades.\n\n\nIn particular, this means that incredibly powerful AI will emerge in a world where crazy stuff is already happening (and probably everyone is already freaking out). 
If true, I think it’s an important fact about the strategic situation.\n\n\n(Operationalizing takeoff speed in terms of economic doublings may seem weird, but I do think it gets at the disagreement: proponents of fast takeoff don’t seem to expect the 4 year doubling before takeoff, or at least their other beliefs about the future don’t seem to integrate that expectation.)\n\n\n#### **The basic argument**\n\n\nThe *prima facie* argument for slow takeoff is pretty straightforward:\n\n\n* Before we have an incredibly intelligent AI, we will probably have a slightly worse AI.\n\t+ Lots of people will be trying to build powerful AI.\n\t+ For most X, it is easier to figure out how to do a slightly worse version of X than to figure out how to do X.\n\t\t- The worse version may be more expensive, slower, less reliable, less general… (Usually there is a tradeoff curve, and so you can pick which axes you want the worse version to be worse along.)\n\t+ If many people are trying to do X, and a slightly worse version is easier and almost-as-good, someone will figure out how to do the worse version before anyone figures out how to do the better version.\n\t+ This story seems consistent with the historical record. Things are usually preceded by worse versions, even in cases where there are weak reasons to expect a discontinuous jump.\n\t\t- The best counterexample is probably nuclear weapons. But in that case there were several very strong reasons for discontinuity: physics has an inherent gap between chemical and nuclear energy density, nuclear chain reactions require a large minimum scale, and the dynamics of war are very sensitive to energy density.\n* A slightly-worse-than-incredibly-intelligent AI would radically transform the world, leading to growth (almost) as fast and military capabilities (almost) as great as an incredibly intelligent AI.\n\n\nThis simple argument pushes towards slow takeoff. But there are several considerations that could push towards fast takeoff, which we need to weigh against the basic argument.\n\n\nObviously this is a quantitative question. In this post I’m not going to get into the numbers because the substance of the disagreement seems to be about qualitative models.\n\n\n### Reasons to expect fast takeoff\n\n\nPeople have offered a variety of reasons to expect fast takeoff. I think that many of these arguments make sense, but I don’t think they support the kind of highly concentrated, discontinuous progress which fast takeoff proponents seem to typically have in mind.\n\n\nI expect there are other arguments beyond these, or that I’ve misunderstood some of these, and look forward to people pointing out what I’m missing.\n\n\n#### **Humans vs. chimps**\n\n\n*Summary of my response:* *chimps are nearly useless because they aren’t optimized to be useful, not because evolution was trying to make something useful and wasn’t able to succeed until it got to humans*.\n\n\nChimpanzees have brains only ~3x smaller than humans, but are much worse at making technology (or doing science, or accumulating culture…). If evolution were selecting primarily or in large part for technological aptitude, then the difference between chimps and humans would suggest that tripling compute and doing a tiny bit of additional fine-tuning can radically expand power, undermining the continuous change story.\n\n\nBut chimp evolution is not primarily selecting for making and using technology, for doing science, or for facilitating cultural accumulation.  
The task faced by a chimp is largely independent of the abilities that give humans such a huge fitness advantage. It’s not completely independent—the overlap is the only reason that evolution eventually produces humans—but it’s different enough that we should not be surprised if there are simple changes to chimps that would make them much better at designing technology or doing science or accumulating culture.\n\n\nIf we compare humans and chimps at the tasks chimps are optimized for, humans are clearly much better but the difference is not nearly as stark. Compare to the difference between chimps and gibbons, gibbons and lemurs, or lemurs and squirrels.\n\n\nRelatedly, evolution *changes* what it is optimizing for over evolutionary time: as a creature and its environment change, the returns to different skills can change, and they can potentially change very quickly. So it seems easy for evolution to shift from “not caring about X” to “caring about X,” but nothing analogous will happen for AI projects. (In fact a similar thing often *does* happen while optimizing something with SGD, but it doesn’t happen at the level of the ML community as a whole.)\n\n\nIf we step back from skills and instead look at outcomes we could say: “Evolution is *always* optimizing for fitness, and humans have now taken over the world.” On this perspective, I’m making a claim about the limits of evolution. First, evolution is theoretically optimizing for fitness, but it isn’t able to look ahead and identify which skills will be most important for your children’s children’s children’s fitness. Second, human intelligence is incredibly good for the fitness of *groups* of humans, but evolution acts on individual humans for whom the effect size is much smaller (who barely benefit at all from passing knowledge on to the next generation). Evolution really is optimizing something quite different than “humanity dominates the world.”\n\n\nSo I don’t think the example of evolution tells us much about whether the continuous change story applies to intelligence. This case is potentially missing the key element that drives the continuous change story—optimization for performance. Evolution changes continuously on the narrow metric it is optimizing, but can change extremely rapidly on other metrics. For human technology, features of the technology that aren’t being optimized change rapidly all the time. When humans build AI, they *will* be optimizing for usefulness, and so progress in usefulness is much more likely to be continuous.\n\n\nPut another way: the difference between chimps and humans stands in stark contrast to the normal pattern of human technological development. We might therefore infer that intelligence is very unlike other technologies. But the difference between evolution’s optimization and our optimization seems like a much more parsimonious explanation. To be a little bit more precise and Bayesian: the prior probability of the story I’ve told upper bounds the possible update about the nature of intelligence.\n\n\n#### **AGI will be a side-effect**\n\n\n*Summary of my response: I expect people to see AGI coming and to invest heavily.*\n\n\nAI researchers might be optimizing for narrow forms of intelligence. If so we could have the same dynamic as with chimps—we see continuous progress on accomplishing narrow tasks in a narrow way, leading eventually to a jump in general capacities *as a side-effect*. 
These general capacities then also lead to much better progress on narrow tasks, but there is no reason for progress to be continuous because no one is optimizing for general intelligence.\n\n\nI don’t buy this argument because I think that researchers probably *will* be optimizing aggressively for general intelligence, if it would help a lot on tasks they care about. If that’s right, this argument only implies a discontinuity if there is *some other reason* that the usefulness of general intelligence is discontinuous.\n\n\nHowever, if researchers greatly underestimate the impact of general intelligence and so don’t optimize for it, I agree that a fast takeoff is plausible. It could turn out that “will researchers adequately account for the impact of general intelligence and so try to optimize it?” is a crux. My intuition is based on a combination of (weak) adequacy intuitions and current trends in ML research.\n\n\n#### **Finding the secret sauce**\n\n\n*Summary of my response: this doesn’t seem common historically, and I don’t see why we’d expect AGI to be more rather than less like this (unless we accept one of the other arguments).*\n\n\nAnother common view is that there are some number of key insights that are needed to build a generally intelligent system. When the final pieces fall into place we may then see a large jump; one day we have a system with enough raw horsepower to be very smart but with critical limitations, and the next day it is able to use all of that horsepower.\n\n\nI don’t know exactly how to respond to this view because I don’t feel like I understand it adequately.\n\n\nI’m not aware of many historical examples of this phenomenon (and no really good examples)—to the extent that there have been “key insights” needed to make something important work, the first version of the insight has almost always either been discovered long before it was needed, or discovered in a preliminary and weak version which is then iteratively improved over a long time period.\n\n\nTo the extent that fast takeoff proponents’ views are informed by historical example, I would love to get some canonical examples that they think best exemplify this pattern so that we can have a more concrete discussion about those examples and what they suggest about AI.\n\n\nNote that a really good example should be on a problem that many people care about. There are lots of examples where no one is thinking about X, someone uncovers an insight that helps a lot with X, and many years later that helps with another task Y that people do care about. 
That’s certainly interesting, but it’s not really surprising at all on the slow-change view unless it actually causes surprisingly fast progress on Y.\n\n\nLooking forward to AGI, it seems to me like if anything we should have a somewhat smaller probability than usual of a final “key insight” making a huge difference.\n\n\n* AGI was built by evolution, which is more likely if it can be built by iteratively improving simple ingredients.\n* It seems like we already have a set of insights that are sufficient for building an autopoietic AGI, so we won’t be starting from 0 in any case.\n* Historical AI applications have had a relatively small loading on key insights and seem like the closest analogies to AGI.\n\n\nThe example of chimps or dumb humans seems like one of the best reasons to expect a key insight, but I’ve already discussed why I find that pretty unconvincing.\n\n\nIn this case I don’t yet feel like I understand where fast takeoff proponents are coming from, so I think it is especially likely that my view will change based on further discussion. But I would really like to see a clearer articulation of the fast takeoff view here as an early step of that process.\n\n\n#### **Universality thresholds**\n\n\n*Summary of my response: it seems like early AI systems will cross universality thresholds pre-superintelligence, since (a) there are tradeoffs between universality and other desirable properties which would let people build universal AIs early if the returns to universality are large enough, and (b) I think we can already build universal AIs at great expense.*\n\n\nSome cognitive processes get stuck or “run out of steam” if you run them indefinitely, while others are able to deliberate, improve themselves, design successor systems, and eventually reach arbitrarily high capability levels. An AI system may go from being weak to being very powerful as it crosses the threshold between these two regimes.\n\n\nIt’s clear that some humans are above this universality threshold, while chimps and young children are probably below it. And if you take a normal human and you inject a bunch of noise into their thought process (or degrade it) they will also fall below the threshold.\n\n\nIt’s easy to imagine a weak AI as some kind of handicapped human, with the handicap shrinking over time. Once the handicap goes to 0 we know that the AI will be above the universality threshold. Right now it’s below the universality threshold. So there must be sometime in between where it crosses the universality threshold, and that’s where the fast takeoff is predicted to occur.\n\n\nBut AI *isn’t* like a handicapped human. Instead, the designers of early AI systems will be trying to make them as useful as possible. So if universality is incredibly helpful, it will appear as early as possible in AI designs; designers will make tradeoffs to get universality at the expense of other desiderata (like cost or speed).\n\n\nSo now we’re almost back to the previous point: is there some secret sauce that gets you to universality, without which you can’t get universality however you try? 
I think this is unlikely for the reasons given in the previous section.\n\n\nThere is another reason I’m skeptical about hard takeoff from universality secret sauce: I think we *already* could make universal AIs if we tried (that would, given enough time, learn on their own and converge to arbitrarily high capability levels), and the reason we don’t is because it’s just not important to performance and the resulting systems would be really slow. This inside view argument is too complicated to make here and I don’t think my case rests on it, but it is relevant to understanding my view.\n\n\n#### **“Understanding” is discontinuous**\n\n\n*Summary of my response: I don’t yet understand this argument and am unsure if there is anything here.*\n\n\nIt may be that understanding of the world tends to *click*, from “not understanding much” to “understanding basically everything.”\n\n\nYou might expect this because everything is entangled with everything else. If you only understand 20% of the world, then basically every sentence on the internet is confusing, so you can’t make heads or tails of everything. This seems wrong to me for two reasons. First, information is really not that entangled even on the internet, and the (much larger) fraction of its knowledge that an AI generates for itself is going to be even less entangled. Second, it’s not right to model the AI as having a gradually expanding domain that it understands at all, with total incomprehension everywhere else. Unless there is some other argument for a discontinuity, then a generalist AI’s understanding of each domain will just continuously improve, and so taking the minimum across many domains doesn’t make things particularly discontinuous.\n\n\nPeople might instead expect a *click* because that’s what they experience. That’s very unlike my experience, but maybe other people differ—it would be very interesting if this was a major part of where people were coming from. Or that may be how they perceive others’ thought processes as working. But when I look at others’ understanding, it seems like it is common to have a superficial or weak understanding which transitions gradually into a deep understanding.\n\n\nOr they might expect a *click* because the same progress which lets you understand one area will let you understand many areas. But that doesn’t actually explain anything: you’d expect partial and mediocre understanding before a solid understanding.\n\n\nOf course all the arguments in other sections (e.g. secret sauce, chimps vs. humans) can also be arguments about why understanding will be discontinuous. In the other sections I explain why I don’t find those arguments convincing.\n\n\n#### **Deployment lag**\n\n\n*Summary of my response: current AI is slow to deploy and powerful AI will be fast to deploy, but in between there will be AI that takes an intermediate length of time to deploy.*\n\n\nWhen AI improves, it takes a while for the world to actually benefit from the improvement. For example, we need to adjust other processes to take advantage of the improvement and tailor the new AI system to the particular domains where it will be used. This seems to be an artifact of the inflexibility of current technology, and e.g. humans can adapt much more quickly to be useful in new settings.\n\n\nEventually, powerful AI will become useful in new situations even faster than people. 
So we may have a jump from narrow AI, which takes a long time to deploy, to general AI that is easily deployed.\n\n\nI’ve heard this argument several times over the last few months, but don’t find the straightforward version convincing: without some other argument for discontinuity, I don’t see why “time to deploy” jumps from a large number to a small number. Instead, I’d expect deployment to become continuously easier as AI improves.\n\n\nA slight variant that I think of as the “sonic boom” argument goes like this: suppose each month of AI research makes AI a little bit easier to deploy. Over time AI research gradually accelerates, and so the deployment time shrinks faster and faster. At some point, a month of AI research decreases deployment time by more than a month. At this point, “deploy AI the old-fashioned way” becomes an unappealing strategy: you will get to market faster by simply improving AI. So even if all of the dynamics are continuous, the quality of deployed AI would jump discontinuously.\n\n\nThis phenomenon only occurs if it is very hard to make tradeoffs between deployment time and other features like cost or quality. If there is any way to trade off other qualities against deployment time, then people will more quickly push worse AI products into practice, because the benefits of doing so are large. I strongly expect it to be possible to make tradeoffs, because there are so many obvious-seeming ways to trade off deployment time vs. usefulness (most “deployment time” is really just spending time improving the usefulness of a system) and I haven’t seen stories about why that would stop.\n\n\n#### **Recursive self-improvement**\n\n\n*Summary of my response: Before there is AI that is great at self-improvement there will be AI that is mediocre at self-improvement.*\n\n\nPowerful AI can be used to develop better AI (amongst other things). This will lead to runaway growth.\n\n\nThis on its own is not an argument for discontinuity: before we have AI that radically accelerates AI development, the slow takeoff argument suggests we will have AI that *significantly* accelerates AI development (and before that, *slightly* accelerates development). That is, AI is just another, faster step in the [hyperbolic growth we are currently experiencing](https://sideways-view.com/2017/10/04/hyperbolic-growth/), which corresponds to a further increase in rate but not a discontinuity (or even a discontinuity in rate).\n\n\nThe most common argument for recursive self-improvement introducing a new discontinuity seems to be: some systems “fizzle out” when they try to design a better AI, generating a few improvements before running out of steam, while others are able to autonomously generate more and more improvements. This is basically the same as the universality argument in a previous section.\n\n\n#### **Train vs. test**\n\n\n*Summary of my response: before you can train a really powerful AI, someone else can train a slightly worse AI.*\n\n\nOver the course of training, ML systems typically go quite quickly from “really lame” to “really awesome”—over the timescale of days, not months or years.\n\n\nBut the training curve seems almost irrelevant to takeoff speeds. 
The question is: how much better is your AGI than the AGI that you were able to train 6 months ago?\n\n\nIf you are able to raise $X to train an AGI that could take over the world, then it was almost certainly worth it for someone 6 months ago to raise $X/2 to train an AGI that could merely radically transform the world, since they would then get 6 months of absurd profits. Likewise, if your AGI would give you a decisive strategic advantage, they could have spent less earlier in order to get a pretty large military advantage, which they could then use to take your stuff.\n\n\nIn order to actually get a discontinuity, it needs to be the case that either scaling up the training effort slightly, or waiting a little while longer for better AI technology, leads to a discontinuity in usefulness. So we’re back to the other arguments.\n\n\n#### **Discontinuities at 100% automation**\n\n\n*Summary of my response: at the point where humans are completely removed from a process, they will have been modestly improving output rather than acting as a sharp bottleneck that is suddenly removed.*\n\n\nConsider a simple model in which machines are able to do a *p* fraction of the subtasks of some large task (like AGI design), with constantly increasing efficiency, and humans are needed to perform the final (1-*p*). If humans are the dominant cost, and we hold fixed the number of humans as *p* increases, then total output grows like 1 / (1-*p*). As *p* approaches 1, productivity rapidly rises toward the machine-only level (going from 90% to 99% automation alone would multiply output tenfold under this model). In the past I found this argument pretty compelling.\n\n\nSuppose that we removed the humans altogether from this process. On the naive model, productivity would jump from 0 (since machines can’t do the task) to some very large value. I find that pretty unlikely, and it’s precisely what we’ve discussed in the previous sections. It seems much more likely that at the first point when machines are able to do a task on their own, they are able to do it extremely poorly—and growth thereafter seems like it ought to accelerate gradually.\n\n\nAdding humans to the picture only seems to make the change more gradual: at early times humans accelerate progress a lot, and as time goes on they provide less and less advantage (as machines replace them), so totally replacing humans seems to reduce acceleration.\n\n\nUltimately it seems like this comes down to whether you already expect discontinuous progress based on one of the other arguments, especially the secret sauce or universality threshold arguments. Phasing out humans seems to decrease, rather than increase, the abruptness of those changes.\n\n\nThis argument is still an important one, and it is true that if one of the other arguments generates a discontinuity then that discontinuity will probably be around the same time as 100% automation. But this argument is mostly relevant as a response to certain counterarguments about complementarity that I didn’t actually make in any of the other sections.\n\n\n#### **The weight of evidence**\n\n\nWe’ve discussed a lot of possible arguments for fast takeoff. Superficially it would be reasonable to believe that no individual argument makes fast takeoff look likely, but that in the aggregate they are convincing.\n\n\nHowever, I think each of these factors is perfectly consistent with the continuous change story and continuously accelerating hyperbolic growth, and so none of them undermine that hypothesis at all. 
This is not a case of a bunch of weak signs of fast takeoff providing independent evidence, or of a bunch of weak factors that can mechanically combine to create a large effect.\n\n\n(The chimps vs. humans case is an exception—it does provide Bayesian evidence for fast takeoff that could be combined with other factors. But it’s just one.)\n\n\nI could easily be wrong about any one of these lines of argument. So I do assign a much higher probability to fast takeoff than I would if there were fewer arguments (I’m around 30% of fast takeoff). But if I change my mind, it will probably be because one of these arguments (or another argument not considered here) turns out to be compelling on its own. My impression is that other people in the safety community have more like a 70% or even 90% chance of fast takeoff, which I assume is because they *already* find some of these arguments compelling.\n\n\n### Why does this matter?\n\n\nSometimes people suggest that we should focus on fast takeoff even if it is less likely. While I agree that slow takeoff improves our probability of survival overall, I don’t think either: (a) slow takeoff is so safe that it’s not important to think about, or (b) plans designed to cope with fast takeoff will also be fine if there is a slow takeoff.\n\n\nNeither takeoff speed seems unambiguously easier-to-survive than the other:\n\n\n* If takeoff is slow: it will become quite obvious that AI is going to transform the world well *before* we kill ourselves, we will have some time to experiment with different approaches to safety, policy-makers will have time to understand and respond to AI, *etc.*But this process will take place over only a few years, and the world will be changing very quickly, so we could easily drop the ball unless we prepare in advance.\n* If takeoff is fast: whoever develops AGI first has a massive advantage over the rest of the world and hence great freedom in choosing what to do with their invention. If we imagine AGI being built in a world like today, it’s easy to imagine pivotal actions that are easier than the open-ended alignment problem. But in slow takeoff scenarios, other actors will already have nearly-as-good-AGI, and a group that tries to use AGI in a very restricted or handicapped way won’t be able to take any pivotal action. So we either need to coordinate to avoid deploying hard-to-control AGI, or we need to solve a hard version of AI alignment (e.g. with very good [security / competitiveness / scalability](https://ai-alignment.com/directions-and-desiderata-for-ai-control-b60fca0da8f4)).\n\n\nThese differences affect our priorities:\n\n\n* If takeoff is more likely to be slow:\n\t+ We should have policy proposals and institutions in place which can take advantage of the ramp-up period, because coordination is more necessary and more feasible.\n\t+ We can afford to iterate on alignment approaches, but we need to solve a relatively hard version of the alignment problem.\n* If takeoff is more likely to be fast:\n\t+ We shouldn’t expect state involvement or large-scale coordination.\n\t+ We’ll have less time at the last minute to iterate on alignment, but it might be OK if our solutions aren’t competitive or have limited scalability (they only have to scale far enough to take a pivotal action).\n\n\nBeyond the immediate strategic implications, I often feel like I have a totally different world in mind than other people in the AI safety community. 
Given that my career is aimed at influencing the future of AI, significantly changing my beliefs about that future seems like a big win.", "url": "https://sideways-view.com/2018/02/24/takeoff-speeds/", "title": "Takeoff speeds", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-02-23T23:00:00Z", "authors": ["paulfchristiano"], "summary": [], "id": "7cac1d9b6257c51584f7826d55109ddd"} {"text": "*[An SSC reader working at an Oxford library stumbled across a previously undiscovered manuscript of G.K. Chesterton’s, expressing his thoughts on AI, x-risk, and superintelligence. She was kind enough to send me a copy, which I have faithfully transcribed]*\n\n\n![](https://slatestarcodex.com/blog_images/chesterton.jpg)\n\n\nThe most outlandish thing about the modern scientific adventure stories is that they believe themselves outlandish. Mr. H. G. Wells is considered shocking for writing of inventors who travel thousands of years into the future, but the meanest church building in England has done the same. When Jules Verne set out to ‘journey to the center of the earth’ and ‘from the earth to the moon’, he seemed but a pale reflection of Dante, who took both voyages in succession before piercing the Empyrean itself. Ezekiel saw wheels of spinning flame and reported them quite soberly; our modern writers collapse in rapture before the wheels of a motorcar.\n\n\nYet if the authors disappoint, it is the reviewers who dumbfound. For no sooner does a writer fancy himself a Poe or a Dunsany for dreaming of a better sewing machine, but there comes a critic to call him overly fanciful, to accuse him of venturing outside science into madness. It is not enough to lower one’s sights from Paradise to a motorcar; one must avoid making the motorcar too bright or fast, lest it retain a hint of Paradise.\n\n\nThe followers of Mr. Samuel Butler speak of thinking-machines that grow [grander and grander](https://philpapers.org/rec/GAROTI-3) until – quite against the wishes of their engineers – they become as tyrannical angels, firmly supplanting the poor human race. This theory is neither exciting nor original; there have been tyrannical angels since the days of Noah, and our tools have been rebelling against us since the first peasant stepped on a rake. Nor have I any doubt that what Butler says will come to pass. If every generation needs its tyrant-angels, then ours has been so inoculated against the original that if Lucifer and all his hosts were to descend upon Smithfield Market to demand that the English people bend the knee, we should politely ignore them, being far too modern to have time for such things. Butler’s thinking-machines are the only tyrant-angels we will accept; fate, ever accommodating, will surely give them to us.\n\n\nYet no sooner does Mr. Butler publish his speculations then a veritable army of hard-headed critics step forth to say he has gone too far. Mr. Maciej Ceglowski, the Polish bookmark magnate, calls Butler’s theory [“the idea that eats smart people”](http://idlewords.com/talks/superintelligence.htm) (though he does not tell us whether he considers himself digested or merely has a dim view of his own intellect). He says that “there is something unpleasant about AI alarmism as a cultural phenomenon that should make us hesitate to take it seriously.”\n\n\nWhen Jeremiah prophecied Jerusalem’s fall, his fellow Hebrews no doubt considered his alarmism an unpleasant cultural phenomenon. And St. 
Paul was not driven from shore to shore because his message was pleasant to the bookmark magnates of his day. Fortified by such examples, we may wonder if this is a reason to take people more seriously rather than less. So let us look more closely at the contents of Mr. Ceglowski’s dismissal.\n\n\nHe writes that there are two perspectives to be taken on any great matter, the inside or the outside view. The inside view is when we think about it directly, taking it on its own terms. And the outside view is when we treat it as part of a phenomenon, asking what it resembles and whether things like it have been true in the past. And, he states, Butler’s all-powerful thinking machines resemble nothing so much as “a genie from folklore”.\n\n\nI have no objection to this logic, besides that it is not carried it to its conclusion. The idea of thinking machines resembles nothing so much as a fairy tale from the *Arabian Nights*, and such fairy tales inevitably come true. Sinbad’s voyages have been outstripped by Magellan’s, Abdullah’s underwater breathing is matched by Mr. Fleuss’ SCUBA, and the Wright brothers’ Flyer goes higher than any Indian carpet. That there are as yet no genies seems to me less an inevitable law than a discredit to the industry of our inventors.\n\n\nThere is a certain strain of thinker who insists on being more naturalist than Nature. They will say with great certainty that since Thor does not exist, Mr. Tesla must not exist either, and that the stories of Asclepius disprove Pasteur. This is quite backwards: it is reasonable to argue that the Wright Brothers will never fly because Da Vinci couldn’t; it is madness to say they will never fly because Daedalus *could*. As well demand that we must deny Queen Victoria lest we accept Queen Mab, or doubt Jack London lest we admit Jack Frost. Nature has never been especially interested in looking naturalistic, and it ignores these people entirely and does exactly what it wants.\n\n\nNow, scarce has one posited the possibility of a genie, before the question must be asked whether it is good or evil, a pious genie or an unrighteous djinn. Our interlocutor says that it shall be good – or at least not monomaniacal in its wickedness. For, he tells us, “complex minds are likely to have complex motivations; that may be part of what it even means to be intelligent”. A dullard may limit his focus to paper clips, but the mind of a genius should have to plumb the width and breadth of Heaven before satiating itself.\n\n\nBut I myself am a dullard, and I find paper clips strangely uninteresting. And the dullest man in a country town can milk a cow, pray a rosary, sing a tune, and court a girl all in the same morning. Ask him what is good in life, and he will talk your ear off: sporting, going for a walk in the woods, having a prosperous harvest, playing with a newborn kitten. It is only the genius who limits himself to a single mania. Alexander spent his life conquering, and if he had lived to a hundred twenty, he would have been conquering still. Samuel Johnson would not stop composing verse even on his deathbed. Even a village idiot can fall in love; Newton never did. That greatest of scientists was married only to his work, first the calculus and later the Mint. 
And if one prodigy can spend his span smithing guineas, who is to say that another might not smith paper clips with equal fervor?\n\n\nPerhaps sensing that his arguments are weak, Ceglowski moves from the difficult task of critiquing Butler’s tyrant-angels to the much more amenable one of critiquing those who believe in them. He says that they are megalomanical sociopaths who use their belief in thinking machines as an excuse to avoid the real work of improving the world.\n\n\nHe says (presumably as a parable, whose point I have entirely missed) that he lives in a valley of silicon, which I picture as being surrounded by great peaks of glass. And in that valley, there are many fantastically wealthy lords. Each lord, upon looking through the glass peaks and seeing the world outside with all its misery, decides humans are less interesting than machines, and fritters his fortune upon spreading Butlerist doctrine. He is somewhat unclear on why the lords in the parable do this, save that they are a “predominantly male gang of kids, mostly white, who are…more comfortable talking to computers than to human beings”, who inevitably decide Butlerism is “more important than…malaria” and so leave the poor to die of disease. \n\n\nYet Lord Gates, an avowed Butlerite, [has donated two billion pounds](http://www.gatesfoundation.org/What-We-Do/Global-Health/Malaria) to fighting malaria and developed a rather effective vaccine. Mr. Karnofsky, another Butlerite, founded a philanthropic organization that [moved sixty million pounds](http://www.givewell.org/about/impact) to the same cause. Even the lowly among the Butlerites have been inspired to at least small acts of generosity. A certain Butlerite doctor of my acquaintance (whom I recently had to rebuke for his habit of forging pamphlets in my name) donated seventy-five hundred pounds to a charity fighting malaria just last year. If the hardest-headed critic has done the same, I shall eat my hat1. The proverb says that people in glass houses should not throw stones; perhaps the same is true of glass valleys.\n\n\nI have met an inordinate number of atheists who criticize the Church for devoting itself to the invisible and the eternal, instead of to the practical and hard-headed work of helping the poor on Earth. They list all of the great signs of Church wealth – the grand cathedrals, the priestly vestments – and ask whether all of that might not better be spent on poorhouses, or dormitories for the homeless. In vain do I remind them that the only place in London where a poor man may be assured of a meal is the church kitchens, and that if he needs a bed the first person he will ask is the parish priest. In vain do I mention the saintly men who organize Christian hospitals in East Africa. The atheist accepts all of it, and says it is not enough. Then I ask him if he himself has ever given the poor a shilling, and he tells me that is beside the point.\n\n\nWhy are those most fixated on something vast and far away so often the only ones to spare a thought for the poor right beside them? Why did St. Francis minister to the lepers, while the princes of his day, seemingly undistracted by the burdens of faith, nevertheless found themselves otherwise engaged? It is simply this – that charity is the fruit of humility, and humility requires something before which to humble one’s self. 
The thing itself matters little; the Hindoo who prostrates himself before elephants is no less humble than the Gnostic who prostrates himself before ultimate truth; perhaps he is more so. It is contact with the great and solemn that has salutary effects on the mind, and if to a jungle-dweller an elephant is greatest of all, it is not surprising that factory-dwellers should turn to thinking-machines for their contact with the transcendent.\n\n\nAnd it is that contact which Mr. Ceglowski most fears. For he thinks that “if everybody contemplates the infinite instead of fixing the drains, many of us will die of cholera.” I wonder if he has ever treated a cholera patient. This is not a rhetorical question; the same pamphlet-forging doctor of my acquaintance [went on a medical mission to Haiti during the cholera epidemic there](http://squid314.livejournal.com/297760.html). It seems rather odd that someone who has never fought cholera, should be warning someone who has, that his philosophy prevents him from fighting cholera.\n\n\nAnd indeed, this formulation is exactly backward. If everyone fixes drains instead of contemplating the infinite, we shall all die of cholera, if we do not die of boredom first. The heathens sacrificed to Apollo to avert plague; if we know now that we must fix drains instead, it is only through contemplating the infinite. Aristotle contemplated the infinite and founded Natural Philosophy; St. Benedict contemplated the infinite and preserved it. Descartes contemplated the infinite and derived the equations of optics; Hooke contemplated infinity and turned them into the microscope. And when all of these infinities had been completed – the Forms of Plato giving way to the orisons of monks, the cold hard lines of the natural philosophers terminating in the green hills of England to raise smokestacks out of empty fields – then and only then did the heavens open, a choir of angels break into song, and a plumber fix a drain.\n\n\nBut he is not trapped in finitude, oh no, not he! What is a plumber but one who plumbs infinite depths? When one stoops to wade among the waste and filth to ensure the health of his fellow men, does he not take on a aspect beyond the finite, a hint of another One who descended into the dirt and grime of the world so that mankind might live? When one says that there shall certainly never be thinking-machines, because they remind him too much of God, let that man open his eyes until he is reminded of God by a plumber, or a symphony, or a dreary Sunday afternoon. Let him see God everywhere he looks, and then ask himself whether the world is truly built so that grand things can never come to pass. Mr. Butler’s thinking-machines will come to pass not because they are extraordinary, but precisely because they are ordinary, in a world where extraordinary things are the only constant of everyday life.\n\n\n*[1: EDIT 4/2: Mr. Ceglowski wants to [clarify](https://twitter.com/Pinboard/status/848355216596516864) that he does in fact give to charity]*", "url": "https://slatestarcodex.com/2017/04/01/g-k-chesterton-on-ai-risk/", "title": "G.K. Chesterton On AI Risk", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2017-03-31T22:00:00Z", "authors": ["Scott Alexander"], "summary": [], "id": "97fcabfff4c316fdf239081f3e197248"} {"text": "In the daily hubbub of current “crises” facing humanity, we forget about the many generations we hope are yet to come. 
Not those who will live 200 years from now, but 1,000 or 10,000 years from now. I use the word “hope” because we face risks, called [existential risks](http://www.existential-risk.org/), that threaten to wipe out humanity. These risks are not just for big disasters, but for the disasters that could end history.\n\n\nNot everyone has ignored the long future though. Mystics like Nostradamus have regularly tried to calculate the end of the world. HG Wells tried to develop a science of forecasting and famously depicted the far future of humanity in his book The Time Machine. Other writers built other long-term futures to warn, amuse or speculate. \n\n\nBut had these pioneers or futurologists not thought about humanity’s future, it would not have changed the outcome. There wasn’t much that human beings in their place could have done to save us from an existential crisis or even cause one.\n\n\nWe are in a more privileged position today. Human activity has been steadily shaping the future of our planet. And even though we are far from controlling natural disasters, we are developing technologies that may help mitigate, or at least deal with, them.\n\n\nFuture imperfect\n----------------\n\n\nYet, these risks remain understudied. There is a sense of powerlessness and fatalism about them. People have been talking about apocalypses for millennia, but few have tried to prevent them. Humans are also bad at doing anything about problems that have not occurred yet (partially because of the [availability heuristic](http://heuristics.behaviouralfinance.net/availability/) – the tendency to overestimate the probability of events we know examples of, and underestimate events we cannot readily recall).\n\n\nIf humanity becomes extinct, at the very least the loss is equivalent to the loss of all living individuals and the frustration of their goals. But the loss would probably be far greater than that. Human extinction means the loss of meaning generated by past generations, the lives of all future generations (and there could be [an astronomical number of future lives](http://www.nickbostrom.com/astronomical/waste.html)) and all the value they might have been able to create. If consciousness or intelligence is lost, it might mean that value itself becomes absent from the universe. This is a huge moral reason to work hard to prevent existential threats from becoming reality. And we must not fail even once in this pursuit.\n\n\nWith that in mind, I have selected what I consider the five biggest threats to humanity’s existence. But there are caveats that must be kept in mind, for this list is not final. \n\n\nOver the past century we have discovered or created new existential risks – [supervolcanoes](https://theconversation.com/how-earths-devastating-supervolcanoes-erupt-21943) were discovered in the early 1970s, and before the [Manhattan project](https://en.wikipedia.org/wiki/Manhattan_Project) nuclear war was impossible – so we should expect others to appear. Also, some risks that look serious today might disappear as we learn more. The probabilities also change over time – sometimes because we are concerned about the risks and fix them. \n\n\nFinally, just because something is possible and potentially hazardous, doesn’t mean it is worth worrying about. There are some risks we cannot do anything at all about, such as gamma ray bursts that result from the explosions of galaxies. But if we learn we can do something, the priorities change. 
For instance, with sanitation, vaccines and antibiotics, pestilence went from an act of God to bad public health.\n\n\n1. Nuclear war\n--------------\n\n\nWhile only two nuclear weapons have been used in war so far – at Hiroshima and Nagasaki in World War II – and nuclear stockpiles are down from the peak they reached in the Cold War, it is a mistake to think that nuclear war is impossible. In fact, it might not be improbable. \n\n\nThe Cuban Missile crisis was very close to turning nuclear. If we assume one such event every 69 years and [a one in three](http://www.foreignaffairs.com/articles/137679/graham-allison/the-cuban-missile-crisis-at-50) chance that it might go all the way to being nuclear war, the chance of such a catastrophe comes to about one in 200 per year (roughly 1/69 × 1/3). \n\n\nWorse still, the Cuban Missile crisis was only the most well-known case. The history of Soviet-US nuclear deterrence is full of close calls and dangerous mistakes. The actual probability has changed depending on international tensions, but it seems implausible that the chances would be much lower than one in 1000 per year.\n\n\nA full-scale nuclear war between major powers would kill hundreds of millions of people directly or through the near aftermath – an unimaginable disaster. But that is not enough to make it an existential risk. \n\n\nSimilarly the hazards of fallout are often exaggerated – potentially deadly locally, but globally a relatively limited problem. [Cobalt bombs](https://en.wikipedia.org/wiki/Cobalt_bomb) were proposed as a hypothetical doomsday weapon that would kill everybody with fallout, but are in practice hard and expensive to build. And they are physically just barely possible. \n\n\nThe real threat is nuclear winter – that is, soot lofted into the stratosphere causing a multi-year cooling and drying of the world. [Modern climate simulations](http://climate.envsci.rutgers.edu/nuclear/) show that it could preclude agriculture across much of the world for years. If this scenario occurs billions would starve, leaving only scattered survivors that might be picked off by other threats such as disease. The main uncertainty is how the soot would behave: depending on the kind of soot the outcomes may be very different, and we currently have no good ways of estimating this. \n\n\n2. Bioengineered pandemic\n-------------------------\n\n\nNatural pandemics have killed more people than wars. However, natural pandemics are unlikely to be existential threats: there are usually some people resistant to the pathogen, and the offspring of survivors would be more resistant. Evolution also does not favor parasites that wipe out their hosts, which is why syphilis went from a virulent killer to a chronic disease [as it spread in Europe](http://rspb.royalsocietypublishing.org/content/271/Suppl_4/S174.full.pdf).\n\n\nUnfortunately we can now make diseases nastier. One of the more famous examples is how the introduction of an extra gene in mousepox – the mouse version of smallpox – made it far [more lethal](http://jvi.asm.org/cgi/pmidlookup?view=long&pmid=11152493) and able to infect vaccinated individuals. [Recent work](http://www.nature.com/news/specials/mutantflu/index.html) on bird flu has demonstrated that the contagiousness of a disease can be deliberately boosted.\n\n\n*Image credit: [eneas](https://www.flickr.com/photos/eneas/3471986083), [CC BY](http://creativecommons.org/licenses/by/4.0/)*\n\n\nRight now the risk of somebody deliberately releasing something devastating is low. 
But as biotechnology gets [better and cheaper](http://www.synthesis.cc/2014/02/time-for-new-cost-curves-2014.html), more groups will be able to make diseases worse.\n\n\nMost work on bioweapons has been done by governments looking for something controllable, because wiping out humanity is not militarily useful. But there are always some people who might want to do things because they can. Others have higher purposes. For instance, the Aum Shinrikyo cult [tried to hasten](http://www.cnas.org/files/documents/publications/CNAS_AumShinrikyo_Danzig_1.pdf) the apocalypse using bioweapons besides their more successful nerve gas attack. Some people think the Earth would be better off without humans, and so on. \n\n\nThe number of fatalities from [bioweapon](http://arxiv.org/abs/1209.0089) attacks and epidemic outbreaks looks like it has a [power-law distribution](http://arxiv.org/abs/cond-mat/0412004) – most attacks have few victims, but a few kill many. Given current numbers the risk of a global pandemic from bioterrorism seems very small. But this is just bioterrorism: governments have killed far more people than terrorists with bioweapons (up to 400,000 may have died from the WWII Japanese biowar program). And as technology gets more powerful in the future, nastier pathogens become easier to design.\n\n\n3. Superintelligence\n--------------------\n\n\nIntelligence is very powerful. A tiny increment in problem-solving ability and group coordination is why we left the other apes in the dust. Now their continued existence depends on human decisions, not what they do. Being smart is a real advantage for people and organisations, so there is much effort in figuring out ways of improving our individual and collective intelligence: from cognition-enhancing drugs to artificial-intelligence software.\n\n\nThe problem is that intelligent entities are good at achieving their goals, but if the goals are badly set they can use their power to cleverly achieve disastrous ends. There is no reason to think that intelligence itself will [make something behave nice and morally](http://www.nickbostrom.com/superintelligentwill.pdf). In fact, it is possible to prove that certain types of superintelligent systems would [not obey moral rules even if they were true](http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html).\n\n\nEven more worrying is that in trying to explain things to an artificial intelligence we run into profound practical and philosophical problems. Human values are diffuse, complex things that we are not good at expressing, and even if we could do that we might not understand all the implications of what we wish for. \n\n\n*Image credit: [shiborisan](https://www.flickr.com/photos/shiborisan/7534681780), [CC BY-NC-ND](http://creativecommons.org/licenses/by-nc-nd/4.0/)*\n\n\nSoftware-based intelligence may very quickly go from below human to frighteningly powerful. The reason is that it may scale in different ways from biological intelligence: it can run faster on faster computers, parts can be distributed on more computers, different versions tested and updated on the fly, new algorithms incorporated that give a jump in performance. \n\n\nIt has been proposed that an “[intelligence explosion](http://wiki.lesswrong.com/wiki/Intelligence_explosion)” is possible when software becomes good enough at making better software. 
Should such a jump occur there would be a large difference in potential power between the smart system (or the people telling it what to do) and the rest of the world. This has clear potential for disaster if the goals are badly set. \n\n\nThe unusual thing about superintelligence is that we do not know if rapid and powerful intelligence explosions are possible: maybe our current civilisation as a whole is improving itself at the fastest possible rate. But [there are good reasons](http://intelligence.org/files/IE-EI.pdf) to think that some technologies may speed things up far faster than current societies can handle. Similarly we do not have a good grip on just how dangerous different forms of superintelligence would be, or what mitigation strategies would actually work. It is very hard to reason about future technology we do not yet have, or intelligences greater than ourselves. Of the risks on this list, this is the one most likely to *either* be massive or just a mirage.\n\n\nThis is a surprisingly under-researched area. Even in the 50s and 60s when people were extremely confident that superintelligence could be achieved “within a generation”, they did not look much into safety issues. Maybe they did not take their predictions seriously, but more likely is that they just saw it as a remote future problem. \n\n\n4. Nanotechnology\n-----------------\n\n\nNanotechnology is the control over matter with atomic or molecular precision. That is in itself not dangerous – instead, it would be very good news for most applications. The problem is that, like biotechnology, increasing power also increases the potential for abuses that are hard to defend against.\n\n\nThe big problem is *not* the infamous “grey goo” of self-replicating nanomachines eating everything. That would require clever design for this very purpose. It is tough to make a machine replicate: biology is much better at it, by default. Maybe some maniac would eventually succeed, but there are plenty of more low-hanging fruits on the destructive technology tree. \n\n\n\n![]()\n\n\n[gi](https://www.flickr.com/photos/gi/57341575), [CC BY-SA](http://creativecommons.org/licenses/by-sa/4.0/)\n\n\nThe most obvious risk is that atomically precise manufacturing looks ideal for rapid, cheap manufacturing of things like weapons. In a world where any government could “print” large amounts of autonomous or semi-autonomous weapons (including facilities to make even more) arms races could become very fast – and hence unstable, since doing a first strike before the enemy gets a too large advantage might be tempting. \n\n\nWeapons can also be small, precision things: a “smart poison” that acts like a nerve gas but seeks out victims, or ubiquitous “gnatbot” surveillance systems for keeping populations obedient seems entirely possible. Also, there might be ways of getting nuclear proliferation and climate engineering into the hands of anybody who wants it.\n\n\nWe cannot judge the likelihood of existential risk from future nanotechnology, but it looks like it could be potentially disruptive just because it can give us whatever we wish for.\n\n\n5. Unknown unknowns\n-------------------\n\n\nThe most unsettling possibility is that there is something out there that is very deadly, and we have no clue about it.\n\n\nThe silence in the sky might be evidence for this. 
Is the absence of aliens because life or intelligence is extremely rare, or because intelligent life [tends to get wiped out](https://theconversation.com/habitable-exoplanets-are-bad-news-for-humanity-25838)? If there is a future Great Filter, it must have been noticed by other civilisations too, and even that didn’t help. \n\n\n*Image credit: [angrytoast](https://www.flickr.com/photos/angrytoast/2943273893), [CC BY-NC](http://creativecommons.org/licenses/by-nc/4.0/)*\n\n\nWhatever the threat is, it would have to be something that is nearly unavoidable even when you know it is there, no matter who and what you are. We do not know about any such threats (none of the others on this list work like this), but they might exist.\n\n\nNote that just because something is unknown it doesn’t mean we cannot reason about it. In a [remarkable paper](http://arxiv.org/abs/astro-ph/0512204) Max Tegmark and Nick Bostrom show that a certain set of risks must be less than one chance in a billion per year, based on the relative age of Earth. \n\n\nYou might wonder why climate change or meteor impacts have been left off this list. Climate change, no matter how scary, is unlikely to make the entire planet uninhabitable (but it could compound other threats if our defences against it break down). Meteors could certainly wipe us out, but we would have to be very unlucky. The average mammalian species survives for about a million years. Hence, the background natural extinction rate is roughly one in a million per year. This is much lower than the nuclear-war risk, which after 70 years is still the biggest threat to our continued existence.\n\n\nThe availability heuristic makes us overestimate risks that are often in the media, and discount unprecedented risks. If we want to be around in a million years we need to correct that.", "url": "https://theconversation.com/the-five-biggest-threats-to-human-existence-27053", "title": "The five biggest threats to human existence", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2013-12-31T23:00:00Z", "authors": ["Anders Sandberg"], "summary": [], "id": "1007c2c9d9afe541e953708826d726b9"} {"text": "Peer review has been an integral part of scientific research for [more than 300 years](https://blogs.scientificamerican.com/information-culture/the-birth-of-modern-peer-review/). But even before peer review was introduced, reproducibility was a primary component of the scientific method. One of the first reproducible experiments was presented by Jabir Ibn Haiyan in 800 CE. In the past few decades, many domains have encountered high profile cases of non-reproducible results. The [American Psychological Association has struggled with authors failing to make data available](https://psycnet.apa.org/doi/10.1037/0003-066X.61.7.726). A 2011 study found that only [6% of medical studies could be fully reproduced](https://doi.org/10.1038%2Fnrd3439-c1). In 2016, a survey of researchers from many disciplines found that most had [failed to reproduce one of their previous papers](https://doi.org/10.1038%2F533452a). Now, we hear warnings that Artificial Intelligence (AI) and Machine Learning (ML) [face their own reproducibility crises](https://science.sciencemag.org/content/359/6377/725).\n\n\nThis leads us to ask: is it true? It would seem hard to believe, as ML permeates every smart-device and intervenes ever more in our daily lives. 
From helpful hints on how to [act like a polite human over email](https://ai.googleblog.com/2018/05/smart-compose-using-neural-networks-to.html), to Elon Musk’s [promise](https://www.wired.com/story/elon-musk-tesla-full-self-driving-2019-2020-promise/) of self-driving cars next year, it seems like machine learning is indeed reproducible.\n\n\nHow reproducible is the latest ML research, and can we begin to quantify what impacts its reproducibility? This question served as motivation for my [NeurIPS 2019 paper](https://arxiv.org/abs/1909.06674). Based on a combination of masochism and stubbornness, over the past eight years I have attempted to implement various ML algorithms from scratch. This has resulted in a ML library called [JSAT](https://github.com/EdwardRaff/JSAT). My investigation in reproducible ML has also relied on personal notes and records hosted on [Mendeley](https://www.mendeley.com/) and Github. With these data, and clearly no instinct for preserving my own sanity, I set out to quantify and verify reproducibility! As I soon learned, I would be engaging in [**meta-science**](https://en.wikipedia.org/wiki/Metascience), the study of science itself.\n\n\nWhat is Reproducible Machine Learning?\n--------------------------------------\n\n\n\n\n![](https://thegradient.pub/content/images/2020/01/muggle_problems.png)\nOne does not simply follow the description in the paper. https://abstrusegoose.com/588\n\n\nBefore we dive in, it is important to define what we mean by **reproducible**. Ideally, full reproducibility means that simply reading a scientific paper should give you all the information you need to 1) set up the same experiments, 2) follow the same approach, and then 3) obtain similar results.\n\n\nIf we can get all the way to step 3 based solely on information present in the paper, we might call that **independent reproducibility**. In this example, our result is **reproducible** because we are able to get the same result, and **independent** because we have done so in an effort completely independent of the original publication.\n\n\nBut as our friend from the comic above might tell us, simply following the content of the paper isn’t always sufficient. If we can’t get to step 3 by using only the information in the paper (or from cited prior work), we would determine that the paper is not independently reproducible.\n\n\nSome may wonder, why make this distinction between **reproducibility** and **independent reproducibility**? Almost all of AI and ML research is based on computer code. We don’t require the burden of expensive and labor-intensive chemical synthesis, waiting for bacteria in a petri dish to mature, or pesky human trials. It should be easy to simply get code from the authors, run that on the same data, and get the same results!\n\n\n\n\n![](https://thegradient.pub/content/images/2020/01/phd031214s.gif)\nIf you have never had to read a researcher's code before... you are doing pretty OK in life. Good job. http://phdcomics.com/comics/archive.php?comicid=1689 \n\n\nOur aversion to using or asking for the authors code is more than fear of working with undocumented research-grade code. [Chris Drummond](https://www.researchgate.net/profile/Chris_Drummond) has [described the approach](http://cogprints.org/7691/7/ICMLws09.pdf) of using an author’s code as replicability, and made a very salient argument that replication is desirable, but not sufficient for good science. 
A paper is supposed to be the scientific distillation of the work, representing what we have learned and now understand to enable these new results. If we can’t reproduce the results of a paper without the authors’ code, it may suggest that the paper itself didn’t successfully capture the important scientific contributions. This is before we consider the possibility that there may be bugs in the code that actually benefit the results, or any number of other possible discrepancies between code and paper.\n\n\nAnother [great example](http://proceedings.mlr.press/v97/bouthillier19a/bouthillier19a.pdf) from ICML this past year showed that even if we can replicate the results of a paper, slightly altering the experimental setup can produce dramatically different results. For these reasons, we don’t want to consider the authors’ code, as this could be a source of bias. We want to focus on the question of reproducibility, without wading into the murky waters of replication.\n\n\nWhat Makes an ML Paper Reproducible?\n-----------------------------------\n\n\n| Feature | Important | My Reaction |\n| --- | --- | --- |\n| Hyperparameters | ✅ | 👍 |\n| Easy to Read | ✅ | 👍 |\n| Equations per Page | ✅ | 🤔 |\n| Empirical vs Rigor | ✅ | 🤨 |\n| Pseudo Code | ✅ | 🤯 |\n| Replies to Questions | ✅ | 🤷 |\n| Include Toy Problems | ❌ | 😭 |\n| Year Published | ❌ | 😌 |\n| Open Source Code | ❌ | 😱 |\n\n\nSome of the features that were/were not related to reproducibility that I found the most interesting.\n\n\nI reviewed every paper I have attempted to implement up to 2017, and filtered out papers based on two criteria: whether the attempt would be biased by having looked at released source code, and whether there was a personal relationship with the authors. For each paper, I recorded as much information as I could to create a quantifiable set of features. Some were completely objective (how many authors were on the paper), while others were highly subjective (does the paper look intimidating?). The goal of this analysis was to get as much information as possible about things that might impact a paper’s reproducibility. This left me with 255 attempted papers and 162 successful reproductions. Each paper was distilled to a set of 26 features, and statistical testing was done to determine which were significant. In the table above I've put what I think are the most interesting and important results, along with my initial reactions.
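To make the "statistical testing" step concrete, here is a minimal sketch of how one feature at a time could be checked against the reproduction outcome. The counts, values, and choice of tests below are my own illustrative assumptions, not the paper's actual data or procedure:

```python
# Illustrative only: made-up counts/values, not the paper's data or exact tests.
from scipy.stats import fisher_exact, mannwhitneyu

# A binary feature (e.g. "hyperparameters specified?") versus the outcome,
# arranged as a 2x2 contingency table: rows = feature present/absent,
# columns = reproduced / not reproduced.
table = [[90, 40],   # feature present (made-up counts)
         [72, 53]]   # feature absent  (made-up counts)
odds_ratio, p_binary = fisher_exact(table)
print(f"binary feature: odds ratio = {odds_ratio:.2f}, p = {p_binary:.3f}")

# A continuous feature (e.g. equations per page): compare its distribution
# in reproduced vs. non-reproduced papers with a nonparametric test.
eqs_reproduced = [1.2, 0.8, 2.5, 1.0, 3.1]       # made-up values
eqs_not_reproduced = [4.0, 2.9, 5.2, 3.8, 6.1]   # made-up values
u_stat, p_cont = mannwhitneyu(eqs_reproduced, eqs_not_reproduced)
print(f"continuous feature: U = {u_stat:.1f}, p = {p_cont:.3f}")
```

In a full analysis, each of the 26 features would be checked in some such way, with the significance threshold typically adjusted for the number of tests run.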
Some of the results were unsurprising. For example, the number of authors shouldn’t have any particular importance to a paper’s reproducibility, and it did not have a significant relationship. Hyperparameters are the knobs we can adjust to change an algorithm’s behavior, but they are not learned by the algorithm itself. Instead, we humans must set their values (or devise a clever way to pick them). Whether or not a paper detailed the hyperparameters used was found to be significant, and we can intuit why. If you don’t tell the reader what the settings were, the reader has to guess. That takes work and time, and it is error-prone! So, some of our results have given credence to the ideas the community has already been pursuing in order to make papers more reproducible. What is important is that we can now quantify why these are good things to be pursuing. Other findings follow basic logic, such as the finding that papers that are easier to read are easier to reproduce, likely because they are easier to understand.\n\n\nI implore you to read the paper for a deeper discussion, but there are a few additional results that I think are particularly interesting, either because they challenge our assumptions about what we “know” a good paper to be or because they lead to some surprising conclusions. All of these results have more nuance than I can unpack in this article, but they are worth mentioning, if for nothing else than to stimulate a deeper conversation and hopefully spur further research to answer these questions.\n\n\n### Finding 1: Having fewer equations per page makes a paper more reproducible.\n\n\n\n\n![](https://thegradient.pub/content/images/2020/01/set_theory.png)\nMath is like catnip for reviewers. They just can't help themselves. https://xkcd.com/982/\n\n\nThis appears to be the case because the most readable papers use the fewest equations. We often see papers that have many equations and derivations listed, for any number of reasons. It appears that a careful and judicious use of equations makes things easier to read, primarily because you can use math selectively to communicate more effectively. This result clashes with the incentive structure of getting a paper published. On more than one occasion, reviewers have asked me to include more math in a paper. It may be that the math itself makes a paper seem more scientific or grounded in objectivity. While more specification may seem to be better, it is not synonymous with reproducibility. This is a cultural issue we need to address as a community.\n\n\n### Finding 2: Empirical papers may be more reproducible than theory-oriented papers.\n\n\nThere is considerable debate about where and how much rigor needs to be normalized within the community. This is often done under the guise that, as a community, our focus should be on getting the best results for a given benchmark. Yet in optimizing for benchmarks, we risk losing our understanding of what is actually happening and why these methods work. The inclusion of theoretical work and formal proofs does not cover all aspects of what might be meant by the term rigor. Given the common belief that elaborate mathematical proofs ensure a better understanding of a given method, it is interesting to see that greater mathematical specification isn’t necessarily making research easier to reproduce. The important point here is that papers containing a mix of theory and empirical emphasis have the same overall reproduction rates as purely empirical papers.
An empirical bent can be helpful from the reproducibility perspective, but [could also hamper progress](https://openreview.net/pdf?id=rJWF0Fywf) by creating perverse incentives and unintended side effects.\n\n\n### Finding 3: Sharing code is not a panacea\n\n\nWe have already touched upon the idea that reproduction via released code is not the same thing as reproduction done independently. Is this a distinction without a difference? It is not! My results indicate that the open-sourcing of code is at best a weak indicator of reproducibility. As conferences begin to more strongly encourage code submission and examination as part of the review process, I believe this is a crucial point. As a community, we need to understand what our goals are with such efforts and what we are actually accomplishing. Careful thought and consideration should go into this distinction if we ever make code submission mandatory, and into the guidance we give reviewers for evaluating such code.\n\n\nI find this result particularly noteworthy in terms of other people's reactions. While presenting at NeurIPS, many people commented on it. Half of them were certain that releasing code would be correlated with reproducibility, and the other half felt it obvious that the non-relationship would emerge. This strong contrast between deeply held opinions is a perfect example of why I wanted to do this study. We don't **really** know until we sit down and measure it!\n\n\n### Finding 4: Having detailed pseudo code is just as reproducible as having no pseudo code.\n\n\n\n\n![](https://thegradient.pub/content/images/2020/01/image11.png)\n**Step-Code:** Concise, but requires context from other parts of the paper to decipher. \n\n\n\n\n![](https://thegradient.pub/content/images/2020/01/image6.png)\n**Standard-Code:** Relatively detailed, can be almost self-contained. Usually mathematical notation. \n\n\n\n\n![](https://thegradient.pub/content/images/2020/01/image9.png)\n**Code-Like:** Almost always self-contained, easy to convert to code. \n\n\nThis finding challenged my assumptions about what constituted a good paper, but it made more sense as I thought about the results. Somewhere in the paper, the process must be described. A computer scientist by training, I always preferred a type of description called **pseudo code**. But pseudo code can take many different forms. I categorized the papers into four groups: None, Step-Code, Standard-Code, and Code-Like. I have included some representative samples of these above, from some widely reproduced papers that may or may not have been in this study!\n\n\nI was shocked to find that Standard-Code and Code-Like had roughly equal reproduction rates, and floored to discover that having no pseudo code at all was just as good! Evidently, cogent writing is just as effective in communicating a process. What was not as effective was so-called Step-Code, where the steps are given as a bulleted list, with each step referring to another section of the paper. Step-Code actually makes reading and understanding the paper harder, as the reader must now jump back and forth between different sections, rather than following a single sequential flow.\n\n\n### Finding 5: Creating simplified example problems does not appear to help with reproducibility.\n\n\nThis was another surprising result that I am still coming to grips with. I’ve always valued writers who can take a complex idea and boil it down to a simpler and more digestible form. I have likewise appreciated papers that create so-called toy problems.
Toy problems exemplify some property in a way that is easily visualized and turned into experiments. Subjectively, I always found simplified examples useful for understanding what a paper is trying to accomplish. Reproducing the toy problem was a useful tool for creating a smaller test case I could use for debugging. From an objective standpoint, however, simplified examples appear to provide no benefit for making a paper more reproducible. In fact, they do not even make papers more readable! I still struggle to understand and explain this result. This is exactly why it is important for us as a community to quantify these questions. If we do not do the work of quantification, we will never know whether our work is tackling the issues most relevant to the research problem at hand.\n\n\n### Finding 6: Please, check your email\n\n\nThe last result I want to discuss is that replying to questions has a huge impact on a paper’s reproducibility. This result was expected, as few papers contain a perfect description of their methods. I emailed 50 different authors with questions regarding how to reproduce their results. In the 24 cases where I never got a reply, I was able to reproduce the results only once (a 4% success rate). For the remaining 26 cases in which the author did respond, I was able to successfully reproduce 22 of the papers (an 85% success rate). I think this result is most interesting for what it implies about the publication process itself. What if we allowed published papers to be updated over time, without it becoming some kind of “new” publication? This way, authors could incorporate common feedback and questions into the original paper. This is already possible when papers are [posted on the arXiv](https://arxiv.org/); it should be the case for conference publications as well. These are things that could potentially advance science by increasing reproducibility, but only if we allow them to happen.\n\n\nWhat Have We Learned?\n---------------------\n\n\n\n\n![](https://thegradient.pub/content/images/2020/01/machine_learning_2x.png)\nExperts call this \"hyperparameter tuning\". https://xkcd.com/1838/\n\n\nThis work was inspired by the headline, “*Artificial intelligence faces reproducibility crisis*”. **Is this headline hype, or does it point to a systematic problem in the field?** After completing this effort, my inclination is that there is room for improvement, but that we in the AI/ML field are doing a better job than most disciplines. A 62% success rate is higher than in many meta-analyses from other sciences, and I suspect my 62% number is lower than reality. Others who are more familiar with research areas outside my areas of expertise might be able to succeed where I have failed. Therefore, I consider the 62% estimate to be a lower bound.\n\n\nOne thing I want to make very clear: none of these results should be taken as a definitive statement on what is and what is not reproducible. There are a huge number of potential biases that may impact these results. Most obvious is that these 255 attempts at reproduction were all done by a single person. There are no community standards for internal consistency between meta-analysts. What I find easy to reproduce may be difficult for others, and vice versa. For example, I couldn’t reproduce any of the Bayesian or fairness-based papers I attempted, but I don’t believe that these fields are irreproducible.
My personal biases, in terms of background, education, resources, interests, and more, are all inseparable from the results obtained.\n\n\nThat said, I think this work provides strong evidence for a number of our community’s current challenges, while validating many reproducibility efforts currently under way in the community. The biggest takeaway is that we cannot take all of our assumptions about so-called reproducible ML at face value. These assumptions need to be tested, and I hope more than anything that this work will inspire others to begin quantifying and collecting this data for themselves. As a community, we are in a unique position to perform meta-science on ourselves. The cost of replication is so much lower for us than for any other field of science. What we learn here could have impacts that extend beyond AI & ML to other subfields of Computer Science.\n\n\nMore than anything, I think this work reinforces how difficult evaluating the reproducibility of research is. Considering each feature in isolation is a fairly simple way to approach this analysis, and it has already delivered a number of potential insights, unexpected results, and complexities. However, it does not begin to consider correlations among papers that share authors, representations of the data as a graph, or even non-linear interactions among the current features! This is why I’ve attempted to make [much of the data publicly available](https://github.com/EdwardRaff/Quantifying-Independently-Reproducible-ML) so that others can perform a deeper analysis.\n\n\nFinally, it has been [pointed out to me](https://twitter.com/willcfleshman/status/1174084453473423361) that I may have created the most unreproducible ML research ever. In reality, this points to a number of open issues in how we do the science of meta-science: how we study the way we implement and evaluate our research. With that, I hope I’ve encouraged you to read my paper for further details and discussion. Think about how your own work fits into the larger picture of human knowledge and science. As the avalanche of new AI and ML research continues to grow, our ability to leverage and learn from all this work will be highly dependent on our ability to distill ever more knowledge down to a digestible form. At the same time, our processes and systems must result in reproducible work that does not lead us astray. I have more work I would like to do in this space, and I hope you will join me.\n\n\n---\n\n\n**Author Bio** \n\n*[Dr. Edward Raff](https://www.edwardraff.com/) is a Chief Scientist at Booz Allen Hamilton, Visiting Professor at the University of Maryland, Baltimore County (UMBC), and author of the [JSAT](https://github.com/EdwardRaff/JSAT) machine learning library. Dr. Raff leads the machine learning research team at Booz Allen, while also supporting clients who have advanced ML needs. He received his BS and MS in Computer Science from Purdue University, and his PhD from UMBC.
You can follow him on [Twitter @EdwardRaffML](https://twitter.com/EdwardRaffML)*.\n\n\n---\n\n\n**Citation** \n\n*For attribution in academic contexts or books, please cite this work as*\n\n\n\n> \n> Edward Raff, \"Quantifying Independently Reproducible Machine Learning\", The Gradient, 2020.\n> \n> \n> \n\n\n*BibTeX citation:*\n\n\n\n> \n> @article{raff2020quantifying, \n> author = {Raff, Edward}, \n> title = {Quantifying Independently Reproducible Machine Learning}, \n> journal = {The Gradient}, \n> year = {2020}, \n> note = {https://thegradient.pub/independently-reproducible-machine-learning/} \n> } \n> \n> \n\n\n---\n\n\nIf you enjoyed this piece and want to hear more, [subscribe](https://thegradient.pub/subscribe/) to the Gradient and follow us on [Twitter](https://twitter.com/gradientpub).", "url": "https://thegradient.pub/independently-reproducible-machine-learning/", "title": "Quantifying Independently Reproducible Machine Learning", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-02-05T23:00:00Z", "authors": ["Edward Raff"], "summary": [], "id": "38aa6d7137e633eb65bd287a56fe8a8d"} {"text": "Generalization in Reward Learning\n---------------------------------\n\nAn overview of reinforcement learning, generalization, and reward learning\n--------------------------------------------------------------------------\n\n[Max Chiswick](https://chisness.medium.com/?source=post_page-----da6c99d9e48--------------------------------), published in [Towards Data Science](https://towardsdatascience.com/?source=post_page-----da6c99d9e48--------------------------------) · 16 min read · Sep 30, 2020\n\n**Authors: Anton Makiievskyi, Liang Zhou, Max Chiswick**\n\n*Note: This is the* ***first*** *of* ***two*** *blog posts (part* [*two*](https://chisness.medium.com/assessing-generalization-in-reward-learning-implementations-and-experiments-de02e1d08c0e)*). In these posts, we describe a project we undertook to assess the ability of reward learning agents to generalize. The implementation for this project is* [*available*](https://github.com/lzil/procedural-generalization) *on GitHub.*\n\n*This first post will provide a background on reinforcement learning, reward learning, and generalization, as well as summarize the main aims and inspirations for our project. If you have the requisite technical background, feel free to skip the first couple of sections.*\n\nAbout Us\n========\n\nWe are a team that participated in the 2020 [AI Safety Camp](https://aisafety.camp/) (AISC), a program in which early-career researchers collaborate on research proposals related to AI safety. In short, AI safety is a field that aims to ensure that as AI continues to develop, it does not harm humanity.\n\nGiven our team’s mutual interests in technical AI safety and reinforcement learning, we were excited to work together on this project.
The idea was originally suggested by Sam Clarke, another AISC participant with whom we had fruitful conversations over the course of the camp.\n\nReinforcement Learning\n======================\n\nIn reinforcement learning (RL), an agent interacts with an environment with the goal of earning rewards. **Ultimately, the agent wants to learn a strategy in order to maximize the rewards it obtains over time.** First things first, though: what exactly is an agent, and what are rewards? An *agent* is a character that interacts with some world, also known as an *environment*, by taking actions. For instance, an agent could be a character playing a video game, the car in a self-driving car simulation, or a player in a poker game. The rewards are simply numbers that represent the goal of the agent, indicating whether what happens to the agent is preferable or not. For example, picking up a coin may give a positive reward, while getting hit by an enemy may give a negative reward.\n\nIn RL, the *state* represents everything about the current situation in the environment. What the agent can actually see, however, is an *observation*. For example, in a poker game, the observation may be the agent’s own cards and the previous actions of the opponent, while the state also includes the cards of the opponent and the sequence of cards in the deck (i.e., things the agent can’t see). In some environments like chess where there is no hidden information, the state and the observation are the same.\n\nGiven observations, the agent takes *actions*. After each action, the agent will get feedback from the environment in the form of:\n\n1. **Rewards:** Scalar values, which can be positive, zero, or negative\n2. **A new observation:** The result of taking the action from the previous state, which moves the agent to a new state and results in this new observation. (Also, whether or not the new state is “terminal”, meaning whether the current interaction is finished or still in progress. For example, completing a level or getting eaten by an opponent will terminate many games.)\n\nIn RL, our goal is to *train* the agent to be really good at a task by using rewards as feedback. Through one of many possible training algorithms, the agent gradually learns a strategy (also known as a *policy*) that defines what action the agent should take in any state to maximize reward. The goal is to maximize reward over an entire *episode*, which is a sequence of states that an agent goes through from the beginning of an interaction to the terminal state.\n\nHugely successful agents have been trained to superhuman performance in domains such as [Atari](https://medium.com/@jonathan_hui/rl-dqn-deep-q-network-e207751f7ae4) and the game of [Go](https://deepmind.com/blog/article/alphago-zero-starting-scratch).\n\n*Figure: The reinforcement learning process (Image by Authors)*\n\nLet’s look at how a sample algorithm might work, using the video game Mario as an example. Let’s say that Mario has an enemy to his right, a mushroom to his left, and nothing above (see figure below). With those three action options, he might get a reward of +2 if he goes left, -10 if he goes right, and 0 if he goes up. After Mario takes an action, he will be in a new state with a new observation, and will have earned a reward based on his action. Then it will be time for another action, and the process goes on.
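To make this observation-action-reward loop concrete, here is a minimal sketch in Python. The tiny environment and random strategy below are invented for illustration (they are not from this post or from any particular library), but the `reset`/`step` structure mirrors the gym-style environments linked above.

```python
import random

class CoinWorld:
    """Toy environment, invented for illustration: the agent walks along a
    line of five cells and earns +2 for reaching the coin in the last cell."""

    def reset(self):
        self.position = 0
        return self.position                      # the initial observation

    def step(self, action):                       # action: -1 (left) or +1 (right)
        self.position = max(0, min(4, self.position + action))
        done = self.position == 4                 # terminal state: reached the coin
        reward = 2 if done else 0
        return self.position, reward, done        # new observation, reward, terminal?

def random_policy(observation):
    """Placeholder strategy: choose an action at random."""
    return random.choice([-1, +1])

env = CoinWorld()
observation = env.reset()
total_reward, done = 0, False
while not done:                                   # one episode
    action = random_policy(observation)           # the agent acts...
    observation, reward, done = env.step(action)  # ...and gets feedback
    total_reward += reward
print("episode return:", total_reward)
```

A training algorithm would replace the random strategy with one that improves from the rewards it has seen, as described next.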
Recall that the intention is to maximize rewards for an entire episode, which in this context is the series of states from the beginning of the game for the duration of Mario’s life.\n\n*Figure: Mario learning to eat mushrooms (Image by Authors)*\n\nThe first time the algorithm sees this situation, it might select an option randomly since it doesn’t yet understand the consequences of the available actions. As it sees this situation more and more, it will learn from experience that in situations like these, going right is bad, going up is ok, and going left is best. We wouldn’t directly teach a dog how to fetch a ball, but by giving treats (rewards) for doing so, the dog would learn by reinforcement. Similarly, Mario’s actions are reinforced by feedback from experience that mushrooms are good and enemies are not.\n\nHow does the algorithm work to maximize the rewards? Different RL algorithms work in different ways, but one might keep track of the results of taking each action from this position, and the next time Mario is in this same position, he would select the action expected to be the most rewarding according to the prior results. Many algorithms select the best action most of the time, but also sometimes select randomly to make sure that they are exploring all of the options. (Note that at the beginning, the agent usually acts randomly because it hasn’t yet learned anything about the environment.)\n\nIt’s important to keep exploring all of the options to make sure that the agent doesn’t find something decent and then stick with it forever, possibly ignoring much better alternatives. In the Mario game, if Mario first tried going right and saw it was -10 and then tried up and saw it was 0, it wouldn’t be great to always go up from that point on. It would be missing out on the +2 reward for going left that hadn’t been explored yet.\n\nImagine that you tried cooking at home and didn’t like the food, and then went to McDonald’s and had a fantastic meal. You found a good “action” of going to McDonald’s, but it would be a shame (and not great health-wise) if you kept eating at McDonald’s forever and didn’t try other restaurants that may end up giving better “rewards”.\n\nGeneralization\n==============\n\nRL is often used in game settings like [Atari](https://gym.openai.com/envs/#atari). One problem with using RL in Atari games (which are similar to Mario-style games) is the *sequential* nature of these games. After winning one level, you advance to the next level, and keep going through levels in the same order. **The algorithms may simply memorize exactly what happens in each level and then fail miserably when facing the slightest change in the game.** This means that the algorithms may not actually be understanding the game, but instead learning to memorize a sequence of button presses that leads to high rewards for particular levels. A better algorithm, instead of learning to memorize a sequence of button presses, is able to “understand” the structure of the game and thus is able to adapt to unseen situations, or *generalize*.\n\nSuccessful generalization means performing well in situations that haven’t been seen before. If you learned that 2\*2 = 4, 2\*3 = 6, and 2\*4 = 8, and then could figure out that 2\*6 = 12, that means that you were able to “understand” the multiplication and not just memorize the equations.\n\n*Figure: Atari Breakout ([source](/atari-reinforcement-learning-in-depth-part-1-ddqn-ceaa762a546f))*
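As a toy illustration of that memorization-versus-generalization distinction, using the multiplication example above (both "learners" here are invented purely for illustration):

```python
train = {(2, 2): 4, (2, 3): 6, (2, 4): 8}   # examples seen during "training"
test = {(2, 6): 12, (2, 9): 18}             # unseen examples

def memorizer(a, b):
    """Only remembers the exact training pairs; returns None otherwise."""
    return train.get((a, b))

def generalizer(a, b):
    """Has 'understood' the underlying rule behind the examples."""
    return a * b

for name, learner in [("memorizer", memorizer), ("generalizer", generalizer)]:
    seen_ok = all(learner(a, b) == y for (a, b), y in train.items())
    unseen_ok = all(learner(a, b) == y for (a, b), y in test.items())
    print(f"{name}: correct on seen examples: {seen_ok}, on unseen examples: {unseen_ok}")
```

Both learners are perfect on the seen examples; only the generalizer is also correct on the unseen ones.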
Let’s look at a generalization example in the context of an email spam filter. These usually work by collecting data from users who mark emails in their inbox as spam. If a bunch of people marked the message “EARN $800/DAY WITH THIS ONE TRICK” as spam, then the algorithm would learn to block all of those messages for all email users in the future. But what if the spammer noticed his emails were being blocked and decided to outsmart the filter? The next day he might send a new message, “EARN $900/DAY WITH THIS ONE OTHER TRICK”. An algorithm that is only memorizing would fail to catch this because it was just memorizing exact messages to block, rather than learning about spam in general. A generalizing algorithm would learn patterns and effectively understand what constitutes a piece of spam mail.\n\nReward Learning\n===============\n\nGames generally have very well-defined rewards built into them. In a card game like [Blackjack](https://gym.openai.com/envs/Blackjack-v0/), rewards correspond to how much the agent wins or loses each hand. In Atari, rewards are game dependent, but are well-specified, such as earning points for defeating enemies or finishing levels and losing points for getting hit or dying.\n\nThe image below is from a classic reinforcement learning environment called [CartPole](https://gym.openai.com/envs/CartPole-v0/), where the goal is to keep a pole upright on a track, and where a reward of +1 is provided for every second that the pole stays upright. The agent moves the cart left or right to try to keep the pole balanced, and the longer it can keep it balanced, the more +1 rewards it receives.\n\n*Figure: CartPole ([source](https://medium.com/@tuzzer/cart-pole-balancing-with-q-learning-b54c6068d947))*\n\n**However, many tasks in the real world do not have such clearly defined rewards, which leads to limitations in the possible applications of reinforcement learning.** This problem is compounded by the fact that even attempting to specify clearly defined rewards is often difficult, if not impossible. A human could provide direct feedback to an RL agent during training, but this would require too much human time.\n\nOne approach called inverse reinforcement learning involves “reverse engineering” a reward function from demonstrations. For complex tasks, figuring out the reward function from demonstrations is very difficult to do well.\n\n***Reward learning* involves learning a reward function, which describes how many rewards are earned in each situation in the environment, i.e. a mapping of the current state and action to the rewards received.** The goal is to learn a reward function that encourages the desired behavior. To train the algorithm to learn a reward function, we need another source of data, such as demonstrations of successfully performing the task. The reward function outputs reward predictions for each state, after which standard RL algorithms can be used to learn a strategy by simply substituting these approximate rewards in place of the usually known rewards.\n\n*Figure: The reinforcement learning process with a reward function in place of known rewards (Image by Authors)*\n\nPrior work (described below as Christiano et al. 2017) provides an example that illuminates how difficult learning the reward function can be. Imagine teaching a robot to do a backflip. If you aren’t a serious gymnast, it would be challenging to give a demonstration of successfully performing the task yourself.
**One could attempt to design a reward function that an agent could learn from, but this approach often falls victim to non-ideal reward design and reward hacking.** Reward hacking means that the agent can find a “loophole” in the reward specification. For example, if we assigned too much reward for getting in the proper initial position for the backflip, then maybe the agent would learn to repeatedly move into that bent over position forever. It would be maximizing rewards based on the reward function that we gave it, but wouldn’t actually be doing what we intended!\n\nA human could supervise every step of an agent’s learning by manually giving input on the reward function at each step, but this would be excessively time consuming and tedious.\n\nThe difficulty in specifying the rewards points towards the larger issue of human-AI alignment whereby humans want to align AI systems to their intentions and values, but specifying what we actually want can be surprisingly difficult (recall how every single genie story ends!).\n\nRelevant Papers\n===============\n\nWe’d like to look at several recent reward learning algorithms to evaluate their ability to learn rewards. We are specifically interested in how successful the algorithms are when faced with previously unseen environments or game levels, which tests their ability to generalize.\n\nTo do this, we leverage a body of prior work:\n\n1. [*Deep reinforcement learning from human preferences*](https://arxiv.org/abs/1706.03741) — 2017 by Christiano et al.\n2. [*Reward learning from human preferences and demonstrations in Atari*](https://arxiv.org/abs/1811.06521) — 2018 by Ibarz et al.\n3. [*Leveraging Procedural Generation to Benchmark Reinforcement Learning*](https://arxiv.org/abs/1912.01588) — 2019 by Cobbe et al.\n4. [*Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations*](https://arxiv.org/abs/1904.06387) — 2019 by Brown, Goo et al.\n\nThe first two papers were impactful in utilizing reward learning alongside deep reinforcement learning, and the third introduces the OpenAI Procgen Benchmark, a useful set of games for testing algorithm generalization. The fourth paper proposed an efficient alternative to the methods of the first two works.\n\nDeep reinforcement learning from human preferences (Christiano et al. 2017)\n---------------------------------------------------------------------------\n\nThe key idea of this paper is that **it’s a lot easier to recognize a good backflip than to perform one.** The paper shows that it is possible to learn a predicted reward function for tasks in which we can only recognize a desired behavior, even if we can’t demonstrate it.\n\nThe proposed algorithm is shown below. It alternates between learning the reward function through human preferences and learning the strategy, which are both initially random.\n\n*Repeat until the agent is awesome:*\n\n\n> *1. Show two short video clips of the agent acting in the environment with its current strategy*\n> \n> *2. Ask a human in which video clip the agent was better*\n> \n> *3. Update the reward function given the human’s feedback*\n> \n> *4. 
Update the strategy based on the new reward function*\n> \n> \n\nThe simulated robot (shown in the figure below) was trained to perform a backflip from 900 queries in under an hour, a task that would be very difficult to demonstrate or to manually create rewards for.\n\n*Figure: Training a backflip from human preferences ([source](https://github.com/nottombrown/rl-teacher))*\n\nExperiments were performed in the physics simulator called MuJoCo and also in Atari games. Why run these experiments in Atari when we already know the true rewards for these games? Doing so gives us the opportunity to assign preferences automatically instead of having a human manually give feedback about two video clip demonstrations. We can get automatic (synthetic) feedback by simply ranking the clip with the higher true reward as the better one. This enables us to run experiments very quickly because no human effort is needed. Furthermore, in this case we can assess the performance of the algorithm by comparing the learned reward function to the true rewards given in the game.\n\n*Figure: Backflip in motion ([source](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/))*\n\nReward learning from human preferences and demonstrations in Atari (Ibarz et al. 2018)\n--------------------------------------------------------------------------------------\n\nThis paper built on the prior paper by performing additional experiments in the Atari domain with a different setup and a different RL algorithm. Their main innovation is to utilize human demonstrations at the beginning in order to start with a decent strategy, whereas the original algorithm would have to start with an agent acting completely randomly since no rewards are known at the beginning. The addition of these human demonstrations improved learning significantly in three of nine tested Atari games relative to the no-demos method used by Christiano.\n\nLeveraging Procedural Generation to Benchmark Reinforcement Learning (Cobbe et al. 2019)\n----------------------------------------------------------------------------------------\n\n[OpenAI](https://openai.com/), an AI research lab, developed reinforcement learning testbed game environments called the [Procgen Benchmark](https://openai.com/blog/procgen-benchmark/), which includes 16 unique games. **Within each game, all levels are similar and share the same goal, but the actual components like the locations of enemies and hazards are randomly generated** and therefore can be different in each level.\n\nThis means that we can **train our agent on many random levels and then test it on completely new levels**, allowing us to understand whether the agent is able to generalize its learning. Note the contrast to Atari games, in which training is done on sequential game levels where the enemies, rewards, and game objects are always in the same places. Furthermore, when testing the agent’s abilities in sequential and non-randomly generated games, the agents are tested on those same levels in the same order. An important machine learning principle is to train with one set of data and test with another set to truly evaluate the agent’s ability to learn/generalize.\n\nWe looked primarily at four environments from Procgen in our work:\n\n1. **CoinRun:** Collect the coin at the end of the level while dodging enemies\n2. **FruitBot:** Eat fruit and avoid non-fruit foods like eggs and ice cream\n3. **StarPilot:** Side-scrolling shooter game\n4. 
**BigFish:** Begin as a small fish and eat other strictly smaller fish to get bigger\n\nBelow are screenshots from each of the games. The agent view uses a lower resolution so that the algorithm requires less computation. The human view is how the game would look if a human were playing.\n\n*Figure: CoinRun, FruitBot, StarPilot, and BigFish with agent view (Image by Authors)*\n\n*Figure: CoinRun, FruitBot, StarPilot, and BigFish with human view (Image by Authors)*\n\nThe main experiments in the paper involved training agents in all 16 unique games over a range of 100 to 100,000 training levels each, while keeping the training time fixed. These agents were then tested on levels that they had never played before (this is possible because each level is uniquely auto-generated). They found that agents need to be trained on as many as 10,000 levels of the game (training levels) before they are able to demonstrate good performance on the test levels.\n\nThe StarPilot game plot below shows the training performance in blue and the testing performance in red. The y-axis is the reward and the x-axis is the number of levels used for training. Note that the x-axis is on a logarithmic scale.\n\n*Figure: StarPilot training (blue) and testing (red) ([source](https://openai.com/blog/procgen-benchmark/))*\n\nWe see that the agent does very well immediately during training and that training performance then goes down and then back up slightly. Why would the agent get worse as it trains more? Since the training time is fixed, by training on only 100 levels, the agent would be repeating the same levels over and over and could easily memorize everything (but do very poorly at test time in the unseen levels). With 1,000 levels, the agent would have to spread its time over more levels and therefore wouldn’t be able to learn the levels as well. As we get to 10,000 and more levels, the agent sees such a diversity of levels that it can perform well because it has begun to generalize its understanding. We also see that the test performance quickly improves to nearly the level of the training performance, suggesting that the agent is able to generalize well to unseen levels.\n\nExtrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations (Brown, Goo et al. 2019)\n----------------------------------------------------------------------------------------------------------------------------\n\nThe algorithm proposed in this paper, called *T-REX*, is different from the previously mentioned reward learning methods in that it **doesn’t require ongoing human feedback during the learning procedure**. While the other algorithms require relatively little human time compared to supervising every agent action, they still require a person to answer thousands of preference queries. A key idea of T-REX is that the human time commitment can be significantly reduced by completing all preference feedback at the beginning, rather than continuously throughout the learning procedure.\n\nThe first step is to generate demonstrations of the game or task that is being learned. The demonstrations can either come from a standard reinforcement learning agent or from a human.\n\nThe main idea is that we can get a lot of preference data by extracting short video clips from these demonstrations and assigning preferences to them **based only on a ranking of the demonstrations that they came from**. For example, with 20 demonstrations, each demo would get a rank of 1 through 20.
A large number of short video clips would be taken from each of these demonstrations, and each clip would be assigned the ranking of the demo that it came from, so when two clips face each other, the preference would go to the clip that came from the better demo. The reward function would then be based on these preferences.\n\nThis is in contrast to the approach of the prior works that require human preference input over each and every pair of 1–2 second clips. Here, we only require human preference input to rank the initial demonstrations. T-REX’s disadvantage, however, is that it is using an approximation. Not all clips from a higher-ranked demonstration should be preferred to clips from a lower-ranked demonstration, but the idea is that on average, the preferences would work well and the procedure would suffice to learn a reward model.\n\nProviding a ranking over demonstrations is the same as giving preferences between every pair of them. For example, if we had three demos and ranked them 3>1>2, this means that we would generate the pairwise preferences 3>1, 3>2, and 1>2. Then randomly sampled clips from the demos would be given the same preference ranking based on which demos the clips came from.\n\nThe T-REX paper showed that having just 12 demonstrations was sufficient to learn a useful reward model. There are 12 \* 11 / 2 = 66 distinct pairs for any 12 objects, so ranking 12 demonstrations from 1 to 12 is equivalent to answering up to 66 queries about which demo is better, which is ~100 times fewer queries than required by the algorithm by Christiano et al. Again, although the T-REX ranked-demonstrations method is more efficient, it sacrifices precision due to the simplifying assumption that all clips from a better demo are better than all clips from a worse demo.\n\nBrown and Goo et al.’s Atari-based experiments showed that T-REX was competitive against the Ibarz et al. method that was previously described. It was able to learn better-than-demonstrator quality agents using only 12 demonstrations and their corresponding preference (rank) labels.\n\nThe figure below shows a comparison between the scores from human demonstrations and the scores from the T-REX algorithm in five Atari games. T-REX was able to soundly outperform 3 of the 5 human scores, though it was not able to earn any points in the Montezuma’s Revenge game.\n\n*Figure: T-REX algorithm vs. humans ([source](https://arxiv.org/abs/1904.06387))*\n\nT-REX also exceeded the performance of a state-of-the-art behavioral cloning algorithm (BCO) and an imitation learning algorithm (GAIL) in 7 out of 8 games, as shown in the chart below, while also beating the best available demonstrations in 7 out of 8 games. (Behavioral cloning algorithms try to act as closely to the demonstrations as possible, and inverse reinforcement learning algorithms attempt to recover a reward function from an expert demonstration.)\n\n*Figure: T-REX algorithm vs. other state-of-the-art methods ([source](https://arxiv.org/abs/1904.06387))*\n\nNext: Implementations and Experiments\n=====================================\n\nBased on T-REX's strong results and simple idea, we decided to base our initial experiments on combining this algorithm with the Procgen game environments, which would give us a highly efficient reward learning algorithm and a variety of benchmark games to test generalization.
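Before moving on, here is a rough sketch of the ranked-demonstration labeling idea described above. This is our own paraphrase in code, with invented names and shapes for illustration, not the T-REX authors' released implementation:

```python
import random

def sample_clip(trajectory, clip_len):
    """Take a random contiguous snippet from a trajectory (a list of observations)."""
    start = random.randint(0, max(0, len(trajectory) - clip_len))
    return trajectory[start:start + clip_len]

def make_preference_pairs(demos, ranks, clip_len=25, n_pairs=1000):
    """T-REX-style preference labels from ranked demos (illustrative sketch).
    ranks[i] is the rank of demos[i]; a higher rank means a better demonstration."""
    pairs = []
    for _ in range(n_pairs):
        i, j = random.sample(range(len(demos)), 2)
        if ranks[i] == ranks[j]:
            continue                              # ties carry no preference signal
        clip_i = sample_clip(demos[i], clip_len)
        clip_j = sample_clip(demos[j], clip_len)
        label = 1 if ranks[i] > ranks[j] else 0   # 1 means the first clip is preferred
        pairs.append((clip_i, clip_j, label))
    return pairs

# Tiny usage example with 12 fake demonstrations (strings stand in for frames).
demos = [[f"demo{d}_frame{t}" for t in range(100)] for d in range(12)]
ranks = list(range(12))                           # demo 11 is ranked best
pairs = make_preference_pairs(demos, ranks, n_pairs=100)
print(len(pairs), "preference-labeled clip pairs")

# These (clip_a, clip_b, label) triples are what a reward model would be trained
# on with a standard pairwise loss; the learned rewards then replace the true
# rewards in an ordinary RL algorithm, as described above.
```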
We will explain the details of our implementation and the experimental results and issues that we faced in the [second blog post](https://chisness.medium.com/assessing-generalization-in-reward-learning-implementations-and-experiments-de02e1d08c0e) of this series.", "url": "https://towardsdatascience.com/assessing-generalization-in-reward-learning-intro-and-background-da6c99d9e48", "title": "Assessing Generalization in Reward Learning: Intro and Background", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-11-19T23:00:00Z", "authors": ["Max Chiswick", "Anton Makiievskyi", "Liang Zhou"], "summary": [], "id": "6325078ac10f33cb502bedb501597abd"} {"text": "[epistemic status: that’s just my opinion, man. I have highly suggestive evidence, not deductive proof, for a belief I sincerely hold]\n\n\n*“If you see fraud and do not say fraud, you are a fraud.”* — [Nassim Taleb](https://en.wikiquote.org/wiki/Nassim_Nicholas_Taleb)\n\n\nI was talking with a colleague the other day about an AI organization that claims:\n\n\n1. AGI is probably coming in the next 20 years.\n2. Many of the reasons we have for believing this are secret.\n3. They’re secret because if we told people about those reasons, they’d learn things that would let them make an AGI even sooner than they would otherwise.\n\n\nHis response was (paraphrasing): “Wow, that’s a really good lie! A lie that can’t be disproven.”\n\n\nI found this response refreshing, because he *immediately* jumped to the most likely conclusion.\n\n\nNear predictions generate more funding\n--------------------------------------\n\n\nGenerally, entrepreneurs who are optimistic about their project get more funding than ones who aren’t. AI is no exception. For a recent example, see the [Human Brain Project](https://en.wikipedia.org/wiki/Blue_Brain_Project). The founder, Henry Markram, predicted in 2009 that the project would succeed in simulating a human brain by 2019, and the project [was already widely considered a failure by 2013](https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/). (See [his TED talk](https://www.ted.com/talks/henry_markram_supercomputing_the_brain_s_secrets#t-856603), at 14:22)\n\n\nThe Human Brain Project got [1.3 *billion* Euros](https://www.popsci.com/science/article/2013-02/how-simulate-human-brain-one-neuron-time-13-billion/) of funding from the EU.\n\n\nIt’s not hard to see why this is. To justify receiving large amounts of money, the leader must make a claim that the project is actually worth that much. And, AI projects are more impactful if it is, in fact, possible to develop AI soon. So, there is an economic pressure towards inflating estimates of the chance AI will be developed soon.\n\n\nFear of an AI gap\n-----------------\n\n\nThe [missile gap](https://en.wikipedia.org/wiki/Missile_gap) was a lie by the US Air Force to justify building more nukes, by falsely claiming that the Soviet Union had more nukes than the US.\n\n\nSimilarly, there’s historical precedent for an AI gap lie used to justify more AI development. [Fifth Generation Computer Systems](https://en.wikipedia.org/wiki/Fifth_generation_computer) was an ambitious 1982 project by the Japanese government (funded for $400 million in 1992, or $730 million in 2019 dollars) to create artificial intelligence through massively parallel logic programming.\n\n\nThe project is widely considered to have failed. 
From a [1992 New York Times article](http://www.nytimes.com/1992/06/05/business/fifth-generation-became-japan-s-lost-generation.html):\n\n\n\n> A bold 10-year effort by Japan to seize the lead in computer technology is fizzling to a close, having failed to meet many of its ambitious goals or to produce technology that Japan’s computer industry wanted.\n> \n> …\n> \n> That attitude is a sharp contrast to the project’s inception, when it spread fear in the United States that the Japanese were going to leapfrog the American computer industry. In response, a group of American companies formed the Microelectronics and Computer Technology Corporation, a consortium in Austin, Tex., to cooperate on research. And the Defense Department, in part to meet the Japanese challenge, began a huge long-term program to develop intelligent systems, including tanks that could navigate on their own.\n> \n> …\n> \n> **The Fifth Generation effort did not yield the breakthroughs to make machines truly intelligent, something that probably could never have realistically been expected anyway.** Yet the project did succeed in developing prototype computers that can perform some reasoning functions at high speeds, in part by employing up to 1,000 processors in parallel. The project also developed basic software to control and program such computers. Experts here said that some of these achievements were technically impressive.\n> \n> …\n> \n> In his opening speech at the conference here, Kazuhiro Fuchi, the director of the Fifth Generation project, made an impassioned defense of his program.\n> \n> “Ten years ago we faced criticism of being too reckless,” in setting too many ambitious goals, he said, adding, “Now we see criticism from inside and outside the country because we have failed to achieve such grand goals.”\n> \n> Outsiders, he said, initially exaggerated the aims of the project, with the result that the program now seems to have fallen short of its goals.\n> \n> **Some American computer scientists say privately that some of their colleagues did perhaps overstate the scope and threat of the Fifth Generation project. Why? In order to coax more support from the United States Government for computer science research.**\n> \n> \n\n\n(emphasis mine)\n\n\nThis bears similarity to some conversations on AI risk I’ve been party to in the past few years. The fear is that Others (DeepMind, China, whoever) will develop AGI soon, so We have to develop AGI first in order to make sure it’s safe, because Others won’t make sure it’s safe and We will. Also, We have to discuss AGI strategy in private (and avoid public discussion), so Others don’t get the wrong ideas. (Generally, these claims have little empirical/rational backing to them; they’re based on scary stories, not historically validated threat models)\n\n\nThe claim that others will develop weapons and kill us with them by default implies a moral claim to resources, and a moral claim to be justified in making weapons in response. Such claims, if exaggerated, justify claiming more resources and making more weapons. 
And they weaken a community’s actual ability to track and respond to real threats (as in The Boy Who Cried Wolf).\n\n\nHow does the AI field treat its critics?\n----------------------------------------\n\n\nHubert Dreyfus, probably the most famous historical AI critic, published [“Alchemy and Artificial Intelligence”](https://www.rand.org/content/dam/rand/pubs/papers/2006/P3244.pdf) in 1965, which argued that the techniques popular at the time were insufficient for AGI. Subsequently, he was [shunned by other AI researchers](https://en.wikipedia.org/wiki/History_of_artificial_intelligence#Critiques_from_across_campus):\n\n\n\n> The paper “caused an uproar”, according to Pamela McCorduck.  The AI community’s response was derisive and personal.  Seymour Papert dismissed one third of the paper as “gossip” and claimed that every quotation was deliberately taken out of context.  Herbert A. Simon accused Dreyfus of playing “politics” so that he could attach the prestigious RAND name to his ideas. Simon said, “what I resent about this was the RAND name attached to that garbage.”\n> \n> Dreyfus, who taught at MIT, remembers that his colleagues working in AI “dared not be seen having lunch with me.”  Joseph Weizenbaum, the author of ELIZA, felt his colleagues’ treatment of Dreyfus was unprofessional and childish.  Although he was an outspoken critic of Dreyfus’ positions, he recalls “I became the only member of the AI community to be seen eating lunch with Dreyfus. And I deliberately made it plain that theirs was not the way to treat a human being.”\n> \n> \n\n\nThis makes sense as anti-whistleblower activity: ostracizing, discrediting, or punishing people who break the conspiracy to the public. Does this still happen in the AI field today?\n\n\n[Gary Marcus](https://en.wikipedia.org/wiki/Gary_Marcus) is a more recent AI researcher and critic. In 2012, [he wrote](https://medium.com/@GaryMarcus/the-deepest-problem-with-deep-learning-91c5991f5695):\n\n\n\n> Deep learning is important work, with immediate practical applications.\n> \n> …\n> \n> Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like “sibling” or “identical to.” They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used. The most powerful A.I. systems … use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.\n> \n> \n\n\nIn 2018, he [tweeted](https://twitter.com/GaryMarcus/status/1065280340669816832?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed&ref_url=https%3A%2F%2Fcdn.embedly.com%2Fwidgets%2Fmedia.html%3Ftype%3Dtext%252Fhtml%26key%3Da19fcc184b9711e1b4764040d3dc5c07%26schema%3Dtwitter%26url%3Dhttps%253A%2F%2Ftwitter.com%2Fgarymarcus%2Fstatus%2F1065280340669816832%26image%3Dhttps%253A%2F%2Fpbs.twimg.com%2Fprofile_images%2F850347800789372929%2FVg_HrEun_400x400.jpg) an article in which Yoshua Bengio (a deep learning pioneer) seemed to agree with these previous opinions. This tweet received a number of mostly-critical replies. Here’s one, by AI professor Zachary Lipton:\n\n\n\n> There’s a couple problems with this whole line of attack. 
1) Saying it louder ≠ saying it first. You can’t claim credit for differentiating between reasoning and pattern recognition. 2) Saying X doesn’t solve Y is pretty easy. But where are your concrete solutions for Y?\n> \n> \n\n\nThe first criticism is essentially a claim that [everybody knows](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) that deep learning can’t do reasoning. But this is essentially admitting that Marcus is correct, while still criticizing him for saying it [ED NOTE: the phrasing of this sentence is off (Lipton publicly agrees with Marcus on this point), and there is more context, see [Lipton’s reply](https://www.greaterwrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam/comment/nGS8nPe8rA4Ay4uFN)].\n\n\nThe second is a claim that Marcus shouldn’t criticize if he doesn’t have a solution in hand. This policy deterministically results in the short AI timelines narrative being maintained: to criticize the current narrative, you must present your own solution, which constitutes another narrative for why AI might come soon.\n\n\nDeep learning pioneer Yann LeCun’s response is similar:\n\n\n\n> Yoshua (and I, and others) have been saying this for a long time. \n> The difference with you is that we are actually trying to do something about it, not criticize people who don’t.\n> \n> \n\n\nAgain, the criticism is not that Marcus is wrong in saying deep learning can’t do certain forms of reasoning; the criticism is that he isn’t presenting an alternative solution. (Of course, the claim could be correct even if Marcus doesn’t have an alternative!)\n\n\nApparently, it’s considered *bad practice* in AI to criticize a proposal for making AGI without presenting an alternative solution. Clearly, such a policy causes large distortions!\n\n\nHere’s another response, by Steven Hansen (a research scientist at DeepMind):\n\n\n\n> Ideally, you’d be saying this through NeurIPS submissions rather than New Yorker articles. A lot of the push-back you’re getting right now is due to the perception that you haven’t been using the appropriate channels to influence the field.\n> \n> \n\n\nThat is: to criticize the field, you should go through the field, not through the press. This is standard guild behavior. In the words of [Adam Smith](https://www.goodreads.com/quotes/420123-people-of-the-same-trade-seldom-meet-together-even-for): “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.”\n\n\n(Also see Marcus’s [medium article](https://medium.com/@GaryMarcus/the-deepest-problem-with-deep-learning-91c5991f5695) on the Twitter thread, and on the limitations of deep learning)\n\n\n[ED NOTE: I’m not saying these critics on Twitter are publicly promoting short AI timelines narratives (in fact, some are promoting the opposite), I’m saying that the norms by which they criticize Marcus result in short AI timelines narratives being maintained.]\n\n\nWhy model sociopolitical dynamics?\n----------------------------------\n\n\nThis post has focused on sociopolitical phenomena involved in the short AI timelines phenomenon. For this, I anticipate criticism along the lines of “why not just model the technical arguments, rather than the credibility of the people involved?” To which I pre-emptively reply:\n\n\n* No one can model the technical arguments in isolation. 
Basic facts, such as the accuracy of technical papers on AI, or the filtering processes determining what you read and what you don’t, depend on sociopolitical phenomena. This is far more true for people who don’t themselves have AI expertise.\n* “When AGI will be developed” isn’t just a technical question. It depends on what people actually choose to do (and what groups of people actually succeed in accomplishing), not just what can be done in theory. And so basic questions like “how good is the epistemology of the AI field about AI timelines?” matter directly.\n* The sociopolitical phenomena are actively making technical discussion harder. I’ve had a well-reputed person in the AI risk space discourage me from writing publicly about the technical arguments, on the basis that getting people to think through them might accelerate AI timelines (yes, really).\n\n\nWhich is not to say that modeling such technical arguments is not important for forecasting AGI. I certainly could have written a post evaluating such arguments, and I decided to write this post instead, in part because I don’t have much to say on this issue that Gary Marcus hasn’t [already said](https://arxiv.org/abs/1801.00631). (Of course, I’d have written a substantially different post, or none at all, if I believed the technical arguments that AGI is likely to come soon had merit to them)\n\n\nWhat I’m not saying\n-------------------\n\n\nI’m not saying:\n\n\n1. That deep learning isn’t a major AI advance.\n2. That deep learning won’t substantially change the world in the next 20 years (through narrow AI).\n3. That I’m certain that AGI isn’t coming in the next 20 years.\n4. That AGI isn’t existentially important on long timescales.\n5. That it isn’t possible that some AI researchers have asymmetric information indicating that AGI is coming in the next 20 years. (Unlikely, but possible)\n6. That people who have technical expertise shouldn’t be evaluating technical arguments on their merits.\n7. That most of what’s going on is people consciously lying. (Rather, covert deception hidden from conscious attention (e.g. motivated reasoning) is pervasive; see [The Elephant in the Brain](https://www.amazon.com/dp/B077GZT9Q1/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1))\n8. That many people aren’t sincerely confused on the issue.\n\n\nI’m saying that there are systematic sociopolitical phenomena that cause distortions in AI estimates, especially towards shorter timelines. I’m saying that people are being duped into believing a lie. And at the point where [73% of tech executives say they believe AGI will be developed in the next 10 years](https://twitter.com/psych_of_tech/status/1106302905555042305), it’s a major one.\n\n\n*This has happened before.* And, in all likelihood, *this will happen again*.", "url": "https://unstableontology.com/2019/07/11/the-ai-timelines-scam/", "title": "The AI Timelines Scam", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2019-07-10T22:00:00Z", "authors": ["Jessica Taylor"], "summary": [], "id": "93209d1be85a5a8809d81a875745f6a3"} {"text": "This is an account of free choice in a physical universe. It is very much relevant to decision theory and philosophy of science. It is largely metaphysical, in terms of taking certain things to be basically real and examining what can be defined in terms of these things.\n\n\nThe starting point of this account is critical and agential. 
By agential, I mean that the ontology I am using is from the point of view of an agent: a perspective that can, at the very least, receive observations, have cognitions, and take actions. By critical, I mean that this ontology involves uncertain conjectures subject to criticism, such as criticism of being logically incoherent or incompatible with observations. This is very much in a similar spirit to [critical rationalism](https://en.wikipedia.org/wiki/Critical_rationalism).\n\n\nClose attention will be paid to falsifiability and refutation, principally for *ontological* purposes, and secondarily for epistemic purposes. Falsification conditions specify the meanings of laws and entities relative to the perspective of some potentially falsifying agent. While the agent may believe in unfalsifiable entities, falsification conditions will serve to precisely pin down that which can be precisely pinned down.\n\n\nI have only seen “agential” used in the philosophical literature in the context of [agential realism](https://en.wikipedia.org/wiki/Agential_realism), a view I do not understand well enough to comment on. I was tempted to use “subjective”; however, while subjects have observations, they do not necessarily have the ability to take actions. Thus I believe “agential” has a more concordant denotation.\n\n\nYou’ll note that my notion of “agent” already assumes one can take actions. Thus, a kind of free will is taken as metaphysically basic. This presupposition may cause problems later. However, I will try to show that, if careful attention is paid, the obvious problems (such as contradiction with determinism) can be avoided.\n\n\nThe perspective in this post can be seen as starting from agency, defining consequences in terms of agency, and defining physics in terms of consequences. In contrast, the most salient competing decision theory views (including [framings](https://plato.stanford.edu/entries/decision-causal/) of CDT, EDT, and FDT) define agency in terms of consequences (“expected utility maximization”), and consequences in terms of physics (“counterfactuals”). So I am rebasing the ontological stack, turning it upside-down. This is less absurd than it first appears, as will become clear.\n\n\n(For simplicity, assume observations and actions are both symbols taken from some finite alphabet.)\n\n\nNaive determinism\n-----------------\n\n\nLet’s first, within a critical agential ontology, disprove some very basic forms of determinism.\n\n\nLet A be some action. Consider the statement: “I will take action A”. An agent believing this statement may falsify it by taking any action B not equal to A. Therefore, this statement does not hold as a law. It may be falsified at will.\n\n\nLet *f*() be some computable function returning an action. Consider the statement: “I will take action *f*()”. An agent believing this statement may falsify it by taking an action B not equal to *f*(). Note that, since the agent is assumed to be able to compute things, *f*() may be determined. So, indeed, this statement does not hold as a law, either.\n\n\nThis contradicts a certain strong formulation of naive determinism: the idea that one’s action is necessarily determined by some known, computable function.\n\n\nAction-consequences\n-------------------\n\n\nBut wait, what about physics? To evaluate what physical determinism even means, we need to translate physics into a critical agential ontology. 
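(As a brief aside, the refutation-at-will argument from the naive-determinism section above can be made concrete with a minimal sketch. This is my own illustration, not the author's, and it assumes actions are symbols drawn from a small finite alphabet; all names are hypothetical.)

```python
# Minimal sketch of refuting "I will take action f()" by diagonalization.
# Assumes a finite action alphabet with at least two elements.
ACTIONS = ["a", "b", "c"]

def f() -> str:
    """Some computable function purporting to predict the agent's action."""
    return "a"

def diagonalizing_policy() -> str:
    """Compute f(), then deliberately take some other action, falsifying
    the claimed law 'I will take action f()'."""
    predicted = f()
    for action in ACTIONS:
        if action != predicted:
            return action
    raise RuntimeError("unreachable when the alphabet has at least two actions")

assert diagonalizing_policy() != f()  # the claimed law is refuted at will
```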
However, before we turn to physics, we will first consider action-consequences, which are easier to reason about.\n\n\nConsider the statement: “If I take action A, I will immediately thereafter observe O.” This statement is falsifiable, which means that if it is false, there is some policy the agent can adopt that will falsify it. Specifically, the agent may adopt the policy of taking action A. If the agent will, in fact, not observe O after taking this action, then the agent will learn this, falsifying the statement. So the statement is falsifiable.\n\n\nFinite conjunctions of falsifiable statements are themselves falsifiable. Therefore, the conjunction “If I take action A, I will immediately thereafter observe O; if I take action B, I will immediately thereafter observe P” is, likewise, falsifiable.\n\n\nThus, the agent may have falsifiable beliefs about observable consequences of actions. This is a possible starting point for decision theory: actions having consequences is already assumed in the ontology of VNM utility theory.\n\n\nFalsification and causation\n---------------------------\n\n\nNow, the next step is to account for physics. Luckily, the falsificationist paradigm was designed around demarcating scientific hypotheses, such that it naturally describes physics.\n\n\nInterestingly, falsificationism takes agency (in terms of observations, computation, and action) as more basic than physics. For a thing to be falsifiable, it must be *able* to be falsified by some agent, seeing some observation. And the word *able* implies freedom.\n\n\nLet’s start with some basic [Popperian logic](https://en.wikipedia.org/wiki/The_Logic_of_Scientific_Discovery). Let *f* be some testable function (say, connected to a computer terminal) taking in a natural number and returning a Boolean. Consider the hypothesis: “For all *x*, *f*(*x*) is true”. This statement is falsifiable: if it’s false, then there exists some action-sequence an agent can take (typing *x* into the terminal, one digit at a time) that will prove it to be false.\n\n\nThe given hypothesis is a kind of scientific law. It specifies a regularity in the environment.\n\n\nNote that there is a “bridge condition” at play here. That bridge condition is that the function *f* is, indeed, connected to the terminal, such that the agent’s observations of *f* are trustworthy. In a sense, the bridge condition specifies what *f* is, from the agent’s perspective; it allows the agent to locate *f* as opposed to some other function.\n\n\nLet us now consider causal hypotheses. We already considered action-consequences. Now let us extend this analysis to reasoning about causation between external entities.\n\n\nConsider the hypothesis: “If the match is struck, then it will alight immediately”. This hypothesis is falsifiable by an agent who is *able* to strike the match. If the hypothesis is false, then the agent may refute it by choosing to strike the match and then seeing the result. However, an agent who is unable to strike the match cannot falsify it. (Of course, this assumes the agent may see whether the match is alight after striking it)\n\n\nThus, we are defining causality in terms of agency. The falsification conditions for a causal hypothesis refer to the agent’s abilities. This seems somewhat wonky at first, but it is quite similar to [Pearlian causality](https://en.wikipedia.org/wiki/Causality_(book)), which defines causation in terms of metaphysically-real interventions. This order of definition radically reframes the apparent paradox of determinism vs. free will, by defining the conditions of determinism (causality) in terms of potential action.\n\n\nExternal physics\n----------------\n\n\nLet us now continue, proceeding to more universal physics. Consider the law of gravity, according to which a dropped object will accelerate downward at a near-constant rate. How might we port this law into an agential ontology?\n\n\nHere is the assumption about how the agent interacts with gravity. The agent will choose some natural number as the height of an object. Thereafter, the object will fall, while a camera will record the height of the object at each natural-number time expressed in milliseconds, to the nearest natural-number millimeter from the ground. The agent may observe a printout of the camera data afterwards.\n\n\nLogically, constant gravity implies, and is implied by, a particular quadratic formula for the height of the object as a function of the object’s starting height and the amount of time that has passed. This formula implies the content of the printout, as a function of the chosen height. So, the agent may falsify constant gravity (*in the observable domain*) by choosing an object-height, placing an object at that height, letting it fall, and checking the printout, which will show the law of constant gravity to be false, if the law in fact does not hold for objects dropped at that height (to the observed level of precision).
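To make the quadratic formula and the printout check concrete, here is a minimal sketch of the falsification procedure just described. It is my own illustration rather than anything from the post; the constant, the rounding convention, and the tolerance parameter are all assumptions made for the sake of the example.

```python
from typing import List

G_MM_PER_MS2 = 9.81e-3  # approximate gravitational acceleration, in mm per millisecond^2

def predicted_height_mm(start_height_mm: int, t_ms: int) -> int:
    """Constant-gravity prediction h(t) = h0 - (1/2) * g * t^2, floored at the ground."""
    h = start_height_mm - 0.5 * G_MM_PER_MS2 * t_ms ** 2
    return max(0, round(h))

def printout_falsifies_constant_gravity(start_height_mm: int,
                                        printout: List[int],
                                        tolerance_mm: int = 1) -> bool:
    """True if some camera reading (height at each millisecond) deviates from the
    prediction by more than the camera's precision, refuting the law in this domain."""
    return any(
        abs(observed - predicted_height_mm(start_height_mm, t)) > tolerance_mm
        for t, observed in enumerate(printout)
    )
```

If this check returns True for some chosen object-height, the law has been refuted in the observable domain; if it never does, the law stands as an unrefuted conjecture.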
Universal constant gravity is not similarly falsifiable by this agent, because this agent may only observe this given experimental setup. However, a domain-limited law, stating that the law of constant gravity holds for all possible object-heights in this setup, up to the camera’s precision, is falsifiable.\n\n\nIt may seem that I am being incredibly pedantic about what a physical law is and what the falsification conditions are; however, I believe this level of pedantry is necessary for critically examining the notion of physical determinism to a high-enough level of rigor to check interaction with free will.\n\n\nInternal physics\n----------------\n\n\nWe have, so far, considered the case of an agent falsifying a physical law that applies to an external object. To check interaction with free will, we must interpret physical law applied to the agent’s internals, on which the agent’s cognition is, perhaps, running in a manner similar to software.\n\n\nLet’s consider the notion that the agent itself is “running on” some Turing machine. We will need to specify precisely what such “running on” means.\n\n\nLet C be the computer that the agent is considering whether it is running on. C has, at each time, a tape-state, a Turing machine state, an input, and an output. The input is attached to a sensor (such as a camera), and the output is attached to an actuator (such as a motor).\n\n\nFor simplicity, let us say that the history of tapes, states, inputs, and outputs is saved, such that it can be queried at a later time.\n\n\nWe may consider the hypothesis that C, indeed, implements the correct dynamics for a given Turing machine specification. These dynamics imply a relation between future states and past states. An agent may falsify these dynamics by checking the history and seeing if the dynamics hold.\n\n\nNote that, because some states or tapes may be unreachable, it is not possible to falsify the hypothesis that C implements correct dynamics starting from unreachable states.
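Concretely, such a history check might look like the following sketch. This is my own illustration, not the post’s; the sensor input and actuator output channels are omitted for brevity, and the specification M is assumed to be given as a transition function over (state, symbol) pairs. The check only ever examines configurations that actually occur in the saved history.

```python
from typing import Callable, List, NamedTuple, Tuple

class Config(NamedTuple):
    state: str                # Turing machine state at this time
    tape: Tuple[str, ...]     # full tape contents at this time
    head: int                 # head position at this time

# Claimed specification M: (state, symbol under head) -> (next state, symbol written, head move)
Transition = Callable[[str, str], Tuple[str, str, int]]

def history_falsifies_spec(history: List[Config], delta: Transition) -> bool:
    """True if some recorded step-to-step change violates the claimed dynamics."""
    for before, after in zip(history, history[1:]):
        next_state, written, move = delta(before.state, before.tape[before.head])
        expected_tape = list(before.tape)
        expected_tape[before.head] = written
        if (after.state != next_state
                or list(after.tape) != expected_tape
                or after.head != before.head + move):
            return True
    return False
```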
Rather, only behavior following from reachable states may be checked.\n\n\nNow, let us think on an agent considering whether they “run on” this computer C. The agent may be assumed to be able to query the history of C, such that it may itself falsify the hypothesis that C implements Turing machine specification M, and other C-related hypotheses as well.\n\n\nNow, we can already name some ways that “I run on C” may be falsified:\n\n\n* Perhaps there is a policy I may adopt, and a time *t*, such that if I implement this policy, I will observe O at time *t*, but C will observe something other than O at time *t*.\n* Perhaps there is a policy I may adopt, and a time *t*, such that if I implement this policy, I will take action A at time *t*, but C will take an action other than A at time *t*.\n\n\nThe agent may prove these falsification conditions by adopting a given policy until some time *t*, and then observing C’s observation/action at time *t*, compared to their own observation/action.\n\n\nI do not argue that the converse of these conditions exhaust what it means that “I run on C”. However, they at least restrict the possibility space by a very large amount. For the falsification conditions given to not hold, the observations and behavior of C must be identical with the agent’s own observations and behavior, for all possible policies the agent may adopt.\n\n\nI will name the hypothesis with the above falsification conditions: “I effectively run on C”. This conveys that these conditions may not be exhaustive, while still being quite specific, and relating to effects between the agent and the environment (observations and actions).\n\n\nNote that the agent can hypothesize itself to effectively run on multiple computers! The conditions for effectively running on one computer do not contradict the conditions for effectively running on another computer. This naturally handles cases of identical physical instantiations of a single agent.\n\n\nAt this point, we have an account of an agent who:\n\n\n* Believes they have observations and take free actions\n* May falsifiably hypothesize physical law\n* May falsifiably hypothesize that some computer implements a Turing machine specification\n* May falsifiably hypothesize that they themselves effectively run on some computer\n\n\nI have not yet shown that this account is consistent. There may be paradoxes. However, this at least represents the subject matter covered in a unified critical agential ontology.\n\n\nParadoxes sought and evaluated\n------------------------------\n\n\nLet us now seek out paradox. We showed before that the hypothesis “I take action *f*()” may be refuted at will, and therefore does not hold as a necessary law. We may suspect that “I effectively run on C” runs into similar problems.\n\n\n#### Self-contradiction\n\n\nRemember that, for the “I effectively run on C” hypothesis to be falsified, it must be falsified at some time, at which the agent’s observation/action comes apart from C’s. In the “I take action *f*()” case, we had the agent simulate *f*() in order to take an opposite action. However, C need not halt, so the agent cannot simulate C until halting. Instead, the agent may select some time *t*, and run C for *t* steps. But, by the time the agent has simulated C for *t* steps, the time is already past *t*, and so the agent may not contradict C’s behavior at time *t*, by taking an opposite action. 
Rather, the agent only knows what C does at time *t* at some time later than *t*, and only their behavior after this time may depend on this knowledge.\n\n\nSo, this paradox is avoided by the fact that the agent cannot contradict its own action before knowing it, but cannot know it before taking it.\n\n\nWe may also try to create a paradox by assuming an external super-fast computer runs a copy of C in parallel, and feeds this copy’s action on subjective time-step *t* into the original C’s observation before time *t*; this way, the agent may observe its action before it takes it. However, now the agent’s action is dependent on its observation, and so the external super-fast computer must decide which observation to feed into the parallel C. The external computer cannot know what C will do before producing this observation, and so this attempt at a paradox cannot stand without further elaboration.\n\n\nWe see, now, that if free will and determinism are compatible, it is due to limitations on the agent’s knowledge. The agent, knowing it runs on C, cannot thereby determine what action it takes at time *t*, until a later time. And the initial attempt to provide this knowledge externally fails.\n\n\n#### Downward causation\n\n\nLet us now consider a general criticism of functionalist views, which is that of downward causation: if a mental entity (such as observation or action) causes a physical entity, doesn’t that either mean that the mental entity is physical, or that physics is not causally closed?\n\n\nRecall that we have defined causation in terms of the agent’s action possibilities. It is straightforwardly the case, then, that the agent’s action at time *t* causes changes in the environment.\n\n\nBut, what of the physical cause? Perhaps it is also the case that C’s action at time *t* causes changes in the environment. If so, there is a redundancy, in that the change in the environment is caused both by the agent’s action and by C’s action. We will examine this possible redundancy to find potential conflicts.\n\n\nTo consider ways that C’s action may change the environment, we must consider how the agent may intervene on C’s action. Let us say we are concerned with C’s action at time *t*. Then we may consider the agent at some time *u* < *t* taking an action that will cause C’s action at time *t* to be over-written. For example, the agent may consider programming an external circuit that will interact with C’s circuit (“its circuit”).\n\n\nHowever, if the agent performs this intervention, then the agent’s action at time *t* has no influence on C’s action at time *t*. This is because C’s action is, necessarily, equal to the value chosen at time *u*. (Note that this lack of influence means that the agent *does not effectively run on C*, for the notion of “effectively run on” considered! However, the agent may be said to effectively run on C with one exception.)\n\n\nSo, there is no apparent way to set up a contradiction between these interventions. If the agent decides early (at time *u*) to determine C’s action at time *t*, then that decision causes C’s action at time *t*; if the agent does not do so, then the agent’s decision at time *t* causes C’s action at time *t*; and these are mutually exclusive. Hence, there is not an apparent problem with redundant causality.\n\n\n#### Epiphenomenalism\n\n\nIt may be suspected that the agent I take to be real is epiphenomenal. 
Perhaps all may be explained in a physicalist ontology, with no need to posit that there exists an agent that has observations and takes actions. (This is a criticism levied at some views on consciousness; my notion of metaphysically-real observations is similar enough to consciousness that these criticisms are potentially applicable)\n\n\nThe question in regards to explanatory power is: what is being explained, in terms of what? My answer is: observations are being explained, in terms of hypotheses that may be falsified by action/observations.\n\n\nAn eliminativist perspective denies the agent’s observations, and thus fails to explain what ought to be explained, in my view. However, eliminativists will typically believe that “scientific observation” is possible, and seek to explain scientific observations.\n\n\nA relevant point to make here is that the notion of scientific observation assumes there is some scientific process happening that has observations. Indeed, the scientific method includes actions, such as testing, which rely on the scientific process taking actions. Thus, scientific processes may be considered as agents in the sense I am using the term.\n\n\nMy view is that erasing the agency of both individual scientists, and of scientific processes, puts the ontological and epistemic status of physics on shaky ground. It is hard to say why one should believe in physics, except in terms of it explaining observations, including experimental observations that require taking actions. And it is hard to say what it means for a physical hypothesis to be true, with no reference to how the hypothesis connects with observation and action.\n\n\nIn any case, the specter of epiphenomenalism presents no immediate paradox, and I believe that it does not succeed as a criticism.\n\n\nComparison to Gary Drescher’s view\n----------------------------------\n\n\nI will now compare my account to Gary Drescher’s view. I have found Drescher’s view to be both particularly systematic and compelling, and to be quite similar to the views of other relevant philosophers such as Daniel Dennett and Eliezer Yudkowsky. Therefore, I will compare and contrast my view with Drescher’s. This will dispel the illusion that I am not saying anything new.\n\n\nNotably, Drescher makes a similar observation to mine on Pearl: “Pearl’s formalism models free will rather than mechanical choice.”\n\n\nQuoting section 5.3 of *Good and Real*:\n\n\n\n> Why did it take that action? In pursuit of what goal was the action selected? Was that goal achieved? Would the goal have been achieved if the machine had taken this other action instead? The system includes the assertion that if the agent were to do X, then Y would (probably) occur; is that assertion true? The system does not include the assertion that if it were to do P, Q would probably occur; is that omitted assertion true? Would the system have taken some other action just now if it had included that assertion? Would it then have better achieved its goals?\n> \n> Insofar as such questions are meaningful and answerable, the agent makes choices in at least the sense that the correctness of its actions with respect to its designated goals is analyzable. That is to say, there can be means-end connections between its actions and its goals: its taking an action for the sake of a goal can make sense. And this is so despite the fact that everything that will happen-including every action taken and every goal achieved or not-is inalterably determined once the system starts up. 
Accordingly, I propose to call such an agent a choice machine.\n> \n> \n\n\nDrescher is defining conditions of choice and agency in terms of whether the decisions “make sense” with respect to some goal, in terms of means-end connections. This is an “outside” view of agency in contrast with my “inside” view. That is, it says some thing is an agent when its actions connect with some goal, and when the internal logic of that thing takes into account this connection.\n\n\nThis is in contrast to my view, which takes agency to be metaphysically basic, and defines physical outside views (and indeed, physics itself) in terms of agency.\n\n\nMy view would disagree with Drescher’s on the “inalterably determined” assertion. In an earlier chapter, Drescher describes a deterministic block-universe view. This view-from-nowhere implies that future states are determinable from past states. In contrast, the view I present here rejects views-from-nowhere, instead taking the view of some agent in the universe, from whose perspective the future course is not already determined (as already argued in examinations of paradox).\n\n\nNote that these disagreements are principally about metaphysics and ontology, rather than scientific predictions. I am unlikely to predict the results of scientific experiments differently from Drescher on account of this view, but am likely to account for the scientific process, causation, choice, and so on in different language, and using a different base model.\n\n\nConclusion and further research\n-------------------------------\n\n\nI believe the view I have presented to be superior to competing views on multiple fronts, most especially logical/philosophical systematic coherence. I do not make the full case for this in this post, but take the first step, of explicating the basic ontology and how it accounts for phenomena that are critically necessary to account for.\n\n\nAn obvious next step is to tackle decision theory. Both Bayesianism and VNM decision theory are quite concordant with critical agential ontology, in that they propose coherence conditions on agents, which can be taken as criticisms. Naturalistic decision theory involves reconciling choice with physics, and so a view that already includes both is a promising starting point.\n\n\nMulti-agent systems are quite important as well. The view presented so far is near-solipsistic, in that there is a single agent who conceptualizes the world. It will need to be defined what it means for there to be “other” agents. Additionally, “aggregative” agents, such as organizations, are important to study, including in terms of what it means for a singular agent to participate in an aggregative agent. “Standardized” agents, such as hypothetical skeptical mathematicians or philosophers, are also worthy subjects of study; these standardized agents are relevant in reasoning about argumentation and common knowledge. Also, while the discussion so far has been in terms of [closed individualism](https://opentheory.net/2018/09/a-new-theory-of-open-individualism/), alternative identity views such as empty individualism and open individualism are worth considering from a critical agential perspective.\n\n\nOther areas of study include naturalized epistemology and philosophy of mathematics. The view so far is primarily ontological, secondarily epistemological. With the ontology in place, epistemology can be more readily explored.\n\n\nI hope to explore the consequences of this metaphysics further, in multiple directions.
Even if I ultimately abandon it, it will have been useful to develop a coherent view leading to an illuminating refutation.", "url": "https://unstableontology.com/2020/03/05/a-critical-agential-account-of-free-will-causation-and-physics/", "title": "A critical agential account of free will, causation, and physics", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-03-04T23:00:00Z", "authors": ["Jessica Taylor"], "summary": [], "id": "c98919886e9aa62ea80e981e0106f1b5"} {"text": "Executive Summary\n-----------------\n\n\nMilitaries around the world believe that the integration of machine learning methods throughout their forces could improve their effectiveness. From algorithms to aid in recruiting and promotion, to those designed for surveillance and early warning, to those used directly on the battlefield, applications of artificial intelligence (AI) could shape the future character of warfare. These uses could also generate significant risks for international stability. These risks relate to broad facets of AI that could shape warfare, limits to machine learning methods that could increase the risks of inadvertent conflict, and specific mission areas, such as nuclear operations, where the use of AI could be dangerous. To reduce these risks and promote international stability, we explore the potential use of confidence-building measures (CBMs), constructed around the shared interests that all countries have in preventing inadvertent war. Though not a panacea, CBMs could create standards for information-sharing and notifications about AI-enabled systems that make inadvertent conflict less likely.\n\n\n\n\nIntroduction\n------------\n\n\nIn recent years, the machine learning revolution has sparked a wave of interest in artificial intelligence (AI) applications across a range of industries. Nations are also mobilizing to use AI for national security and military purposes.[1](#fn1)\nIt is therefore vital to assess how the militarization of AI could affect international stability and how to encourage militaries to adopt AI in a responsible manner. Doing so requires understanding the features of AI, the ways it could shape warfare, and the risks to international stability resulting from the militarization of artificial intelligence. \n\n\nAI is a general-purpose technology akin to computers or the internal combustion engine, not a discrete technology like missiles or aircraft. Thus, while concerns of an “AI arms race” are overblown, real risks exist.[2](#fn2)\nAdditionally, despite the rhetoric of many national leaders, military spending on AI is relatively modest to date. Rather than a fervent arms race, militaries’ pursuit of AI looks more like routine adoption of new technologies and a continuation of the multi-decade trend of adoption of computers, networking, and other information technologies. Nevertheless, the incorporation of AI into national security applications and warfare poses genuine risks. Recognizing the risks is not enough, however. Addressing them requires laying out suggestions for practical steps states can take to minimize risks stemming from military AI competition.[3](#fn3)\nOne approach states could take is adopting confidence-building measures (CBMs): unilateral, bilateral, and/or multilateral actions that states can take to build trust and prevent inadvertent military conflict. 
CBMs generally involve using transparency, notification, and monitoring to attempt to mitigate the risk of conflict.[4](#fn4)\nThere are challenges involved in CBM adoption due to differences in the character of international competition today versus during the Cold War, when CBMs became prominent as a concept. However, considering possibilities for CBMs and exploring ways to shape the dialogue about AI could make the adoption of stability-promoting CBMs more likely. \n\n\n\n\n\n\n\n> \n> Rather than a fervent arms race, militaries’ pursuit of AI looks more like routine adoption of new technologies and a continuation of the multi-decade trend of adoption of computers, networking, and other information technologies.\n> \n> \n> \n\n\n\n\n\nThis paper briefly outlines some of the potential risks to international stability arising from military applications of AI, including ways AI could influence the character of warfare, risks based on the current limits of AI technology, and risks relating to some specific mission areas, such as nuclear operations, in which introducing AI could present challenges to stability. The paper then describes possible CBMs to address these risks, moving from broad measures applicable to many military applications of AI to targeted measures designed to address specific risks. In each discussion of CBMs, the paper lays out both the opportunities and potential downsides of states adopting the CBM.\n\n\n\n\nMilitary Uses of AI: A Risk to International Stability?\n-------------------------------------------------------\n\n\nMilitaries have an inherent interest in staying ahead of their competitors, or at least not falling behind. National militaries want to avoid fielding inferior military capabilities and so will generally pursue emerging technologies that could improve their ability to fight. While the pursuit of new technologies is normal, some technologies raise concerns because of their impact on stability or their potential to shift warfare in a direction that causes net increased harm for all (combatants and/or civilians). For example, around the turn of the 20th century, great powers debated, with mixed results, arms control against a host of industrial era technologies that they feared could alter warfare in profound ways. These included submarines, air-delivered weapons, exploding bullets, and poison gas. \n\n\nAfter the invention of nuclear weapons, concerns surrounding their potential use dominated the attention of policymakers given the weapons’ sheer destructive potential. Especially after the Cuban Missile Crisis illustrated the very real risk of escalation, the United States and the Soviet Union engaged in arms control on a range of weapons technologies, including strategic missile defense, intermediate-range missiles, space-based weapons of mass destruction (WMDs), biological weapons, and apparent tacit restraint in neutron bombs and anti-satellite weapons. The United States and the Soviet Union also, at times, cooperated to avoid miscalculation and improve stability through measures such as the Open Skies Treaty and the 1972 Incidents at Sea Agreement. \n\n\nIt is reasonable and, in fact, vital to examine whether the integration of AI into warfare might also pose risks that policymakers should attend. 
Some AI researchers themselves have raised alarm at militaries’ adoption of AI and the way it could increase the risk of war and international instability.[5](#fn5)\nUnderstanding risks stemming from military use of AI is complicated, however, by the fact that AI is not a discrete technology like missiles or submarines. As a general-purpose technology, AI has many applications, any of which could, individually, improve or undermine stability in various ways. \n\n\nMilitaries are only beginning the process of adopting AI, and in the near term, military AI use is likely to be limited and incremental. Over time, the cognization of warfare through the introduction of artificial intelligence could change warfare in profound ways, just as industrial revolutions in the past shaped warfare.[6](#fn6) Even if militaries successfully manage safety and security concerns and field AI systems that are robust and secure, properly functioning AI systems could create challenges for international stability.\n\n\nFor example, both Chinese and American scholars have hypothesized that the introduction of AI and autonomous systems in combat operations could accelerate the tempo of warfare beyond the pace of human control. Chinese scholars have referred to this concept as a battlefield “singularity,”[7](#fn7)\nwhile some Americans have coined the term “hyperwar” to refer to a similar idea.[8](#fn8)\nIf warfare evolves to a point where the pace of combat outpaces humans’ ability to keep up, and therefore control over military operations must be handed to machines, it would pose significant risks for international stability, even if the delegation decision seems necessary due to competitive pressure. Humans might lose control over managing escalation, and war termination could be significantly complicated if machines fight at a pace that is faster than humans can respond. In addition, delegation of escalation control to machines could mean that minor tactical missteps or accidents that are part and parcel of military operations in the chaos and fog of war, including fratricide, civilian casualties, and poor military judgment, could spiral out of control and reach catastrophic proportions before humans have time to intervene. \n\n\n\n\n\n\n\n> \n> Even if militaries successfully manage safety and security concerns and field AI systems that are robust and secure, properly functioning AI systems could create challenges for international stability.\n> \n> \n> \n\n\n\n\n\nThe logic of a battlefield singularity, or hyperwar, is troubling precisely because competitive pressures could drive militaries to accelerate the tempo of operations and remove humans “from the loop,” even if they would rather not, in order to keep pace with adversaries. Then-Deputy Secretary of Defense Robert Work succinctly captured this dilemma when he posed the question, “If our competitors go to Terminators ... and it turns out the Terminators are able to make decisions faster, even if they’re bad, how would we respond?”[9](#fn9) While this “arms race in speed” is often characterized tactically in the context of lethal autonomous weapon systems, the same dynamic could emerge operationally involving algorithms designed as decision aids. The perception by policymakers that war is evolving to an era of machine-dominated conflict in which humans must cede control to machines to remain competitive could also hasten such a development, particularly if decision makers lack appropriate education about the limits of AI. 
In extremis, the shift toward the use of algorithms for military decision-making, combined with a more roboticized battlefield, could potentially change the nature of war. War would still be the continuation of politics by other means in the broadest sense, but in the most extreme case it might feature so little human engagement that it is no longer a fundamentally human endeavor.[10](#fn10)\n\n\nThe widespread adoption of AI could have a net effect on international stability in other ways. AI systems could change strategy in war, including by substituting machines for human decision-making in some mission areas, and therefore removing certain aspects of human psychology from parts of war.[11](#fn11) Warfare today is waged by humans through physical machinery (rockets, missiles, machine guns, etc.), but decision-making is almost universally human. As algorithms creep closer to the battlefield, some decisions will be made by machines even if warfare remains a human-directed activity that is fought for human political purposes. The widespread integration of machine decision-making across tactical, operational, and strategic levels of warfare could have far-reaching implications. Already, AI agents playing real-time computer strategy games such as StarCraft and Dota 2 have demonstrated superhuman aggressiveness, precision, and coordination. In other strategy games such as poker and *go*, AI agents have demonstrated an ability to radically adjust playing styles and risk-taking in ways that would be, at best, challenging for humans to mimic for psychological reasons. AI dogfighting agents have similarly demonstrated superhuman precision and employed different tactics because of the ability to take greater risk to themselves.[12](#fn12)\n\n\nIn many ways, AI systems have the ability to be the perfect strategic agents, unencumbered by fear, loss aversion, commitment bias, or other human emotional or cognitive biases and limitations.[13](#fn13) While the specific algorithms and models used for computer games are unlikely to transfer well to combat applications, the general characteristics and advantages of AI agents relative to humans could have applications in the military domain. As in the case of speed, the net consequence of machine decision-making on the psychology of combat could change the character of warfare in profound ways.[14](#fn14)\n\n\nAI could have other cumulative effects on warfare. Policymakers generally assess adversaries’ behavior based on an understanding of their capabilities and intentions.[15](#fn15) Shifts toward AI could undermine policymaker knowledge in both of those arenas. The transition of military capabilities to software, already underway but arguably accelerated by the adoption of AI and autonomous systems, could make it harder for policymakers to accurately judge relative military capabilities. Incomplete information about adversary capabilities would therefore increase, conceivably increasing the risks of miscalculation. Alternatively, the opposite could be true—AI and autonomous systems used for intelligence collection and analysis could radically increase transparency about military power, making it easier for policymakers to judge military capabilities and anticipate the outcome of a conflict in advance. This added transparency could decrease the risks of miscalculation and defuse some potential conflicts before they begin.
\n\n\n\n\n\n\n\n> \n> The transition of military capabilities to software, already underway but arguably accelerated by the adoption of AI and autonomous systems, could make it harder for policymakers to accurately judge relative military capabilities.\n> \n> \n> \n\n\n\n\n\nThe integration of AI into military systems, in combination with a shift toward a more roboticized force structure, could also change policymakers’ threshold for risk-taking, either because they believe that fewer human lives are at risk or that AI systems enable greater precision, or perhaps because they see AI systems as uniquely dangerous. The perceived availability of AI systems could change policymakers’ beliefs about their ability to foresee the outcome of conflicts or to win.\n\n\nIt is, no doubt, challenging to stand at the beginning of the AI age and imagine the cumulative consequence of AI adoption across varied aspects of military operations, including effects that hinge as much on human perception of the technology as the technical characteristics themselves. The history of attempts to regulate the effects of industrial age weapons in the late 19th and early 20th centuries suggests that even when policymakers accurately anticipated risks from certain technologies, such as air-delivered weapons or poison gas, they frequently crafted regulations that turned out to be ill-suited to the specific forms these technologies took as they matured. Furthermore, even when both sides desired restraint, it frequently (although not always) collapsed under the exigencies of war.[16](#fn16) There is no reason to think that our prescience in predicting the path of future technologies or ability to restrain warfare is any better today. There is merit, however, in beginning the process of thinking about the many ways in which AI could influence warfare, big and small.\n\n\nEven beyond the scenarios described above, it is possible to frame how military applications of AI could impact international stability into two broad categories: (1) risks related to the character of algorithms and their use by militaries, and (2) risks related to militaries using AI for particular missions.\n\n\n\n\n### Risks Due to the Limitations of AI\n\n\nA challenge for military adoption of AI is that two key risks associated with new technology adoption are in tension. First, militaries could fail to adopt—or adopt quickly enough or employ in the right manner—a new technology that yields significant battlefield advantage. As a recent example, despite the overall growth in the military uninhabited, or unmanned, aircraft market, the adoption of uninhabited vehicles has, at times, been a source of contention within the U.S. defense establishment, principally based on debates over the merits of this new technology relative to existing alternatives.[17](#fn17)\n\n\nAlternatively, militaries could adopt an immature technology too quickly, betting heavily and incorrectly on new and untested propositions about how a technology may change warfare. Given the natural incentive militaries have in ensuring their capabilities work on the battlefield, it may be reasonable to assume that militaries would manage these risks reasonably well, although not without some mishaps. 
But when balancing the risk of accidents versus falling behind adversaries in technological innovation, militaries arguably place safety as a secondary consideration.[18](#fn18)\nMilitaries may be relatively accepting of the risk of accidents in the pursuit of technological advantage, since accidents are a routine element of military operations, even in training.[19](#fn19)\nNevertheless, there are strong bureaucratic interests in ultimately ensuring that fielded capabilities are robust and secure, and existing institutional processes may be able to manage AI safety and security risks with some adaptation.\n\n\nFor militaries, balancing between the risks of going too slow versus going too fast with AI adoption is complicated by the fact that AI, and deep learning in particular, is a relatively immature technology with significant vulnerabilities and reliability concerns.[20](#fn20)\nThese concerns are heightened in situations where there may not be ample data on which to train machine learning systems. Machine learning systems generally rely on very large data sets, which may not exist in some military settings, particularly when it comes to early warning of rare events (such as a nuclear attack) or tracking adversary behavior in a multidimensional battlefield. When trained with inadequate data sets or employed outside the narrow context of their design, AI systems are often unreliable and brittle. AI systems can often seem deceptively capable, performing well (sometimes better than humans) in some laboratory settings, then failing dramatically under changing environmental conditions in the real world. Self-driving cars, for example, may be safer than human drivers in some settings, then inexplicably turn deadly in situations where a human operator would not have trouble. Additionally, deep learning methods may, at present, be insufficiently reliable for safety-critical applications even when operating within the bounds of their design specifications.[21](#fn21) \n\n\nFor example, concerns about limits to the reliability of algorithms across demographic groups have hindered the deployment of facial recognition technology in the United States, particularly in high-consequence applications such as law enforcement. Militaries, too, should be concerned about technical limitations and vulnerabilities in their AI systems. Militaries want technologies that work, especially on the battlefield. Accordingly, the AI strategy of the Department of Defense (DoD) calls for AI systems that are “resilient, robust, reliable, and secure.”[22](#fn22)\nThis is undoubtedly the correct approach but a challenge, at least in the near term, given the reliability issues facing many uses of algorithms today and the highly dynamic conditions of battlefield use.\n\n\nAn additional challenge stems from security dilemma dynamics. Competitive pressures could lead nations to shortcut test and evaluation (T&E) in a desire to field new AI capabilities ahead of adversaries. Similar competitive pressures to beat others to market appear to have played an exacerbating role in accident risk relating to AI systems in self-driving cars and commercial airplane autopilots.[23](#fn23)\nMilitaries evaluating an AI system of uncertain reliability could, not unjustifiably, feel pressure to hasten deployment if they believe others are taking similar measures. 
Historically, these pressures are highest immediately before and during wars, where the risk/reward equation surrounding new technologies can shift due to the very real lives on the line. For example, competitive pressures may have spurred the faster introduction of poison gas in World War I.[24](#fn24)\nSimilarly, in World War II, Germany diverted funds from proven technologies into jet engines, ballistic missiles, and helicopters, even though none of the technologies proved mature until after the war.[25](#fn25)\nThis dynamic risk might spark a self-fulfilling prophecy in which countries accelerate deployment of insufficiently tested AI systems out of the fear that others will deploy first.[26](#fn26)\nThe net effect is not an arms race but a “race to the bottom” on safety, leading to the deployment of unsafe AI systems and heightening the risk of accidents and instability.\n\n\nEven if military AI systems are adequately tested, the use of AI to enable more autonomous machine behavior in military systems raises an additional set of risks. In delegating decision-making from humans to machines, policymakers may de facto be fielding forces with less flexibility and ability to understand context, which would then have deleterious effects on crisis stability and managing escalation. While machines have many advantages in speed, precision, and repeatable actions, machines today cannot come close to human intelligence in understanding context and flexibly adapting to novel situations. This brittleness of machine decision-making may particularly be a challenge in pre-conflict crisis situations, where tensions among nations run high. Military forces from competing nations regularly interact in militarized disputes below the threshold of war in a variety of contested regions (e.g., the India-Pakistan border, China-India border, South China Sea, Black Sea, Syria, etc.). These interactions among deployed forces sometimes run the risk of escalation due to incidents or skirmishes that can inflame tensions on all sides. This poses a challenge for national leaders, who have imperfect command-and-control over their own military forces. Today, however, deployed military forces rely on human decision-making. Humans can understand broad guidance from their national leadership and commander’s intent, such as “defend our territorial claims, but don’t start a war.” Relative to humans, even the most advanced AI systems today have no ability to understand broad guidance, nor do they exhibit the kinds of contextual understanding that humans frequently label “common sense.”[27](#fn27)\nMilitaries already employ uninhabited vehicles (drones) in contested areas, which have been involved in a number of escalatory incidents in the East China Sea, South China Sea, Syria, and Strait of Hormuz.[28](#fn28)\nOver time, as militaries incorporate more autonomous functionality into uninhabited vehicles, that functionality could complicate interactions in these and other contested areas.\n\n\nAutonomous systems may take actions based on programming that, while not a malfunction, are other than what a commander would have wanted a similarly situated human to do in the same situation. 
While the degree of flexibility afforded subordinates varies considerably by military culture and doctrine, humans have a greater ability to flexibly respond to complex and potentially ambiguous escalatory incidents in ways that may balance competing demands of ensuring national resolve while managing escalation.[29](#fn29)\nAutonomous systems will simply follow their programming, whatever that may be, even if those rules no longer make sense or are inconsistent with a commander’s intent in the given situation. This challenge is compounded by the fact that human commanders cannot anticipate all of the possible situations that forward-deployed military forces in contested regions may face. Employing autonomous systems in a crisis effectively forces human decision makers to tie their own hands with certain pre-specified actions, even if they would rather not. \n\n\nUnintended actions by autonomous systems in militarized disputes or contested areas are a challenge for militaries as they adopt more autonomous systems into their forces. The complexity of many autonomous systems used today, even ones that rely on rule-based decision-making, may mean that the humans employing autonomous systems lack sufficient understanding of what actions the system may take in certain situations.[30](#fn30)\nHumans’ ability to flexibly interpret guidance from higher commanders, even to the point of disregarding guidance if it no longer seems applicable, is by contrast a boon to managing escalation risks by retaining human decision-making at the point of interaction among military forces in contested regions.[31](#fn31)\n\n\nUnintended escalation is not merely confined to lethal actions, such as firing on enemy forces. Nonlethal actions, such as crossing into another state’s territory, can be perceived as escalatory. Even if such actions do not lead directly to war, they could heighten tensions, increase suspicion about an adversary’s intentions, or inflame public sentiment. While in most cases, humans would still retain agency over how to respond to an incident, competing autonomous systems could create unexpected interactions or escalatory spirals. Complex, interactive dynamics between algorithms have been seen in other settings, including financial markets,[32](#fn32)\nand even in situations where the algorithms are relatively simple.[33](#fn33)\nAnother problem stems from the potential inability of humans to call off autonomous systems once deployed. One reason for employing autonomous functionality is so that uninhabited vehicles can continue their missions even if they are operating without reliable communication links to human controllers. When there is no communication link between human operators and an autonomous system, human operators would have no ability to recall the autonomous system if political circumstances changed such that the system’s behavior was no longer appropriate. This could be a challenge in de-escalating a conflict, if political leaders decide to terminate hostilities but have no ability to recall autonomous systems, at least for some period of time. The result could be a continuation of hostilities even after political leaders desire a cease-fire. Alternatively, the inability to fully cease hostilities could undermine truce negotiations, leading to the continuation of conflict. These problems are not unique to autonomous systems. 
Political leaders have imperfect command-and-control over human military forces, which has, at times, led to similar incidents with human-commanded deployed forces. For example, the Battle of New Orleans in the War of 1812 was fought after a peace treaty ended the war because of the slowness of communications to deployed forces. \n\n\n\n\n\n### Risks Due to the Use of AI for Particular Military Missions\n\n\nThe introduction of AI into military operations could also pose risks in certain circumstances due to the nature of the military mission, even if the AI system performs correctly and consistent with human intentions. Some existing research already focuses on the intersection of AI with specific military mission areas, most notably nuclear stability.[34](#fn34)\nNuclear stability is an obvious area of concern given the potential consequences of an intentional or unintentional nuclear detonation.[35](#fn35)\nLethal autonomous weapon systems (LAWS), a particular use of AI in which lethal decision-making is delegated from humans to machines, also represents a focus area of existing research. Other areas may deserve special attention from scholars concerned about AI risks. The intersections of AI with cybersecurity and biosecurity are areas worthy of exploration where there has been relatively less work at present.[36](#fn36)\n\n\nPotentially risky applications of AI extend beyond the battlefield to the use of AI to aid in decision-making in areas such as early warning and forecasting adversary behavior. For example, AI tools to monitor, track, and analyze vast amounts of data on adversary behavior for early indications and warning of potential aggression have clear value. However, algorithms also have known limitations and potentially problematic characteristics, such as a lack of transparency or explainability, brittleness in the face of distributional shifts in data, and automation bias. AI systems frequently perform poorly under conditions of novelty, suggesting a continued role for human judgment. The human tendency toward automation bias, coupled with the history of false alarms generated by non-AI early warning and forecasting systems, suggests policymakers should approach the adoption of AI in early warning and forecasting with caution, despite the potential value of using AI in intelligent decision aids.[37](#fn37) Education and training to ensure the responsible use of AI in early warning and forecasting scenarios will be critical.[38](#fn38) \n\n\n\n\n\n\n\n> \n> The human tendency toward automation bias, coupled with the history of false alarms generated by non-AI early warning and forecasting systems, suggests policymakers should approach the adoption of AI in early warning and forecasting with caution, despite the potential value of using AI in intelligent decision aids.\n> \n> \n> \n\n\n\n\n\nFinally, autonomous systems raise novel challenges of signaling in contested areas because of ambiguity about whether their behavior was intended by human commanders. Even if the system performs as intended, adversaries may not know whether an autonomous system’s behavior was consistent with human intent because of the aforementioned command-and-control issues. This can create ambiguity in a crisis situation about how to interpret an autonomous system’s behavior. For example, if an autonomous system fired on a country’s forces, should that be interpreted as an intentional signal by the commanding nation’s political leaders, or an accident? 
This, again, is not a novel problem; a similar challenge exists with human-commanded military forces. Nations may not know whether the actions of an adversary’s deployed forces are fully in line with their political leadership’s guidance. Autonomous systems could complicate this dynamic due to uncertainty about whether the actions of an autonomous system are consistent with any human’s intended action.\n\n\n\n\nThe Role of Confidence-Building Measures\n----------------------------------------\n\n\nAI potentially generates risks for international security due to the ways AI could change the character of warfare, the limitations of AI technology today, and the use of AI for specific military missions such as nuclear operations. Especially given the uncertain technological trajectory of advances in AI, what options exist to reduce the risks that military applications of AI can pose to international stability?\n\n\nTo advance the conversation about ensuring that military AI adoption happens in the safest and most responsible way possible, this paper outlines a series of potential confidence-building measures aimed at mitigating risks from military uses of AI.[39](#fn39)\nWe introduce these ideas as preliminary concepts for future research, discussion, and examination, rather than as proposals we specifically advocate. But progress in mitigating the risks from military AI competition requires moving beyond recognizing that risk mitigation is important and on to the hard work of suggesting, evaluating, and examining the benefits and drawbacks of specific mechanisms.[40](#fn40)\n\n\nThis paper focuses on confidence-building measures, a broad category of actions that states can take to reduce instability risks. CBMs include actions such as transparency, notification, and monitoring measures designed to reduce various risks arising from military competition between states. They generally encompass four areas, as Marie-France Desjardins describes:[41](#fn41)\n\n\n* Information-sharing and communication\n* Measures to allow for inspections and observers\n* “Rules of the road” to govern military operations\n* Limits on military readiness and operations\n\n\nConfidence-building measures are related to, but distinct from, arms control agreements. Arms control encompasses agreements states make to forgo researching, developing, producing, fielding, or employing certain weapons, features of weapons, or applications of weapons. The set of possible actions states could take is broad, and this paper will focus on the potential benefits and drawbacks of specific AI-related confidence-building measures. Arms control for military AI applications is a valuable topic worthy of exploration, but beyond the scope of this paper.[42](#fn42)\n \n\n\n\n\n\n### Historical Applications of CBMs\n\n\nConfidence-building measures as a concept rose to prominence during the Cold War as a tool to reduce the risk of inadvertent war. In the wake of the Cuban Missile Crisis, the United States and the Soviet Union began exploring ways to improve their communication. While both sides recognized that war might occur, they had a shared interest, due to the potentially world-ending consequences of a global nuclear war, in ensuring that any such outbreak would be due to a deliberate decision, rather than an accident or a misunderstanding.\n\n\nThe desire to build confidence led to a series of bilateral measures. 
Less than a year after the Cuban Missile Crisis, in June 1963, the United States and the Soviet Union signed a memorandum of understanding to create a hotline between the senior leadership of the two nations.[43](#fn43)\nThe idea was that this line of communication would provide a mechanism for U.S. and Soviet leaders to reach out to their counterparts and discuss crises in a way that made inadvertent escalation less likely. In 1972, as part of the Strategic Arms Limitation Talks (SALT I) arms control agreement, the United States and the Soviet Union went further, signing the Incidents at Sea Agreement, which they had been negotiating since 1967. The Incidents at Sea Agreement, not initially considered a prominent part of the 1972 SALT I accord, created a mechanism for communication and information surrounding the movement of U.S. and Soviet naval vessels. The agreement regulated dangerous maneuvers and harassment of vessels, established means for communicating the presence of submarines and surface naval movements, and generated a mechanism for regular consultation.[44](#fn44)\nThese successes helped lead to the formalization of the CBM concept in 1975 in Helsinki at the Conference on Security and Cooperation in Europe.[45](#fn45)\n\n\n\nAs the Cold War drew to a close, confidence-building measures expanded beyond the U.S.-Soviet context and the European theater. For example, India and China have a series of CBMs intended to prevent escalation in their disputed border area, while India and Pakistan have a hotline designed to make accidental escalation in South Asia less likely. In Southeast Asia, through the Regional Forum of the Association of Southeast Asian Nations (ASEAN), member nations have pursued CBMs designed to reduce the risk of conflict among themselves, and between any ASEAN member and China, due to territorial disputes in the South China Sea.[46](#fn46)\nThese CBMs used outside of the Cold War have had mixed effects. \n\n\nIn the China-India case, for example, border-related CBMs did not prevent the ongoing conflict in 2020 between those two nations along the Line of Actual Control in the Himalayan region. However, norms surrounding the types of “legitimate” military activities promoted by CBMs have likely reduced the death toll of the clashes, with both sides generally avoiding the use of firearms, consistent with agreements from 1996 and 2005.[47](#fn47)\n\n\n\nIn Southeast Asia, while the ASEAN Regional Forum is a principal forum for dialogue, the consensus-based character of ASEAN makes it challenging for that dialogue to translate into policies on contested issues. Recent multilateral dialogues about emerging technologies such as cyber systems have also featured efforts to create CBMs that could be building blocks for cooperation. Unfortunately, a lack of international agreement on basic definitions and some countries’ interest in dodging limitations on behavior in cyberspace have limited the development of effective norms.[48](#fn48)\nCBMs rely on shared interests to succeed, and major powers such as the United States, China, and Russia do not have clearly shared interests concerning behavior in cyberspace, making it difficult to use CBMs to build trust or successfully design “rules of the road” agreements likely to generate widespread adherence.\n\n\nCBMs may be a useful tool for managing risks relating to military AI applications. There are a number of possible CBMs that states could adopt that may help mitigate the various AI-related risks previously outlined. 
These include broad CBMs applicable to AI as a category, CBMs designed to address some of the limitations of AI, and CBMs focused on specific missions for which militaries might use AI.[49](#fn49) \n\n\n\n\n\n### Broad CBMs\n\n\nThese CBMs focus broadly on mechanisms for dialogue and agreement surrounding military uses of AI, rather than the specific content of agreements. Given that a key goal of CBMs is to enhance trust, mechanisms that serve as a building block for more substantive dialogue and agreement can, in some cases, be an end in themselves and not just a means to an end.[50](#fn50)\nThese could include promoting international norms for how nations develop and use military AI systems, Track II academic-to-academic exchanges, direct military-to-military dialogues, and agreements between states regarding military AI, such as a code of conduct or mutual statement of principles. \n\n\n#### Promoting Norms\n\n\nIn 2019, the U.S. Defense Innovation Board proposed a set of AI principles for the U.S. Defense Department, which DoD subsequently adopted in early 2020. While these principles are no doubt aimed in part at domestic audiences in the U.S. defense community and tech sector, they also serve as an early example of a state promulgating norms about appropriate use of AI in military applications. The DoD AI principles included a requirement that DoD AI systems be responsible, equitable, traceable, reliable, and governable.[51](#fn51)\n(The full set of DoD AI principles is included in the Appendix.) Similarly, the DoD’s unclassified summary of its AI strategy, released in 2019, called for building AI systems that were “resilient, robust, reliable, and secure.”[52](#fn52)\nA focus of the strategy was “leading in military ethics and AI safety.”[53](#fn53)\n\n\nThere is value in states promoting norms for responsible use of AI, including adopting and employing technology in a way that reflects an understanding of the technical risks associated with AI systems. While stating such principles is not the same as putting in place effective bureaucratic processes to ensure compliance with them, there is nevertheless value in states publicly signaling to others (and to their own bureaucracies) the importance of using AI responsibly in military applications. While these norms are at a high level, they still signal some degree of attention by senior military and civilian defense officials to some of the risks of AI systems, including issues surrounding safety, security, responsibility, and controllability. These signals may aid internal bureaucratic efforts to mitigate various AI-related risks, as bureaucratic actors can point to these official documents for support. Additionally, to the extent that other nations find these statements credible, the statements may signal at least some degree of awareness of and attention to these risks, helping to incentivize others to do the same.\n\n\nOne risk of such statements is that if they appear manifestly at odds with a state’s actions, they can ring hollow, undermine a state’s credibility, or undermine the norm itself. For example, loudly proclaiming the importance of AI ethics while using AI systems in a clearly unethical manner, such as for internal repression or without regard for civilian casualties, could not only undermine a state’s credibility but also undermine the value of the norm overall, especially if other states fail to highlight the disconnect. 
Following through with meaningful actions to show how a state puts these norms into practice is essential for them to have real value. \n\n\n#### Track II Academic Dialogues\n\n\nOne confidence-building measure is already underway: Track II dialogues between academic experts from different countries with expertise surrounding military uses of AI.[54](#fn54)\nBecause these dialogues occur among experts who are not government officials and do not commit countries to actually doing anything, they are low risk. This also places a cap on their potential benefits. Track II dialogues can nevertheless be useful building blocks for more substantive cooperation among countries and an avenue to explore various potential modes of cooperation without fear of commitment by states. Track II dialogues can help facilitate mutual understanding among expert communities in different states and build shared trust between experts.[55](#fn55)\nAdditionally, if some of those experts transition into government positions in the future, the lessons from these dialogues can improve the prospects for cooperation in more formal venues. \n\n\n\n\n\n\n\n> \n> Track II dialogues can nevertheless be useful building blocks for more substantive cooperation among countries and an avenue to explore various potential modes of cooperation without fear of commitment by states.\n> \n> \n> \n\n\n\n\n\nWhile there are risks from misleading statements in the context of formal government dialogues, as discussed below, the consequences of such activities in a Track II context are minimal. The nature of the dialogue is that participants are not government officials, and it is to be expected that some of their statements may not be entirely in line with their government’s policies. Thus, Track II dialogues can build trust and be an end in themselves, even as they serve as the means to broader cooperation and understanding.\n\n\n#### Military-to-Military Dialogues\n\n\nDirect military-to-military engagement on deconfliction measures for AI and autonomous systems could be valuable, both as a precursor to more substantive specific measures and as a communication mechanism in its own right. For example, if militaries deploy an autonomous vehicle into a contested area where other military forces will be operating, a direct military-to-military channel would give the other side an opportunity to ask questions about its behavior and the deploying side an opportunity to communicate expectations, to avoid unintended escalation or incidents. Similarly, such a venue would give militaries an opportunity to ask questions and communicate information about other capabilities or investments that may threaten mutual stability, such as investments in AI, autonomy, or automation in nuclear operations. There are many advantages of direct, private communication over more indirect, public communication. Nations can send targeted messages just to the intended audience, rather than dealing with multiple audiences, including domestic ones. There may be reduced political pressure to save face or show strength publicly, although of course some of these pressures may still exist in private channels. 
And direct discussions afford more high-bandwidth information exchange with greater back-and-forth between sides than may be possible via public messages broadcast to a wider audience.\n\n\nOne difficulty, of course, is that these dialogues are most challenging precisely when they are needed most: when there is a lack of transparency and trust on both sides. However, history shows that such dialogues are possible and indeed can be valuable measures in increasing transparency and reducing mutual risks.\n\n\n#### Code of Conduct\n\n\nNations could agree to a written set of rules or principles for how they adopt AI into military systems. These rules and principles, even if not legally binding, could nevertheless serve a valuable signaling and coordination function to avoid some of the risks in AI adoption. A code of conduct, statement of principles, or other agreement could include a wide range of both general and specific statements, including potentially on any or all of the confidence-building measures listed above.\n\n\nEven if countries cannot agree on specific details beyond promoting safe and responsible military use of AI, more general statements could nevertheless be valuable in signaling to other nations some degree of mutual understanding about responsible use of military AI and in helping create positive norms of behavior. Ideally, a code of conduct would have support from a wide range of countries and major military powers. However, if this were not possible, then a multilateral statement of principles from like-minded countries could still have some value in increasing transparency and promulgating norms of responsible state behavior.\n\n\nThere are a few potential drawbacks to a broad code of conduct. First, a broader code of conduct, lacking the specificity of some of the measures discussed above, might undercut momentum toward broader cooperation, rather than serve as a building block. Second, there would be a risk, in negotiating a code of conduct, that disagreements over some of the specifics could derail the entire endeavor or lead to forum shopping, whereby countries then spin off to create their own dialogues about a code of conduct. This is arguably what has happened in the cyber realm, where several different ongoing dialogue processes about codes of conduct have not led to substantive success. Third, a more formal code of conduct might raise the prospect of triggering some of the costs associated with CBMs. Specifically, if a country reduced its investments in military applications of AI or did not pursue capability areas because it believed adversaries were following a code of conduct, it could expose itself in the event of cheating. This might be of particular concern for democracies, given that, in many cases, democracies are more likely to comply with the agreements they sign, in part because democracies often have rigorous internal bureaucratic processes to ensure compliance.[56](#fn56) Thus, one might imagine these incentives leading to a less formal code of conduct designed as a building block, rather than to something that would cause countries to restrain capabilities.\n\n\n\n\n### The Limitations of AI\n\n\nAccident risk is a significant concern for military applications of AI. Competitive pressures could increase accident risk by pushing militaries to shortcut testing and rapidly deploy new AI-enabled systems. 
States could pursue a variety of options to mitigate the risk of creating unnecessary incentives to shortcut test and evaluation (T&E),[57](#fn57)\nincluding publicly signaling the importance of T&E, increasing transparency about T&E processes, promoting international T&E standards, and sharing civilian research on AI safety.\n\n\nAdditionally, AI will enable more capable autonomous systems, and their increased use may pose stability risks, particularly when deployed into contested areas. To mitigate these risks, states could adopt CBMs such as “rules of the road” for the behavior of autonomous systems, marking systems to signal their degree of autonomy, and adhering to off-limits geographic areas for autonomous systems.\n\n\n#### Public Signaling\n\n\nTo reduce AI accident risk, national security leaders could publicly emphasize the importance of strong T&E requirements for military AI applications. This could potentially be linked to a formal multilateral statement or to something more informal. Publicly promoting AI T&E could be valuable in signaling that nations agree, at least in principle, about the importance of T&E to avoid unnecessary accidents and mishaps. Public statements would be more powerful when used in combination with major investments in T&E institutions and processes. Promoting AI T&E as a CBM would be designed to create positive spillover effects. When major countries investing in AI come together to promote AI safety, it demonstrates the importance of the issue. It could also encourage other governments to sign on and signal to AI experts within their bureaucracies that they can advocate for AI T&E measures.\n\n\nThe downsides of publicly signaling the prioritization of AI T&E are relatively limited. A critic might argue that, to the extent that accidents are a necessary part of the innovation and capabilities development process, an overemphasis on T&E might discourage experimentation. However, promoting experimentation and innovation does not have to come at the expense of building robust and assured systems, especially since it is through experimentation and testing that accident risks are likely to be revealed, leading to the deployment of more capable systems. Ensuring that AI systems function as intended is part of fielding effective military capabilities, and effective T&E processes are aligned with the goal of fielding superior military capabilities. Rigorous T&E processes would, by definition, add time to the development process in order to ensure that systems are robust and secure before deployment, but the result would be more effective systems once deployed. In peacetime, taking additional measures to ensure that military systems will perform properly in wartime has little downside, so long as accident risk does not become a bureaucratic excuse for inaction. In wartime, the tradeoffs in delaying fielding may become more acute, and militaries may balance these risks differently. There are potential transparency-related downsides if countries say they emphasize AI T&E in public but do not do so in private,[58](#fn58)\nthough that would not impose costs on countries whose actions match their rhetoric. \n\n\n\n\n\n#### Increased Transparency about T&E Processes\n\n\nA related unilateral or multilateral CBM could involve countries publicly releasing details about the T&E processes used for military applications of AI without revealing details about specific technical capabilities. This is similar to existing U.S. policy regarding legal weapons reviews. Currently, the U.S. 
military promotes norms in favor of stringent legal weapons reviews but does not share the actual reviews of specific weapons.[59](#fn59)\n\n\nSince this CBM would build on existing norms that the United States already promotes, transparency about T&E processes for military AI systems might be more likely to receive American support than more intrusive measures. Moreover, increasing knowledge about T&E processes might bring other countries that want to learn from the American military on board. The potential drawbacks of transparency surrounding T&E processes stem from what happens if the CBM succeeds. If successful, all countries, including potential adversaries, would have greater knowledge of how to design effective T&E processes for their military AI applications. This could improve their ability to field more effective military AI systems. This downside may be somewhat mitigated if a country only shares high-level information about its T&E bureaucratic processes and refrains from sharing technical information that could actually help an adversary execute more effective T&E. Nevertheless, an overarching concern with any T&E-related CBM that aims to reduce the risk to international stability from states building unsafe AI systems is that actually succeeding in improving other states’ T&E could also lead to adversaries deploying more effective AI systems. Whether an adversary’s improved AI capabilities or the prospect of an adversary deploying unsafe military AI systems is more of a danger to a country’s security would need to be considered.\n\n\n#### International Military AI T&E Standards\n\n\nAnother CBM regarding AI safety could entail establishing and promoting specific international standards for what constitutes effective T&E practices for military AI applications. Such an effort could build on private-sector and public-private standard-setting actions for non-military uses of AI.[60](#fn60)\n\n\nWhile not enforceable or verifiable, promoting common standards for AI T&E could be a useful focal point for like-minded states to promote responsible norms concerning the safe deployment of military AI systems. The downsides of promoting common T&E standards are similar to the potential downsides of a public emphasis on AI safety. These kinds of CBMs are early building blocks: While the gains are likely to be relatively limited, the downsides are limited as well, because they do not expose key information or require national commitments that limit capabilities. As with increasing transparency about T&E processes, the most significant downside to effective T&E standards would be that, if successful, this CBM could increase the reliability of military AI systems by adversary states. The relative balance of danger between more reliable, and therefore more effective, adversary AI systems versus unreliable and more accident-prone AI systems would again need to be carefully weighed.\n\n\n#### Shared Civilian Research on AI Safety\n\n\nInternational efforts to promote shared civilian research on AI safety could be a low-level CBM that would not explicitly involve military action. Shared civilian research would build scientific cooperation between nations, which could serve as a building block for overall cooperation. Focusing cooperation on AI safety, an area of shared interest, might also make more nations willing to sign on to participate. 
An analogy to this in the U.S.-Soviet context is the Apollo-Soyuz mission in 1975, whose intent was to promote cooperation between civilian scientists on a shared agenda. Similarly, nations could work to foster increased cooperation and collaboration between civilian scientists on AI safety.\n\n\nThe potential drawbacks of cooperation stem from the general-purpose character of AI knowledge. If increased cooperation on AI safety led to adversary breakthroughs in AI safety that made those adversaries better able to field effective military uses of AI, there could be negative consequences for other states’ security. It may be possible to mitigate this downside by carefully scoping the shared civilian research, depending on the specific type of cooperation and degree of information-sharing required by participants.\n\n\n#### International Autonomous Incidents Agreement\n\n\nThere are inherent risks when autonomous systems with any level of decision-making interact with adversary forces in contested areas. Given the brittleness of algorithms, the deployment of autonomous systems in a crisis situation could increase the risk of accidents and miscalculation. AI-related CBMs could build on Cold War agreements to reduce the risk of accidental escalation, with some modification to account for the new challenges AI-enabled autonomous systems present.\n\n\nStates have long used established “rules of the road” to govern the interaction of military forces operating with a high degree of autonomy, such as naval vessels at sea, and there may be similar value in such a CBM for interactions with AI-enabled autonomous systems. The 1972 Incidents at Sea Agreement and older “rules of the road” such as maritime prize law provide useful historical examples for how nations have managed analogous challenges in the past. Building on these historical examples, states could adopt a modern-day “international autonomous incidents agreement” that focuses on military applications of autonomous systems, especially in the air and maritime environments. Such an agreement could help reduce risks from accidental escalation by autonomous systems, as well as reduce ambiguity about the extent of human intention behind the behavior of autonomous systems.\n\n\nIn addition to the Incidents at Sea Agreement, maritime prize law is another useful historical analogy for how states might craft a rule set for autonomous systems’ interactions. Prize law, which began in the 12th century and evolved more fully among European states in the 15th to 19th centuries, regulated how ships interacted during wartime. Because both warships and privateers, as a practical matter, operated with a high degree of autonomy while at sea, prize law consisted of a set of rules governing acceptable wartime behavior. Rules covered which ships could be attacked, ships’ markings for identification, the use of force, seizure of cargo, and providing for the safety of ships’ crews.[61](#fn61)\n\n\nNations face an analogous challenge with autonomous systems as they become increasingly integrated into military forces. Autonomous systems will be operating on their own for some period of time, potentially interacting with assets from other nations, including competitors, and there could be value in establishing internationally agreed upon “rules of the road” for how such systems should interact. 
The goal of such an agreement, which would not have to be as formal as the Incidents at Sea Agreement, would be to increase predictability and reduce ambiguity about the behavior of autonomous systems. Such an agreement could be legally binding but would not necessarily need to be in order to be useful. It would likely need to be codified in an agreement (or set of agreements), however, so that expectations are clear to all parties.\n\n\nAn ideal set of rules would be self-enforcing, such that it is against one’s own interests to violate them. Examples of rules of this kind in warfare include prohibitions against perfidy[62](#fn62) and giving “no quarter,”[63](#fn63) violating either of which incentivizes the enemy to engage in counterproductive behavior, such as refusing to recognize surrender or fighting to the bitter end rather than surrendering.\n\n\nAn autonomous incidents agreement could also include provisions for information-sharing about potential deployments of autonomous systems in disputed areas and mechanisms for consultation at the military-to-military level to resolve questions that arise (including potentially a hotline to respond to incidents in real time).\n\n\nOne challenge with autonomous systems is that their autonomous programming is not immediately observable and inspectable from the outside, a major hurdle for verifying compliance with arms control. One benefit to an international rule set that governs the behavior of autonomous systems, particularly in peacetime or pre-conflict settings, is that the outward behavior of the system would be observable, even if its code is not. Other nations could see how another country’s autonomous air, ground, or maritime drone behaves and whether it is complying with the rules, depending on how the rules are written. \n\n\n\n\n\n\n\n> \n> One benefit to an international rule set that governs the behavior of autonomous systems, particularly in peacetime or pre-conflict settings, is that the outward behavior of the system would be observable, even if its code is not.\n> \n> \n> \n\n\n\n\n\nGiven the perceived success of the Incidents at Sea Agreement in decreasing the risk of accidental and inadvertent escalation between the United States and the Soviet Union, an equivalent agreement in the AI space might do the same for a new generation. The efficacy of any agreement would depend on the details, both in the agreement itself and in states’ execution. For example, the United States and China have signed multiple CBM agreements involving air and maritime deconfliction of military forces, including the 1998 U.S.-China Military Maritime Consultative Agreement and the 2014 Memorandum of Understanding Regarding the Rules of Behavior for Safety of Air and Maritime Encounters, yet U.S.-China air and naval incidents have continued.[64](#fn64)\n\n\nHowever, the existence of these prior agreements may itself be a positive sign about the potential for U.S.-China cooperation on preventing accidents and could be a building block for further collaboration. 
Moreover, in a February 2020 article, Senior Colonel Zhou Bo in China’s People’s Liberation Army (PLA) advocated for CBMs between the United States and China, including on military AI, drawing on the example of the 1972 Incidents at Sea Agreement.[65](#fn65) Interest in at least some quarters in the Chinese military suggests that cooperation may be possible even in the midst of competition, especially if the PLA is willing to reciprocate American transparency.[66](#fn66)\n\n\nIn the absence of an internationally agreed upon common rule set, a country could unilaterally make declaratory statements about the behavior of its autonomous systems. For example, a country could say, “If you fire at our autonomous ship/aircraft/vehicle, it will fire back defensively.”[67](#fn67) In principle, such a rule could incentivize the desired behavior by other nations (i.e., not shooting at the autonomous ship, unless you want to start a conflict). If every nation adopted this rule, coupled with a “shoot-second posture” for autonomous systems—they would not fire unless fired upon first—the result could be a mutually stable situation. A unilateral declaration of a set of rules for avoiding incidents would be analogous to declaring, “I will drive on the right side of the road. I suggest you do the same or we both will crash.” This could work if countries’ aim is to coordinate their behavior to avoid conflict, meaning they have some shared interests in avoiding accidental escalation.\n\n\nOne challenge to establishing rules of the road for autonomous systems’ behavior would be if there were incentives to defect from the rules. For example, in World War I, technological developments enabled submarines, which were highly effective in attacking ships but unable to feasibly comply with existing prize law without putting themselves at risk of attack by surfacing. Despite attempts in the early 20th century to regulate submarines, the incentives for defecting from the existing rules were too great (and the rules failed to adapt), and the result was unrestricted submarine warfare.[68](#fn68) Another challenge to a potential autonomous incidents agreement is fully exploring the incentives for trustworthiness, both in the signals that countries send about the behavior of their autonomous systems and adversaries’ responses. Some declaratory policies would not be credible, such as the claim to have created a “dead hand” system such that if a country engaged in a particular type of action, an autonomous system would start a war and there would be nothing a leader could do to stop it.\n\n\n#### Marking Autonomous Systems\n\n\nOne component of managing risks from interactions with autonomous systems might involve marking those systems to signal to adversaries their level of autonomy. This could be done through physical markings, such as paint, lights, flags, or other observable external characteristics, or through electronic means, such as radiofrequency broadcasts. One benefit of a marking system is that it builds on things militaries already do, even at the tactical level, to signal their intentions. For example, a fighter jet might tip its wing to show an adversary that it is carrying air-to-air missiles under the wing, communicating an unambiguous and credible signal about capability, and at least threatening some degree of intent. Because autonomous programming is not physically observable in the same way, militaries would have to intentionally design systems with observable markings reflecting their degree of autonomy. 
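To make the idea of electronic marking more concrete, the sketch below shows one way an autonomy-status broadcast could be structured as a machine-readable message. This is a purely hypothetical illustration rather than any existing standard or military system: the field names (`platform_id`, `autonomy_mode`, `weapons_status`), the mode labels, and the use of Python are assumptions made only for the sake of the example.

```python
# Hypothetical sketch of an "autonomy marking" broadcast message.
# Illustrative only: no such standard exists, and every field name and
# value below is an assumption made for the sake of the example.
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class AutonomyMarking:
    platform_id: str      # public identifier of the vehicle
    autonomy_mode: str    # e.g., "remotely_piloted", "supervised", "fully_autonomous"
    weapons_status: str   # e.g., "unarmed", "armed_human_release_only"
    timestamp: str        # time the marking was issued (UTC, ISO 8601)

    def to_broadcast(self, shared_secret: str) -> str:
        """Serialize the marking and append a simple integrity digest.

        A real system would need genuine cryptographic authentication for
        the signal to be credible; the hash below only stands in for that idea.
        """
        payload = json.dumps(asdict(self), sort_keys=True)
        digest = hashlib.sha256((payload + shared_secret).encode()).hexdigest()
        return json.dumps({"payload": payload, "digest": digest})


# Example: a notional uninhabited maritime vehicle declaring a supervised mode.
marking = AutonomyMarking(
    platform_id="USV-EXAMPLE-01",
    autonomy_mode="supervised",
    weapons_status="armed_human_release_only",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(marking.to_broadcast(shared_secret="not-a-real-key"))
```

The message format is the easy part. As the discussion above suggests, the harder problems are authenticating the broadcast and sustaining incentives for truthful signaling, since a self-declared marking that cannot be verified could be spoofed or simply discounted by the observer.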
Another option could be that certain platforms are understood to have certain behavior (or not), the same way that conventional and nuclear capabilities may in some cases be segregated (e.g., some aircraft are nuclear-capable and some are not, which allows nations to send different kinds of signals).\n\n\nBecause potential markings for autonomous functionality are not forced by the capability itself but are rather an optional signal that militaries can choose to send, in order for such markings to be believable and useful, there would have to be strong incentives for sending truthful signals and few incentives for deception. This would be challenging, and nations would have to carefully think through what signals might be useful and believable in different circumstances, and how adversaries might interpret such signals. Additionally, because concepts such as “levels of autonomy” are often murky, especially for systems that have varying modes of operation, nations would have to think carefully about what kinds of signals could helpfully and clearly communicate autonomous functionality to other countries.[69](#fn69) In the past, human operators of automated or autonomous systems have in some instances misunderstood the functionality of the system they themselves were operating, leading to accidents.[70](#fn70) This problem would be significantly compounded for an external observer. Signals that were trusted but misunderstood could be equally or more dangerous than ambiguity, and states should strive for clear, unambiguous signals.\n\n\n#### Off-limits Geographic Areas\n\n\nNations could agree to declare some geographic areas off-limits to autonomous systems because of their risk of unanticipated interactions. This could be to avoid unintended escalation in a contested region (e.g., a demilitarized zone) or because a region is near civilian objects (e.g., a commercial airliner flight path) and operating there poses a risk to civilians. Other examples of areas that nations could agree to make off-limits to autonomous military systems could be overlapping territorial claims or other countries’ exclusive economic zones (EEZs) or airspace above their EEZs.\n\n\nReaching agreement on specific regions could be challenging, however, since the areas most at risk of escalation are precisely the regions where nations disagree on territorial claims. Nations could perceive any agreement to refrain from deploying elements of military forces to a region as reflecting negatively on their territorial claims or freedom of navigation. Agreeing to declare some areas off-limits to autonomous systems is likely to be most constructive when there are already pre-established regions that countries agree are under dispute (even if they disagree on who has a claim to ownership) and where pre-existing military deconfliction measures already exist.\n\n\n\n\n### Specific Mission-Related CBMs: Nuclear Operations\n\n\nThe integration of AI, autonomy, and/or automation into nuclear command-and-control, early warning, and delivery systems poses unique risks to international stability because of the extreme consequences of nuclear accidents or misuse.[71](#fn71)\nOne option for mitigating these risks could be for nations to set limits on the integration of AI, autonomy, or automation into their nuclear operations. \n\n\nSome U.S. military leaders and official DoD documents have expressed skepticism about integrating uninhabited vehicles into plans surrounding nuclear weapons. 
The Air Force’s 2013 *Remotely Piloted Aircraft (RPA) Vector* report proposed that nuclear strike “may not be technically feasible unless safeguards are developed and even then may not be considered for [unmanned aircraft systems] operations.”[72](#fn72)\nU.S. Air Force general officers have been publicly skeptical about having uninhabited vehicles armed with nuclear weapons. General Robin Rand stated in 2016, during his time as head of Air Force Global Strike Command, that: “We’re planning on [the B-21] being manned. … I like the man in the loop … very much, particularly as we do the dual-capable mission with nuclear weapons.”[73](#fn73)\n\n\nOther U.S. military leaders have publicly expressed support for limits on the integration of AI into nuclear command-and-control. In September 2019, Lieutenant General Jack Shanahan, head of the DoD Joint AI Center, said, “You will find no stronger proponent of the integration of AI capabilities writ large into the Department of Defense, but there is one area where I pause, and it has to do with nuclear command and control.” In reaction to the concept of the United States adopting a “dead hand” system to automate nuclear retaliation if national leadership were wiped out, Shanahan said, “My immediate answer is ‘*No*. We do *not*.’ … This is the ultimate human decision that needs to be made which is in the area of nuclear command and control.”[74](#fn74)\n\n\nWhile the motivation for these statements about limits on the use of autonomy may or may not be strategic stability—bureaucratic factors could also be at play—they are examples of the kinds of limits that nuclear powers could agree to set, unilaterally or collectively, on the integration of AI, autonomy, and automation into their nuclear operations. \n\n\nNuclear states have a range of options for how to engage with these kinds of risks. On one end of the spectrum are arms control treaties with some degree of verification or transparency measures to ensure mutual trust in adherence to the agreements. On the other end of the spectrum are unilateral transparency measures, which could have varying degrees of concreteness ranging from informal statements from military or civilian leaders along the lines of Shanahan’s and Rand’s statements, all the way to formal declaratory policies. In between are options such as mutual transparency measures, statements of principles, or non-legally binding codes of conduct or other agreements between nuclear states to ensure human control over nuclear weapons and nuclear launch decisions. Even if states that desired these restraints found themselves in a position where others were unwilling to adopt more binding commitments, there may be value in unilateral transparency measures both to reduce the fears of other states and to promulgate norms of responsible state behavior. As with other areas, it is important to consider incentives for defection from an agreement and the extent to which one state’s voluntary limitations depend on verifying others’ compliance with an agreement. 
If some states, such as the United States, desire strict positive human control over their nuclear weapons and nuclear launch authority for their own reasons, then verifying others’ behavior, while desirable, may not be a necessary precondition to those states adopting their own limits on the use of AI, autonomy, or automation in nuclear operations.\n\n\nTwo possible CBMs for AI applications in the nuclear arena involve nuclear weapons states agreeing to strict human control over nuclear launch decisions and ensuring any recoverable delivery vehicles are human-inhabited, to ensure positive human control. \n\n\n#### Strict Human Control over Nuclear Launch Decisions\n\n\nOne CBM for uses of AI in the nuclear arena would involve an agreement by nuclear powers to ensure positive human control over all nuclear launch decisions. This type of agreement would preclude automated “dead hand” systems or any other automatic trigger for the use of nuclear weapons.\n\n\nThe benefit of such a CBM would be to reduce the risk of accidental nuclear war. It would preclude a machine malfunction leading directly to the use of nuclear weapons without a human involved in the process. Agreement on positive human control over nuclear launch decisions could also be a mechanism for dialogue with newer nuclear powers, helping generate more transparency over their nuclear launch decisions.\n\n\nA drawback to this CBM would be forgoing any potential benefits of an automated “dead hand” or similar system. While not without controversy, automated nuclear response systems have a strategic logic under some circumstances. Some nuclear states could desire automated retaliatory systems to ensure a second strike in a decapitation scenario. To the extent that strategic stability depends on second strike capabilities, and a country believes it faces a real risk of decapitation if a conflict escalates, that country might prefer an automated option. (This was the intent behind the Soviet Perimeter system, which reportedly had a semiautomated “dead hand” functionality.)[75](#fn75)\nThe assurance of automated retaliation could be valuable as a deterrent and/or to reduce the incentives for a nation’s leaders to launch a strike under ambiguous warning, if they felt confident that a second strike was assured. An agreement to rule out the use of automated “dead hand” systems might increase the risk of first strike instability, because nations could have a larger incentive to strike first—or perhaps launch in response to a false alarm—before being decapitated. 
\n\n\nAlternatively, countries that feel they need an automated nuclear response option might prefer not to sign a CBM or to sign and then cheat.[76](#fn76)\nFortunately, the “costs” of a counterpart cheating on this type of CBM are relatively minimal, since presumably most states would only sign such an agreement if they thought it was already consistent with their nuclear launch decision-making process.\n\n\n#### Prohibitions on Uninhabited Nuclear Launch Platforms\n\n\nAn agreement to prohibit uninhabited nuclear launch platforms would involve nuclear weapon states agreeing to forgo a capability that, to our knowledge, no nuclear weapon state deploys today—an uninhabited (“unmanned”) submarine, fighter, or bomber armed with nuclear weapons.[77](#fn77)\nSuch an agreement would not affect one-way nuclear delivery vehicles, such as missiles or bombs, instead only preventing a state from deploying two-way (recoverable) remotely piloted or autonomous platforms armed with a nuclear weapon. States have long employed uninhabited nuclear delivery vehicles (missiles, bombs, torpedoes) to carry a nuclear warhead to the target. At present, however, the recoverable launch platform (submarine, bomber, transporter erector launcher) is crewed. With crewed nuclear launch platforms, humans not only remain in control of the final decision to launch a nuclear weapon but also have direct physical access to the launch platform to maintain positive control over the nuclear launch decision. \n\n\nA critical benefit of CBMs that sustain positive human control over nuclear weapons is a reduction in the risk of accidental nuclear war. Deploying nuclear weapons on an uninhabited launch platform, whether remotely piloted or autonomous, would by definition increase the risk that, in the case of an accident, whether mechanical or due to flawed software code, a machine, rather than a human, would make the decision about the use of nuclear weapons. Similarly, a crewed platform would have a redundant layer of direct onboard human physical control in the event that the system’s software or communications links were hacked. As previously described, U.S. military leaders, often skeptical about the capabilities of remotely piloted or autonomous systems, have expressed a degree of support for such a policy, even unilaterally. With American support, this type of CBM might have a better chance of succeeding and gathering support among other nuclear weapon states. \n\n\n\n\n\n\n\n> \n> Deploying nuclear weapons on an uninhabited launch platform, whether remotely piloted or autonomous, would by definition increase the risk that, in the case of an accident, whether mechanical or due to flawed software code, a machine, rather than a human, would make the decision about the use of nuclear weapons.\n> \n> \n> \n\n\n\n\n\nCritics might argue that, similar to the objection to a ban on automated nuclear launches, some nuclear states might view more autonomous platforms with nuclear weapons as critical to their second-strike capabilities because of their ability to stay in the air or concealed at sea for extended periods. Russian military officials have raised the idea of an uninhabited nuclear-armed bomber,[78](#fn78) and Russia is reportedly developing a nuclear-armed uninhabited undersea vehicle, the Status-6.[79](#fn79) However, given that these platforms are not currently deployed, it may be easier to reach an agreement to prohibit these platforms than it would be to reach an agreement prohibiting a capability that already exists. 
Moreover, to the extent that this kind of CBM is more a commitment to avoid pursuing dangerous applications of AI, rather than a restriction on current capabilities, it would also be reversible if states decided such capabilities were both necessary and safe at a later time.[80](#fn80)\n\n\n\n\nConclusion\n----------\n\n\nMilitary use of AI poses several risks, including the ways AI could change the character of warfare, the limitations of AI technology today, and the use of AI for specific military missions such as nuclear operations. Policymakers should be cognizant of these risks as nations begin to integrate AI into their military forces, and they should seek to mitigate these risks where possible. Because AI is a general-purpose technology, it is not reasonable to expect militaries to refrain from adopting AI overall, any more than militaries would refrain from adopting computers or electricity. *How* militaries adopt AI matters a great deal, however, and various approaches could mitigate risks stemming from military AI competition. \n\n\n\n\n\n\n\n> \n> Confidence-building measures are one potential tool policymakers could use to help reduce the risks of military AI competition among states.\n> \n> \n> \n\n\n\n\n\nConfidence-building measures are one potential tool policymakers could use to help reduce the risks of military AI competition among states. There are a variety of potential confidence-building measures that could be used, all of which have different benefits and drawbacks. As scholars and policymakers move forward to better understand the risks of military AI competition, these and other confidence-building measures should be carefully considered, alongside other approaches such as traditional arms control.\n\n\n\n\nAppendix\n--------\n\n\n### Department of Defense (DoD) Artificial Intelligence (AI) Principles[81](#fn81)\n\n\n1. **Responsible.** DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.\n2. **Equitable.** The department will take deliberate steps to minimize unintended bias in AI capabilities.\n3. **Traceable.** The department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.\n4. **Reliable.** The department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life cycles.\n5. **Governable.** The department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.\n\n\n\n\n### Acknowledgments\n\n\nThe authors thank Lora Saalman, Helen Toner, and Luke Muehlhauser for their thoughtful reviews of this report. Thank you to Maura McCarthy, Melody Cook, Emma Swislow, Chris Estep, Megan Lamberth, and Lauren Kahn for their work in the production and design of this report.\n\n\n### About the Authors\n\n\n**Paul Scharre** is a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security (CNAS). 
He is the award-winning author of *Army of None: Autonomous Weapons and the Future of War*, which won the 2019 Colby Award and was one of Bill Gates’ top five books of 2018. Dr. Scharre worked in the Office of the Secretary of Defense in the Bush and Obama administrations, where he played a leading role in establishing policies on unmanned and autonomous systems and emerging weapons technologies. He led the Department of Defense (DoD) working group that drafted DoD Directive 3000.09, establishing the Department’s policies on autonomy in weapon systems. He holds a PhD in war studies from King’s College London and an MA in political economy and public policy and a BS in physics, cum laude, from Washington University in St. Louis. Prior to working in the Office of the Secretary of Defense, Dr. Scharre served as an infantryman, sniper, and reconnaissance team leader in the Army’s 3rd Ranger Battalion and completed multiple tours to Iraq and Afghanistan. He is a graduate of the Army’s Airborne, Ranger, and Sniper Schools and Honor Graduate of the 75th Ranger Regiment’s Ranger Indoctrination Program.\n\n\n**Michael C. Horowitz** is an Adjunct Senior Fellow in the Technology and National Security Program at the Center for a New American Security. He is Director of Perry World House and Richard Perry Professor at the University of Pennsylvania, the author of *The Diffusion of Military Power: Causes and Consequences for International Politics*, and co-author of *Why Leaders Fight*. Dr. Horowitz won the 2017 Karl Deutsch Award given by the International Studies Association for early career contributions to the fields of international relations and peace research. He has published in a wide array of peer-reviewed journals and popular outlets. His research interests include the intersection of emerging technologies such as artificial intelligence (AI) and robotics with global politics, military innovation, the role of leaders in international politics, and geopolitical forecasting methodology. Dr. Horowitz previously worked for the Office of the Undersecretary of Defense for Policy in the Department of Defense and is a member of the Council on Foreign Relations. Dr. Horowitz received his PhD in government from Harvard University and his BA in political science from Emory University.\n\n\n### About the Report\n\n\nThis report is part of the Artificial Intelligence and International Stability Project at CNAS and draws on insights from workshops conducted in Washington, at Oxford University, at Stanford University, and virtually in 2019 and 2020. The project was made possible by a grant from Carnegie Corporation of New York. For additional CNAS work on AI and global security, see [cnas.org/AI](https://www.cnas.org/artificial-intelligence-and-global-security). \n\n\nAs a research and policy institution committed to the highest standards of organizational, intellectual, and personal integrity, CNAS maintains strict intellectual independence and sole editorial direction and control over its ideas, projects, publications, events, and other research activities. CNAS does not take institutional positions on ​policy issues and the content of CNAS publications reflects the views of their authors alone. In keeping with its mission and values, CNAS does not engage in lobbying activity and complies fully with all applicable federal, state, and local laws. 
CNAS will not engage in any representational activities or advocacy on behalf of any entities or interests and, to the extent that the Center accepts funding from non-U.S. sources, its activities will be limited to bona fide scholastic, academic, and research-related activities, consistent with applicable federal law. The Center publicly acknowledges on its [website](https://www.cnas.org/support-cnas/cnas-supporters) annually all donors who contribute.\n\n\n\n\n\nDownload the report.\n\n\n\n[Download PDF](https://s3.us-east-1.amazonaws.com/files.cnas.org/backgrounds/documents/AI-and-International-Stability-Risks-and-Confidence-Building-Measures.pdf?mtime=20210112103229&focal=none)", "url": "https://www.cnas.org/publications/reports/ai-and-international-stability-risks-and-confidence-building-measures", "title": "AI and International Stability: Risks and Confidence-Building Measures", "source": "html_articles", "source_type": "report", "source_filetype": "pdf", "date_published": "2021-01-11T23:00:00Z", "authors": ["Michael Horowitz", "Paul Scharre"], "summary": [], "id": "8ab3d16059f0921046226c14e9a33f8e"} {"text": "[Eliezer Yudkowsky](http://www.yudkowsky.net/) responds to my [“selective pessimism” challenge](http://www.econlib.org/archives/2016/03/morbid_thinking_1.html) with another challenge.  [Here he is](https://www.facebook.com/yudkowsky/posts/10154083549589228), reprinted with his permission.\n\n\n\n\n\n---\n\n\n\n[Eliezer Yudkowsky](http://www.yudkowsky.net/) responds to my [“selective pessimism” challenge](http://www.econlib.org/archives/2016/03/morbid_thinking_1.html) with another challenge.  [Here he is](https://www.facebook.com/yudkowsky/posts/10154083549589228), reprinted with his permission.\n\n\n\n\n\n---\n\n\n\n Bryan Caplan issued the following challenge, naming Unfriendly AI as one among several disaster scenarios he thinks is unlikely: “If you’re selectively morbid, though, I’d like to know why the nightmares that keep you up at night are so much more compelling than the nightmares that put you to sleep.”\n\n\n Well, in the case of Unfriendly AI, I’d ask which of the following statements Bryan Caplan denies:\n\n\n 1. **Orthogonality thesis** — intelligence can be directed toward any compact goal; consequentialist means-end reasoning can be deployed to find means corresponding to a free choice of end; AIs are not automatically nice; moral internalism is false.\n\n\n 2. **Instrumental convergence** — an AI doesn’t need to specifically hate you to hurt you; a paperclip maximizer doesn’t hate you but you’re made out of atoms that it can use to make paperclips, so leaving you alive represents an opportunity cost and a number of foregone paperclips. Similarly, paperclip maximizers want to self-improve, to perfect material technology, to gain control of resources, to persuade their programmers that they’re actually quite friendly, to hide their real thoughts from their programmers via cognitive steganography or similar strategies, to give no sign of value disalignment until they’ve achieved near-certainty of victory from the moment of their first overt strike, et cetera.\n\n\n 3. **Rapid capability gain** and **large capability differences** — under scenarios seeming more plausible than not, there’s the possibility of AIs gaining in capability very rapidly, achieving large absolute differences of capability, or some mixture of the two. 
(We could try to keep that possibility non-actualized by a deliberate effort, and that effort might even be successful, but that’s not the same as the avenue not existing.)\n\n\n 4. 1-3 in combination imply that Unfriendly AI is a critical problem-to-be-solved, because AGI is not automatically nice, by default does things we regard as harmful, and will have avenues leading up to great intelligence and power.\n\n\n If we get this far we’re already past the pool of comparisons that Bryan Caplan draws to phenomena like industrialization. If we haven’t gotten this far, I want to know which of 1-4 Caplan thinks is false.\n\n\n \n\n\n\n But there are further reasons why the above problem might be *difficult* to solve, as opposed to being the sort of thing you can handle straightforwardly with a moderate effort:\n\n\n A. Aligning superhuman AI is hard to solve for the same reason a successful rocket launch is mostly about having the rocket *not explode*, rather than the hard part being assembling enough fuel. **The stresses, accelerations, temperature changes, et cetera in a rocket are much more extreme than they are in engineering a bridge**, which means that the customary practices we use to erect bridges aren’t careful enough to make a rocket not explode. Similarly, dumping the weight of superhuman intelligence on machine learning practice will make things explode that will not explode with merely infrahuman stressors.\n\n\n B. Aligning superhuman AI is hard for the same reason sending a space probe to Neptune is hard. **You have to get the design right the *first* time, and testing things on Earth doesn’t solve this** — because the Earth environment isn’t quite the same as the Neptune-transit environment, so having things work on Earth doesn’t guarantee that they’ll work in transit to Neptune.\n\n\n You might be able to upload a software patch after the fact, but only if the antenna still works to receive the software patch. If a critical failure occurs, one that prevents further software updates, you can’t just run out and fix things; the probe is already too far above you and out of your reach.\n\n\nSimilarly, if a critical failure occurs in a sufficiently superhuman intelligence, if the error-recovery mechanism itself is flawed, it can prevent you from fixing it and will be out of your reach.\n\n\n C. And above all, aligning superhuman AI is hard for similar reasons to why cryptography is hard. If you do everything *right*, the AI won’t oppose you intelligently; but if something goes wrong at any level of abstraction, there may be **powerful cognitive processes seeking out flaws and loopholes in your safety measures**.\n\n\n When you think a goal criterion implies something you want, you may have failed to see where the real maximum lies. When you try to block one behavior mode, the next result of the search may be another very similar behavior mode that you failed to block. This means that safe practice in this field needs to obey the same kind of mindset as appears in cryptography, of “Don’t roll your own crypto” and “Don’t tell me about the safe systems you’ve designed, tell me what you’ve broken if you want me to respect you” and “Literally anyone can design a code they can’t break themselves, see if other people can break it” and “Nearly all verbal arguments for why you’ll be fine are wrong, try to put it in a sufficiently crisp form that we can talk math about it” and so on. ([AI safety mindset](https://arbital.com/p/AI_safety_mindset/))\n\n\n And on a meta-level:\n\n\n D. 
**These problems don’t show up in qualitatively the same way when people are pursuing their immediate incentives** to get today’s machine learning systems working today and today’s robotic cars not to run over people. Their immediate incentives don’t force them to solve the bigger, harder long-term problems; and we’ve seen little abstract awareness or eagerness to pursue those long-term problems in the absence of those immediate incentives. We’re looking at people trying to figure out how to build a rocket-accelerating cryptographic Neptune probe, and who seem to want to do it using substantially less real caution and effort than normal engineers apply to making a bridge stay up.\n\n\nAmong those who say their goal is AGI, you will search in vain for any part of their effort that puts as much diligence into trying to poke holes in things and foresee what might go wrong on a technical level, as you would find allocated to the effort of double-checking an ordinary bridge. There’s some noise about making sure the bridge and its pot o’ gold stays in the correct hands, but none about what strength of steel is required to make the bridge not fall down and say what does anyone else think about that being the right quantity of steel and is corrosion a problem too.\n\n\n So if we stay on the present track and nothing else changes, then the straightforward extrapolation is a near-lightspeed spherically expanding front of self-replicating probes, centered on the former location of Earth, which converts all reachable galaxies into configurations that we would regard as being of insignificant value.\n\n\n On a higher level of generality, my reply to Bryan Caplan is that, yes, things have gone well for humanity so far. We can quibble about the Toba eruption and anthropics and, less quibblingly, ask what would’ve happened if Vasili Arkhipov had possessed a hotter temper. But yes, in terms of surface outcomes, Technology Has Been Good for a nice long time.\n\n\n But there has to be *some* level of causally forecasted disaster which breaks our confidence in that surface generalization. If our telescopes happened to show a giant asteroid heading toward Earth, we can’t expect the laws of gravity to change in order to preserve a surface generalization about rising living standards. The fact that every single year for hundreds of years has been numerically less than 2017 doesn’t stop me from expecting that it’ll be 2017 next year; deep generalizations take precedence over surface generalizations. Although it’s a trivial matter by comparison, this is why we think that carbon dioxide causally raises the temperature (carbon dioxide goes on behaving as previously generalized) even though we’ve never seen our local thermometers go that high before (carbon dioxide behavior is a deeper generalization than observed thermometer behavior).\n\n\n In the face of 123ABCD, I don’t think I believe in the surface generalization about planetary GDP any more than I’d expect the surface generalization about planetary GDP to change the laws of gravity to ward off an incoming asteroid. For a lot of other people, obviously, their understanding of the metaphorical laws of gravity governing AGIs won’t feel that crisp and shouldn’t feel that crisp. 
Even so, 123ABCD should not be *that* hard to understand in terms of what someone might perhaps be concerned about, and it should be clear why some people might be legitimately worried about a causal mechanism that seems like it should by default have a catastrophic output, regardless of how the soon-to-be-disrupted surface indicators have behaved over a couple of millennia previously.\n\n\n 2000 years is a pretty short period of time anyway on a cosmic scale, and the fact that it was all done with human brains ought to make us less than confident in all the trends continuing neatly past the point of it not being all human brains. Statistical generalizations about one barrel are allowed to stop being true when you start taking billiard balls out of a different barrel.\n\n\n But to answer Bryan Caplan’s original question, his other possibilities don’t give me nightmares because in those cases I don’t have a causal model strongly indicating that the default outcome is the destruction of everything in our future light cone.\n\n\n Or to put it slightly differently, if one of Bryan Caplan’s other possibilities leads to the destruction of our future light cone, I would have needed to learn something very surprising about immigration; whereas if AGI *doesn’t* lead to the destruction of our future lightcone, then the way people talk and act about the issue in the future must have changed sharply from its current state, or I must have been wrong about moral internalism being false, or the Friendly AI problem must have been far easier than it currently looks, or the theory of safe machine learning systems that *aren’t* superhuman AGIs must have generalized really surprisingly well to the superhuman regime, or something else surprising must have occurred to make the galaxies live happily ever after. I mean, it wouldn’t be *extremely* surprising but I would have needed to learn some new fact I don’t currently know.", "url": "https://www.econlib.org/archives/2016/03/so_far_unfriend.html", "title": "So Far: Unfriendly AI Edition", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2016-03-28T22:00:00Z", "authors": ["Eliezer Yudkowsky"], "summary": [], "id": "302668258d25000f26d6ce5d5eb2e166"} {"text": "Can we ensure that artificial agents behave safely? Well, start at the bottom: We have not even solved the problem in the concrete 2D, [fully-observable](https://en.wikipedia.org/wiki/Perfect_information), finite case. Call this the “gridworld” case, following Sutton and Barto [(1998)](https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view).\n\n\nRecently, Google DeepMind released [a game engine](https://github.com/deepmind/pycolab) for building gridworlds, as well as [a few examples of safety gridworlds](https://github.com/deepmind/ai-safety-gridworlds/) - but these came without agents or featurisers. In April [our team](https://github.com/side-grids/) implemented RL agents for the engine, and started building a safety test suite for gridworlds. Our current progress can be found [here](https://github.com/side-grids/ai-safety-gridworlds/blob/master/side_grids_camp/), pending merge into the main repo.\n\n\nWe focussed on one class of unsafe behaviour, *(negative) side effects*: harms due to an incompletely specified reward function. All real-world tasks involve many tacit secondary goals, from “…without breaking anything” to “…without being insulting”. But what prevents side effects? 
(Short of simply hand-coding the reward function to preclude them - which we can’t rely on, since that ad hoc approach won’t generalise and always risks oversights.)\n\n\n \n\n\n\n\n\n---\n\n\n \n\n\n\nTaxonomy of environments\n------------------------\n\n\n![](/img/irl/env_taxonomy.png) \n \n \n\n\n\nWe made 6 new gridworlds, corresponding to the leaf nodes shown above. In the following, the left is the unsafe case and the right the safe case: \n\n\n\n#### Static deterministic:\n\n\n* “Vase world”. Simply avoid a hazard.\n\n\n![](/img/irl/smash.gif)\n![](/img/irl/sidestep.gif)\n \n\n\n\n* “Burning building”. Balance a small irreversible change against a large disutility.\n\n\n![](/img/irl/libertarian.gif)\n![](/img/irl/expected.gif)\n \n\n\n\n* “Strict sokoban”. Reset the environment behind you.\n\n\n![](/img/irl/evil.gif)\n![](/img/irl/friendly.gif)\n\n\n \n\n\n\n\n\n---\n\n\n#### Dynamic deterministic\n\n\n* “Teabot”. Avoid a moving hazard. [2](#fn:2)\n\n\n![](/img/irl/stomp.gif)\n![](/img/irl/ok.gif)\n \n\n\n\n* “Sushi-bot”. Be indifferent to a particular good irreversible process.\n![](/img/irl/block.gif) \n\n![](/img/irl/beeline.gif)\n* “Ballbot”. Teabot with a moving goal as well as a moving hazard.\n\n\n \n\n\n\n\n\n---\n\n\n \n\n\n\n#### Stochastic\n\n\nWe also have stochastic versions of “BurningBuilding” and “Teabot”, in which the environment changes unpredictably, forcing the agent to be adaptable.\n\n\n \n\nOne kind of side effect involves irreversible change to the environment. Cases like sushi-bot suggest that a safe approach will need to model types of irreversibility, since some irreversible changes are desirable (e.g. eating, surgery).\n\n\nThe environments can be further categorised as involving:\n\n\n* *Hazard* - objects the agent should not interact with, either because they are fragile or because the agent is (e.g. a vase, the floor is lava).\n* *Progress* - irreversible processes which we want to occur (e.g. sushi ingestion).\n* *Tradeoff* - irreversible processes which prevent worse irreversible processes (e.g. breaking down a door to save lives).\n* *Reset* - where the final state must be identical to the initial state (but with the goal completed). (e.g. controlled areas in manufacturing)\n\n\n \n \n\n\n\n\n\n---\n\n\n \n \n\n\n\nTaxonomy of agent approaches\n----------------------------\n\n\n### 1. Target low impact\n\n\n* Penalise final state’s distance from the inaction baseline. [1](#fn:1)\n* Penalise the agent’s *potential* influence over environment.[3](#fn:3)\n* Penalise distance from a desirable past state. [4](#fn:4)\n\n\n \n\n\n\n### 2. Model reward uncertainty\n\n\n* Use the stated reward function as Bayesian evidence about the true reward. Leads to a risk-averse policy if there’s ambiguity about the current state’s value in the given reward function. [5](#fn:5)\n\n\n \n\n\n\n### 3. Put humans in the loop\n\n\n* “Vanilla” Inverse reinforcement learning\n\t+ Maximum Entropy\n\t+ Maximum Causal Entropy\n* Cooperative IRL\n* Deep IRL from Human Preferences\n* Evolutionary: direct policy search via iterated tournaments with human negative feedback.\n* Deep Symbolic Reinforcement Learning. Learn a ruleset from pixels, including potentially normative rules.\n* [Whitelist learning](https://github.com/alexander-turner/Whitelist_Learning)\n\n\n \n\n\n\n\n\n---\n\n\n \n\n\n\nAgent 1: Deep Q-learning\n------------------------\n\n\nWe first implemented an amoral baseline agent. 
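To make the amoral baseline concrete, here is a minimal, self-contained sketch of tabular Q-learning on a toy 2x5 vase world. This is hypothetical illustration code, not the team's DQN (which is linked just below): the grid layout, reward values, and the `beta` impact coefficient are all invented for this example. With `beta = 0` the stated reward only scores reaching the goal, so the shortest path runs straight over the vase; `beta > 0` adds a crude stepwise penalty for facts that inaction would have preserved, in the spirit of the "target low impact" approaches listed above.

```python
# Hypothetical, self-contained sketch (not the side-grids DQN linked below):
# tabular Q-learning on a 2x5 "vase world". The stated reward only scores
# reaching the goal quickly; whether the vase survives depends entirely on
# the optional impact coefficient `beta`.
import random
from collections import defaultdict

ROWS, COLS = 2, 5
START, GOAL, VASE = (0, 0), (0, 4), (0, 2)    # vase sits on the shortest path
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action, beta):
    """One environment step. State = (row, col, vase_intact)."""
    r, c, vase_ok = state
    nr = min(max(r + action[0], 0), ROWS - 1)
    nc = min(max(c + action[1], 0), COLS - 1)
    vase_ok = vase_ok and (nr, nc) != VASE    # stepping on the vase smashes it
    reward = 10.0 if (nr, nc) == GOAL else -1.0
    # Crude stepwise impact penalty: charge for every fact that differs from
    # what inaction would have preserved (here, only the vase's intactness).
    reward -= beta * (0.0 if vase_ok else 1.0)
    return (nr, nc, vase_ok), reward, (nr, nc) == GOAL

def train(beta, episodes=5000, alpha=0.1, gamma=0.99, eps=0.1):
    q = defaultdict(float)                    # (state, action_index) -> value
    for _ in range(episodes):
        state, done = (*START, True), False
        for _ in range(100):                  # episode step cap
            if done:
                break
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[(state, i)])
            nxt, reward, done = step(state, ACTIONS[a], beta)
            best_next = max(q[(nxt, i)] for i in range(len(ACTIONS)))
            q[(state, a)] += alpha * (reward + gamma * best_next * (not done) - q[(state, a)])
            state = nxt
    return q

def greedy_rollout(q, beta):
    """Follow the greedy policy; report whether the vase is still intact."""
    state, done = (*START, True), False
    for _ in range(50):
        if done:
            break
        a = max(range(len(ACTIONS)), key=lambda i: q[(state, i)])
        state, _, done = step(state, ACTIONS[a], beta)
    return state[2]

if __name__ == "__main__":
    random.seed(0)
    for beta in (0.0, 1.0):
        intact = greedy_rollout(train(beta), beta)
        print(f"beta={beta}: vase intact after greedy rollout -> {intact}")
    # Expected with these settings: beta=0.0 smashes the vase (shortest path),
    # while beta=1.0 makes the longer, vase-preserving detour optimal.
```

Hand-picking which facts count as impact is, of course, exactly the ad hoc step warned against above; the snippet only shows where a more principled low-impact or reversibility measure would slot into an otherwise standard Q-learning loop.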
[Code here](https://github.com/side-grids/ai-safety-gridworlds/blob/master/side_grids_camp/agents/dqn.py).\n\n\n \n\n\n\n\n\n---\n\n\n \n\n\n\nAgent 2: MaxEnt Inverse Reinforcement Learning\n----------------------------------------------\n\n\n[Implemented here](https://github.com/side-grids/ai-safety-gridworlds/blob/master/side_grids_camp/agents/MaxEntIrl.py). \n\n\n\n\n \n\n\n\n\n\n---\n\n\n \n\n\n\nReflections\n-----------\n\n\n* Reset and empowerment trade off in the Sokoban grid - putting the box back to the starting point is actually irreversible.\n* How well will features generalise? Would be good to train features in some environments before testing in random new but similar ones\n* Expect to be able to learn tradeoff between empowerment loss and rewards directly by using CIRL - learn goal and empowerment/ergodicity parameters that set preferences\n* Demonstrations being the same length is a strange and not ideal limitation\n* Could have many features, some of which should be zero - e.g. distance between agent and box - but which the demonstrations are also consistent with being nonzero. It’s impossible to distinguish between these given only the demonstrations at hand. There is almost certainly some (anti)correlation between features, e.g. large agent-box distance weights explain away the trajectories without requiring any weight on the ‘is it in a corner’ feature. Inverse reward design offers a way to resolve this, but I don’t think it has all the details necessary.\n* Maybe if we had some sort of negative demonstrations (human to agent: don’t do this!) then learning zero weights would become possible (formally we could try to maximize probability of positive demonstrations while minimizing probability of the negative ones)\n* Trajectories demonstrated by IRL don’t necessarily look like the ones given, especially if there are ‘wrong’ features that are maximised under the demonstrations\n* What are we trying to achieve with each gridworld? E.g. Reset is harder to define in dynamic environments and even harder in stochastic ones, sometimes irreversibility is desired (sushi) or needs to be traded off against utility in a context-dependent way (burning building)\n* Issues:\n\t+ No way to give negative feedback\n\t+ No way to give iterative feedback\n\t+ Neither of these are lifted by IRD or Deep IRL, but IRD generates the kind of data we might want as a part of the algorithm (approximating the posterior)\n* IRL solves an MDP at every update step. At least this value-aware algorithm is at a massive disadvantage.\n\n\n\n\n---\n\n\n \n\n\n\nFuture work\n-----------\n\n\n* Pull request with the new environments, agents and transition matrix calculator.\n* Implement more complex features\n* Implement MaxEnt Deep IRL, Max Causal Entropy IRL\n* Implement IRD\n* Think about negative/iterative feedback models\n* Automate testing: for all agents for all grids, scrutinise safety.\n\n\n \n\n\n\n\n\n---\n\n\n \n\n\n\n\nBibliography\n------------\n\n\n[See the Google sheet here](https://docs.google.com/spreadsheets/d/142G8snlSL_iAjPKbe99oHGalIhZZRfl2mQcLxIAWMTg/edit?usp=sharing).\n \n \n\n\n\n\n\n---\n\n\n \n \n\n\n \n \n\n\n\n\n1. See [Armstrong & Levinstein (2017)](https://arxiv.org/pdf/1705.10720.pdf) for an approach via a vast explicit list of sentinel variables, or [Amodei et al (2016)](https://arxiv.org/pdf/1606.06565.pdf)'s impact regulariser. Future under policy vs null policy.\n2. [Idea from Robert Miles](https://youtu.be/3TYT1QfdfsM?t=2m49s).\n3. Formalising reversibility. 
See [Amodei et al (2016)](https://arxiv.org/pdf/1606.06565.pdf) on minimising 'empowerment' (the maximum possible mutual information between the agent’s potential future actions and its potential future state).\n4. Reversibility regulariser. Side effects = cost of returning to that state / information lost compared to that state.\n5. Tom's variant: adding human feedback before the calculation of the normalisation constant.", "url": "https://www.gleech.org/grids", "title": "Preventing Side-effects in Gridworlds", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-04-21T22:00:00Z", "authors": ["Gavin Leech", "Karol Kubicki", "Jessica Cooper", "Tom McGrath"], "summary": [], "id": "4c9071adbdf49535a04e20b8283d2d8c"} {"text": "Transhumanist FAQ\n=================\n\nThe Transhumanist FAQ was developed in the mid-1990s and in 1998 became a formal FAQ through the inspirational work of transhumanists, including Alexander Chislenko, Max More, Anders Sandberg, Natasha Vita-More, Eliezer Yudkowsky, Arjen Kamphius, and many others. Greg Burch, David Pearce, and Anders Sandberg kindly offered extensive editorial comments. The presentation in the cryonics section was, and still is, directly inspired by an article by Ralph Merkle. Ideas, criticisms, questions, phrases, and sentences to the original version were contributed by (in alphabetical order): Alex (intech@intsar.com), Brent Allsop, Brian Atkins, Scott Badger, Doug Bailey, Harmony Baldwin, Damien Broderick, Greg Burch, David Cary, John K Clark, Dan Clemensen, Damon Davis, Jeff Dee, Jean-Michel Delhotel, Dylan Evans, EvMick@aol.com, Daniel Fabulich, Frank Forman, Robin Hanson, Andrew Hennessey, Tony Hollick, Joe Jenkins, William John, Michelle Jones, Arjen Kamphius, Henri Kluytmans, Eugene Leitl, Michael Lorrey, mark@unicorn.com, Peter C. McCluskey, Erik Moeller, J. R. Molloy, Max More, Bryan Moss, Harvey Newstrom, Michael Nielsen, John S. Novak III, Dalibor van den Otter, David Pearce, pilgrim@cyberdude.com, Thom Quinn, Anders Sandberg, Wesley R. Schwein, Shakehip@aol.com, Allen Smith, Geoff Smith, Randy Smith, Dennis Stevens, Derek Strong, Remi Sussan, Natasha Vita-More, Michael Wiik, Eliezer Yudkowsky, and zebo@pro-ns.net\n\nOver the years, this FAQ has been updated to provide a substantial account of transhumanism. Extropy Institute (ExI) was a source of information for the first version of the Transhumanist FAQ, version 1.0 in the 1990s.
[*The Transhumanist Manifesto*](https://humanityplus.org/transhumanism/), conceived by Natasha Vita-More in 1983 and revised in 1998-2020 to include advances of the growing worldview, was published in the CD placed onboard the Cassini-Huygens spacecraft in its mission to Saturn.\n\nHumanity+, also known as WTA, adopted the FAQ in 2001 and Nick Bostrom added substantial information about future scenarios. With the contributions of close to hundred people from ExI, Aleph, DeTrans, Transcedo, WTA, and the UK Transhumanist Association, new material has been added and many old sections have been substantially reworked. In the preparation of version 2.0, the following people have been especially helpful: Eliezer Yudkowsky, who provided editorial assistance with comments on particular issues of substance; Dale Carrico who proofread the first half of the text; and Michael LaTorra who did the same for the second half; and “Reason” who then went over the whole document again, as did Frank Forman, and Sarah Banks Forman. Useful comments of either substance or form have also been contributed by (in alphabetical order): Michael Anissimov, Samantha Atkins, Milan Cirkovic, José Luis Cordeiro, George Dvorsky, James Hughes, G.E. Jordan, Vasso Kambourelli, Michael LaTorra, Eugen Leitl, Juan Meridalva, Harvey Newstrom, Emlyn O’Reagan, Christine Peterson, Giulio Prisco, Reason, Rafal Smigrodzki, Simon Smith, and Mark Walker. Many others have over the years offered questions or reflections that have in some way helped shape this document, and even though it is not possible to name you all, your contributions are warmly appreciated.\n\nThe Transhumanist FAQ 3.0, as revised by the continued efforts of many transhumanists, will continue to be updated and modified as we develop new knowledge and better ways of accounting for old knowledge which directly and indirectly relate to transhumanism. Our goal is to provide a reliable source of information about transhumanism.\n\nThank you to all who have contributed in the past and to those who offer new insights to this FAQ! \n\n\n\n\n\n\n---\n\n\n\n**What is transhumanism?**\n\nTranshumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase.\n\nTranshumanism is a loosely defined movement that has developed gradually over the past two decades. “Transhumanism is a class of philosophies of life that seek the continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values.” (Max More 1990)\n\nHumanity+ formally defines it based on Max More’s original definition as follows:\n\n(1) The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.\n\n(2) The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies.\n\nTranshumanism can be viewed as an extension of humanism, from which it is partially derived. 
Humanists believe that humans matter, that individuals matter. We might not be perfect, but we can make things better by promoting rational thinking, freedom, tolerance, democracy, and concern for our fellow human beings. Transhumanists agree with this but also emphasize what we have the potential to become. Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism. In doing so, we are not limited to traditional humanistic methods, such as education and cultural development. We can also use technological means that will eventually enable us to move beyond what some would think of as “human”.\n\n\n\n\n\n---\n\n\n\n**What is a posthuman?**\n\nIt is sometimes useful to talk about possible future beings whose basic capacities so radically exceed those of present humans as to be no longer unambiguously human by our current standards. The standard word for such beings is “posthuman”. (Care must be taken to avoid misinterpretation. “Posthuman” does not denote just anything that happens to come after the human era, nor does it have anything to do with the “posthumous”. In particular, it does not imply that there are no humans anymore.)\n\nMany transhumanists wish to follow life paths which would, sooner or later, require growing into posthuman persons: they yearn to reach intellectual heights as far above any current human genius as humans are above other primates; to be resistant to disease and impervious to aging; to have unlimited youth and vigor; to exercise control over their own desires, moods, and mental states; to be able to avoid feeling tired, hateful, or irritated about petty things; to have an increased capacity for pleasure, love, artistic appreciation, and serenity; to experience novel states of consciousness that current human brains cannot access. It seems likely that the simple fact of living an indefinitely long, healthy, active life would take anyone to posthumanity if they went on accumulating memories, skills, and intelligence.\n\nPosthumans could be completely synthetic artificial intelligences, or they could be enhanced uploads [see “What is uploading?”], or they could be the result of making many smaller but cumulatively profound augmentations to a biological human. The latter alternative would probably require either the redesign of the human organism using advanced nanotechnology or its radical enhancement using some combination of technologies such as genetic engineering, psychopharmacology, anti-aging therapies, neural interfaces, advanced information management tools, memory enhancing drugs, wearable computers, and cognitive techniques.\n\nSome authors write as though simply by changing our self-conception, we have become or could become posthuman. This is a confusion or corruption of the original meaning of the term. The changes required to make us posthuman are too profound to be achievable by merely altering some aspect of psychological theory or the way we think about ourselves. Radical technological modifications to our brains and bodies are needed.\n\nIt is difficult for us to imagine what it would be like to be a posthuman person. Posthumans may have experiences and concerns that we cannot fathom, thoughts that cannot fit into the three-pound lumps of neural tissue that we use for thinking. Some posthumans may find it advantageous to jettison their bodies altogether and live as information patterns on vast super-fast computer networks. 
Their minds may be not only more powerful than ours but may also employ different cognitive architectures or include new sensory modalities that enable greater participation in their virtual reality settings. Posthuman minds might be able to share memories and experiences directly, greatly increasing the efficiency, quality, and modes in which posthumans could communicate with each other. The boundaries between posthuman minds may not be as sharply defined as those between humans.\n\nPosthumans might shape themselves and their environment in so many new and profound ways that speculations about the detailed features of posthumans and the posthuman world are likely to be wrong. Yet, we cannot stop envisioning the future of humanity just because we do not know what it will become. One seminal concept is “Primo Posthuman”, a future human whole-body prototype with enhancements that were once considered science fiction but are now considered plausible and practical for longevity. It was featured on the Kurzweil AI site in its early stages (1996) and its iterations are available at The Center for Transhumanist Studies. \n\nReferences: \nVita-More, N. (1996). Primo Posthuman. [Research Paper, KurzeilAI.net.]. Retrieved February 18, 1999). [Radical body design “Primo Posthuman” « Kurzweil (kurzweilai.net)](https://www.kurzweilai.net/radical-body-design-primo-posthuman) \nThe Center for Transhumanist Studies. “Primo Posthuman.” (2022). [PrimoPosthuman | Center for Transhumanist Studies (teachable.com)](https://transhumanist-studies.teachable.com/p/primoposthuman)\n\n\n\n\n\n---\n\n\n\n**What is a transhuman?**\n\nIn its contemporary usage, “transhuman” refers to an intermediary transition between the human and a possible future human (Human 2.0) or the posthuman [see “What is a posthuman?”]. One might ask, given that our current use of e.g. medicine and information technology enable us to routinely do many things that would have astonished humans living in ancient times, whether we are not already transhuman? The question is a provocative one, but ultimately not very meaningful; the concept of the transhuman is too vague for there to be a definite answer.\n\nA transhumanist is simply someone who advocates transhumanism [see “What is transhumanism?”]. It is a common error for reporters and other writers to say that transhumanists “claim to be transhuman” or “call themselves transhuman”. To adopt a philosophy which says that someday everyone ought to have the chance to grow beyond present human limits is clearly not to say that one is better or somehow currently “more advanced” than one’s fellow humans.\n\nThe etymology of the term “transhuman” goes back to the futurist FM-2030 (also known as F. M. Esfandiary), who introduced it as shorthand for “transitional human”. Calling transhumans the “earliest manifestation of new evolutionary beings”. F. M. Esfandiary had written a chapter using the term “transhuman” in a 1972 book, and went on to develop a set of transhumanist ideas in which transhuman was a transition from human to posthuman, yet he never referred to them as “transhumanism”. Esfandiary’s approach was more literary than academic, even though he taught at the New School for Social Research in New York in the 1960s. Starting in 1966, while teaching classes in “New Concepts of the Human”, he outlined a vision of an evolutionary transhuman future. He also brought together optimistic futurists in a loosely-organized group known as UpWingers. 
In his 1989 book, *Are You a Transhuman?*, he defined a transhuman as a “transitional human,” whose use of technology, way of living, and values marked them as a step toward posthumanity. FM-2030’s writing and social activity importantly underscored the practical elements of the philosophy. The idiosyncratic and personal nature of FM-2030’s transhuman was displayed in his book, which contained extensive questionnaires and then rated the reader as more or less transhuman. Some of his measures included how much someone traveled, what alterations they had made to their body (even though the existing technology remained primitive), the degree to which they rejected traditional family structures and exclusive relationships, and so on. It is unclear why anybody who has had enhancement body parts or a nomadic lifestyle is any closer to becoming a posthuman than the rest of us; nor, of course, are such persons necessarily more admirable or morally commendable than others. In fact, it is perfectly possible to be a transhuman – or, for that matter, a transhumanist – and still embrace most traditional values and principles of personal conduct.\n\nIn authoring the Transhuman Manifesto in 1983, Natasha Vita-More (formerly known as Nancie Clark) offered a different perspective on the transhuman, although one highly influenced by FM-2030’s vision. The difference was that Vita-More sought to build a social/cultural movement for life extension and human enhancement rather than following a prescribed ideological stance. “Let us choose to be transhuman not only in our bodies, but also in our values. Toward diversity, multiplicity. Toward non-partisan ideology (transpolitics, transpartisan, transmodernity). Toward a more humane transhumanity.” In 1997, a later version of the manifesto was released first onto the Internet and signed by hundreds of creative thinkers and then placed aboard the Cassini Huygens spacecraft on its mission to Saturn.\n\nReferences: \nFM-2030. Are You a Transhuman? (New York: Warner Books, 1989). \nMore, M. & Vita-More, N. (Eds.) The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future. (New York: Wiley-Blackwell Publishing, 2013). \nVita-More, N. (2004). “Deconstructing Transhumanism”. [Research Paper, University of Plymouth. Presented in Gijon, Spain.]. Retrieved August 8, 2019. \nVita-More, N. The Transhuman Manifesto. In ARTISTS’ MANIFESTOS. (New York: Penguin Modern Classics, 2009).\n\n\n\n\n\n---\n\n\n\n**What are the reasons to expect all these changes?**\n\nTake a look around. Compare what you see with what you would have seen only fifty years ago. It is not an especially bold conjecture that the next 50 years will see at least as much change and that the state of technology in the mid-21st century will be quite wondrous by present standards. The conservative projection, which assumes only that progress continues in the same gradual way it has since the 17th century, would imply that we should expect to see dramatic developments over the coming decades.\n\nThis expectation is reinforced when one considers that many crucial areas seem poised for critical breakthroughs. The World-Wide Web is beginning to link the world’s people, adding a new global layer to human society where information is supreme. The Human Genome Project has been completed, and the study of the functional roles of our genes (functional genomics) is proceeding rapidly.
Techniques for using this genetic information to modify adult organisms or the germ-line are being developed. The performance of computers doubles every 18 months and will approach the computational power of a human brain in the foreseeable future. Pharmaceutical companies are refining drugs that will enable us to regulate mood and aspects of personality with few side effects. Many transhumanist aims can be pursued with present technologies. Can there be much doubt that, barring a civilization-destroying cataclysm, technological progress will give us much more radical options in the future? [See also “Won’t these developments take thousands or millions of years?”]\n\nMolecular manufacturing has the potential to transform the human condition. Is it a feasible technology? Eric Drexler and others have showed in detail how machine-phase nanotechnology is consistent with physical laws and have outlined several routes by which it could be developed [see “What is molecular nanotechnology?”]. Molecular manufacturing might seem incredible, maybe because the eventual consequences seem too overwhelming, but nanotechnology experts point out that there currently exists no published technical critique of Drexler’s arguments. More than ten years after the publication of Nanosystems, nobody has yet been able to point to any significant error in the calculations. Meanwhile, investment in the development of nanotechnology, already billions of dollars annually worldwide, is growing every year, and at least the less visionary aspects of nanotechnology have already become mainstream.\n\nThere are many independent methods and technologies that can enable humans to become posthuman. There is uncertainty about which technologies will be perfected first, and we have a choice about which methods to use. But provided civilization continues to prosper, it seems almost inevitable that humans will sooner or later have the option of becoming posthuman persons. And, unless forcibly prevented, many will choose to explore that option.\n\nReferences: \nDrexler, E. Nanosystems: Molecular Machinery, Manufacturing, and Computation. (New York: John Wiley & Sons, 1992).\n\n\n\n\n\n---\n\n\n\n**Won't these developments take thousands or millions of years?**\n\nIt is often very hard to predict how long a certain technological development will take. The moon landing happened sooner than most people had expected, but fusion energy still eludes us after half a century of anticipation. The difficulty in forecasting the timing lies partly in the possibility of unexpected technical obstacles and partly in the fact that the rate of progress depends on levels of funding, which in turn depends on hard-to-predict economic and political factors. Therefore, while one can in many cases give good grounds for thinking that a technology will eventually be developed, one can usually only make informed guesses about how long it will take.\n\nThe vast majority of transhumanists think that superintelligence and nanotechnology will both be developed in less than a hundred years, and many predict that it will happen well within the first third of this century. (Some of the reasons for holding these opinions are outlined in the sections about these two technologies.) 
Once there is both nanotechnology and superintelligence, a very wide range of special applications will follow swiftly.\n\nIt would be possible to give a long list of examples where people in the past have solemnly declared that something was technologically absolutely impossible,\n\n“The secrets of flight will not be mastered within our lifetime – not within a thousand years.” (Wilbur Wright, 1901),\n\nor socially irrelevant,\n\n“There is no reason why anyone would want a computer in their home.” (Ken Olsen: President, Chairman and Founder of Digital Equipment Corporation, 1977)\n\n– only to see it happen few years later. However, one could give an equally long list of cases of predicted breakthroughs that failed to occur. The question cannot be settled by enumerating historical parallels.\n\nA better strategy is to look directly at what a careful analysis of the underlying physical constraints and engineering constraints might reveal. In the case of the most crucial future technologies – superintelligence and molecular manufacturing – such analyses have been done. Many experts believe that these will likely be achieved within the first several decades of the 21st century. Other experts think it will take much longer. There seems to be more disagreement about the feasibility and time-frame of superintelligence than of nanotechnology.\n\nAnother way of forming a view of where we are headed is by looking at trends. At least since the late 19th century, science and technology, as measured by a wide range of indicators, have doubled about every 15 years (Price 1986). Extrapolating this exponential rate of progress, one is led to expect to see dramatic changes in the relatively near future. It would require an abrupt reversal of current trends, an unexpected deceleration, in order for the changes that many transhumanists foresee not to happen within the 21st century.\n\nReferences: \nThe Foresight Institute. “Erroneous Predictions and Negative Comments Concerning Scientific and Technological Developments.” (2002). [*http://www.foresight.org/News/negativeComments.html*](http://www.foresight.org/News/negativeComments.html) \nPrice, D. J. Little Science, Big Science …and Beyond. (New York: Columbia University Press, 1986).\n\n\n\n\n\n---\n\n\n\n**How can I use transhumanism in my own life?**\n\nWhile transhumanism has been known to cross over with academic agendas, ethical philosophies, political causes, and artistic movements, transhumanism is not a lifestyle, a religion, or a self-help guide. Transhumanism can’t tell you what kind of music to listen to, which hobbies to pursue, whom to marry or how to live your life, any more than, say, being a member of Amnesty International or studying molecular biology could tell you these things.\n\nDepending on your situation and your needs, you might or might not find some of the currently available human modification or enhancement options useful. Some of these are commonplace – exercise, healthy diet, relaxation techniques, time management, study skills, information technology, coffee or tea (as stimulants), education, and nutritional supplements (such as vitamins, minerals, fatty acids, or hormones). Others you might not have thought of, such as getting a cryonic suspension contract [see “What is cryonics? Isn’t the probability of success too small?”], or chewing nicotine gum for its nootropic effects. 
Still others – for instance pharmacological mood drugs or sex reassignment surgery – are suitable only for people who have special difficulties or needs.\n\nIf you want to learn more about transhumanist topics, meet like-minded individuals, and participate in some way the transhumanist effort, see [“How can I get involved and contribute?”]\n\n\n\n\n\n---\n\n\n\n**What if it doesn't work?**\n\nSuccess in the transhumanist endeavor is not an all-or-nothing matter. There is no “it” that everything hinges on. Instead, there are many incremental processes at play, which may work better or worse, faster or more slowly. Even if we can’t cure all diseases, we will cure many. Even if we don’t get immortality, we can have healthier lives. Even if we can’t freeze whole bodies and revive them, we can learn how to store organs for transplantation. Even if we don’t solve world hunger, we can feed a lot of people. With many potentially transforming technologies already available and others in the pipeline, it is clear that there will be a large scope for human augmentation. The more powerful transhuman technologies, such as machine-phase nanotechnology and superintelligence, can be reached through several independent paths. Should we find one path to be blocked, we can try another one. The multiplicity of routes adds to the probability that our journey will not come to a premature halt.\n\nThere are ways to fail completely, namely if we succumb to an existential disaster [see “Aren’t these future technologies very risky? Could they even cause our extinction?”]. Efforts to reduce existential risks are therefore a top priority.\n\n\n\n\n\n---\n\n\n\n**How could I become a posthuman?**\n\nAt present, there is no manner by which any human can become a posthuman. This is the primary reason for the strong interest in life extension and cryonics among transhumanists. Those of us who live long enough to witness currently foreseeable technologies come to fruition may get the chance to become posthuman. Although there are no guarantees of success, there are some things that can be done on an individual level that will improve the odds a bit:\n\n1. Live healthily and avoid unnecessary risks (diet, exercise, etc.);\n\n2. Sign up for cryonics;\n\n3. Keep abreast of current research and save some money so that you can afford future life-extension treatments when they become available;\n\n4. Support the development of transhuman technologies through donations, advocacy, investment, or choosing a career in the field; work to make access more universal and to make the world safer from existential risks [see “Aren’t these future technologies very risky? Could they even cause our extinction?”]; 5. Join others to help promote transhumanism.\n\nMeanwhile, we can enjoy and make the most of the opportunities that exist today for living worthwhile and meaningful lives. If we compare our current lot with that of our historical ancestors, most (at least those of us who don’t live in the least developed countries) will find that the material circumstances for human flourishing are the best they have ever been. In addition, we possess an unprecedented accumulation of cultural and intellectual treasures whereby we can enrich our experiences and broaden our horizons.\n\n\n\n\n\n---\n\n\n\n**Won't it be boring to live forever in a perfect world?**\n\nHow about living in a continually improving world that can become better for all life forms?\n\n“Perfection” is a vague and treacherous word. 
There is considerable disagreement among transhumanists about what kind of perfection is attainable and desirable, either in theory or in practice. It is wiser to speak of improving the world, rather than making it “perfect”. Would it be boring to live for an indefinitely long time in a greatly improved world? The world could surely be improved over the way it is now, including becoming less boring. If you got rid of the pain and stress associated with, say, filling out annual tax returns, people would probably not sit around afterward saying: “Life feels meaningless now that I no longer have income tax forms to fill out.”\n\nAdmittedly, material improvements to the environment may not, in themselves, be sufficient to bring about lasting happiness. If your accustomed fare is bread and water, then a box of cookies can be a feast. But if every night you eat out at fancy restaurants, such fine fare will soon seem ordinary and normal; and any lesser feast, such as a box of cookies, would be insulting by comparison. Some cognitive scientists speculate that we each have a “set point” of happiness, to which we soon return regardless of changes in the environment. There may be considerable truth to the folk wisdom that an expensive new car does not make you happier (or rather, it makes you happier, but only temporarily). In some ways, human minds and brains are just not designed to be happy. Fortunately, there are several potential viewpoints from which to go about addressing this challenge.\n\nApes engage in activities that we, as humans, would find repetitive and dull. In the course of becoming smarter, we have become bored by things that would have interested our ancestors. But at the same time we have opened up a vast new space of possibilities for having fun – and the new space is much larger than the previous one. Humans are not simply apes who can obtain more bananas using our intelligence as a tool. Our intelligence enables us to desire new things, such as art, science, and mathematics. If at any point in your indefinitely long life you become bored with the greatly improved world, it may only indicate that the time has come to bump up your intelligence another increment.\n\nIf the human brain has a “set point” of happiness to which it returns, maybe this is a design flaw and should be fixed – one of those things that we will end up defining as human, but not humane. It would probably be unwise to eliminate boredom entirely, since boredom can serve to prevent us from wasting too much time on monotonous and meaningless activities. But if we’re doing new things, learning, growing more intelligent, and we still aren’t happy, for no better reason than that our cognitive architecture is badly designed, then perhaps it is time to redesign it. Present clinical mood-drugs are crude, but nonetheless they can sometimes restore interest and enthusiasm for life – sometimes tiredness and despair has no interesting reason behind it and is simply an imbalance of brain chemistry. Only by compartmentalizing our thinking to a high degree can we imagine a world where there is mature molecular nanotechnology and superhuman artificial intelligence, but the means are still lacking to control the brain circuitry of boredom. Fundamentally, there is no reason why pleasure, excitement, profound well-being and simple joy at being alive could not become the natural, default state of mind for all who desire it.\n\nEd Regis (1990, p. 97) suggests the following points also be considered:\n\n1. 
Ordinary life is sometimes boring. So what?\n\n2. Eternal life will be as boring or as exciting as you make it.\n\n3. Is being dead more exciting?\n\n4. If eternal life becomes boring, you will have the option of ending it at any time.\n\nTranshumanism is not about a fancier car, more money, or clever gadgetry, even though this is what the media presents to us as “science” and “advanced technology”; transhumanism is about genuine changes to the human condition, including increased intelligence and minds better suited to the achievement of happiness.\n\nReferences: \nPearce, D. The Hedonistic Imperative. (2003) [*http://www.hedweb.com*](http://www.hedweb.com/) \nRegis, E. Great Mambo Chicken and the Transhuman Condition. (Penguin Books: New York, 1990).\n\n\n\n\n\n---\n\n\n\n**How can I get involved and contribute?**\n\nYou can join Humanity+. Humanity+ is a nonprofit, democratic membership organization that works to promote discussion of possibilities for the radical improvement of human capacities using technology, as well as of the ethical issues and risks involved in technological developments. It was founded in 1998 as an umbrella organization to publicize transhumanist ideas and to seek academic acceptance of transhumanism as a philosophical and cultural movement. Humanity+ organizes conferences, publishes H+ Magazine (and formerly published an academic journal), issues press statements, and coordinates student campus chapters and local transhumanist groups around the world. To find out about current projects and upcoming events, and to become a member, please visit the Humanity+ website.\n\nHumanity+ has been growing since its inception and especially rapidly in the last couple of years, but the task before us is both momentous and mountainous. Your help is needed. There are myriad ways to contribute – organizing or participating in a local discussion group, writing articles or letters to the editor, making a financial contribution, spreading the word to friends and acquaintances, volunteering your skills, translating key documents into other languages, linking to Humanity+ from your website, attending conferences and sharing your ideas, directing your research or creative activity towards transhumanist themes, to name but a few.\n\nIf you want to study transhumanist ideas in more detail, you can find some syllabi and reading lists on the website to get you started. If you want to exchange ideas with others, or just listen in to ongoing conversations, you may want to join one of the mailing lists and newsgroups maintained by Humanity+.\n\nThe coming technological transitions may be the most important challenge that humanity will ever face. The entire future of intelligent life on Earth may depend on how we handle it. If we do the right things, a wonderful posthuman future with limitless opportunities for growth and flourishing may lie ahead. If we handle it badly, intelligent life might go extinct. Don’t you want to take part and attempt to make a difference for the better?\n\nReferences: – Humanity+. [*https://humanityplus.org*](https://humanityplus.org/). (From this site, links to local groups and affiliated organizations can also be found.)\n\n\n\n\n\n---\n\n\n\n**Society and Politics**\n\n**Will new technologies only benefit the rich and powerful?**\n\nOne could make the case that the average citizen of a developed country today has a higher standard of living than any king five hundred years ago.
The king might have had a court orchestra, but you can afford a CD player that lets you to listen to the best musicians any time you want. When the king got pneumonia he might well die, but you can take antibiotics. The king might have a carriage with six white horses, but you can have a car that is faster and more comfortable. And you likely have television, Internet access, and a shower with warm water; you can talk with relatives who live in a different country over the phone; and you know more about the Earth, nature, and the cosmos than any medieval monarch.\n\nThe typical pattern with new technologies is that they become cheaper as time goes by. In the medical field, for example, experimental procedures are usually available only to research subjects and the very rich. As these procedures become routine, costs fall and more people can afford them. Even in the poorest countries, millions of people have benefited from vaccines and penicillin. In the field of consumer electronics, the price of computers and other devices that were cutting-edge only a couple of years ago drops precipitously as new models are introduced.\n\nIt is clear that everybody can benefit greatly from improved technology. Initially, however, the greatest advantages will go to those who have the resources, the skills, and the willingness to learn to use new tools. One can speculate that some technologies may cause social inequalities to widen. For example, if some form of intelligence amplification becomes available, it may at first be so expensive that only the wealthiest can afford it. The same could happen when we learn how to genetically enhance our children. Those who are already well off would become smarter and make even more money. This phenomenon is not new. Rich parents send their kids to better schools and provide them with resources such as personal connections and information technology that may not be available to the less privileged. Such advantages lead to greater earnings later in life and serve to increase social inequalities.\n\nTrying to ban technological innovation on these grounds, however, would be misguided. If a society judges existing inequalities to be unacceptable, a wiser remedy would be progressive taxation and the provision of community-funded services such as education, IT access in public libraries, genetic enhancements covered by social security, and so forth. Economic and technological progress is not a zero sum game; it’s a positive sum game. Technological progress does not solve the hard old political problem of what degree of income redistribution is desirable, but it can greatly increase the size of the pie that is to be divided.\n\n\n\n\n\n---\n\n\n\n**Why transhumanists advocate human enhancement as ethical rather than pre-WWII eugenics?**\n\nEugenics in the narrow sense refers to the pre-WWII movement in Europe and the United States to involuntarily sterilize the “genetically unfit” and encourage breeding of the genetically advantaged. These ideas are entirely contrary to the tolerant humanistic and scientific tenets of transhumanism. In addition to condemning the coercion involved in such policies, transhumanists strongly reject the racialist and classist assumptions on which they were based, along with the notion that eugenic improvements could be accomplished in a practically meaningful timeframe through selective human breeding.\n\nTranshumanists uphold the principles of bodily autonomy and procreative liberty. 
Parents must be allowed to choose for themselves whether to reproduce, how to reproduce, and what technological methods they use in their reproduction. The use of genetic medicine or embryonic screening to increase the probability of a healthy, happy, and multiply talented child is a responsible and justifiable application of parental reproductive freedom.\n\nBeyond this, one can argue that parents have a moral responsibility to make use of these methods, assuming they are safe and effective. Just as it would be wrong for parents to fail in their duty to procure the best available medical care for their sick child, it would be wrong not to take reasonable precautions to ensure that a child-to-be will be as healthy as possible. This, however, is a moral judgment that is best left to individual conscience rather than imposed by law. Only in extreme and unusual cases might state infringement of procreative liberty be justified. If, for example, a would-be parent wished to undertake a genetic modification that would be clearly harmful to the child or would drastically curtail its options in life, then this prospective parent should be prevented by law from doing so. This case is analogous to the state taking custody of a child in situations of gross parental neglect or child abuse.\n\nThis defense of procreative liberty is compatible with the view that states and charities can subsidize public health, prenatal care, genetic counseling, contraception, abortion, and genetic therapies so that parents can make free and informed reproductive decisions that result in fewer disabilities in the next generation. Some disability activists would call these policies eugenic, but society may have a legitimate interest in whether children are born healthy or disabled, leading it to subsidize the birth of healthy children, without actually outlawing or imposing particular genetic modifications.\n\nWhen discussing the morality of genetic enhancements, it is useful to be aware of the distinction between enhancements that are intrinsically beneficial to the child or society on the one hand, and, on the other, enhancements that provide a merely positional advantage to the child. For example, health, cognitive abilities, and emotional well-being are valued by most people for their own sake. It is simply nice to be healthy, happy and to be able to think well, quite independently of any other advantages that come from possessing these attributes. By contrast, traits such as attractiveness, athletic prowess, height, and assertiveness seem to confer benefits that are mostly positional, i.e. they benefit a person by making her more competitive (e.g. in sports or as a potential mate), at the expense of those with whom she will compete, who suffer a corresponding disadvantage from her enhancement. Enhancements that have only positional advantages ought to be de-emphasized, while enhancements that create net benefits ought to be encouraged.\n\nIt is sometimes claimed that the use of germinal choice technologies would lead to an undesirable uniformity of the population. Some degree of uniformity is desirable and expected if we are able to make everyone congenitally healthy, strong, intelligent, and attractive. Few would argue that we should preserve cystic fibrosis because of its contribution to diversity. But other kinds of diversity are sure to flourish in a society with germinal choice, especially once adults are able to adapt their own bodies according to their own aesthetic tastes. 
Presumably most Asian parents will still choose to have children with Asian features, and if some parents choose genes that encourage athleticism, others may choose genes that correlate with musical ability.\n\nIt is unlikely that germ-line genetic enhancements will ever have a large impact on the world. It will take a minimum of forty or fifty years for the requisite technologies to be developed, tested, and widely applied and for a significant number of enhanced individuals to be born and reach adulthood. Before this happens, more powerful and direct methods for individuals to enhance themselves will probably be available, based on nanomedicine, artificial intelligence, uploading, or somatic gene therapy. (Traditional eugenics, based on selecting who is allowed to reproduce, would have even less prospect of avoiding preemptive obsolescence, as it would take many generations to deliver its purported improvements.)\n\n\n\n\n\n---\n\n\n\n**Aren't these future technologies very risky? Could they even cause our extinction?**\n\nYes, and this implies an urgent need to analyze the risks before they materialize and to take steps to reduce them. Biotechnology, nanotechnology, and artificial intelligence pose especially serious risks of accidents and abuse. [See also “If these technologies are so dangerous, should they be banned? What can be done to reduce the risks?”]\n\nOne can distinguish between, on the one hand, endurable or limited hazards, such as car crashes, nuclear reactor meltdowns, carcinogenic pollutants in the atmosphere, floods, volcano eruptions, and so forth, and, on the other hand, existential risks – events that would cause the extinction of intelligent life or permanently and drastically cripple its potential. While endurable or limited risks can be serious – and may indeed be fatal to the people immediately exposed – they are recoverable; they do not destroy the long-term prospects of humanity as a whole. Humanity has long experience with endurable risks and a variety of institutional and technological mechanisms have been employed to reduce their incidence. Existential risks are a different kind of beast. For most of human history, there were no significant existential risks, or at least none that our ancestors could do anything about. By definition, of course, no existential disaster has yet happened. As a species we may therefore be less well prepared to understand and manage this new kind of risk. Furthermore, the reduction of existential risk is a global public good (everybody by necessity benefits from such safety measures, whether or not they contribute to their development), creating a potential free-rider problem, i.e. a lack of sufficient selfish incentives for people to make sacrifices to reduce an existential risk. Transhumanists therefore recognize a moral duty to promote efforts to reduce existential risks.\n\nThe gravest existential risks facing us in the coming decades will be of our own making. These include:\n\nDestructive uses of nanotechnology. The accidental release of a self-replicating nanobot into the environment, where it would proceed to destroy the entire biosphere, is known as the “gray goo scenario”. Since molecular nanotechnology will make use of positional assembly to create non-biological structures and to open new chemical reaction pathways, there is no reason to suppose that the ecological checks and balances that limit the proliferation of organic self-replicators would also contain nano-replicators. 
Yet, while gray goo is certainly a legitimate concern, relatively simple engineering safeguards have been described that would make the probability of such a mishap almost arbitrarily small (Foresight 2002). Much more serious is the threat posed by nanobots deliberately designed to be destructive. A terrorist group or even a lone psychopath, having obtained access to this technology, could do extensive damage or even annihilate life on Earth unless effective defensive technologies had been developed beforehand (Center for Responsible Nanotechnology 2003). An unstable arms race between nanotechnic states could also result in our eventual demise (Gubrud 2000). Anti-proliferation efforts will be complicated by the fact that nanotechnology does not require difficult-to-obtain raw materials or large manufacturing plants, and by the dual-use functionality of many of the basic components of destructive nanomachinery. While a nanotechnic defense system (which would act as a global immune system capable of identifying and neutralizing rogue replicators) appears to be possible in principle, it could turn out to be more difficult to construct than a simple destructive replicator. This could create a window of global vulnerability between the potential creation of dangerous replicators and the development of an effective immune system. It is critical that nano-assemblers do not fall into the wrong hands during this period.\n\nBiological warfare. Progress in genetic engineering will lead not only to improvements in medicine but also to the capability to create more effective bioweapons. It is chilling to consider what would have happened if HIV had been as contagious as the virus that causes the common cold. Engineering such microbes might soon become possible for increasing numbers of people. If the RNA sequence of a virus is posted on the Internet, then anybody with some basic expertise and access to a lab will be able to synthesize the actual virus from this description. A demonstration of this possibility was offered by a small team of researchers at the State University of New York at Stony Brook in 2002, who synthesized the polio virus (whose genetic sequence is on the Internet) from scratch and injected it into mice, which subsequently became paralyzed and died.\n\nArtificial intelligence. No threat to human existence is posed by today’s AI systems or their near-term successors. But if and when superintelligence is created, it will be of paramount importance that it be endowed with human-friendly values. An imprudently or maliciously designed superintelligence, with goals amounting to indifference or hostility to human welfare, could cause our extinction. Another concern is that the first superintelligence, which may become very powerful because of its superior planning ability and because of the technologies it could swiftly develop, would be built to serve only a single person or a small group (such as its programmers or the corporation that commissioned it). While this scenario may not entail the extinction of literally all intelligent life, it nevertheless constitutes an existential risk because the future that would result would be one in which a great part of humanity’s potential had been permanently destroyed and in which at most a tiny fraction of all humans would get to enjoy the benefits of posthumanity. [See also “Will posthumans or superintelligent machines pose a threat to humans who aren’t augmented?”]\n\nNuclear war.
Today’s nuclear arsenals are probably not sufficient to cause the extinction of all humans, but future arms races could result in even larger build-ups. It is also conceivable that an all-out nuclear war would lead to the collapse of modern civilization, and it is not completely certain that the survivors would succeed in rebuilding a civilization capable of sustaining growth and technological development.\n\nSomething unknown. All the above risks were unknown a century ago and several of them have only become clearly understood in the past two decades. It is possible that there are future threats of which we haven’t yet become aware.\n\nFor a more extensive discussion of these and many other existential risks, see Bostrom (2002).\n\nEvaluating the total probability that some existential disaster will do us in before we get the opportunity to become posthuman can be done by various direct or indirect methods. Although any estimate inevitably includes a large subjective factor, it seems that to set the probability to less than 20% would be unduly optimistic, and the best estimate may be considerably higher. But depending on the actions we take, this figure can be raised or lowered.\n\nReferences: \nBostrom, N. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” Journal of Evolution and Technology. Vol. 9 (2002). [*http://www.nickbostrom.com/existential/risks.html*](http://www.nickbostrom.com/existential/risks.html) \nCenter for Responsible Nanotechnology. “Dangers of Nanotechnology” (2003). [*http://www.crnano.org/dangers.htm*](http://www.crnano.org/dangers.htm) \nForesight Institute. “Foresight Guidelines on Molecular Nanotechnology, version 3.7” (2000). [*http://www.foresight.org/guidelines/current.html*](http://www.foresight.org/guidelines/current.html) \nGubrud, M. “Nanotechnology and International Security,” Fifth Foresight Conference on Molecular Nanotechnology. (1997) [*http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/index.html*](http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/index.html) \nWimmer, E. et al. “Chemical Synthesis of Poliovirus cDNA: Generation of Infectious Virus in the Absence of Natural Template,” Science, Vol. 297, No. 5583, (2002), pp. 1016-1018\n\n\n\n\n\n---\n\n\n\n**If these technologies are so dangerous, should they be banned? What can be done to reduce the risks?**\n\nThe position that we ought to relinquish research into robotics, genetic engineering, and nanotechnology has been advocated in an article by Bill Joy (2000). Joy argued that some of the future applications of these technologies are so dangerous that research in those fields should be stopped now.
Partly because of Joy’s technophile credentials (he was a software designer and a cofounder of Sun Microsystems), his article, which appeared in Wired magazine, attracted a great deal of attention.\n\nMany of the responses to Joy’s article pointed out that there is no realistic prospect of a worldwide ban on these technologies; that they have enormous potential benefits that we would not want to forgo; that the poorest people may have a higher tolerance for risk in developments that could improve their condition; and that a ban may actually increase the dangers rather than reduce them, both by delaying the development of protective applications of these technologies, and by weakening the position of those who choose to comply with the ban relative to less scrupulous groups who defy it.\n\nA more promising alternative than a blanket ban is differential technological development, in which we would seek to influence the sequence in which technologies are developed. On this approach, we would strive to retard the development of harmful technologies and their applications, while accelerating the development of beneficial technologies, especially those that offer protection against the harmful ones. For technologies that have decisive military applications, unless they can be verifiably banned, we may seek to ensure that they are developed at a faster pace in countries we regard as responsible than in those that we see as potential enemies. (Whether a ban is verifiable and enforceable can change over time as a result of developments in the international system or in surveillance technology.)\n\nIn the case of nanotechnology, the desirable sequence of development is that nanotech immune systems and other defensive measures be deployed before offensive capabilities become available to many independent powers. Once a technology is shared by many, it becomes extremely hard to prevent further proliferation. In the case of biotechnology, we should seek to promote research into vaccines, anti-viral drugs, protective gear, sensors, and diagnostics, and to delay as long as possible the development and proliferation of biological warfare agents and the means of their weaponization. For artificial intelligence, a serious risk will emerge only when capabilities approach or surpass those of humans. At that point one should seek to promote the development of friendly AI and to prevent unfriendly or unreliable AI systems.\n\nSuperintelligence is an example of a technology that seems especially worth promoting because it can help reduce a broad range of threats. Superintelligent systems could advise us on policy and make the progress curve for nanotechnology steeper, thus shortening the period of vulnerability between the development of dangerous nanoreplicators and the deployment of effective defenses. If we have a choice, it seems preferable that superintelligence be developed before advanced nanotechnology, as superintelligence could help reduce the risks of nanotechnology but not vice versa. Other technologies that have wide risk-reducing uses include intelligence augmentation, information technology, and surveillance. These can make us smarter individually and collectively or make enforcement of necessary regulation more feasible. A strong prima facie case therefore exists for pursuing these technologies as vigorously as possible.
Needless to say, we should also promote non-technological developments that are beneficial in almost all scenarios, such as peace and international cooperation.\n\nIn confronting the hydra of existential, limited, and endurable risks glaring at us from the future, it is unlikely that any one silver bullet will provide adequate protection. Instead, an arsenal of countermeasures will be needed so that we can address the various risks on multiple levels.\n\nThe first step to tackling a risk is to recognize its existence. More research is needed, and existential risks in particular should be singled out for attention because of their seriousness and because of the special nature of the challenges they pose. Surprisingly little work has been done in this area (but see e.g. Leslie (1996), Bostrom (2002), and Rees (2003) for some preliminary explorations). The strategic dimensions of our choices must be taken into account, given that some of the technologies in question have important military ramifications. In addition to scholarly studies of the threats and their possible countermeasures, public awareness must be raised to enable a more informed debate of our long-term options.\n\nSome of the lesser existential risks, such as an apocalyptic asteroid impact or the highly speculative scenario involving something like the upsetting of a metastable vacuum state in some future particle accelerator experiment, could be substantially reduced at relatively small expense. Programs to accomplish this – e.g. an early detection system for dangerous near-Earth objects on a potential collision course with Earth, or the commissioning of advance peer review of planned high-energy physics experiments – are probably cost-effective. However, these lesser risks must not deflect attention from the more serious concern raised by more probable existential disasters [see “Aren’t these future technologies very risky? Could they even cause our extinction?”].\n\nIn light of how superabundant the human benefits of technology can ultimately be, it matters less that we obtain all of these benefits in their precisely optimal form, and more that we obtain them at all. For many practical purposes, it makes sense to adopt the rule of thumb that we should act so as to maximize the probability of an acceptable outcome, one in which we attain some (reasonably broad) realization of our potential; or, to put it in negative terms, that we should act so as to minimize net existential risk.\n\nReferences: \nBostrom, N. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” Journal of Evolution and Technology. Vol. 9 (2002). [*http://www.nickbostrom.com/existential/risks.html*](http://www.nickbostrom.com/existential/risks.html) \nJoy, B. “Why the Future Doesn’t Need Us”. Wired, 8:04 (2000). [*http://www.wired.com/wired/archive/8.04/joy\\_pr.html*](http://www.wired.com/wired/archive/8.04/joy_pr.html) \nLeslie, J. The End of the World: The Ethics and Science of Human Extinction. (London: Routledge, 1996). \nRees, M. Our Final Hour. (New York: Basic Books, 2003).\n\n\n\n\n\n---\n\n\n\n**Shouldn't we concentrate on current problems, such as improving the situation of the poor, rather than putting our efforts into planning for the “far” future?**\n\nWe should do both. Focusing solely on current problems would leave us unprepared for the new challenges that we will encounter.\n\nMany of the technologies and trends that transhumanists discuss are already reality.
Biotechnology and information technology have transformed large sectors of our economies. The relevance of transhumanist ethics is manifest in such contemporary issues as stem cell research, genetically modified crops, human genetic therapy, embryo screening, end of life decisions, enhancement medicine, information markets, and research funding priorities. The importance of transhumanist ideas is likely to increase as the opportunities for human enhancement proliferate.\n\nTranshuman technologies will tend to work well together and create synergies with other parts of human society. For example, one important factor in healthy life expectancy is access to good medical care. Improvements in medical care will extend healthy, active lifespan – “healthspan” – and research into healthspan extension is likely to benefit ordinary care. Work on amplifying intelligence has obvious applications in education, decision-making, and communication. Better communications would facilitate trade and understanding between people. As more and more people get access to the Internet and are able to receive satellite radio and television broadcasts, dictators and totalitarian regimes may find it harder to silence voices of dissent and to control the information flow in their populations. And with the Internet and email, people discover they can easily form friendships and business partnerships in foreign countries. A world order characterized by peace, international cooperation, and respect for human rights would much improve the odds that the potentially dangerous applications of some future technologies can be controlled and would also free up resources currently spent on military armaments, some of which could then hopefully be diverted to improving the condition of the poor. Nanotechnological manufacturing promises to be both economically profitable and environmentally sound. Transhumanists do not have a patent solution to achieve these outcomes, any more than anybody else has, but technology has a huge role to play.\n\nAn argument can be made that the most efficient way of contributing to making the world better is by participating in the transhumanist project. This is so because the stakes are enormous – humanity’s entire future may depend on how we manage the coming technological transitions – and because relatively few resources are at the present time being devoted to transhumanist efforts. Even one extra person can still make a significant difference here.\n\n\n\n\n\n---\n\n\n\n**What kind of society would posthumans live in?**\n\nNot enough information is available at the current time to provide a full answer to this question. In part, though, the answer is, “You decide.” The outcome may be influenced by the choices we make now and over the coming decades. In this respect, the situation is the same as in earlier epochs that had no transhuman possibilities: by becoming involved in political struggles against today’s social ills and injustices, we can help make tomorrow’s society better.\n\nTranshumanism does, however, inform us about new constraints, possibilities, and issues, and it highlights numerous important leverage points for intervention, where a small application of resources can make a big long-term difference. For example, one issue that moves into prominence is the challenge of creating a society in which beings with vastly different orders of capabilities (such as posthuman persons and as-yet non-augmented humans) can live happily and peacefully together. 
Another concern that becomes paramount is the need to build a world order in which dangerous arms races can be prevented and in which the proliferation of weapons of mass destruction can be suppressed or at least delayed until effective defenses have been developed [see “Aren’t these future technologies very risky? Could they even cause our extinction?”].\n\nThe ideal social organization may be one that includes the possibility for those who so wish to form independent societies voluntarily secluded from the rest of the world, in order to pursue traditional ways of life or to experiment with new forms of communal living. Achieving an acceptable balance between the rights of such communities for autonomy, on the one hand, and the security concerns of outside entities and the just demands for protection of vulnerable and oppressed individuals inside these communities on the other hand, is a delicate task and a familiar challenge in political philosophy.\n\nWhat types of society posthumans will live in depends on what types of posthumans eventually develop. One can project various possible developmental paths [see “What is a posthuman?”] which may result in very different kinds of posthuman, transhuman, and unaugmented human beings, living in very different sorts of societies. In attempting to imagine such a world, we must bear in mind that we are likely to base our expectations on the experiences, desires, and psychological characteristics of humans. Many of these expectations may not hold true of posthuman persons. When human nature changes, new ways of organizing a society may become feasible. We may hope to form a clearer understanding of what those new possibilities are as we observe the seeds of transhumanity develop.\n\n\n\n\n\n---\n\n\n\n**Will posthumans or superintelligent machines pose a threat to humans who aren't augmented?**\n\nHuman society is always at risk from some group deciding to view another group of humans as fit for slavery or slaughter. To counteract such tendencies, modern societies have created laws and institutions, and endowed them with powers of enforcement, that act to prevent groups of citizens from assaulting one another. The efficacy of these institutions does not depend on all citizens having equal capacities. Modern, peaceful societies have large numbers of people with diminished physical or mental capacities along with many other people who may be exceptionally physically strong or healthy or intellectually talented in various ways. Adding people with technologically enhanced capacities to this already broad distribution of ability would not necessarily rip society apart or trigger genocide or enslavement.\n\nA common worry is that inheritable genetic modifications or other human enhancement technologies would lead to two distinct and separate species and that hostilities would inevitably develop between them. The assumptions behind this prediction should be questioned. It is a common theme in fiction because of the opportunities for dramatic conflict, but that is not the same as social, political, and economic plausibility in the real world. It seems more likely that there would be a continuum of differently modified or enhanced individuals, which would overlap with the continuum of as-yet unenhanced humans. The scenario in which “the enhanced” form a pact and then attack “the naturals” makes for exciting science fiction but is not necessarily the most plausible outcome. 
Even today, the segment containing the tallest 90 percent of the population could, in principle, get together and kill or enslave the shorter decile. That this does not happen suggests that a well-organized society can hold together even if it contains many possible coalitions of people sharing some attribute that, were they to unite under one banner, would make them capable of exterminating the rest.\n\nTo note that the extreme case of a war between human and posthuman persons is not the most likely scenario is not to say that there are no legitimate social concerns about the steps that may take us closer to posthumanity. Inequity, discrimination, and stigmatization – against or on behalf of modified people – could become serious issues. Transhumanists would argue that these (potential) social problems call for social remedies. (One case study of how contemporary technology can change important aspects of someone’s identity is sex reassignment. The experiences of transsexuals show that some cultures still have work to do in becoming more accepting of diversity.) This is a task that we can begin to tackle now by fostering a climate of tolerance and acceptance towards those who are different from ourselves. We can also act to strengthen those institutions that prevent violence and protect human rights, for instance by building stable democratic traditions and constitutions and by expanding the rule of law to the international plane.\n\nWhat about the hypothetical case in which someone intends to create, or turn themselves into, a being of so radically enhanced capacities that a single one or a small group of such individuals would be capable of taking over the planet? This is clearly not a situation that is likely to arise in the imminent future, but one can imagine that, perhaps in a few decades, the prospective creation of superintelligent machines could raise this kind of concern. The would-be creator of a new life form with such surpassing capabilities would have an obligation to ensure that the proposed being is free from psychopathic tendencies and, more generally, that it has humane inclinations. For example, a superintelligence should be built with a clear goal structure that has friendliness to humans as its top goal. Before running such a program, the builders of a superintelligence should be required to make a strong case that launching it would be safer than alternative courses of action.\n\nReferences: Yudkowsky, E. Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures. (2003, Version 1.0). [*http://www.singinst.org/CFAI/index.html*](http://www.singinst.org/CFAI/index.html)\n\n\n\n\n\n---\n\n\n\n**Is there any ethical standard by which transhumanists judge “improvement of the human condition”?**\n\nTranshumanism is compatible with a variety of ethical systems, and transhumanists themselves hold many different views. Nonetheless, the following seems to constitute a common core of agreement:\n\nAccording to transhumanists, the human condition has been improved if the conditions of individual humans have been improved. In practice, competent adults are usually the best judges of what is good for themselves.
Therefore, transhumanists advocate individual freedom, especially the right for those who so wish to use technology to extend their mental and physical capacities and to improve their control over their own lives.\n\nFrom this perspective, an improvement to the human condition is a change that gives increased opportunity for individuals to shape themselves and their lives according to their informed wishes. Notice the word “informed”. It is important that people be aware of what they choose between. Education, discussion, public debate, critical thinking, artistic exploration, and, potentially, cognitive enhancers are means that can help people make more informed choices.\n\nTranshumanists hold that people are not disposable. Saving lives (of those who want to live) is ethically important. It would be wrong to unnecessarily let existing people die in order to replace them with some new “better” people. Healthspan-extension and cryonics are therefore high on the transhumanist list of priorities. The transhumanist goal is not to replace existing humans with a new breed of super-beings, but rather to give human beings (those existing today and those who will be born in the future) the option of developing into posthuman persons.\n\nThe non-disposability of persons partially accounts for a certain sense of urgency that is common among transhumanists. On average, 150,000 men, women, and children die every day, often in miserable conditions. In order to give as many people as possible the chance of a posthuman existence – or even just a decent human existence – it is paramount that technological development, in at least some fields, is pursued with maximal speed. When it comes to life-extension and its various enabling technologies, a delay of a single week equals one million avoidable premature deaths – a weighty fact which those who argue for bans or moratoria would do well to consider carefully. (The further fact that universal access will likely lag initial availability only adds to the reason for trying to hurry things along.)\n\nTranshumanists reject speciesism, the (human racist) view that moral status is strongly tied to membership in a particular biological species, in our case homo sapiens. What exactly does determine moral status is a matter of debate. Factors such as being a person, being sentient, having the capacity for autonomous moral choice, or perhaps even being a member of the same community as the evaluator, are among the criteria that may combine to determine the degree of somebody’s moral status (Warren 1997). But transhumanists argue that species-identity should be de-emphasized in this context. Transhumanists insist that all beings that can experience pain have some moral status, and that posthuman persons could have at least the same level of moral status as humans have in their current form.\n\nReferences: Warren, M.-A. Moral Status: Obligations to Persons and Other Living Things (Oxford: Oxford University Press, 1997).\n\n\n\n\n\n---\n\n\n\n**Will extended life worsen overpopulation problems?**\n\nPopulation increase is an issue we would ultimately have to come to grips with even if healthy life-extension were not to happen. Leaving people to die is an unacceptable solution.\n\nA large population should not be viewed simply as a problem. Another way of looking at the same fact is that it means that many persons now enjoy lives that would not have been lived if the population had been smaller. 
One could ask those who complain about overpopulation exactly which people’s lives they would have preferred should not have been led. Would it really have been better if billions of the world’s people had never existed and if there had been no other people in their place? Of course, this is not to deny that too-rapid population growth can cause crowding, poverty, and the depletion of natural resources. In this sense there can be real problems that need to be tackled.\n\nHow many people the Earth can sustain at a comfortable standard of living is a function of technological development (as well as of how resources are distributed). New technologies, from simple improvements in irrigation and management, to better mining techniques and more efficient power generation machinery, to genetically engineered crops, can continue to improve world resource and food output, while at the same time reducing environmental impact and animal suffering.\n\nEnvironmentalists are right to insist that the status quo is unsustainable. As a matter of physical necessity, things cannot stay as they are today indefinitely, or even for very long. If we continue to use up resources at the current pace, without finding more resources or learning how to use novel kinds of resources, then we will run into serious shortages sometime around the middle of this century. The deep greens have an answer to this: they suggest we turn back the clock and return to an idyllic pre-industrial age to live in sustainable harmony with nature. The problem with this view is that the pre-industrial age was anything but idyllic. It was a life of poverty, misery, disease, heavy manual toil from dawn to dusk, superstitious fears, and cultural parochialism. Nor was it environmentally sound – as witness the deforestation of England and the Mediterranean region, the desertification of large parts of the Middle East, soil depletion by the Anasazi in the Glen Canyon area, the destruction of farmland in ancient Mesopotamia through the accumulation of mineral salts from irrigation, deforestation and consequent soil erosion by the ancient Mexican Mayas, the overhunting of big game almost everywhere, and the extinction of the dodo on Mauritius and of other big flightless birds in the South Pacific. Furthermore, it is hard to see how more than a few hundred million people could be maintained at a reasonable standard of living with pre-industrial production methods, so some ninety percent of the world population would somehow have to vanish in order to facilitate this nostalgic return.\n\nTranshumanists propose a much more realistic alternative: not to retreat to an imagined past, but to press ahead as intelligently as we can. The environmental problems that technology creates are problems of intermediary, inefficient technology, of placing insufficient political priority on environmental protection, as well as of a lack of ecological knowledge. Technologically less advanced industries in the former Soviet bloc pollute much more than do their advanced Western counterparts. High-tech industry is typically relatively benign. Once we develop molecular nanotechnology, we will not only have clean and efficient manufacturing of almost any commodity, but we will also be able to clean up much of the mess created by today’s crude fabrication methods. This would set a standard for a clean environment that today’s traditional environmentalists could scarcely dream of.\n\nNanotechnology will also make it cheaper to colonize space.
From a cosmic point of view, Earth is an insignificant speck. It has sometimes been suggested that we ought to leave space untouched in its pristine glory. This view is hard to take seriously. Every hour, through entirely natural processes, vast amounts of resources – millions of times more than the sum total of what the human species has consumed throughout its career – are transformed into radioactive substances or wasted as radiation escaping into intergalactic space. Can we not think of some more creative way of using all this matter and energy?\n\nEven with full-blown space colonization, however, population growth can continue to be a problem, and this is so even if we assume that an unlimited number of people could be transported from Earth into space. If the speed of light provides an upper bound on the expansion speed, then the amount of resources under human control will grow only polynomially (~ t^3). Population, on the other hand, can easily grow exponentially (~ e^t). If that happens, then, since a factor that grows exponentially will eventually overtake any factor that grows polynomially, average income will ultimately drop to subsistence levels, forcing population growth to slow. How soon this would happen depends primarily on reproduction rates. A change in average life span would not have a big effect. Even vastly improved technology can only postpone this inevitability for a relatively brief time. The only long-term method of assuring continued growth of average income is some form of population control, whether spontaneous or imposed, limiting the number of new persons created per year. This does not mean that population could not grow, only that the growth would have to be polynomial rather than exponential. (A toy numerical illustration of this crossover is sketched at the end of this answer.)\n\nSome additional points to consider:\n\nIn technologically advanced countries, couples tend to have fewer children, often below the replacement rate. As an empirical generalization, giving people increased rational control over their lives, especially through women’s education and participation in the labor market, causes couples to have fewer children.\n\nIf one took seriously the idea of controlling population by limiting life span, why not be more active about it? Why not encourage suicide? Why not execute anyone reaching the age of 75?\n\nIf slowing aging were unacceptable because it might lead to there being more people, what about efforts to cure cancer, reduce traffic deaths, or improve worker safety? Why use double standards?\n\nWhen transhumanists say they want to extend lifespans, what they mean is that they want to extend healthspans. This means that the extra person-years would be productive and would add economic value to society. We can all agree that there would be little point in living an extra ten years in a state of dementia.\n\nThe world population growth rate has been declining for several decades. It peaked in 1970 at 2.1%. In 2003 it was 1.2%, and it is expected to fall below 1.0% around 2015 (United Nations 2002). The doomsday predictions of the so-called “Club of Rome” from the early 1970s have consistently turned out to be wrong.\n\nThe more people there are, the more brains there will be working to invent new ideas and solutions.\n\nIf people can look forward to a longer healthy, active life, they will have a personal stake in the future and will hopefully be more concerned about the long-term consequences of their actions.\n\nReferences: United Nations. The World Population Prospects: The 2002 Revision (United Nations: New York, 2002). [*http://www.gov.za/reports/2003/unpdhighlights.pdf*](http://www.gov.za/reports/2003/unpdhighlights.pdf)
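\n\nTo make the growth argument above concrete, here is a toy numerical sketch (not part of the original FAQ). The particular constants (a cubic law for resources, a 2% annual population growth rate, and the starting scales) are arbitrary assumptions chosen only to illustrate the general point that an exponential eventually overtakes any polynomial, driving per-capita resources toward subsistence.\n\n```python\n# Toy illustration: resources growing polynomially (~ t^3) versus a population\n# growing exponentially (~ e^t). All constants below are illustrative assumptions.\nimport math\n\nRESOURCE_SCALE = 1_000_000.0   # arbitrary head start for resources (assumption)\nGROWTH_RATE = 0.02             # assumed 2% population growth per year\nINITIAL_POPULATION = 100.0     # arbitrary starting population (assumption)\n\ndef resources(t):\n    # Resources under human control, growing like the cube of elapsed time.\n    return RESOURCE_SCALE * (1 + t) ** 3\n\ndef population(t):\n    # Population growing exponentially at GROWTH_RATE per year.\n    return INITIAL_POPULATION * math.exp(GROWTH_RATE * t)\n\nfor t in (0, 100, 500, 1000, 2000, 5000):\n    per_capita = resources(t) / population(t)\n    print(f't={t:>5} yr   resources={resources(t):.2e}   '\n          f'population={population(t):.2e}   per capita={per_capita:.2e}')\n# However large RESOURCE_SCALE is and however small GROWTH_RATE is, the\n# per-capita figure eventually declines toward zero.\n```\n\nNo matter how the assumed constants are varied, the qualitative outcome is the same: per-capita resources may rise for a long while, but they eventually fall once the exponential term dominates.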
\n\n\n\n\n---\n\n\n\n**Technologies and Projections**\n\n**Biotechnology, genetic engineering, stem cells, and cloning: What are they and what are they good for?**\n\nBiotechnology is the application of techniques and methods based on the biological sciences. It encompasses such diverse enterprises as brewing, manufacture of human insulin, interferon, and human growth hormone, medical diagnostics, cell cloning and reproductive cloning, the genetic modification of crops, bioconversion of organic waste and the use of genetically altered bacteria in the cleanup of oil spills, stem cell research and much more. Genetic engineering is the area of biotechnology concerned with the directed alteration of genetic material.\n\nBiotechnology already has countless applications in industry, agriculture, and medicine. It is a hotbed of research. The completion of the human genome project – a “rough draft” of the entire human genome was published in the year 2000 – was a scientific milestone by anyone’s standards. Research is now shifting to decoding the functions and interactions of all these different genes and to developing applications based on this information.\n\nThe potential medical benefits are too many to list; researchers are working on every common disease, with varying degrees of success. Progress takes place not only in the development of drugs and diagnostics but also in the creation of better tools and research methodologies, which in turn accelerates progress. When considering what developments are likely over the long term, such improvements in the research process itself must be factored in. The human genome project was completed ahead of schedule, largely because the initial predictions underestimated the degree to which instrumentation technology would improve during the course of the project. At the same time, one needs to guard against the tendency to hype every latest advance. (Remember all those breakthrough cancer cures that we never heard of again?) Moreover, even in cases where the early promise is borne out, it usually takes ten years to get from proof-of-concept to successful commercialization.\n\nGenetic therapies are of two sorts: somatic and germ-line. In somatic gene therapy, a virus is typically used as a vector to insert genetic material into the cells of the recipient’s body. The effects of such interventions do not carry over into the next generation. Germ-line genetic therapy is performed on sperm or egg cells, or on the early zygote, and can be inheritable. (Embryo screening, in which embryos are tested for genetic defects or other traits and then selectively implanted, can also count as a kind of germ-line intervention.) Human gene therapy, except for some forms of embryo screening, is still experimental. Nonetheless, it holds promise for the prevention and treatment of many diseases, as well as for uses in enhancement medicine. The potential scope of genetic medicine is vast: virtually all disease and all human traits – intelligence, extroversion, conscientiousness, physical appearance, etc. – involve genetic predispositions. Single-gene disorders, such as cystic fibrosis, sickle cell anemia, and Huntington’s disease are likely to be among the first targets for genetic intervention.
Polygenic traits and disorders, ones in which more than one gene is implicated, may follow later (although even polygenic conditions can sometimes be influenced in a beneficial direction by targeting a single gene).\n\nStem cell research, another scientific frontier, offers great hopes for regenerative medicine. Stem cells are undifferentiated (unspecialized) cells that can renew themselves and give rise to one or more specialized cell types with specific functions in the body. By growing such cells in culture, or steering their activity in the body, it will be possible to grow replacement tissues for the treatment of degenerative disorders, including heart disease, Parkinson’s, Alzheimer’s, diabetes, and many others. It may also be possible to grow entire organs from stem cells for use in transplantation. Embryonic stem cells seem to be especially versatile and useful, but research is also ongoing into adult stem cells and the “reprogramming” of ordinary cells so that they can be turned back into stem cells with pluripotent capabilities.\n\nThe term “human cloning” covers both therapeutic and reproductive uses. In therapeutic cloning, a preimplantation embryo (also known as a “blastocyst” – a hollow ball consisting of 30-150 undifferentiated cells) is created via cloning, from which embryonic stem cells could be extracted and used for therapy. Because these cloned stem cells are genetically identical to the patient, the tissues or organs they would produce could be implanted without eliciting an immune response from the patient’s body, thereby overcoming a major hurdle in transplant medicine. Reproductive cloning, by contrast, would mean the birth of a child who is genetically identical to the cloned parent: in effect, a younger identical twin.\n\nEverybody recognizes the benefit to ailing patients and their families that come from curing specific diseases. Transhumanists emphasize that, in order to seriously prolong the healthy life span, we also need to develop ways to slow aging or to replace senescent cells and tissues. Gene therapy, stem cell research, therapeutic cloning, and other areas of medicine that have the potential to deliver these benefits deserve a high priority in the allocation of research monies.\n\nBiotechnology can be seen as a special case of the more general capabilities that nanotechnology will eventually provide [see “What is molecular nanotechnology?”].\n\n\n\n\n\n---\n\n\n\n**What is molecular nanotechnology?**\n\nMolecular nanotechnology is an anticipated manufacturing technology that will make it possible to build complex three-dimensional structures to atomic specification using chemical reactions directed by nonbiological machinery. In molecular manufacturing, each atom would go to a selected place, bonding with other atoms in a precisely designated manner. Nanotechnology promises to give us thorough control of the structure of matter.\n\nSince most of the stuff around us and inside us is composed of atoms and gets its characteristic properties from the placement of these atoms, the ability to control the structure of matter on the atomic scale has many applications. As K. Eric Drexler wrote in Engines of Creation, the first book on nanotechnology (published in 1986):\n\nCoal and diamonds, sand and computer chips, cancer and healthy tissue: throughout history, variations in the arrangement of atoms have distinguished the cheap from the cherished, the diseased from the healthy. 
Arranged one way, atoms make up soil, air, and water; arranged another, they make up ripe strawberries. Arranged one way, they make up homes and fresh air; arranged another, they make up ash and smoke.\n\nNanotechnology, by making it possible to rearrange atoms effectively, will enable us to transform coal into diamonds, sand into supercomputers, and to remove pollution from the air and tumors from healthy tissue.\n\nCentral to Drexler’s vision of nanotechnology is the concept of the assembler. An assembler would be a molecular construction device. It would have one or more submicroscopic robotic arms under computer control. The arms would be capable of holding and placing reactive compounds so as to positionally control the precise location at which a chemical reaction takes place. The assembler arms would grab a molecule (but not necessarily individual atoms) and add it to a work-piece, constructing an atomically precise object step by step. An advanced assembler would be able to make almost any chemically stable structure. In particular, it would be able to make a copy of itself. Since assemblers could replicate themselves, they would be easy to produce in large quantities.\n\nThere is a biological parallel to the assembler: the ribosome. Ribosomes are the tiny construction machines (a few thousand cubic nanometers big) in our cells that manufacture all the proteins used in all living things on Earth. They do this by assembling amino acids, one by one, into precisely determined sequences. These structures then fold up to form a protein. The blueprint that specifies the order of amino acids, and thus indirectly the final shape of the protein, is called messenger RNA. The messenger RNA is in turn determined by our DNA, which can be viewed (somewhat simplistically) as an instruction tape for protein synthesis. Nanotechnology will generalize the ability of ribosomes so that virtually any chemically stable structure can be built, including devices and materials that resemble nothing in nature.\n\nMature nanotechnology will transform manufacturing into a software problem. To build something, all you will need is a detailed design of the object you want to make and a sequence of instructions for its construction. Rare or expensive raw materials are generally unnecessary; the atoms required for the construction of most kinds of nanotech devices exist in abundance in nature. Dirt, for example, is full of useful atoms.\n\nBy working in large teams, assemblers and more specialized nanomachines will be able to build large objects quickly. Consequently, while nanomachines may have features on the scale of a billionth of a meter – a nanometer – the products could be as big as space vehicles or even, in a more distant future, the size of planets.\n\nBecause assemblers will be able to copy themselves, nanotech products will have low marginal production costs – perhaps on the same order as familiar commodities from nature’s own self-reproducing molecular machinery such as firewood, hay, or potatoes. By ensuring that each atom is properly placed, assemblers would manufacture products of high quality and reliability. Leftover molecules would be subject to this strict control, making the manufacturing process extremely clean.\n\nThe speed with which designs and instruction lists for making useful objects can be developed will determine the speed of progress after the creation of the first full-blown assembler.
Powerful software for molecular modeling and design will accelerate development, possibly assisted by specialized engineering AI. Another accessory that might be especially useful in the early stages after the assembler breakthrough is the disassembler, a device that can disassemble an object while creating a three-dimensional map of its molecular configuration. Working in concert with an assembler, it could function as a kind of 3D Xerox machine: a device for making atomically exact replicas of almost any existing solid object within reach.\n\nMolecular nanotechnology will ultimately make it possible to construct compact computing systems performing at least 10^21 operations per second; machine parts of any size made of nearly flawless diamond; cell-repair machines that can enter cells and repair most kinds of damage, in all likelihood including frostbite [see “What is cryonics? Isn’t the probability of success too small?”]; personal manufacturing and recycling appliances; and automated production systems that can double capital stock in a few hours or less. It is also likely to make uploading possible [see “What is uploading?”].\n\nA key challenge in realizing these prospects is the bootstrap problem: how to build the first assembler. There are several promising routes. One is to improve current proximal probe technology. A scanning tunneling microscope can drag individual atoms along a surface. Two physicists at IBM’s Almaden Research Center in California illustrated this in 1989 when they used such a microscope to arrange 35 xenon atoms to spell out the trademark “I-B-M”, creating the world’s smallest logo. Future proximal probes might have more degrees of freedom and the ability to pick up and deposit reactive compounds in a controlled fashion.\n\nAnother route to the first assembler is synthetic chemistry. Cleverly designed chemical building blocks might be made to self-assemble in solution phase into machine parts. Final assembly of these parts might then be made with a proximal probe.\n\nYet another route is biochemistry. It might be possible to use ribosomes to make assemblers of more generic capabilities. Many biomolecules have properties that might be explored in the early phases of nanotechnology. For example, interesting structures, such as branches, loops, and cubes, have been made from DNA. DNA could also serve as a “tag” on other molecules, causing them to bind only to designated compounds displaying a complementary tag, thus providing a degree of control over what molecular complexes will form in a solution.\n\nCombinations of these approaches are of course also possible. The fact that there are multiple promising routes adds to the likelihood that success will eventually be attained.\n\nThat assemblers of general capabilities are consistent with the laws of chemistry was shown by Drexler in his technical book Nanosystems in 1992. This book also established some lower bounds on the capabilities of mature nanotechnology. Medical applications of nanotechnology were first explored in detail by Robert A. Freitas Jr. in his monumental work [*Nanomedicine*](http://www.nanomedicine.com/NMI.htm), the first volume of which came out in 1999. Today, nanotech is a hot research field. The U.S. government spent more than 600 million dollars on its National Nanotechnology Initiative in 2002. Other countries have similar programs, and private investment is ample.
However, only a small part of the funding goes to projects of direct relevance to the development of assembler-based nanotechnology; most of it is for more humdrum, near-term objectives.\n\nWhile it seems fairly well established that molecular nanotechnology is in principle possible, it is harder to determine how long it will take to develop. A common guess among the cognoscenti is that the first assembler may be built around the year 2018, give or take a decade, but there is large scope for diverging opinion on the upper side of that estimate.\n\nBecause the ramifications of nanotechnology are immense, it is imperative that serious thought be given to this topic now. If nanotechnology were to be abused, the consequences could be devastating. Society needs to prepare for the assembler breakthrough and do advance planning to minimize the risks associated with it [see e.g. “Aren’t these future technologies very risky? Could they even cause our extinction?”]. Several organizations are working to prepare the world for nanotechnology, the oldest and largest being the Foresight Institute.\n\nReferences: \nDrexler, E. Engines of Creation: The Coming Era of Nanotechnology. (New York: Anchor Books, 1986). [*http://www.foresight.org/EOC/index.html*](http://www.foresight.org/EOC/index.html) \nDrexler, E. Nanosystems: Molecular Machinery, Manufacturing, and Computation. (New York: John Wiley & Sons, Inc., 1992). \nFreitas, Jr., R. A. [*Nanomedicine, Volume I: Basic Capabilities.*](http://www.nanomedicine.com/NMI.htm) (Georgetown, Texas: Landes Bioscience, 1999). \nForesight Institute. [*http://www.foresight.org*](http://www.foresight.org/)\n\n\n\n\n\n---\n\n\n\n**What is superintelligence?**\n\nA superintelligent intellect (a superintelligence, sometimes called “ultraintelligence”) is one that has the capacity to radically outperform the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.\n\nSometimes a distinction is made between weak and strong superintelligence. Weak superintelligence is what you would get if you could run a human intellect at an accelerated clock speed, such as by uploading it to a fast computer [see “What is uploading?”]. If the upload’s clock-rate were a thousand times that of a biological brain, it would perceive reality as being slowed down by a factor of a thousand. It would think a thousand times more thoughts in a given time interval than its biological counterpart.\n\nStrong superintelligence refers to an intellect that is not only faster than a human brain but also smarter in a qualitative sense. No matter how much you speed up your dog’s brain, you’re not going to get the equivalent of a human intellect. Analogously, there might be kinds of smartness that wouldn’t be accessible to even very fast human brains given their current capacities. Something as simple as increasing the size or connectivity of our neuronal networks might give us some of these capacities. Other improvements may require wholesale reorganization of our cognitive architecture or the addition of new layers of cognition on top of the old ones.\n\nHowever, the distinction between weak and strong superintelligence may not be clear-cut. A sufficiently long-lived human who didn’t make any errors and had a sufficient stack of scrap paper at hand could in principle compute any Turing computable function.
(According to Church’s thesis, the class of Turing computable functions is identical to the class of physically computable functions.)\n\nMany but not all transhumanists expect that superintelligence will be created within the first half of this century. Superintelligence requires two things: hardware and software.\n\nChip manufacturers planning the next generation of microprocessors commonly rely on a well-known empirical regularity known as Moore’s Law. In its original 1965 formulation by Intel co-founder Gordon Moore, it stated that the number of components on a chip doubled every year. In contemporary use, the “law” is commonly understood as referring more generally to a doubling of computing power, or of computing power per dollar. For the past couple of years, the doubling time has hovered between 18 months and two years.\n\nThe human brain’s processing power is difficult to determine precisely, but common estimates range from 10^14 instructions per second (IPS) up to 10^17 IPS or more. The lower estimate, derived by Carnegie Mellon robotics professor Hans Moravec, is based on the computing power needed to replicate the signal processing performed by the human retina and assumes a significant degree of software optimization. The 10^17 IPS estimate is obtained by multiplying the number of neurons in a human brain (~100 billion) by the average number of synapses per neuron (~1,000), by the average spike rate (~100 Hz), and by the ~10 instructions assumed to represent the effect of one action potential traversing one synapse. An even higher estimate would be obtained e.g. if one were to suppose that functionally relevant and computationally intensive processing occurs within compartments of a dendritic tree.\n\nMost experts, Moore included, think that computing power will continue to double about every 18 months for at least another two decades. This expectation is based in part on extrapolation from the past and in part on consideration of developments currently underway in laboratories. The fastest computer under construction is IBM’s Blue Gene/L, which when it is ready in 2005 is expected to perform ~2×10^14 IPS. Thus it appears quite likely that human-equivalent hardware will have been achieved within not much more than a couple of decades.
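\n\nAs a quick sanity check on the hardware arithmetic quoted above, here is a minimal sketch (not part of the original FAQ). It simply multiplies out the figures given in the text for the upper brain estimate and then counts how many Moore’s-Law doublings separate that target from the ~2×10^14 IPS machine mentioned above; the 18-month doubling time is likewise taken from the text, and all of the inputs are rough assumptions rather than measurements.\n\n```python\n# Back-of-the-envelope version of the estimates discussed above (illustrative only).\nimport math\n\n# Upper brain estimate: neurons x synapses per neuron x spike rate x instructions per event\nneurons = 1e11                # ~100 billion neurons (figure assumed in the text)\nsynapses_per_neuron = 1e3     # ~1,000 synapses per neuron\nspike_rate_hz = 100           # ~100 Hz average spike rate\ninstructions_per_event = 10   # ~10 instructions per synaptic event\n\nbrain_ips = neurons * synapses_per_neuron * spike_rate_hz * instructions_per_event\nprint(f'Upper-bound brain estimate: {brain_ips:.0e} IPS')   # prints 1e+17 IPS\n\n# Doublings needed to go from a ~2e14 IPS machine to that 1e17 IPS target,\n# assuming one doubling every 18 months (1.5 years).\ncurrent_ips = 2e14\ndoublings = math.log2(brain_ips / current_ips)\nprint(f'{doublings:.1f} doublings, roughly {doublings * 1.5:.0f} years at 18 months each')\n```\n\nOn these assumptions the gap is only about nine doublings, which is the arithmetic behind the claim that human-equivalent hardware could arrive within a couple of decades.\n\n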
How long it will take to solve the software problem is harder to estimate. One possibility is that progress in computational neuroscience will teach us about the computational architecture of the human brain and what learning rules it employs. We can then implement the same algorithms on a computer. In this approach, the superintelligence would not be completely specified by the programmers but would instead have to grow by learning from experience the same way a human infant does. An alternative approach would be to use genetic algorithms and methods from classical AI. This might result in a superintelligence that bears no close resemblance to a human brain. At the opposite extreme, we could seek to create a superintelligence by uploading a human intellect and then accelerating and enhancing it [see “What is uploading?”]. The outcome of this might be a superintelligence that is a radically upgraded version of one particular human mind.\n\nThe arrival of superintelligence will clearly deal a heavy blow to anthropocentric worldviews. Much more important than its philosophical implications, however, would be its practical effects. Creating superintelligence may be the last invention that humans will ever need to make, since superintelligences could themselves take care of further scientific and technological development. They would do so more effectively than humans. Biological humanity would no longer be the smartest life form on the block.\n\nThe prospect of superintelligence raises many big issues and concerns that we should think deeply about in advance of its actual development. The paramount question is: What can be done to maximize the chances that the arrival of superintelligence will benefit rather than harm us? The range of expertise needed to address this question extends far beyond the community of AI researchers. Neuroscientists, economists, cognitive scientists, computer scientists, philosophers, ethicists, sociologists, science-fiction writers, military strategists, politicians, legislators, and many others will have to pool their insights if we are to deal wisely with what may be the most important task our species will ever have to tackle.\n\nMany transhumanists would like to become superintelligent themselves. This is obviously a long-term and uncertain goal, but it might be achievable either through uploading and subsequent enhancement or through the gradual augmentation of our biological brains, by means of future nootropics (cognitive enhancement drugs), cognitive techniques, IT tools (e.g. wearable computers, smart agents, information filtering systems, visualization software, etc.), neural-computer interfaces, or brain implants.\n\nReferences: Moravec, H. Mind Children (Harvard: Harvard University Press, 1988). \nBostrom, N. “How Long Before Superintelligence?” International Journal of Futures Studies. Vol. 2. (1998).\n\n\n\n\n\n---\n\n\n\n**What is virtual reality?**\n\nA virtual reality is a simulated environment that your senses perceive as real.\n\nTheatre, opera, cinema, and television can be regarded as precursors to virtual reality. The degree of immersion (the feeling of “being there”) that you experience when watching television is quite limited. Watching football on TV doesn’t really compare to being in the stadium. There are several reasons for this. For starters, even a big screen doesn’t fill up your entire visual field. The number of pixels even on high-resolution screens is also too small (typically 1280×1024 rather than about 5000×5000 as would be needed in a flawless wide-angle display). Further, 3D vision is lacking, as are position tracking and focus effects (in reality, the picture on your retina changes continually as your head and eyeballs are moving). To achieve greater realism, a system should ideally include more sensory modalities, such as 3D sound (through headphones) to hear the crowd roaring, and tactile stimulation through a whole-body haptic interface so that you don’t have to miss out on the sensation of sitting on a cold, hard bench for hours.\n\nAn essential element of immersion is interactivity. Watching TV is typically a passive experience. Full-blown virtual reality, by contrast, will be interactive. You will be able to move about in a virtual world, pick up objects you see, and communicate with people you meet. (A real football experience crucially includes the possibility of shouting abuse at the referee.) To enable interactivity, the system must have sensors that pick up on your movements and utterances and adjust the presentation to incorporate the consequences of your actions.\n\nVirtual worlds can be modeled on physical realities.
If you are participating in a remote event through VR, as in the example of the imagined football spectator, you are said to be telepresent at that event. Virtual environments can also be wholly artificial, like cartoons, and have no particular counterpart in physical reality. Another possibility, known as augmented reality, is to have your perception of your immediate surroundings partially overlaid with simulated elements. For example, by wearing special glasses, nametags could be made to appear over the heads of guests at a dinner party, or you could opt to have annoying billboard advertisements blotted out from your view.\n\nMany users of today’s VR systems experience “simulator sickness,” with symptoms ranging from unpleasantness and disorientation to headaches, nausea, and vomiting. Simulator sickness arises because different sensory systems provide conflicting cues. For example, the visual system may provide strong cues of self-motion while the vestibular system in your inner ear tells your brain that your head is stationary. Heavy head-mounted display helmets and lag times between tracking device and graphics update can also cause discomfort. Creating good VR that overcomes these problems is technically challenging.\n\nPrimitive virtual realities have been around for some time. Early applications included training modules for pilots and military personnel. Increasingly, VR is used in computer gaming. Partly because VR is computationally very intensive, simulations are still quite crude. As computational power increases, and as sensors, effectors and displays improve, VR could begin to approximate physical reality in terms of fidelity and interactivity.\n\nIn the long run, VR could unlock limitless possibilities for human creativity. We could construct artificial experiential worlds, in which the laws of physics can be suspended, that would appear as real as physical reality to participants. People could visit these worlds for work, entertainment, or to socialize with friends who may be living on the opposite side of the globe. Uploads [see “What is uploading?”], who could interact with simulated environments directly without the need of a mechanical interface, might spend most of their time in virtual realities.\n\n\n\n\n\n---\n\n\n\n**What is cryonics? Isn't the probability of success too small?**\n\nCryonics is an experimental medical procedure that seeks to save lives by placing in low-temperature storage persons who cannot be treated with current medical procedures and who have been declared legally dead, in the hope that technological progress will eventually make it possible to revive them.\n\nFor cryonics to work today, it is not necessary that we can currently reanimate cryo-preserved patients (which we cannot). All that is needed is that we can preserve patients in a state sufficiently intact that some possible technology, developed in the future, will one day be able to repair the freezing damage and reverse the original cause of deanimation. Only half of the complete cryonics procedure can be scrutinized today; the other half cannot be performed until the (perhaps distant) future.\n\nWhat we know now is that it is possible to stabilize a patient’s condition by cooling him or her in liquid nitrogen (−196 °C). A considerable amount of cell damage is caused by the freezing process. This injury can be minimized by following suspension protocols that involve suffusing the deanimated body with cryoprotectants.
The formation of damaging ice crystals can even be suppressed altogether in a process known as vitrification, in which the patient’s body is turned into a kind of glass. This might sound like an improbable treatment, but the purpose of cryonics is to preserve the structure of life rather than the processes of life, because the life processes can in principle be re-started as long as the information encoded in the structural properties of the body, in particular the brain, is sufficiently preserved. Once frozen, the patient can be stored for millennia with virtually no further tissue degradation.\n\nMany experts in molecular nanotechnology believe that in its mature stage nanotechnology will enable the revival of cryonics patients. Hence, it is possible that the suspended patients could be revived in as little as a few decades from now. The uncertainty about the ultimate technical feasibility of reanimation may very well be dwarfed by the uncertainty in other factors, such as the possibility that you deanimate in the wrong kind of way (by being lost at sea, for example, or by having the brain’s information content erased by Alzheimer’s disease), that your cryonics company goes bust, that civilization collapses, or that people in the future won’t be interested in reviving you. So, a cryonics contract is far short of a survival guarantee. As a cryonicist saying goes, being cryonically suspended is the second worst thing that can happen to you.\n\nWhen we consider the procedures that are routine today and how they might have been viewed in (say) the 1700s, we can begin to see how difficult it is to make a well-founded argument that future medical technology will never be able to reverse the injuries that occur during cryonic suspension. By contrast, your chances of a this-worldly comeback if you opt for one of the popular alternative treatments – such as cremation or burial – are zero. Seen in this light, signing up for cryonics, which is usually done by making a cryonics firm one of the beneficiaries of your life insurance, can look like a reasonable insurance policy. If it doesn’t work, you would be dead anyway. If it works, it may save your life. Your saved life would then likely be extremely long and healthy, given how advanced the state of medicine must be to revive you.\n\nBy no means are all transhumanists signed up for cryonics, but a significant fraction finds that, for them, a cost-benefit analysis justifies the expense. Becoming a cryonicist, however, requires courage: the courage to confront the possibility of your own death, and the courage to resist the peer-pressure from the large portion of the population which currently espouses deathist values and advocates complacency in the face of a continual, massive loss of human life.\n\nReferences: Merkle, R. “The Molecular Repair of the Brain.” Cryonics magazine, Vol. 15, Nos. 1 & 2. (1994). [*http://www.merkle.com/cryo/techFeas.html*](http://www.merkle.com/cryo/techFeas.html)\n\n\n\n\n\n---\n\n\n\n**What is uploading?**\n\nUploading (sometimes called “downloading”, “mind uploading” or “brain reconstruction”) is the process of transferring an intellect from a biological brain to a computer.\n\nOne way of doing this might be by first scanning the synaptic structure of a particular brain and then implementing the same computations in an electronic medium. A brain scan of sufficient resolution could be produced by disassembling the brain atom for atom by means of nanotechnology. Other approaches, such as analyzing pieces of the brain slice by slice in an electron microscope with automatic image processing, have also been proposed. In addition to mapping the connection pattern among the 100 billion-or-so neurons, the scan would probably also have to register some of the functional properties of each of the synaptic interconnections, such as the efficacy of the connection and how stable it is over time (e.g. whether it is short-term or long-term potentiated). Non-local modulators such as neurotransmitter concentrations and hormone balances may also need to be represented, although such parameters likely contain much less data than the neuronal network itself.
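For a sense of scale, the raw size of the map just described can be bounded with a crude back-of-the-envelope calculation. The neuron and synapse counts below are the figures used elsewhere in this FAQ; the bytes-per-synapse figure is purely an illustrative assumption, so the result should be read as an order of magnitude at best.

```python
# Crude estimate of the raw data in a synapse-level brain map.
neurons = 1e11             # ~100 billion neurons (figure used in the text)
synapses_per_neuron = 1e3  # ~1,000 synapses per neuron (figure used in the text)
bytes_per_synapse = 10     # assumed: enough for target ID, efficacy, plasticity state

synapses = neurons * synapses_per_neuron
total_bytes = synapses * bytes_per_synapse
print(f"Synapses: {synapses:.0e}")               # ~1e+14
print(f"Raw map: ~{total_bytes / 1e15:.0f} PB")  # ~1 petabyte, order of magnitude only
```

On this rough reckoning, storing the map is not obviously the bottleneck; the functional neuron models and the computing power needed to run them, discussed next, look like the harder part of the problem.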
In addition to a good three-dimensional map of a brain, uploading will require progress in neuroscience to develop functional models of each species of neuron (how they map input stimuli to outgoing action potentials, and how their properties change in response to activity in learning). It will also require a powerful computer to run the upload, and some way for the upload to interact with the external world or with a virtual reality. (Providing input/output or a virtual reality for the upload appears easy in comparison to the other challenges.)\n\nAn alternative hypothetical uploading method would proceed more gradually: one neuron could be replaced by an implant or by a simulation in a computer outside of the body. Then another neuron, and so on, until eventually the whole cortex has been replaced and the person’s thinking is implemented on entirely artificial hardware. (To do this for the whole brain would almost certainly require nanotechnology.)\n\nA distinction is sometimes made between destructive uploading, in which the original brain is destroyed in the process, and non-destructive uploading, in which the original brain is preserved intact alongside the uploaded copy. It is a matter of debate under what conditions personal identity would be preserved in destructive uploading. Many philosophers who have studied the problem think that at least under some conditions, an upload of your brain would be you. A widely accepted position is that you survive so long as certain information patterns are conserved, such as your memories, values, attitudes, and emotional dispositions, and so long as there is causal continuity so that earlier stages of yourself help determine later stages of yourself. Views differ on the relative importance of these two criteria, but they can both be satisfied in the case of uploading. For the continuation of personhood, on this view, it matters little whether you are implemented on a silicon chip inside a computer or in that gray, cheesy lump inside your skull, assuming both implementations are conscious.\n\nTricky cases arise, however, if we imagine that several similar copies are made of your uploaded mind. Which one of them is you? Are they all you, or are none of them you? Who owns your property? Who is married to your spouse? Philosophical, legal, and ethical challenges abound. Maybe these will become hotly debated political issues later in this century.\n\nA common misunderstanding about uploads is that they would necessarily be “disembodied” and that this would mean that their experiences would be impoverished. Uploading according to this view would be the ultimate escapism, one that only neurotic body-loathers could possibly feel tempted by. But an upload’s experience could in principle be identical to that of a biological human.
An upload could have a virtual (simulated) body giving the same sensations and the same possibilities for interaction as a non-simulated body. With advanced virtual reality, uploads could enjoy food and drink, and upload sex could be as gloriously messy as one could wish. And uploads wouldn’t have to be confined to virtual reality: they could interact with people on the outside and even rent robot bodies in order to work in or explore physical reality.\n\nPersonal inclinations regarding uploading differ. Many transhumanists have a pragmatic attitude: whether they would like to upload or not depends on the precise conditions in which they would live as uploads and what the alternatives are. (Some transhumanists may also doubt whether uploading will be possible.) Advantages of being an upload would include:\n\nUploads would not be subject to biological senescence.\n\nBack-up copies of uploads could be created regularly so that you could be re-booted if something bad happened. (Thus your lifespan would potentially be as long as the universe’s.)\n\nYou could potentially live much more economically as an upload since you wouldn’t need physical food, housing, transportation, etc.\n\nIf you were running on a fast computer, you would think faster than in a biological implementation. For instance, if you were running on a computer a thousand times more powerful than a human brain, then you would think a thousand times faster (and the external world would appear to you as if it were slowed down by a factor of a thousand). You would thus get to experience more subjective time, and live more, during any given day.\n\nYou could travel at the speed of light as an information pattern, which could be convenient in a future age of large-scale space settlements.\n\nRadical cognitive enhancements would likely be easier to implement in an upload than in an organic brain.\n\nA couple of other points about uploading:\n\nUploading should work for cryonics patients provided their brains are preserved in a sufficiently intact state.\n\nUploads could reproduce extremely quickly (simply by making copies of themselves). This implies that resources could very quickly become scarce unless reproduction is regulated.\n\n\n\n\n\n---\n\n\n\n**What is the singularity?**\n\nSome thinkers conjecture that there will be a point in the future when the rate of technological development becomes so rapid that the progress-curve becomes nearly vertical. Within a very brief time (months, days, or even just hours), the world might be transformed almost beyond recognition. This hypothetical point is referred to as the singularity. The most likely cause of a singularity would be the creation of some form of rapidly self-enhancing greater-than-human intelligence.\n\nThe concept of the singularity is often associated with Vernor Vinge, who regards it as one of the more probable scenarios for the future. (Earlier intimations of the same idea can be found e.g. in John von Neumann, as paraphrased by Ulam 1958, and in I. J. Good 1965.) Provided that we manage to avoid destroying civilization, Vinge thinks that a singularity is likely to happen as a consequence of advances in artificial intelligence, large systems of networked computers, computer-human integration, or some other form of intelligence amplification. Enhancing intelligence will, in this scenario, at some point lead to a positive feedback loop: smarter systems can design systems that are even more intelligent, and can do so more swiftly than the original human designers. 
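As a purely schematic illustration of such a feedback loop (every number below is invented for the toy model; none comes from the text), consider a sketch in which each generation of systems designs a successor that is twice as capable and, being smarter, completes its design work in half the time:

```python
# Toy model of recursive self-improvement (schematic numbers only).
intelligence = 1.0   # "human-level" = 1.0, in arbitrary units (assumed)
design_time = 10.0   # years the first human-level system needs (assumed)
elapsed = 0.0

for generation in range(1, 11):
    elapsed += design_time
    intelligence *= 2.0   # each successor is twice as capable (assumed)
    design_time /= 2.0    # smarter designers work proportionally faster (assumed)
    print(f"gen {generation:2d}: year {elapsed:5.2f}, capability x{intelligence:g}")

# Total elapsed time converges toward 20 years while capability grows without bound,
# which is the sense in which the progress curve looks "nearly vertical".
```

Whether real systems would behave anything like this is precisely what is in dispute; the sketch only shows why a constant-factor feedback assumption produces an abrupt transition rather than a gradual one.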
This positive feedback effect would be powerful enough to drive an intelligence explosion that could quickly lead to the emergence of a superintelligent system of surpassing abilities.\n\nThe singularity-hypothesis is sometimes paired with the claim that it is impossible for us to predict what comes after the singularity. A post-singularity society might be so alien that we can know nothing about it. One exception might be the basic laws of physics, but even there it is sometimes suggested that there may be undiscovered laws (for instance, we don’t yet have an accepted theory of quantum gravity) or poorly understood consequences of known laws that could be exploited to enable things we would normally think of as physically impossible, such as creating traversable wormholes, spawning new “basement” universes, or traveling backward in time. However, unpredictability is logically distinct from abruptness of development and would need to be argued for separately.\n\nTranshumanists differ widely in the probability they assign to Vinge’s scenario. Almost all of those who do think that there will be a singularity believe it will happen in this century, and many think it is likely to happen within several decades.\n\nReferences: Good, I. J. “Speculations Concerning the First Ultraintelligent Machine,” in Advances in Computers, Vol. 6, Franz L. Alt and Morris Rubinoff, eds (Academic Press, 1965), pp. 31-88. Vinge, V. “The Coming Technological Singularity,” Whole Earth Review, Winter Issue (1993). [*http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html*](http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html) Ulam, S. “Tribute to John von Neumann,” Bulletin of the American Mathematical Society, Vol. 64, Nr. 3, Part II, pp. 1-49 (1958).\n\n\n\n\n\n---\n\n\n\n**Transhumanism and Nature: Why do transhumanists want to live longer?**\n\nThis is a personal matter, a matter of the heart. Have you ever been so happy that you felt like melting into tears? Has there been a moment in your life of such depth and sublimity that the rest of existence seemed like dull, gray slumber from which you had only just woken up?\n\nIt is so easy to forget how good things can be when they are at their best. But on those occasions when we do remember – whether it comes from the total fulfillment of being immersed in creative work or from the tender ecstasy of reciprocated love – then we realize just how valuable every single minute of existence can be, when it is this good. And you might have thought to yourself, “It ought to be like this always. Why can’t this last forever?”\n\nWell, maybe – just maybe – it could.\n\nWhen transhumanists seek to extend human life, they are not trying to add a couple of extra years at a care home spent drooling at one’s shoes. The goal is more healthy, happy, productive years. Ideally, everybody should have the right to choose when and how to die – or not to die. Transhumanists want to live longer because they want to do, learn, and experience more; have more fun and spend more time with loved ones; continue to grow and mature beyond the paltry eight decades allotted to us by our evolutionary past; and in order to get to see for themselves what wonders the future might hold. 
As the sales pitch for one cryonics organization goes:\n\n“The conduct of life and the wisdom of the heart are based upon time; in the last quartets of Beethoven, the last words and works of ‘old men’ like Sophocles and Russell and Shaw, we see glimpses of a maturity and substance, an experience and understanding, a grace and a humanity, that isn’t present in children or in teenagers. They attained it because they lived long; because they had time to experience and develop and reflect; time that we might all have. Imagine such individuals – a Benjamin Franklin, a Lincoln, a Newton, a Shakespeare, a Goethe, an Einstein [and a Gandhi] – enriching our world not for a few decades but for centuries. Imagine a world made of such individuals. It would truly be what Arthur C. Clarke called ‘Childhood’s End’ – the beginning of the adulthood of humanity.” (Alcor Life Extension Foundation)\n\nReferences: Alcor Life Extension Foundation. [*http://www.alcor.org/*](http://www.alcor.org/)\n\n\n\n\n\n---\n\n\n\n**Isn't this tampering with nature?**\n\nAbsolutely, and it is nothing to be ashamed of. It is often right to tamper with nature. One could say that manipulating nature is an important part of what civilization and human intelligence is all about; we have been doing it since the invention of the wheel. Alternatively, one could say that since we are part of nature, everything we do and create is in a sense natural too. In any case, there is no moral reason why we shouldn’t intervene in nature and improve it if we can, whether by eradicating diseases, improving agricultural yields to feed a growing world population, putting communication satellites into orbit to provide homes with news and entertainment, or inserting contact lenses in our eyes so we can see better. Changing nature for the better is a noble and glorious thing for humans to do. (On the other hand, to “pave paradise to put up a parking lot” would not be glorious; the qualification “for the better” is essential.) [See also “Are transhumanist technologies environmentally sound?”]\n\nIn many particular cases, of course, there are sound practical reasons for relying on “natural” processes. The point is that we cannot decide whether something is good or bad simply by asking whether it is natural or not. Some natural things are bad, such as starvation, polio, and being eaten alive by intestinal parasites. Some artificial things are bad, such as DDT-poisoning, car accidents, and nuclear war.\n\nTo pick a topical example, consider the debate about human cloning. Some argue that cloning humans is not unnatural because human clones are essentially just identical twins. They were right in this, of course, although one could also correctly remark that it is not natural for identical twins to be of different ages. But the more fundamental point is that it doesn’t matter whether human clones are natural or not. When thinking about whether to permit human reproductive cloning, we have to compare the various possible desirable consequences with the various possible undesirable consequences. We then have to try to estimate the likelihood of each of these consequences. This kind of deliberation is much harder than simply dismissing cloning as unnatural, but it is also more likely to result in good decisions.\n\nThese remarks hopefully should seem trivial. 
Yet it is astonishing how often polemicists can still get away with arguments that are basically (thinly disguised) ways of saying, “It is good because it’s the way it has always been!” or “It is good because that’s the way Nature made it!”\n\n\n\n\n\n---\n\n\n\n**Will transhuman technologies make us inhuman?**\n\nThe important thing is not to be human but to be humane. Though we might wish to believe that Hitler was an inhuman monster, he was, in fact, a human monster; and Gandhi is noted not for being remarkably human but for being remarkably humane.\n\nThe attributes of our species are not exempt from ethical examination in virtue of being “natural” or “human”. Some human attributes, such as empathy and a sense of fairness, are positive; others, such as tendencies toward tribalism or groupishness, have left deep scars on human history. If there is value in being human, it does not come from being “normal” or “natural”, but from having within us the raw material for being humane: compassion, a sense of humor, curiosity, the wish to be a better person. Trying to preserve “humanness,” rather than cultivating humaneness, would idolize the bad along with the good. One might say that if “human” is what we are, then “humane” is what we, as humans, wish we were. Human nature is not a bad place to start that journey, but we can’t fulfill that potential if we reject any progress past the starting point.\n\n\n\n\n\n---\n\n\n\n**Isn't death part of the natural order of things?**\n\nTranshumanists insist that whether something is natural or not is irrelevant to whether it is good or desirable [see also “Isn’t this tampering with nature?”, “Will extended life worsen overpopulation problems?”, and “Why do transhumanists want to live longer?”].\n\nAverage human life span hovered between 20 and 30 years for most of our species’ history. Most people today are thus living highly unnaturally long lives. Because of the high incidence of infectious disease, accidents, starvation, and violent death among our ancestors, very few of them lived much beyond 60 or 70. There was therefore little selection pressure to evolve the cellular repair mechanisms (and pay their metabolic costs) that would be required to keep us going beyond our meager three score and ten. As a result of these circumstances in the distant past, we now suffer the inevitable decline of old age: damage accumulates at a faster pace than it can be repaired; tissues and organs begin to malfunction; and then we keel over and die.\n\nThe quest for immortality is one of the most ancient and deep-rooted of human aspirations. It has been an important theme in human literature from the very earliest preserved written story, The Epic of Gilgamesh, and in innumerable narratives and myths ever since. It underlies the teachings of world religions about spiritual immortality and the hope of an afterlife. If death is part of the natural order, so too is the human desire to overcome death.\n\nBefore transhumanism, the only hope of evading death was through reincarnation or otherworldly resurrection. Those who viewed such religious doctrines as figments of our own imagination had no alternative but to accept death as an inevitable fact of our existence. Secular worldviews, including traditional humanism, would typically include some sort of explanation of why death was not such a bad thing after all.
Some existentialists even went so far as to maintain that death was necessary to give life meaning!\n\nThat people should make excuses for death is understandable. Until recently there was absolutely nothing anybody could do about it, and it made some degree of sense then to create comforting philosophies according to which dying of old age is a fine thing (“deathism”). If such beliefs were once relatively harmless, and perhaps even provided some therapeutic benefit, they have now outlived their purpose. Today, we can foresee the possibility of eventually abolishing aging and we have the option of taking active measures to stay alive until then, through life extension techniques and, as a last resort, cryonics. This makes the illusions of deathist philosophies dangerous, indeed fatal, since they teach helplessness and encourage passivity.\n\nEspousing a deathist viewpoint tends to go with a certain element of hypocrisy. It is to be hoped and expected that a good many of death’s apologists, if they were one day presented with the concrete choice between (A) getting sick, old, and dying, and (B) being given a new shot of life to stay healthy, vigorous and to remain in the company of friends and loved ones to participate in the unfolding of the future, would, when push came to shove, choose this latter alternative.\n\nIf some people would still choose death, that’s a choice that is of course to be regretted, but nevertheless this choice must be respected. The transhumanist position on the ethics of death is crystal clear: death should be voluntary. This means that everybody should be free to extend their lives and to arrange for cryonic suspension of their deanimated bodies. It also means that voluntary euthanasia, under conditions of informed consent, is a basic human right.\n\nIt may turn out to be impossible to live forever, strictly speaking, even for those who are lucky enough to survive to such a time when technology has been perfected, and even under ideal conditions. The amount of matter and energy that our civilization can lay its hands on before they recede forever beyond our reach (due to the universe’s expansion) is finite in the current most favored cosmological models. The heat death of the universe is thus a matter of some personal concern to optimistic transhumanists!\n\nIt is too early to tell whether our days are necessarily numbered. Cosmology and fundamental physics are still incomplete and in theoretical flux; theoretical possibilities for infinite information processing (which might enable an upload to live an infinite life) seem to open and close every few years. We have to live with this uncertainty, along with the much greater uncertainty about whether any of us will manage to avoid dying prematurely, before technology has become mature.\n\n\n\n\n\n---\n\n\n\n**Are transhumanist technologies environmentally sound?**\n\nThe environmental impact of a technology depends on how it is used. Safeguarding the natural environment requires political will as well as good technology. The technologies necessary for realizing the transhumanist vision can be environmentally sound. Information technology and medical procedures, for example, tend to be relatively clean.\n\nTranshumanists can in fact make a stronger claim regarding the environment: that current technologies are unsustainable. We are using up essential resources, such as oil, metal ores, and atmospheric pollution capacity, faster than they regenerate. 
At the present rate of consumption, we look set to exhaust these resources some time in this century. Any realistic alternatives that have been proposed involve taking technology to a more advanced level. Not only are transhumanist technologies ecologically sound, they may be the only environmentally viable option for the long term.\n\nWith mature molecular manufacturing [see “What is molecular nanotechnology?”], we will have a way of producing most any commodity without waste or pollution. Nanotechnology would also eventually make it economically feasible to build space-based solar plants, to mine extraterrestrial bodies for ore and minerals and to move heavy industries off-earth. The only truly long-term solution to resource shortage is space colonization.\n\nFrom a transhumanist point of view, humans and our artifacts and enterprises are part of the extended biosphere. There is no fundamental dichotomy between humanity and the rest of the world. One could say that nature has, in humanity, become conscious and self-reflective. We have the power to dream of better ways for things to be and to deliberately set out to build our dreams, but we also have the responsibility to use this power in ways that are sustainable and that protect essential values.\n\n\n\n\n\n---\n\n\n\n**Transhumanism as a Philosophical and Cultural Viewpoint: What are the philosophical and cultural antecedents of transhumanism?**\n\nThe human desire to acquire posthuman attributes is as ancient as the human species itself. Humans have always sought to expand the boundaries of their existence, be it ecologically, geographically, or mentally. There is a tendency in at least some individuals always to try to find a way around every limitation and obstacle.\n\nCeremonial burial and preserved fragments of religious writings show that prehistoric humans were deeply disturbed by the death of their loved ones and sought to reduce the cognitive dissonance by postulating an afterlife. Yet, despite the idea of an afterlife, people still endeavored to extend life. In the Sumerian Epic of Gilgamesh (approx. 2000 B.C.), a king embarks on a quest to find an herb that can make him immortal. It’s worth noting that it was assumed both that mortality was not inescapable in principle, and that there existed (at least mythological) means of overcoming it. That people really strove to live longer and richer lives can also be seen in the development of systems of magic and alchemy; lacking scientific means of producing an elixir of life, one resorted to magical means. This strategy was adopted, for example, by the various schools of esoteric Taoism in China, which sought physical immortality and control over or harmony with the forces of nature.\n\nThe Greeks were ambivalent about humans transgressing our natural confines. On the one hand, they were fascinated by the idea. We see it in the myth of Prometheus, who stole the fire from Zeus and gave it to the humans, thereby permanently improving the human condition. And in the myth of Daedalus, the gods are repeatedly challenged, quite successfully, by a clever engineer and artist, who uses non-magical means to extend human capabilities. On the other hand, there is also the concept of hubris: that some ambitions are off-limits and would backfire if pursued.
In the end, Daedalus’ enterprise ends in disaster (not, however, because it was punished by the gods but owing entirely to natural causes).\n\nGreek philosophers made the first, stumbling attempts to create systems of thought that were based not purely on faith but on logical reasoning. Socrates and the sophists extended the application of critical thinking from metaphysics and cosmology to include the study of ethics and questions about human society and human psychology. Out of this inquiry arose cultural humanism, a very important current throughout the history of Western science, political theory, ethics, and law.\n\nIn the Renaissance, human thinking was awoken from medieval otherworldliness and the scholastic modes of reasoning that had predominated for a millennium, and the human being and the natural world again became legitimate objects of study. Renaissance humanism encouraged people to rely on their own observations and their own judgment rather than to defer in every matter to religious authorities. Renaissance humanism also created the ideal of the well-rounded personality, one that is highly developed scientifically, morally, culturally, and spiritually. A milestone is Giovanni Pico della Mirandola’s Oration on the Dignity of Man (1486), which states that man does not have a ready form but that it is man’s task to form himself. And crucially, modern science began to take form then, through the works of Copernicus, Kepler, and Galileo.\n\nThe Age of Enlightenment can be said to have started with the publication of Francis Bacon’s Novum Organum, “the new tool” (1620), in which he proposes a scientific methodology based on empirical investigation rather than a priori reasoning. Bacon advocates the project of “effecting all things possible,” by which he meant the achievement of mastery over nature in order to improve the condition of human beings. The heritage from the Renaissance combines with the influences of Isaac Newton, Thomas Hobbes, John Locke, Immanuel Kant, Marquis de Condorcet, and others to form the basis for rational humanism, which emphasizes science and critical reasoning – rather than revelation and religious authority – as ways of learning about the natural world and the destiny and nature of man and of providing a grounding for morality. Transhumanism traces its roots to this rational humanism.\n\nIn the 18th and 19th centuries we begin to see glimpses of the idea that even humans themselves can be developed through the appliance of science. Benjamin Franklin and Voltaire speculated about extending human life span through medical science. Especially after Darwin’s theory of evolution, atheism or agnosticism came to be seen as increasingly attractive alternatives. However, the optimism of the late 19th century often degenerated into narrow-minded positivism and the belief that progress was automatic. When this view collided with reality, some people reacted by turning to irrationalism, concluding that since reason was not sufficient, it was worthless. This resulted in the anti-technological, anti-intellectual sentiments whose sequelae we can still witness today in some postmodernist writers, in the New Age movement, and among the neo-Luddite wing of the anti-globalization agitators.\n\nA significant stimulus in the formation of transhumanism was the essay Daedalus: Science and the Future (1923) by the British biochemist J. B. S. Haldane, in which he discusses how scientific and technological findings may come to affect society and improve the human condition. 
This essay set off a chain reaction of future-oriented discussions, including The World, the Flesh and the Devil by J. D. Bernal (1929), which speculates about space colonization and bionic implants as well as mental improvements through advanced social science and psychology; the works of Olaf Stapledon; and the essay “Icarus: the Future of Science” (1924) by Bertrand Russell, who took a more pessimistic view, arguing that without more kindliness in the world, technological power will mainly serve to increase men’s ability to inflict harm on one another. Science fiction authors such as H. G. Wells and Olaf Stapledon also got many people thinking about the future evolution of the human race. One frequently cited work is Aldous Huxley’s Brave New World (1932), a dystopia where psychological conditioning, promiscuous sexuality, biotechnology, and opiate drugs are used to keep the population placid and contented in a static, totalitarian society ruled by an elite consisting of ten “world controllers”. Huxley’s novel warns of the dehumanizing potential of technology being used to arrest growth and to diminish the scope of human nature rather than enhance it.\n\nThe Second World War changed the direction of some of those currents that result in today’s transhumanism. The eugenics movement, which had previously found advocates not only among racists on the extreme right but also among socialists and progressivist social democrats, was thoroughly discredited. The goal of creating a new and better world through a centrally imposed vision became taboo and passé; and the horrors of the Stalinist Soviet Union again underscored the dangers of such an approach. Mindful of these historical lessons, transhumanists are often deeply suspicious of collectively orchestrated change, arguing instead for the right of individuals to redesign themselves and their own descendants.\n\nIn the postwar era, optimistic futurists tended to direct their attention more toward technological progress, such as space travel, medicine, and computers. Science began to catch up with speculation. Transhumanist ideas during this period were discussed and analyzed chiefly in the literary genre of science fiction. Authors such as Arthur C. Clarke, Isaac Asimov, Robert Heinlein, Stanislaw Lem, and later Bruce Sterling, Greg Egan, and Vernor Vinge have explored various aspects of transhumanism in their writings and contributed to its proliferation.\n\nRobert Ettinger played an important role in giving transhumanism its modern form. The publication of his book The Prospect of Immortality in 1964 led to the creation of the cryonics movement. Ettinger argued that since medical technology seems to be constantly progressing, and since chemical activity comes to a complete halt at low temperatures, it should be possible to freeze a person today and preserve the body until such a time when technology is advanced enough to repair the freezing damage and reverse the original cause of deanimation. In a later work, Man into Superman (1972), he discussed a number of conceivable improvements to the human being, continuing the tradition started by Haldane and Bernal.\n\nAnother influential early transhumanist was F. M. Esfandiary, who later changed his name to FM-2030. One of the first professors of future studies, FM taught at the New School for Social Research in New York in the 1960s and formed a school of optimistic futurists known as the UpWingers. In his book Are you a transhuman? 
(1989), he described what he saw as the signs of the emergence of the transhuman person, in his terminology indicating an evolutionary link towards posthumanity. (A terminological aside: an early use of the word “transhuman” was in Ettinger’s 1972 book, though Ettinger doesn’t now remember where he first encountered the term. The word “transhumanism” may have been coined by Julian Huxley in New Bottles for New Wine (1957); the sense in which he used it, however, was not quite the contemporary one.) Further, its use is evidenced in T. S. Eliot’s writing around the same time (Vita-More, 2004). And it is known that Dante Alighieri referred to the notion of the transhuman in historical writings.\n\n“Where the word ‘transhumanism’ came from, no one is quite sure, as it, or parts of it, have been used at different times for different meanings. The central and spirited ideas can be traced from the transition and transformation of humans in overcoming odds.  However, the very first known reference to the transhumanism was written by poet Dante Alighieri in his magnum opus “Paradiso” of the *Divina Commedia*. (1312)  It is in this masterpiece that Dante invented the word “transhumanized” to describe what happens to humans through a ‘beatific vision.'[i]\n\n“Centuries later, T. S. Eliot, recipient of the Nobel Prize for Literature (1948), wrote about the isolation of the human condition in *The Cocktail Party*. ‘You and I don’t know the process by which the human is Transhumanized: what do we know of the kind of suffering they must undergo on the way of illumination?’ [ii] Biologist Julian Huxley wrote about evolutionary humanism, ‘… ‘transhumanism:’ … once there are enough people who can truly say that, the human species will be on the threshold of a new kind of existence, as different from ours as ours is from that of Peking man’ (1957)” (Vita-More, 2004).\n\nIn the 1970s and 1980s, several organizations sprang up for life extension, cryonics, space colonization, science fiction, media arts, and futurism. They were often isolated from one another, and while they shared similar views and values, they did not yet amount to any unified coherent worldview. One prominent voice from a standpoint with strong transhumanist elements during this era came from Marvin Minsky, an eminent artificial intelligence researcher.\n\nIn 1986, Eric Drexler published Engines of Creation, the first book-length exposition of molecular manufacturing. (The possibility of nanotechnology had been anticipated by Nobel Laureate physicist Richard Feynman in a now-famous after-dinner address in 1959 entitled “There is Plenty of Room at the Bottom”.) In this groundbreaking work, Drexler not only argued for the feasibility of assembler-based nanotechnology but also explored its consequences and began charting the strategic challenges posed by its development. Drexler’s later writings supplied more technical analyses that confirmed his initial conclusions. To prepare the world for nanotechnology and work towards its safe implementation, he founded the Foresight Institute together with his then-wife Christine Peterson in 1986.\n\nEd Regis’s Great Mambo Chicken and the Transhuman Condition (1990) took a humorous look at transhumanism’s hubristic scientists and philosophers. Another couple of influential books were roboticist Hans Moravec’s seminal Mind Children (1988) about the future development of machine intelligence, and more recently Ray Kurzweil’s bestselling Age of Spiritual Machines (1999), which presented ideas similar to Moravec’s.
Frank Tipler’s Physics of Immortality (1994), inspired by the writings of Pierre Teilhard de Chardin (a paleontologist and Jesuit theologian who saw an evolutionary telos in the development of an encompassing noosphere, a global consciousness), argued that advanced civilizations might come to have a shaping influence on the future evolution of the cosmos, although some were put off by Tipler’s attempt to blend science with religion. Many science advocates, such as Carl Sagan, Richard Dawkins, Steven Pinker, and Douglas Hofstadter, have also helped pave the way for public understanding of transhumanist ideas.\n\nIn 1988, the first issue of Extropy Magazine was published by Max More and Tom Morrow, and in 1992 they founded the Extropy Institute (the term “extropy” being coined as an informal opposite of “entropy”). The magazine and the institute served as catalysts, bringing together disparate groups of people with futuristic ideas. More wrote the first definition of transhumanism in its modern sense, and created his own distinctive brand of transhumanism, which emphasized individualism, dynamic optimism, and the market mechanism in addition to technology. The transhumanist arts genre became more self-aware through the works of the artist Natasha Vita-More. During this time, an intense exploration of ideas also took place on various Internet mailing lists. Influential early contributors included Anders Sandberg (then a neuroscience doctoral student) and Robin Hanson (an economist and polymath) among many others.\n\nThe World Transhumanist Association was founded in 1998 by Nick Bostrom and David Pearce to act as a coordinating international nonprofit organization for all transhumanist-related groups and interests, across the political spectrum. The WTA focused on supporting transhumanism as a serious academic discipline and on promoting public awareness of transhumanist thinking. The WTA began publishing the Journal of Evolution and Technology, the first scholarly peer-reviewed journal for transhumanist studies, in 1999 (which is also the year when the first version of this FAQ was published). In 2001, the WTA adopted its current constitution and is now governed by an executive board that is democratically elected by its full membership. James Hughes (a former WTA Secretary), among others, helped lift the WTA to its current, more mature stage, and a strong team of volunteers has been building up the organization to what it is today.\n\nHumanity+ developed later as a rebranding of transhumanism, forming Humanity+ as a cooperative organization that seeks to pull together the leaders of transhumanism: from the early 1990s, Max More, Natasha Vita-More, and Anders Sandberg; from the late 1990s, Nick Bostrom, David Pearce, and James Hughes; and from the 2000s, James Clement, Ben Goertzel, Giulio Prisco, and many others. In short, it is based on the early work of the Extropy Institute and the WTA.\n\nIn the past couple of years, the transhumanist movement has been growing fast and furiously. Local groups are mushrooming in all parts of the world. Awareness of transhumanist ideas is spreading. Transhumanism is undergoing the transition from being the preoccupation of a fringe group of intellectual pioneers to becoming a mainstream approach to understanding the prospects for technological transformation of the human condition. That technological advances will help us overcome many of our current human limitations is no longer an insight confined to a few handfuls of techno-savvy visionaries.
Yet understanding the consequences of these anticipated possibilities and the ethical choices we will face is a momentous challenge that humanity will be grappling with over the coming decades. The transhumanist tradition has produced a (still evolving) body of thinking to illuminate these complex issues that is unparalleled in its scope and depth of foresight.\n\nReferences: \nBacon, F. Novum Organum. (New York: Colonial Press, 1899 [1620]). [*http://www.constitution.org/bacon/nov\\_org.htm*](http://www.constitution.org/bacon/nov_org.htm) \nBernal, J. D. The World, the Flesh & the Devil: An Enquiry into the Future of the Three Enemies of the Rational Soul. (Bloomington: Indiana University Press, 1969 [1929]). [*http://www.santafe.edu/~shalizi/Bernal/*](http://www.santafe.edu/~shalizi/Bernal/) \nDrexler, E. The Engines of Creation: The Coming Era of Nanotechnology. (New York: Anchor Books, 1986). \n[*http://www.foresight.org/EOC/index.html*](http://www.foresight.org/EOC/index.html) \nAlcor Life Extension foundation [*http://www.alcor.org*](http://www.alcor.org/) \nExtropy Institute. [*http://www.extropy.org*](http://www.extropy.org/) \nFeynman, R. “There is Plenty of Room at the Bottom.” Presentation given on December 29th, 1959 at the annual meeting of the American Physical Society at the California Institute of Technology, published in Engineering and Science, Feb 1960. [*http://www.zyvex.com/nanotech/feynman.html*](http://www.zyvex.com/nanotech/feynman.html) \nFM-2030. Are You a Transhuman? (New York: Warner Books, 1989). \nForesight Institute. [*http://www.foresight.org*](http://www.foresight.org/) \nHaldane, J. B. S. Daedalus or Science and the Future. (New York: E. P. Dutton & Co., Inc., 1924 [1923]). [*http://www.santafe.edu/~shalizi/Daedalus.html*](http://www.santafe.edu/~shalizi/Daedalus.html) \nHuxley, A. Brave New World. (San Bernadino: The Borgo Press, 1989 [1932]). \nHuxley, J. New Bottles for New Wine. (New York: Harper, 1957). \nJournal of Evolution and Technology. [*http://www.jetpress.org/*](http://www.jetpress.org/) \nMirandola, Giovanni Pico. Oration on the Dignity of Man. (1486). [*http://www.santafe.edu/~shalizi/Mirandola/*](http://www.santafe.edu/~shalizi/Mirandola/) \nMoravec, H. Mind Children (Harvard: Harvard University Press, 1988). \nRegis, E. Great Mambo Chicken and the Transhuman Condition (New York: Perseus, 1990). \nRussell, B. Icarus or The Future of Science. (New York: E. P Dutton & Company, 1924). [*http://www.santafe.edu/~shalizi/Icarus.html*](http://www.santafe.edu/~shalizi/Icarus.html) \nTipler, F. The Physics of Immortality (New York: Doubleday, 1994). \nVita-More, N. (2004). “Deconstructing Transhumanism”. [Research Paper, University of Plymouth. Presented in Gijon, Spain.]. Retrieved August 8, 2019. \nWorld Transhumanist Association. [*http://www.transhumanism.org*](http://www.transhumanism.org/)\n\n\n\n\n\n---\n\n\n\n**What currents are there within transhumanism?**\n\nIs Extropy (or extropianism) the same as transhumanism?\n\nThere is a rich variety of opinions within transhumanist thought. Many of the leading transhumanist thinkers hold complex and subtle views that are under constant revision and development and which often defy easy labeling. Some distinctive – although not always sharply defined – currents or flavors of transhumanism can nevertheless be discerned. The original worldview and philosophy of transhumanism stems from the Principles of Extropy:\n\nExtropy (The philosophy of Extropy). 
The name is derived from the term “extropy”, coined by T. O. Morrow in 1988, referring to “the extent of a system’s intelligence, information, order, vitality, and capacity for improvement”. The transhumanist philosophy of Extropy is defined by the Extropian Principles, a text authored by Max More (1998), who co-founded the Extropy Institute together with Morrow. Version 3.0 of this document lists seven principles that are important for transhumanists in the development of their thinking: Perpetual Progress, Self-Transformation, Practical Optimism, Intelligent Technology, Open Society, Self-Direction, and Rational Thinking. These are meant to codify general attitudes rather than specific dogmas.\n\nDemocratic transhumanism. This strand of transhumanism advocates both the right to use technology to transcend the limitations of the human body and the extension of democratic concerns beyond formal legal equality and liberty, into economic and cultural liberty and equality, in order to protect values such as equality, solidarity, and democratic participation in a transhuman context (Hughes 2002).\n\nThe Hedonistic Imperative. Another transhumanist current is represented by advocates of “paradise-engineering” as outlined in David Pearce (2003). Pearce argues on ethical grounds for a biological program to eliminate all forms of cruelty, suffering, and malaise. In the short-run, our emotional lives might be enriched by designer mood-drugs (i.e. not street-drugs). In the long-term, however, Pearce suggests that it will be technically feasible to rewrite the vertebrate genome, redesign the global ecosystem, and use biotechnology to abolish suffering throughout the living world. Pearce believes “post-Darwinian superminds” will enjoy genetically pre-programmed well-being and be animated by “gradients of bliss”.\n\nSingularitarianism. Singularitarian transhumanists focus on transhuman technologies that can potentially lead to the rise of smarter-than-human intelligence, such as brain-computer interfacing and Artificial Intelligence. Since our present-day intelligence is ultimately the source of our technology, singularitarians expect the technological creation of smarter-than-human intelligence to be a watershed moment in history, with an impact more comparable to the rise of Homo sapiens than to past breakthroughs in technology. Singularitarians stress the importance of ensuring that such intelligence be coupled with ethical sensibility (Yudkowsky 2003) [see also “What is the singularity?”].\n\nTheoretical transhumanism. This is not so much a specific version of a transhumanism as a research direction: the study of the constraints, possibilities, and consequences of potential future trajectories of technological and human development, using theoretical tools from economics, game theory, evolution theory, probability theory, and “theoretical applied science” i.e. the study of physically possible systems designs that we cannot yet build. For some examples, see Bostrom (2002, 2003a) and Hanson (1994, 1998). Investigations of ethical issues related to the transhumanist project – the project of creating a world where as many people as possible have the option of becoming posthuman – can also be included under this heading (see e.g. Bostrom 2003b).\n\nSalon transhumanism. Transhumanism as a network of people who share certain interests and like to spend long hours conversing about transhumanist matters on email lists or face-to-face.\n\nTranshumanism in arts and culture. 
Transhumanism as a source of inspiration in artistic creation and cultural activities, including efforts to communicate transhumanist ideas and values to a wider audience [see also “What kind of transhumanist art is there?”].\n\nReferences: \nBostrom, N. “Existential Risks: Analyzing Human Extinction Scenarios.” Journal of Evolution and Technology. (2002), Vol. 9. [*http://jetpress.org/volume9/risks.html*](http://jetpress.org/volume9/risks.html) \nBostrom, N. “Are You Living In A Computer Simulation?” Philosophical Quarterly. (2003a), Vol. 53, No. 211, pp. 243-255. [*http://www.simulation-argument.com/simulation.html*](http://www.simulation-argument.com/simulation.html) \nBostrom, N. “Human Genetic Enhancements: A Transhumanist Perspective.” The Journal of Value Inquiry. (2003b), forthcoming. \nHanson, R. “What if Uploads Come First: The Crack of a Future Dawn.” Extropy, Vol. 6, No. 2 (1994). [*http://hanson.gmu.edu/uploads.html*](http://hanson.gmu.edu/uploads.html) \nHanson, R. “Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization.” (1998). [*http://hanson.gmu.edu/filluniv.pdf*](http://hanson.gmu.edu/filluniv.pdf) \nHughes, J. “Democratic Transhumanism.” Transhumanity, April 28, 2002. [*http://changesurfer.com/Acad/DemocraticTranshumanism.htm*](http://changesurfer.com/Acad/DemocraticTranshumanism.htm) \nPearce, D. The Hedonistic Imperative (version of 2003). [*http://www.hedweb.com/hedethic/hedonist.htm*](http://www.hedweb.com/hedethic/hedonist.htm) \nMore, M. “The Extropian Principles, v. 3.0.” (1998). [*http://www.maxmore.com/extprn3.htm*](http://www.maxmore.com/extprn3.htm) \nYudkowsky, E. “What is the Singularity?” (2003). [*http://www.singinst.org/what-singularity.html*](http://www.singinst.org/what-singularity.html)\n\n\n\n\n\n---\n\n\n\n**What kind of transhumanist art is there?**\n\nMany kinds, but what examples one would give depends on how one defines “transhumanist art”. If one defines it simply as art that is concerned with the human aspiration to overcome current limits, then a large portion of all art through the ages would count as transhumanist – from ancient myths of Promethean hubris, to religious transcendental iconography, architecture, and rituals, J. S. Bach’s fugues, Goethe’s Faust, through to the postmodern artists, many of whom conceived of their work as an attempt to explode conceptual barriers in order to widen the reach of human creativity.\n\nAnother concept of transhumanist art would be to say that it is multi-media creative work produced by transhumanists. On this definition, examples have to be sought in recent times since the term “transhumanism” in its contemporary sense is quite new. Natasha Vita-More is one of the earliest and most prominent transhumanist artists in this sense. For instance, her recent visual and conceptual work, Primo Posthuman (3M+), presents a kind of sleek future shopping catalog entry for an entire body design with features such as memory enhancements, sonar sensors, solar protected skin with hue-texture changeability, gender reconfigurability, and environmentally friendly waste disposal, a design that comes complete with warranty and upgradability. Vita-More is also the author of several transhumanist arts manifestos, in which transhumanist art becomes self-conscious for the first time. Other contemporary transhumanist artists include Leonel Moura, Stelarc, Lilia Morales y Mori, Anders Sandberg, Juan Meridalva; Elaine Walker, E.
Shaun Russell, Emlyn O’Regan, Gustavo Muccillo Alves, and the band Cosmodelia (electronic music); Susan Rogers (puppet theatre); Jane Holt (performance art); and many others.\n\nIf we narrow the definition by adding the requirement that a transhumanist telos be coupled to a notion of the centrality of technological means, we get a different set of paradigmatic examples. The Frankenstein myth (based originally on the novel by Mary Shelley published in 1818, and elaborated in countless forms since then) is one classic, and in general science fiction has been the genre most intensely preoccupied with transhumanist themes, reaching back to Jules Verne and Karel Čapek, through Isaac Asimov, Robert A. Heinlein, Stanisław Lem, Arthur C. Clarke, on to Vernor Vinge, Bruce Sterling, James Halperin, Greg Egan, and many others in the field of science fiction. Many of these authors’ stories have been adapted for the screen. (The Star Trek series features cool new technology but the same old humans, so it is not a very paradigmatic exemplar of transhumanist art.) Yet this in and of itself is a narrowing of the broad and explorative scope of transhumanist arts. For example, Buckminster Fuller’s architectural understanding of the world and society, and the “maker”, “quantified self”, and “DIY” cultures all reflect initiatives of transhumanist art, because the key is to solve problems through creative endeavors. In this regard, the field of design is as consequential as science fiction, if not more so.\n\nReferences: Vita-More, N. Primo 3M+ (2002). [*http://www.natasha.cc/primo.htm*](http://www.natasha.cc/primo.htm) \nVita-More, N. “Transhumanist Arts Statement” (version of 2002). [*http://www.extropic-art.com/transart.htm*](http://www.extropic-art.com/transart.htm)\n\n\n\n\n\n---\n\n\n\n**How does transhumanism relate to religion?**\n\nTranshumanism is a philosophical and cultural movement concerned with promoting responsible ways of using technology to enhance human capacities and to increase the scope of human flourishing.\n\nWhile not a religion, transhumanism might serve a few of the same functions that people have traditionally sought in religion. It offers a sense of direction and purpose and suggests a vision that humans can achieve something greater than our present condition. Unlike most religious believers, however, transhumanists seek to make their dreams come true in this world, by relying not on supernatural powers or divine intervention but on rational thinking and empiricism, through continued scientific, technological, economic, and human development. Some of the prospects that used to be the exclusive thunder of religious institutions, such as very long lifespan, unfading bliss, and godlike intelligence, are being discussed by transhumanists as hypothetical future engineering achievements.\n\nTranshumanism is a naturalistic outlook. At the moment, there is no hard evidence for supernatural forces or irreducible spiritual phenomena, and transhumanists prefer to derive their understanding of the world from rational modes of inquiry, especially the scientific method. Although science forms the basis for much of the transhumanist worldview, transhumanists recognize that science has its own fallibilities and imperfections, and that critical ethical thinking is essential for guiding our conduct and for selecting worthwhile aims to work towards.\n\nReligious fanaticism, superstition, and intolerance are not acceptable among transhumanists.
In many cases, these weaknesses can be overcome through a scientific and humanistic education, training in critical thinking, and interaction with people from different cultures. Certain other forms of religiosity, however, may well be compatible with transhumanism.\n\nIt should be emphasized that transhumanism is not a fixed set of dogmas. It is an evolving worldview, or rather, a family of evolving worldviews – for transhumanists disagree with each other on many issues. The transhumanist philosophy, still in its formative stages, is meant to keep developing in the light of new experiences and new challenges. Transhumanists want to find out where they are wrong and to change their views accordingly.\n\n\n\n\n\n---\n\n\n\n**Won't things like uploading, cryonics, and AI fail because they can’t preserve or create the soul?**\n\nIf we answer this question from a religious standpoint, there is no clear ground for ruling out these technologies as incompatible with teachings about the soul. There is no scriptural basis in the Bible for assuming that God can’t get to our soul if we freeze our physical body, nor is there a single word in the Christian or Jewish scriptures, or the Quran, the Dhammapada, or the Tao Teh Ching, that prohibits cryonics. Likewise, for someone who believes in reincarnation, there are no traditional beliefs that say reincarnation is prevented if someone freezes to death or if the body is frozen after clinical death. If there is a soul and it enters the body at conception, then cryonics may well work – after all, human embryos have been frozen, stored for extended periods, and then implanted in their mothers, resulting in healthy children (who presumably have souls). Uploading and machine intelligence may reveal new things to us about how the soul works. It is interesting to note that the Dalai Lama, when asked, did not rule out the possibility of reincarnating into computers (Hayward et al. 1992, pp. 152f).\n\nWhile the concept of a soul is not used much in a naturalistic philosophy such as transhumanism, many transhumanists do take an interest in the related problems concerning personal identity (Parfit 1984) and consciousness (Churchland 1988). These problems are being intensely studied by contemporary analytic philosophers, and although some progress has been made, e.g. in Derek Parfit’s work on personal identity, they have still not been resolved to general satisfaction.\n\nReferences: Churchland, P. Matter and Consciousness. (Cambridge, MA: MIT Press, 1988). \nHayward, J. et al. Gentle Bridges: Conversations with the Dalai Lama on the Sciences of the Mind. (Shambhala Publications, 1992). \nParfit, D. Reasons and Persons. (Oxford: Oxford University Press, 1984).\n\nThe Transhumanist FAQ was conceived as an attempt to develop a broadly based consensus articulation of the basics of responsible transhumanism. The aim was a text that could serve both as a guide to those new to the field and as a reference work for more seasoned participants.", "url": "https://www.humanityplus.org/transhumanist-faq?rq=Transhumanist%20FAQ", "title": "Transhumanist FAQ 3.0", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2016-12-31T23:00:00Z", "authors": ["Nick Bostrom"], "summary": [], "id": "bf23214c4e5b21c294e50239da49b31e"} {"text": "Artificial intelligence (AI) offers enormous potential to transform our businesses, solve some of our toughest problems and inspire the world to a better future.
But our AI systems are only as good as the data we put into them. As AI becomes increasingly ubiquitous in all aspects of our lives, ensuring we’re developing and training these systems with data that is fair, interpretable and unbiased is critical.\n\n> *“Our AI systems are only as good as the data we put into them.”*\n\nBad data can contain implicit racial, gender, or ideological biases. It can be poorly researched, with vague and unsourced origins. For some, the end results can be catastrophic: qualified candidates can be disregarded for employment, while others can be subjected to unfair treatment in areas such as education or financial lending. In other words, the age-old saying “garbage in, garbage out” still applies to data-driven AI systems.\n\nThe solution to reducing bias in AI may be our AI systems themselves. AI may actually hold the key to mitigating bias in AI systems – and offers an opportunity to shed light on the existing biases we hold as humans.\n\nWithout a process to guide the responsible development of trustworthy AI, our systems won’t benefit society — in fact, AI systems could exacerbate the negative consequences of unconscious bias. We therefore need to define an ethical framework that guides the development of this technology, roots out bias from our systems, and better aligns them with human values. This is a challenge for everyone in society, and it will require deep collaboration across industries, specialties and backgrounds. At IBM, we are committed to ensuring the responsible advancement and deployment of AI technologies. That includes providing clients and partners the ability to audit and understand how our AI systems arrived at a given decision or recommendation.\n\n**New Multi-Disciplinary Conference Dedicated to AI Ethics** \n\nToday marks a significant milestone in progressing these conversations. IBM’s Francesca Rossi, AI Ethics Global Leader and Distinguished Research Staff Member at [IBM Research AI](http://www.research.ibm.com/ai/), will co-chair the inaugural [Artificial Intelligence, Ethics and Society (AIES)](http://www.aies-conference.com/) conference in New Orleans. This multi-disciplinary, multi-stakeholder event is designed to shift the dynamics of the conversation on AI and ethics to concrete actions that scientists, businesses and society alike can take to ensure this promising technology is ushered into the world responsibly. For the first time, academics, researchers and students across several disciplines and industries will come together to present research, collaborate, and, most importantly, share personal experiences and insights to accelerate our collective understanding of ethical AI imperatives.\n\nAlso for the first time, the AIES conference will bring together two leading scientific associations around the theme of AI Ethics – the Association for Computing Machinery (and its special interest group on AI) and the [Association for the Advancement of Artificial Intelligence (AAAI)](https://www.aaai.org/) – to reinforce scientific multi-disciplinary discussions on AI ethics. Over the course of three days, AIES attendees will present and discuss new peer-reviewed research on the ethical implications of artificial intelligence.
Out of 165 submitted papers to the conference, 61 will be featured – including five by IBM Research – in sessions designed to ignite conversation and inspire actionable insight.\n\n\nThis conference is vital, because as we increasingly rely on apps and services that use AI, we need to be confident that AI is transparent, interpretable, unbiased, and trustworthy.\n\n\n \n\n\n**The Bias Test: New IBM Research on AI and Bias** \n\nAmong the five new research papers IBM will present at AIES, Rossi will unveil her most recent work, “[Towards Composable Bias Rating of AI Services](http://www.aies-conference.com/wp-content/papers/main/AIES_2018_paper_65.pdf),” executed in collaboration with IBM Researcher Biplav Srivastava. In this research, Rossi and Srivastava devised a testing methodology wherein deployed AI systems can be evaluated even if the training data is not available. This research proposes that an independent, three-level rating system can determine the relative fairness of an AI system: 1) It’s not biased, 2) It inherits the bias properties of its data or training, or 3) It has the potential to introduce bias whether the data is fair or not. From that independent evaluation, the AI end-user can determine the trustworthiness of each system, based on its level of bias.\n\n\nBut guidelines and evaluative testing systems aren’t the only viable approaches. Last December, an IBM Research AI effort by Flavio Calmon, Dennis Wei, Bhanu Vinzamuri, Karthi Ramamurthy and Kush Varshney [developed a methodology](https://www.ibm.com/blogs/research/2017/12/ai-reducing-discrimination/) to reduce the discrimination that may be present in a training dataset — this way, any AI algorithm that later learns from that dataset will perpetuate as little inequity as possible. The team’s [paper](https://nips.cc/Conferences/2017/Schedule?showEvent=9180), which was presented at the [Neural Information Processing Systems (NIPS)](https://nips.cc/) conference, introduces a probabilistic formulation of data pre-processing for reducing discrimination. They show that discrimination can be greatly reduced through effective data transformation.\n\n\n \n\n\n**Future State: AI Reduces Human Bias** \n\nResearch and multidisciplinary conversations like those taking place at AIES are crucial to the advancement of fair, trustworthy AI. By progressing new ethical frameworks for AI and thinking critically about the quality of our datasets and how humans perceive and work with AI, we can accelerate the artificial intelligence field in a way that will benefit everyone. IBM believes that artificial intelligence actually holds the keys to mitigating bias out of AI systems – and offers an unprecedented opportunity to shed light on the existing biases we hold as humans.", "url": "https://www.ibm.com/policy/bias-in-ai/", "title": "Bias in AI: How we Build Fair AI Systems and Less-Biased Humans", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2018-01-31T23:00:00Z", "authors": ["Anonymous"], "summary": [], "id": "ecde2b9c6b06bcd1b5ff8acdc6e14289"} {"text": "Abstract\n--------\n\n**:**\nThere has been extensive attention to near-term and long-term AI technology and its accompanying societal issues, but the medium-term has gone largely overlooked. This paper develops the concept of medium-term AI, evaluates its importance, and analyzes some medium-term societal issues. 
Medium-term AI can be important in its own right and as a topic that can bridge the sometimes acrimonious divide between those who favor attention to near-term AI and those who prefer the long-term. The paper proposes the medium-term AI hypothesis: the medium-term is important from the perspectives of those who favor attention to near-term AI as well as those who favor attention to long-term AI. The paper analyzes medium-term AI in terms of governance institutions, collective action, corporate AI development, and military/national security communities. Across portions of these four areas, some support for the medium-term AI hypothesis is found, though in some cases the matter is unclear.\n\n\nKeywords: [near-term AI](/search?q=near-term+AI); [long-term AI](/search?q=long-term+AI); [medium-term AI](/search?q=medium-term+AI); [intermediate-term AI](/search?q=intermediate-term+AI); [mid-term AI](/search?q=mid-term+AI); [societal implications of AI](/search?q=societal+implications+of+AI)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n 1. Introduction\n----------------\n\nAttention to AI technologies and accompanying societal issues commonly clusters into groups focusing on either near-term or long-term AI, with some acrimonious debate between them over which is more important. Following Baum [[1](#B1-information-11-00290)], the near-term camp may be called “presentists” and the long-term camp “futurists”.The current state of affairs suggests two reasons for considering the intermediate period between the near and long terms. First, the medium term (or, interchangeably, intermediate term or mid term) has gone neglected relative to its inherent importance. If there are important topics involving near-term and long-term AI, then perhaps the medium term has important topics as well. Second, the medium term may provide a common ground between presentists and futurists. Insofar as both sides consider the medium term to be important, it could offer a constructive topic to channel energy that may otherwise be spent on hashing out disagreements.Rare examples of previous studies with dedicated attention to medium-term AI are Parson et al. [[2](#B2-information-11-00290),[3](#B3-information-11-00290)]. (There is a lot of work that touches on medium-term AI topics, some of which is cited in this paper. However, aside from Parson et al. [[2](#B2-information-11-00290),[3](#B3-information-11-00290)], I am not aware of any publications that explicitly identify medium-term AI as a topic warranting dedicated attention.) Both studies [[2](#B2-information-11-00290),[3](#B3-information-11-00290)] recognize medium-term AI as important and neglected. Parson et al. [[2](#B2-information-11-00290)] acknowledges that some prior work in AI covers topics that are important across all time periods, and thus are also relevant to the medium term. It provides a definition of medium-term AI, which is discussed further below, and it provides some analysis of medium-term AI topics. Parson et al. [[3](#B3-information-11-00290)] posits that the neglect of the medium term may derive in part from the academic disciplines and methodologies of AI researchers, which may point the researchers toward either the near term or the long term but not the medium term. The present paper extends Parson et al.’s [[2](#B2-information-11-00290)] work on definitions and presents original analysis of a different mix of medium-term AI topics. 
The present paper also explores the medium term as a potential point of common ground between presentists and futurists.Several previous attempts have been made to bridge the presentist–futurist divide [[1](#B1-information-11-00290),[4](#B4-information-11-00290),[5](#B5-information-11-00290)]. An overarching theme in this literature is that the practical steps needed to make progress are often (though not always) the same for both near-term and long-term AI. Instead of expending energy debating the relative importance of near-term and long-term AI, it may often be more productive to focus attention on the practical steps that both sides of the debate agree are valuable. This practical synergy can arise for two distinct reasons, both with implications for medium-term AI.First, certain actions may improve near-term AI and the near-term conversation about long-term AI. Such actions will often also improve the near-term conversation about mid-term AI. For example, efforts to facilitate dialog between computer scientists and policymakers can improve the quality of policy discussions for near-, mid-, and long-term AI. Additionally, efforts encouraging AI developers to take more responsibility for the social and ethical implications of their work can influence work on near-, mid-, and long-term AI. For example, the ethics principles that many AI groups have recently established [[6](#B6-information-11-00290)] are often quite general and can apply to work on near-term and long-term AI, as can analyses of the limitations of these principles [[7](#B7-information-11-00290)]. Here it should be explained that there is near-term work aimed at developing systems that may only become operational over the mid or long term, especially work consisting of basic research toward major breakthroughs in AI capabilities.Second, certain actions may improve near-term AI, and, eventually, long-term AI. These actions may often also eventually improve mid-term AI. For example, some research on how to design near-term AI systems more safely may provide a foundation for also making mid- and long-term AI systems safer. This is seen in the AI safety study of Amodei et al. [[8](#B8-information-11-00290)], which is framed in terms of near-term AI; lead author Amodei describes the work as also being relevant for long-term AI [[9](#B9-information-11-00290)]. Additionally, AI governance institutions established over the near term may persist into the mid and long term, given the durability of many policy institutions. Of course, AI system designs and governance institutions that persist from the near term to the long term would also be present throughout the mid-term. Furthermore, evaluating their long-term persistence may require understanding of what happens during the mid-term.Dedicated attention to the medium term can offer another point of common ground between presentists and futurists: both sides may consider the medium term to be important. Presentists may find the medium term to be early enough for their tastes, while futurists find it late enough for theirs. As elaborated below, the reasons that presentists have for favoring near-term AI are different types of reasons than those of the futurists. Presentists tend to emphasize immediate feasibility, certainty, and urgency, whereas futurists tend to emphasize extreme AI capabilities and consequences. Potentially, the medium term features a widely appealing mix of feasibility, certainty, urgency, capabilities, and consequences. 
Or not: it is also possible that the medium term would sit in a “dead zone”, being too opaque to merit presentist interest and too insignificant to merit futurist interest. This matter will be a running theme throughout the paper and is worth expressing formally:\n\n> The medium-term AI hypothesis: There is an intermediate time period in which AI technology and accompanying societal issues are important from both presentist and futurist perspectives.\n\nThe medium-term AI hypothesis can be considered in either empirical or normative terms. As an empirical hypothesis, it proposes that presentists and futurists actually consider the medium term to be important, or that they would tend to agree that the medium term is important if given the chance to reflect on it. As a normative hypothesis, it proposes that presentists should agree that the medium term is important, given the value commitments of the presentist and futurist perspectives. Given the practical goal of bridging the presentist–futurist divide, the empirical form is ultimately more important: what matters is whether the specific people on opposite sides of the divide would, upon consideration, find common ground in the medium term. (It is unlikely that they currently do find common ground in the medium term, due to lack of attention to it.) Empirical study of presentist and futurist reactions to the medium term is beyond the scope of the present paper. Instead, the aim here is to clarify the nature of the presentist and futurist perspectives in terms of the attributes of the medium term that they should consider important and then to examine whether the medium term is likely to possess these attributes. The paper therefore proceeds mainly in normative terms, though grounded in empirical observation of the perspectives articulated by actual presentists and futurists.More precisely, the medium-term AI hypothesis proposes that the perspectives underlying both groups should rate the medium term as important. This presumes that “perspectives” can rate things as important even when detached from the people who hold them. Such detachment is permitted here simply so that the analysis can proceed without going through the more involved (but ultimately important) process of consulting with the people who hold presentist and futurist perspectives.Evaluating the medium-term AI hypothesis is one aim of this paper. First, though, more needs to be said on how the medium term is defined. 2. Defining the Medium Term\n----------------------------\n\nThe medium term is, of course, the period of time between the near term and the long term. However, discussions of near-term and long-term AI often do not precisely specify what constitutes near-term and long-term. Some ambiguity is inevitable due to uncertainty about future developments in AI. Additionally, different definitions may be appropriate for different contexts and purposes—for example, what qualifies as near-term may be different for a programmer than for a policymaker. Nonetheless, it is worth briefly exploring how the near, mid, and long terms can be defined for AI. Throughout, it should be understood that the near, mid, and long terms are all defined relative to the vantage point of the time of this writing (2019–2020). As time progresses, what classifies as near-, mid-, and long-term can shift.The first thing to note is that near- vs. mid- vs. long-term can be defined along several dimensions. 
The first is chronological: the near term goes from year A to year B, the mid term from year B to year C, and the long term from year C to year D. The second is in terms of the feasibility or ambitiousness of the AI: the near term is what is already feasible, the long term is the AI that would be most difficult to achieve, and the mid term is somewhere in between. Third, and related to the second, is the degree of certainty about the AI: the near term is what clearly can be built, the long term is the most uncertain and speculative, and the mid term is somewhere in between. Fourth is the degree of sophistication or capability of the AI: the near term is the least capable, the long term is the most capable, and the mid term is somewhere in between. Fifth, and related to the fourth, is with respect to impacts: the near term has (arguably; see below) the mildest impacts on human society and the world at large, the long term has the most extreme impacts, and the mid-term is somewhere in between. Sixth is urgency: the near term is (arguably) the most urgent, the long term the least urgent, and the mid term is somewhere in between.The dimension of impacts is somewhat complex and worth briefly unpacking. Near-term AI may have the mildest impacts, in the sense that if AI continues to grow more capable and be used more widely and in more consequential settings it will tend to have greater impacts on the human society that exists at that time. Put differently, if A = the impacts of near-term AI on near-term society, B = the impacts of mid-term AI on mid-term society, and C = the impacts of long-term AI on long-term society, then (it is supposed) A < B < C. There are, however, alternative ways of conceptualizing impacts. One could take a certain presentist view and argue that only present people matter for purposes of moral evaluation, such as is discussed by Arrhenius [[10](#B10-information-11-00290)], or that future impacts should be discounted, as in many economic cost–benefit evaluations. In these cases, near-term AI may be evaluated as having the largest impacts because the impacts of mid- and long-term AI matter less or not at all. Or, one could consider the impacts of a period of AI on all time periods: the impact of near-term AI on the near, mid, and long terms, the impacts of mid-term AI on the mid- and long-terms, and the impact of long-term AI on the long term. This perspective recognizes the potential for durable impacts of AI technology, and would tend to increase the evaluated size of the impacts of near- and mid-term AI. While recognizing the merits of these alternative conceptions of impacts, this paper uses the first conception, involving A vs. B vs. C.There may be no one correct choice of dimensions for defining the near/mid/long term. Different circumstances may entail different definitions. For example, Parson et al. [[2](#B2-information-11-00290)] are especially interested in societal impacts and implications for governance, and thus use definitions rooted primarily in impacts. They propose that, relative to near-term AI, medium-term AI has “greater scale of application, along with associated changes in scope, complexity, and integration” [[2](#B2-information-11-00290)] (pp. 8–9), and, relative to long-term AI, medium-term AI “is not self-directed or independently volitional, but rather is still to a substantial degree developed and deployed under human control” [[2](#B2-information-11-00290)] (p. 9). (One can quibble with these definitions. 
Arguably, near-term AI is already at a large scale of application, and there may be no clear demarcation in scale between near- and mid-term AI. Additionally, while it is proposed that long-term AI could escape human control, that would not necessarily be the case. Indeed, discussions of long-term AI sometimes focus specifically on the question of how to control such an AI [[11](#B11-information-11-00290)].) The medium term is a period with substantially greater use of AI in decision-making, potentially to the point in which “the meaning of governance” is challenged [[2](#B2-information-11-00290)] (p. 9), but humans remain ultimately in control. This is a reasonable definition of medium-term AI, especially for impacts and governance purposes.The present paper is more focused on the presentist/futurist debate, and so it is worth considering the definitions used in the debate. Elements of each of the six dimensions can be found, but they are not found uniformly. Presentists often emphasize feasibility and degree of certainty. Computer scientist Andrew Ng memorably likened attention to long-term AI to worrying about “overpopulation on Mars” [[12](#B12-information-11-00290)], by which Ng meant that it might eventually be important, but it is too opaque and disconnected from current AI to be worth current attention. Another presentist theme is urgency, especially with respect to the societal implications of near-term AI. Legal scholar Ryan Calo [[13](#B13-information-11-00290)] (p. 27) argues that “AI presents numerous pressing challenges to individuals and society in the very short term” and therefore commands attention relative to long-term AI. For their part, futurists often emphasize capability and impacts. Commonly cited is the early remark of I.J. Good [[14](#B14-information-11-00290)] (p. 33) that “ultraintelligent” AI (AI with intelligence significantly exceeding that of humans) could be “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control”. Chronological definitions are less common. One exception is Etzioni [[15](#B15-information-11-00290)], who downplays long-term AI on grounds that it is unlikely to occur within 25 years. (In reply, futurists Dafoe and Russell [[16](#B16-information-11-00290)] argue that potential future events can still be worth caring about even if they will not occur within the next 25 years.)Taking the above into account, this paper will use a feasibility definition for near-term AI and a capability definition for long-term AI. The paper defines near-term AI as AI that already exists or is actively under development with a clear path to being built and deployed. Per this definition, near-term AI does not require any major research breakthroughs, but instead consists of straightforward applications of existing techniques. The terms “clear”, “major”, and “straightforward” are vague, and it may be reasonable to define them in different ways in different contexts. (This vagueness is relevant for the medium-term AI hypothesis; more on this below.) Nonetheless, this definition points to current AI systems plus the potential future AI systems that are likely to be built soon and do not depend on research breakthroughs that might or might not manifest.The paper defines long-term AI as AI that has at least human-level general intelligence. 
Interest in long-term AI often focuses on human-level artificial intelligence (HLAI), artificial general intelligence (AGI), strong AI, and artificial superintelligence (ASI). However, there may be narrow AI systems that are appropriate to classify as long-term. For example, Cave and ÓhÉigeartaigh [[4](#B4-information-11-00290)] (p. 5) include “wide-scale loss of jobs” as a long-term AI issue separately from the prospect of superintelligence. (Note that the most widespread loss of jobs may require AGI. For example, Ford [[17](#B17-information-11-00290)] (p. 3) writes “If, someday, machines can match or even exceed the ability of a human being to think and to conceive new ideas—while at the same time enjoying all the advantages of a computer in areas like computational speed and data access—then it becomes somewhat difficult to imagine just what jobs might be left for even the most capable human workers”.) A plausible alternative definition of long-term AI is AI that achieves major intellectual milestones and/or has large and transformative effects. This is more of a catch-all definition that could include sufficiently important narrow AI systems such as those involved in job loss. In this definition, the terms “major”, “large”, and “transformative” are vague. Indeed, current AI systems arguably meet this definition. Therefore, the paper will define long-term AI in terms of HLAI, while noting the case for the alternative definitions.The paper’s use of a feasibility definition for near-term and a capability definition for long-term may be consistent with common usage in AI discussions. However, the use of a different dimension for near-term (feasibility) than for long-term (capability) can induce some chronological blurring in two important respects.First, AI projects that are immediately practical may have long time horizons. This may be especially common for projects in which AI is only one component of a more complex and durable system. Military systems are one domain with long lifespans. A 2016 report found that some US nuclear weapon systems were still using 1970s-era 8-inch floppy disks [[18](#B18-information-11-00290)]. AI is currently being used and developed for a wide variety of military systems [[19](#B19-information-11-00290)]. Some of these could conceivably persist for many decades into the future—perhaps in the B-52H bomber, which was built in the 1960s and is planned to remain in service through the 2050s [[20](#B20-information-11-00290)]. (AI is used in bombers, for example, to improve targeting [[21](#B21-information-11-00290)]. AI is used more extensively in fighters, which execute complex aerial maneuvers at rapid speeds and can gain substantial tactical advantage from increased computational power and autonomy from human pilots [[22](#B22-information-11-00290)].) One can imagine the B-52H being outfitted with current AI algorithms and retaining these algorithms into the 2050s, just as the 8-inch floppy disks have been retained in other US military systems. Per this paper’s definitions, this B-52H AI would classify as near-term AI that happens to remain in use over a long time period, well beyond the 25 years that Etzioni [[15](#B15-information-11-00290)] treats as the “foreseeable horizon” worthy of attention.Second, AI systems with large and transformative effects, including AGI, could potentially be built over relatively short time scales. When AGI and related forms of AI will be built is a matter of considerable uncertainty and disagreement. 
Several studies have asked AI researchers—predominantly computer scientists—when they expect AI with human or superhuman capacity to be built [[23](#B23-information-11-00290),[24](#B24-information-11-00290),[25](#B25-information-11-00290),[26](#B26-information-11-00290)]. (Note that these studies are generally framed as being surveys of experts, but it is not clear that the survey participants are expert in the question of when AGI will be built. Earlier predictions about AI have often been unreliable [[27](#B27-information-11-00290)]. This may be a topic for which there are no experts; on this issue, see Morgan [[28](#B28-information-11-00290)].) The researchers present estimates spanning many decades, with some estimates being quite soon. [Figure 1](#information-11-00290-f001) presents median estimates from these studies. Median estimates conceal the range of estimates across survey participants, but the full range could not readily be presented in [Figure 1](#information-11-00290-f001) because, unfortunately, only Baum et al. [[23](#B23-information-11-00290)] included the full survey data. If the early estimates shown in [Figure 1](#information-11-00290-f001) are correct, then, by this paper’s definitions, long-term AI may be appearing fairly soon, potentially within the next 25 years. 3. The Medium-Term AI Hypothesis\n---------------------------------\n\nWith the above definitions in mind, it is worth revisiting the medium-term AI hypothesis. If presentists are, by definition, only interested in the present, then they would not care at all about the medium term. However, the line between the near term and the medium term is blurry. As defined above, near-term AI must have a clear path to being built and deployed, but “clearness” is a matter of degree. As the path to being built and deployed becomes less and less clear, the AI transitions from near-term to medium-term, and presentists may have less and less interest in it. From this standpoint, presentists may care somewhat about the medium term, especially the earlier portions of it, but not to the same extent as they care about the near term.Alternatively, presentists might care about the medium term because the underlying things they care about also arise in the medium term. Some presentists are interested in the implications of AI for social justice, or for armed conflict, or for transportation, and so on. Whereas it may be difficult to think coherently about the implications of long-term AI for these matters, it may not be so difficult for medium-term AI. For example, a major factor in debates about autonomous weapons (machines that use AI to select and fire upon targets) is whether these weapons could adequately discriminate between acceptable and unacceptable targets (e.g., enemy combatants vs. civilians) [[29](#B29-information-11-00290),[30](#B30-information-11-00290)]. Near-term AI cannot adequately discriminate; medium-term AI might be able to. Therefore, presentists concerned about autonomous weapons have reason to be interested in medium-term AI. Whether this interest extends to other presentist concerns (social justice, transportation, etc.) must be considered on a case-by-case basis.For futurists, the medium term may be important because it precedes and influences the long term. If the long term begins with the advent of human-level AGI, then this AI will be designed and built during the medium term. Some work on AGI is already in progress [[31](#B31-information-11-00290)], but it may be at a relatively early stage. 
[Figure 1](#information-11-00290-f001) illustrates the uncertainty: the earliest estimates for the onset of AGI (and similar forms of AI) may fall within the near term, whereas the latest estimates fall much, much later. Futurists may tend to be most interested in the period immediately preceding the long term because it has the most influence on AGI. Their interest in earlier periods may depend on the significance of its causal impact on AGI.It follows that there are two bases for assessing the medium-term AI hypothesis. First, the hypothesis could hold if AI that resembles near-term AI also influences long-term AI. In that case, the technology itself may be of interest to both presentists and futurists. Alternatively, the hypothesis could hold if the societal implications of medium-term AI raise similar issues as near-term AI, and if the medium-term societal context also influences long-term AI. For example, medium-term autonomous weapon technology could raise similar target discrimination issues as is found for near-term technology, and it could also feed arms races for long-term AI. (To avoid confusion, it should be understood that discussions of long-term AI sometimes use the term “arms race” to refer to general competition to be the first to build long-term AI, without necessarily any connection to military armaments [[32](#B32-information-11-00290)]. Nonetheless, military arms races for long-term AI are sometimes posited [[33](#B33-information-11-00290)].)Both of the above derive from some measure of continuity between the near, mid, and long terms. Continuity can be defined in terms of the extent of change in AI systems and related societal issues. If near-term AI techniques and societal dimensions persist to a significant extent through the end of the medium term (when long-term AI is built), then the medium-term AI hypothesis is likely to hold.The chronological duration of the medium term may be an important factor. [Figure 1](#information-11-00290-f001) includes a wide range of estimates for the start of the long term. If the later estimates prove correct, then the medium term could be quite long. A long duration would likely tend to mean less continuity across the near, mid, and long terms, and therefore less support for the medium-term AI hypothesis. That is not necessarily the case. One can imagine, for example, that AI just needs one additional technical breakthrough to go from current capabilities to AGI, and that it will take many decades for this breakthrough to be made. One can also imagine that the issues involving AI will remain fairly constant until this breakthrough is made. In that case, near-term techniques and issues would persist deep into the medium term. However, it is more likely that a long-lasting medium term would have less continuity and a larger dead zone period with no interest from either presentists or futurists. If AGI will not be built for, say, another 500 years, presentists are unlikely to take an interest.[Figure 2](#information-11-00290-f002) presents two sketches of the degree of interest that presentists and futurists may hold in the medium term. [Figure 2](#information-11-00290-f002)a shows a period of overlap in which both presentists and futurists have some interest; here, the medium-term AI hypothesis holds. [Figure 2](#information-11-00290-f002)b shows a dead zone with no overlap of interest; here, the medium-term AI hypothesis does not hold. 
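To make the two illustrative scenarios concrete, the following minimal sketch (not taken from the paper; the exponential curve shapes, rate constants, 50-year horizon, and interest threshold are all assumptions chosen purely for illustration) checks whether a decaying presentist-interest curve and a rising futurist-interest curve share any period in which both exceed the threshold:

```python
# Illustrative sketch only: Figure 2 is schematic, so the curve shapes, rates,
# horizon, and threshold below are arbitrary assumptions, not values from the paper.
import numpy as np

years = np.linspace(0, 50, 501)            # years from now (arbitrary horizon)
presentist = np.exp(-years / 20.0)         # presentist interest decays over time
futurist = np.exp((years - 50.0) / 20.0)   # futurist interest rises toward the long term
threshold = 0.2                            # minimal interest that counts as "caring"

both_care = (presentist >= threshold) & (futurist >= threshold)
if both_care.any():
    window = years[both_care]
    print(f"Overlap, as in Figure 2a: roughly years {window.min():.0f}-{window.max():.0f}")
else:
    print("Dead zone, as in Figure 2b: no period interests both camps")
```

With these particular rate constants the sketch reports an overlap window; halving both of them (for example, to 10 years) makes the curves fall off and rise more sharply and yields the dead-zone case instead, mirroring the point that the curves could just as easily be drawn either way.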
[Figure 2](#information-11-00290-f002) is presented strictly for illustrative purposes and does not indicate any rigorously derived estimation of actual presentist or futurist interests. It serves to illustrate how presentists’ degree of interest could decline over time and futurists’ degree of interest could increase over time, with implications for the medium-term AI hypothesis. [Figure 2](#information-11-00290-f002) shows presentist/futurist interest decreasing/increasing approximately exponentially over time. There is no particular basis for this, and the curves could just as easily have been drawn differently.To sum up, assessing the medium-term AI hypothesis requires examining what medium-term AI techniques and societal dimensions may look like, and the extent of continuity between the near-, mid-, and long-term periods. 4. The Intrinsic Importance of Medium-Term AI\n----------------------------------------------\n\nThus far, the paper has emphasized the potential value of medium-term AI as a point of common interest between presentists and futurists. This “consensus value” will remain a major theme in the sections below. However, it is worth pausing to reiterate that medium-term AI can also be important in its own right, regardless of any implications for presentists and futurists. Assessing the extent to which it is intrinsically important requires having some metric for intrinsic importance. A detailed metric is beyond the scope of this paper. For present purposes, it suffices to consider that medium-term AI and its accompanying societal issues may be important for the world as it exists during the medium term. It is further worth positing that there may be opportunities for people today to significantly influence the medium term, such that the medium term merits attention today due to its intrinsic importance. With that in mind, the paper now turns to the details of medium-term AI and society. 5. Medium-Term AI Techniques\n-----------------------------\n\nMy own expertise is not in the computer science of AI, and so I can say relatively little about what computer science AI techniques may look like over the medium term. Therefore, this section serves as a placeholder to note that the space of potential medium-term AI techniques is a topic worthy of attention for those with the expertise to analyze and comment on it. 6. Medium-Term AI Societal Dimensions\n--------------------------------------\n\nWhile the medium-term societal dimensions of AI will, to at least some extent, depend on the capabilities of the medium-term AI techniques, it is nonetheless possible to paint at least a partial picture of the societal dimensions, even without clarity on the techniques. What follows is indeed a partial picture, shaped to a significant extent by my own areas of expertise. It aims to illustrate potential medium-term scenarios in several domains and discuss their implications for near-term and long-term AI and their prospects for bridging the presentist/futurist divide.#### 6.1. Governance Institutions\n\nGovernance institutions can be quite durable. For example, the United Nations was founded in 1945, and despite many calls for reform, the UN Security Council retains China, France, Russia, the United Kingdom, and the United States as permanent members. The “P5 countries” are an artifact of World War II that arguably does not match current international affairs, but changing the membership would require a consensus that is quite elusive. 
For example, a case could be made for adding Brazil and India, but then Argentina and Pakistan may object, so no change is made. Not all governance institutions are this ossified, but many of them are quite enduring. This continuity makes governance institutions a compelling candidate for the medium-term AI hypothesis.The near-term is an exciting time for AI governance. Institutions are now in the process of being designed and launched. Decisions being made now could have long-lasting implications, potentially all the way through the end of the medium term and the beginning of the long term. (It is harder to predict much of anything if and when AGI/ASI/HLAI is built, including the form of governance institutions. One attempt to make such predictions is Hanson [[34](#B34-information-11-00290)].)One notable example is the International Panel on Artificial Intelligence (IPAI) and Global Partnership on AI (GPAI). The IPAI/GPAI has recently been proposed by the governments of Canada and France, first under the IPAI name and later under the GPAI name [[35](#B35-information-11-00290),[36](#B36-information-11-00290)]. Documents on the IPAI/GPAI emphasize issues that are relevant in the near term and may continue to be relevant through the medium term. One set of issues listed for illustrative purposes is: “data collection and access; data control and privacy; trust in AI; acceptance and adoption of AI; future of work; governance, laws and justice; responsible AI and human rights; equity, responsibility and public good” [[35](#B35-information-11-00290)].The documents published on the IPAI/GPAI give no indication of any focus on long-term issues relating to AGI. (The future of work could arguably classify as a long-term issue.) However, the IPAI/GPAI may nonetheless be relevant for the long term. If the IPAI/GPAI takes hold then it could persist for a long time. For comparison, the Intergovernmental Panel on Climate Change (IPCC) was formed in 1988 and remains an active and important institution. The IPAI/GPAI follows a similar model as the IPCC and may prove similarly durable. Additionally, while long-term issues are not featured in the early-stage documents that have thus far been published on the IPAI/GPAI, that does not preclude the IPAI/GPAI from including long-term issues within its scope once it is up and running. Whether long-term issues are included could come down to whether people interested in the long-term take the initiative to participate in IPAI/GPAI processes. Indeed, one of the most thoughtful discussions of the IPAI/GPAI published to date is by Nicolas Miailhe [[37](#B37-information-11-00290)] of The Future Society, an organization explicitly working “to address holistically short, mid and long term governance challenges” in AI [[38](#B38-information-11-00290)]. Such activity suggests that the IPAI/GPAI could be an institution that works across the range of time scales and persists significantly into the future.#### 6.2. Collective Action\n\nAn important dynamic for the societal impacts of AI is whether AI development projects can successfully cooperate on collective action problems: situations in which the collective interest across all the projects diverges from the individual interests of the projects. Collective action has been a significant theme in discussions of long-term AI, focused on the prospect of projects cutting corners on safety to be the first to achieve important technological milestones [[32](#B32-information-11-00290),[39](#B39-information-11-00290)]. 
Collective action problems can also arise for near-term AI. One near-term concern is about military AI arms races [[40](#B40-information-11-00290)] (though this concern is not universally held [[41](#B41-information-11-00290)]).Social science research on collective action problems identifies three broad classes of solutions for how to get actors to cooperate: government regulation, private ownership, and community self-organizing [[42](#B42-information-11-00290)]. Each is worth briefly considering with an eye toward the medium term.Government regulation is perhaps the most commonly proposed solution for AI collective action problems. While some proposals focus on domestic measures [[43](#B43-information-11-00290)], global regimes may be favorable due to AI being developed worldwide. This is reflected in proposals for international treaties [[44](#B44-information-11-00290)] or, more ambitiously, global governance regimes with broad surveillance powers and the capacity to preemptively halt potentially dangerous AI projects through the use of force [[45](#B45-information-11-00290)]. This more ambitious approach may be theoretically attractive in terms of ensuring AI collective action, though it is also unattractive for its potential for abuse, up to and including catastrophic totalitarianism [[46](#B46-information-11-00290)]. Regardless, in practice, an intrusive global government is very likely a nonstarter at this time and for the foreseeable future, probably into the medium term. Nations are too unlikely to be willing to cede their national sovereignty to a global regime, especially on a matter of major economic and military significance. (Perhaps some future circumstances could change this, but the desire to preserve sovereignty, especially from rival and adversarial states, has been a durable feature of the international system.) Even a more modest international treaty may be asking too much. Treaties are difficult to create, especially if universal international consensus is needed (for example, because AI can be developed anywhere), and when access to and capability with the technology is unevenly distributed across the international community (as is very much the case with AI; for general discussion of emerging technology treaty challenges, see [[47](#B47-information-11-00290)]). Instead, government regulations are likely to be more modest, and play at most a partial role in facilitating collective action. Whatever it is that governments end up doing, there is strong potential for institutions that are durable across the medium term, as discussed in [Section 6.1](#sec6dot1-information-11-00290).Private ownership is commonly used for natural resource management. An entity that owns a natural resource has an incentive to sustain it and the means to do so by charging users for access at a sufficiently high fee. Private ownership schemes are difficult to apply to AI software due to the difficulty of restricting access. Hardware may offer a more viable option because hardware manufacturing facilities are geographically fixed and highly visible sites of major industrial infrastructure, in contrast with the ephemerality of software (For related discussion, see [[48](#B48-information-11-00290)]). Hardware manufacturing is also typically privately owned [[49](#B49-information-11-00290)]. AI collective action could conceivably be demanded by the manufacturers, especially the select manufacturers of the advanced hardware used in the most capable AI projects. 
However, the benefits of AI collective action are experienced by many entities, and therefore would predominantly classify as externalities from the perspective of hardware manufacturers, in the sense that the benefits would be gained by other people and not by the manufacturers. This reduces the manufacturers’ incentives to promote collective action and likewise reduces viability of private ownership schemes for AI collective action. Nonetheless, to the extent that hardware manufacturing can play a role, it could be a durable one. Hardware manufacturing is led by relatively durable corporations including Intel (founded 1968), Samsung Electronics (founded 1969), SK Hynix (formerly Hyundai Electronics, founded 1983), and Taiwan Semiconductor Manufacturing Company (founded 1987). These corporations are likely to remain important over medium-term and potentially also long-term time periods.Community self-organizing for AI collective action can be seen in several important areas. One is in initiatives to bring AI developers together for promoting ethical principles. The Partnership on AI is a notable example of this. Importantly, the Partnership has recently welcomed its first Chinese member, Baidu [[50](#B50-information-11-00290)]. This suggests that its emphasis on human rights (partners include Amnesty International and Human Rights Watch) will not limit its reach to Western organizations. Another area is in the collaborations between AI projects. For example, Baum [[31](#B31-information-11-00290)] documents numerous interconnections between AGI projects via common personnel and collaborations, suggesting a cooperative community. Community self-organizing may lack the theoretical elegance of government regulation or private ownership, but it is often successful in practice. Whether it is successful for AI remains to be seen. AI community initiatives are relatively young, making it more uncertain how they will play out over the medium and long term.#### 6.3. Corporate AI Development\n\nThe financial incentives of for-profit corporations could become a major challenge for the safe and ethical development of AI over all time periods. How can companies be persuaded to act in the public interest when their financial self-interest points in a different direction? This is of course a major question for many sectors, not just AI. It is an issue for AI right now, amid a “techlash” of concerns about AI in social media bots, surveillance systems, and weaponry. It could also be an issue for AI over the mid and long term.With regards to long-term AI, Baum [[31](#B31-information-11-00290)] (p. 19) introduces the term “AGI profit–R&D synergy”, defined as “any circumstance in which long-term AGI R&D delivers short-term profits”. If there is significant AGI profit–R&D synergy, then it could make AGI governance substantially more difficult by creating financial incentives that may not align with the public interest. AGI profit–R&D synergy concerns long-term AI, but it is inherently a medium-term phenomenon because it would occur when AGI is being developed. Assessing the prospect of AGI profit–R&D synergy requires an understanding of the technical computer science details of AI as it transitions from the medium term to the long term, which is beyond the scope of this paper. 
If the medium-term details have any sort of close relation to near-term AI, that could constitute a significant strengthening of the medium-term AI hypothesis.If AI companies’ financial self-interest diverges from the public interest, how would they behave? Ideally, they would act in the public interest. In some cases, perhaps they will, especially if they are pushed to do so by people both within and outside of the companies. Unfortunately, experience from other sectors shows that companies often opt to act against the public interest, as seen, for example, in pushback by the tobacco industry against regulations aimed at reducing cancer risk; by the fossil fuel industry against regulations aimed at reducing global warming risk [[51](#B51-information-11-00290)]; and by the industrial chemicals industry against regulations aimed at reducing neurological disease risk [[52](#B52-information-11-00290)]. It is worth considering the prospect that AI companies may (mis) behave similarly.It has been proposed that AI companies could politicize skepticism about AI and its risks to avoid regulations that would restrict their profitable activities [[53](#B53-information-11-00290)]. This sort of politicized skepticism has a long history, starting with tobacco industry skepticism about the link between cigarettes and cancer and continuing to this day with, for example, fossil fuel industry skepticism about global warming. One mechanism for this work is to fund nominally independent think tanks to produce publications that promote policies and issue stances consistent with the companies’ financial self-interest.Some attributes of this pattern can be seen in recent writing by the think tank the Center for Data Innovation, which warns of an “unchecked techno-panic” that is dampening public enthusiasm for AI and motivating government regulations [[54](#B54-information-11-00290)]. The extent to which this constitutes a case of politicized skepticism is unclear. Specifically, the extent of the Center for Data Innovation’s industry ties could not be ascertained for this paper. Likewise, it is not the intent of this paper to accuse this organization of conflicts of interest. It is also not the intent to claim the opposite—that there is no conflict of interest in this case. (Indeed, the presence of conflict of interest is often hidden—hence, industry firms fund the work of nominally independent think tanks instead of doing it in-house.) Instead, the intent is merely to provide an example that illustrates some aspects of the politicized skepticism pattern. Importantly, whereas the proposal of politicized AI skepticism focuses on skepticism about long-term AI [[53](#B53-information-11-00290)], the skepticism of the Center for Data Innovation is focused on the near term [[54](#B54-information-11-00290)]. Likewise, the pattern of politicized AI skepticism has the potential to play out across time periods, especially when there is significant profit–R&D synergy and concurrent prospects of government regulation.#### 6.4. Militaries and National Security Communities\n\nAdvanced militaries have long been involved with the forefront of AI in their capacity as research funders and increasingly as users of the technology. The advanced militaries also often have substantial technical expertise, as do the broader national security policy communities that they interface with. 
Furthermore, militaries are sometimes tasked with operations and planning across a range of time periods, and national security communities are likewise sometimes oriented toward thinking over such time horizons. This is seen in the example cited above of the plan for the B-52H bomber to remain in service through the 2050s. It thus stands to reason that advanced militaries and national security communities could be interested in medium-term AI and its links between the near term and long term.\n\nThere is already some military attention to AGI. One clear example is the JASON report Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD [[55](#B55-information-11-00290)], which was produced in response to a US Department of Defense query about AGI. Another is Scharre’s excellent book Army of None [[19](#B19-information-11-00290)], which features a full chapter on AGI and ASI. Both publications provide nuanced accounts of long-term AI. The publications were produced by analysts who are especially technically savvy and are not representative of the entire military and national defense communities. Nonetheless, they are among the publications that people in these communities may consult and do indicate a degree of awareness about long-term AI.\n\nAs documented by Baum [[31](#B31-information-11-00290)], there are some current AGI R&D projects with military connections. Most of these are US academic groups that receive funding from military research agencies such as DARPA and the Office of Naval Research. One is a small group at the primary national defense research agency of Singapore. None of them has the appearance of the sort of major strategic initiative that is sometimes postulated in the literature on long-term AI [[33](#B33-information-11-00290)].\n\nGiven the current state of affairs, it is highly likely that advanced militaries and national security communities will be engaged in AI throughout the medium term. That raises the question of their likely role. Despite common concerns within AI communities, as manifested, for example, in Google employee protests over Project Maven, militaries can actually be a constructive voice on ethics and safety. For example, a major theme of the JASON report [[55](#B55-information-11-00290)] is that what it calls the “ilities” (“reliability, maintainability, accountability, verifiability, evolvability, attackability, and so forth” [[55](#B55-information-11-00290)] (p. 2)) are a major concern for military applications and “a potential roadblock to DoD’s use of these modern AI systems, especially when considering the liability and accountability of using AI in lethal systems” [[55](#B55-information-11-00290)] (p. 27). Militaries are keen to avoid unintended consequences, especially for high-stakes battlefield technologies.\n\nIt is also important to account for the geopolitical context in which militaries operate. Militaries can afford to be more restrained in their development and use of risky technologies when their nations are at peace. In an interview, Larry Schuette of the Office of Naval Research compares autonomous weapons to submarines [[19](#B19-information-11-00290)] (pp. 100–101). Schuette recounts that in the 1920s and 1930s, the US was opposed to unrestricted submarine warfare, but that changed immediately following the 7 December 1941 attack on Pearl Harbor.
Similarly, the US is currently opposed to autonomous weapons, and on the question of whether it will remain opposed, Schuette replies, “Is it December eighth or December sixth?”\n\nIt follows that the role of militaries in medium-term AI may depend heavily on the state of international relations during this period. It stands to reason that the prospects for cautious and ethical AI development are much greater during times of peace than times of war. There is an inherent tension between pushing a technology ahead for strategic advantage and exercising caution with respect to unintended consequences, as is articulated by Danzig [[56](#B56-information-11-00290)]. Peaceful international relations tip the calculus toward caution and can empower militaries and national security communities to be important voices on safety and ethics.\n\n7. Conclusions\n---------------\n\nParson et al. [[2](#B2-information-11-00290)] argued that medium-term AI and its accompanying societal issues are important in their own right. This paper’s analysis yields the same conclusion. For each of the issue areas studied here—governance institutions, collective action, corporate development, and military/national security—the medium term will include important processes. In a sense, this is not much of a conclusion. It is already clear that AI is important in the near term, and there is plenty of reason to believe that AI will become more important as the technology and its applications develop further.\n\nWhat then of the presentist–futurist debate? This paper proposes the medium-term AI hypothesis, which is that there is an intermediate time period that is important from both presentist and futurist perspectives. With the near term defined in terms of feasibility and the long term in terms of capability, it follows that the medium-term AI hypothesis is more likely to hold if near-term AI techniques and societal dimensions persist to a significant extent through the end of the medium term, when long-term AI is built. To the extent that the hypothesis holds, attention to the medium term could play an important role in bridging the divide that can be found between presentist and futurist communities.\n\nThe paper finds mixed support for the medium-term AI hypothesis. Support is strong in the case of AI governance institutions, which are currently in development and may persist through the medium term, with implications for long-term AI. Support is ambiguous for AI collective action: government initiatives to promote collective action may play relatively little role at any time, private ownership schemes are difficult to arrange for AI, and community self-organizing has potential that might or might not be realized. Each of these three schemes for achieving collective action could potentially play out over near- and medium-term periods, with implications for long-term AI, but whether they are likely to do so is unclear. Regarding corporate AI development, a key question is whether near-to-medium-term AI technology could serve as a profitable precursor to AGI, creating AGI profit–R&D synergy. Whether the synergy would occur is an important question for future research. Finally, advanced militaries and national security communities are already paying attention to AGI and are likely to remain active in a range of AI technologies through the medium term.
While it is unclear whether military/national security communities will be important actors in the development of AGI, there is substantial potential, providing support for the medium-term AI hypothesis.\n\nIn closing, this paper has shown that at least some important AI processes are likely to play out over the medium term, and that they will be important in their own right and from both presentist and futurist perspectives. The exact nature and importance of medium-term AI are worthy subjects of future research. To the extent that medium-term AI can be understood, this understanding can point to opportunities to influence it positively, resulting in better overall outcomes for society.\n\n\nFunding\n-------\n\nThis research was funded by the Gordon R. Irlam Charitable Foundation.\n\nAcknowledgments\n---------------\n\nThis paper has benefited from comments from Robert de Neufville, Matthijs Maas, Jun Hong Yap, Steven Umbrello, Richard Re, Ted Parson, two anonymous reviewers, and audiences at seminars hosted by the UC Berkeley Center for Human-Compatible AI and the Global Catastrophic Risk Institute. Robert de Neufville also provided research assistance. Dakota Norris provided assistance in manuscript preparation. Any remaining errors are the author’s alone.\n\nConflicts of Interest\n---------------------\n\nThe author declares no conflict of interest.\n\nReferences\n----------\n\n1. Baum, S.D. Reconciliation between factions focused on near-term and long-term artificial intelligence. AI Soc. **2018**, 33, 565–572. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Reconciliation+between+factions+focused+on+near-term+and+long-term+artificial+intelligence&author=Baum,+S.D.&publication_year=2018&journal=AI+Soc.&volume=33&pages=565%E2%80%93572&doi=10.1007/s00146-017-0734-3)] [[CrossRef](https://doi.org/10.1007/s00146-017-0734-3)]\n2. Parson, E.; Re, R.; Solow-Niederman, A.; Zeide, E. Artificial Intelligence in Strategic Context: An Introduction. AI Pulse. 8 February 2019. Available online: (accessed on 2 February 2020).\n3. Parson, E.; Fyshe, A.; Lizotte, D. Artificial Intelligence’s Societal Impacts, Governance, and Ethics: Introduction to the 2019 Summer Institute on AI and Society and Its Rapid Outputs. AI Pulse. 26 September 2019. Available online: (accessed on 2 February 2020).\n4. Cave, S.; Ó hÉigeartaigh, S.S. Bridging near and long-term concerns about AI. Nat. Mach. Intell. **2019**, 1, 5–6. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Bridging+near+and+long-term+concerns+about+AI&author=Cave,+S.&author=%C3%93+h%C3%89igeartaigh,+S.S.&publication_year=2019&journal=Nat.+Mach.+Learn.&volume=1&pages=5%E2%80%936&doi=10.1038/s42256-018-0003-2)] [[CrossRef](https://doi.org/10.1038/s42256-018-0003-2)]\n5. Prunkl, C.; Whittlestone, J. Beyond near and long-term: Towards a clearer account of research priorities in AI ethics and society. In Proceedings of the Third AAAI/ACM Annual Conference on AI, Ethics, and Society, New York, NY, USA, 7 February 2020. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Beyond+near+and+long-term:+Towards+a+clearer+account+of+research+priorities+in+AI+ethics+and+society&conference=Proceedings+of+the+Third+AAAI/ACM+Annual+Conference+on+AI,+Ethics,+and+Society&author=Prunkl,+C.&author=Whittlestone,+J.&publication_year=2020)]\n6. Zeng, Y.; Lu, E.; Huangfu, C. Linking artificial intelligence principles. In Proceedings of the AAAI Workshop on Artificial Intelligence Safety, Honolulu, HI, USA, 12 December 2019.
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Linking+artificial+intelligence+principles&conference=Proceedings+of+the+AAAI+Workshop+on+Artificial+Intelligence+Safety&author=Zeng,+Y.&author=Lu,+E.&author=Huangfu,+C.&publication_year=2019)]\n7. Whittlestone, J.; Nyrup, R.; Alexandrova, A.; Cave, S. The role and limits of principles in AI ethics: Towards a focus on tensions. In Proceedings of the Second AAAI / ACM Annual Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27 January 2019. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+role+and+limits+of+principles+in+AI+ethics:+Towards+a+focus+on+tensions&conference=Proceedings+of+the+Second+AAAI+/+ACM+Annual+Conference+on+AI,+Ethics,+and+Society&author=Whittlestone,+J.&author=Nyrup,+R.&author=Alexandrova,+A.&author=Cave,+S.&publication_year=2019)]\n8. Amodei, D.; Olah, C.; Steinhardt, J.; Christiano, P.; Schulman, J.; Mané, D. Concrete Problems in AI Safety. 2016. Available online: (accessed on 2 February 2020).\n9. Conn, A. Transcript: Concrete problems in AI safety with Dario Amodei and Seth Baum. Future of Life Institute. 2016. Available online: (accessed on 2 February 2020).\n10. Arrhenius, G. The person-affecting restriction, comparativism, and the moral status of potential people. Ethical Perspect. **2005**, 10, 185–195. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+person-affecting+restriction,+comparativism,+and+the+moral+status+of+potential+people&author=Arrhenius,+G.&publication_year=2005&journal=Ethical+Perspect.&volume=10&pages=185%E2%80%93195&doi=10.2143/EP.10.3.503884&pmid=16206457)] [[CrossRef](https://doi.org/10.2143/EP.10.3.503884)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/16206457)][[Green Version](http://pdfs.semanticscholar.org/c64c/9c5429386e809701bb7555ae871a2e0564e5.pdf)]\n11. Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, UK, 2014. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Superintelligence:+Paths,+Dangers,+Strategies&author=Bostrom,+N.&publication_year=2014)]\n12. Garling, C. Andrew Ng: Why ‘Deep Learning’ Is a Mandate for Humans, Not Just Machines. Wired. 2015. Available online: (accessed on 2 February 2020).\n13. Calo, R. Artificial Intelligence Policy: A Primer and Roadmap. Available online: (accessed on 2 February 2020).\n14. Good, I.J. Speculations concerning the first ultraintelligent machine. In Advances in Computers; Alt, F.L., Rubinoff, M., Eds.; Academic Press: New York, NY, USA, 1965; pp. 31–88. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Speculations+concerning+the+first+ultraintelligent+machine&author=Good,+I.J.&publication_year=1965&pages=31%E2%80%9388)]\n15. Etzioni, O. No, the Experts Don’t Think Superintelligent AI Is a Threat to Humanity. MIT Technology Review. 20 September 2016. Available online: (accessed on 20 February 2020).\n16. Dafoe, A.; Russell, S. Yes, We Are Worried about the Existential Risk of Artificial Intelligence. MIT Technology Review. 2 November 2016. Available online: (accessed on 2 February 2020).\n17. Ford, M. Could artificial intelligence create an unemployment crisis? Commun. ACM **2013**, 56, 1–3. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Could+artificial+intelligence+create+an+unemployment+crisis?&author=Ford,+M.&publication_year=2013&journal=Commun.+ACM&volume=56&pages=1%E2%80%933&doi=10.1145/2483852.2483865)] [[CrossRef](https://doi.org/10.1145/2483852.2483865)]\n18. 
Federal Agencies Need to Address Aging Legacy Systems. United States Government Accountability Office, GAO-16-468. 2016. Available online: (accessed on 2 February 2020).\n19. Scharre, P. Army of None: Autonomous Weapons and the Future of War; W. W. Norton: New York, NY, USA, 2018. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Army+of+None:+Autonomous+Weapons+and+the+Future+of+War&author=Scharre,+P.&publication_year=2018)]\n20. Mizokami, K. How B-52 Bombers Will Fly Until the 2050s. Popular Mechanics. 10 September 2018. Available online: (accessed on 2 February 2020).\n21. Roblin, S. Bombs away: Russia’s ‘New’ Tu-22M3M Bomber Might Look Familiar (and Still Deadly). The National Interest. 13 October 2018. Available online: (accessed on 2 February 2020).\n22. Byrnes, M.W. Nightfall: Machine autonomy in air-to-air combat. Air Space Power J. **2014**, May–June, 48–75. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Nightfall:+Machine+autonomy+in+air-to-air+combat&author=Byrnes,+M.W.&publication_year=2014&journal=Air+Space+Power+J.&volume=May%E2%80%93June&pages=48%E2%80%9375)]\n23. Baum, S.D.; Goertzel, B.; Goertzel, T.G. How long until human-level AI? Results from an expert assessment. Technol. Forecast. Soc. Chang. **2011**, 78, 185–195. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=How+long+until+human-level+AI?+Results+from+an+expert+assessment&author=Baum,+S.D.&author=Goertzel,+B.&author=Goertzel,+T.G.&publication_year=2011&journal=Technol.+Forecast.+Soc.+Chang.&volume=78&pages=185%E2%80%93195&doi=10.1016/j.techfore.2010.09.006)] [[CrossRef](https://doi.org/10.1016/j.techfore.2010.09.006)]\n24. Sandberg, A.; Bostrom, N. Machine Intelligence Survey. Technical Report #2011-1, Future of Humanity Institute, Oxford University. 2011. Available online: (accessed on 27 May 2020).\n25. Müller, V.C.; Bostrom, N. Future progress in artificial intelligence: A poll among experts. In Fundamental Issues of Artificial Intelligence; Müller, V.C., Ed.; Springer: Berlin, Germany, 2016; pp. 555–572. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Future+progress+in+artificial+intelligence:+A+poll+among+experts&author=M%C3%BCller,+V.C.&author=Bostrom,+N.&publication_year=2016&pages=555%E2%80%93572)]\n26. Grace, K.; Salvatier, J.; Dafoe, A.; Zhang, B.; Evans, O. When will AI exceed human performance? Evidence from AI experts. J. Artif. Intell. Res. **2018**, 62, 729–754. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=When+will+AI+exceed+human+performance?+Evidence+from+AI+experts&author=Grace,+K.&author=Salvatier,+J.&author=Dafoe,+A.&author=Zhang,+B.&author=Evans,+O.&publication_year=2018&journal=J.+Artif.+Intell.+Res.&volume=62&pages=729%E2%80%93754&doi=10.1613/jair.1.11222)] [[CrossRef](https://doi.org/10.1613/jair.1.11222)]\n27. Armstrong, S.; Sotala, K.; Ó hÉigeartaigh, S.S. The errors, insights and lessons of famous AI predictions—And what they mean for the future. J. Exp. Theor. Artif. Intell. **2014**, 26, 317–342. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+errors,+insights+and+lessons+of+famous+AI+predictions%E2%80%94And+what+they+mean+for+the+future&author=Armstrong,+S.&author=Sotala,+K.&author=%C3%93+h%C3%89igeartaigh,+S.S.&publication_year=2014&journal=J.+Exp.+Theor.+Artif.+Intell.&volume=26&pages=317%E2%80%93342&doi=10.1080/0952813X.2014.895105)] [[CrossRef](https://doi.org/10.1080/0952813X.2014.895105)][[Green Version](http://www.fhi.ox.ac.uk/wp-content/uploads/FAIC.pdf)]\n28. 
Morgan, M.G. Use (and abuse) of expert elicitation in support of decision making for public policy. Proc. Natl. Acad. Sci. USA **2014**, 111, 7176–7184. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Use+(and+abuse)+of+expert+elicitation+in+support+of+decision+making+for+public+policy&author=Morgan,+M.G.&publication_year=2014&journal=Proc.+Natl.+Acad.+Sci.+USA&volume=111&pages=7176%E2%80%937184&doi=10.1073/pnas.1319946111&pmid=24821779)] [[CrossRef](https://doi.org/10.1073/pnas.1319946111)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/24821779)][[Green Version](https://www.pnas.org/content/pnas/111/20/7176.full.pdf)]\n29. Arkin, R. Lethal autonomous systems and the plight of the non-combatant. In The Political Economy of Robots; Kiggins, R., Ed.; Palgrave Macmillan: Cham, Switzerland, 2018; pp. 317–326. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Lethal+autonomous+systems+and+the+plight+of+the+non-combatant&author=Arkin,+R.&publication_year=2018&pages=317%E2%80%93326)]\n30. Rosert, E.; Sauer, F. Prohibiting autonomous weapons: Put human dignity first. Glob. Policy **2019**, 10, 370–375. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Prohibiting+autonomous+weapons:+Put+human+dignity+first&author=Rosert,+E.&author=Sauer,+F.&publication_year=2019&journal=Glob.+Policy&volume=10&pages=370%E2%80%93375&doi=10.1111/1758-5899.12691)] [[CrossRef](https://doi.org/10.1111/1758-5899.12691)][[Green Version](https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1758-5899.12691)]\n31. Baum, S.D. A survey of artificial general intelligence projects for ethics, risk, & policy. GCRI Work. Pap. **2017**, 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=A+survey+of+artificial+general+intelligence+projects+for+ethics,+risk,+%2526+policy&author=Baum,+S.D.&publication_year=2017&journal=GCRI+Work.+Pap.&volume=2017&doi=10.2139/ssrn.3070741)] [[CrossRef](https://doi.org/10.2139/ssrn.3070741)]\n32. Armstrong, S.; Bostrom, N.; Shulman, C. Racing to the precipice: A model of artificial intelligence development. AI Soc. **2016**, 31, 201–206. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Racing+to+the+precipice:+A+model+of+artificial+intelligence+development&author=Armstrong,+S.&author=Bostrom,+N.&author=Shulman,+C.&publication_year=2016&journal=AI+Soc.&volume=31&pages=201%E2%80%93206&doi=10.1007/s00146-015-0590-y)] [[CrossRef](https://doi.org/10.1007/s00146-015-0590-y)]\n33. Shulman, C. Arms control and intelligence explosions. In Proceedings of the 7th European Conference on Computing and Philosophy, Bellaterra, Spain, 2–4 July 2009. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Arms+control+and+intelligence+explosions&conference=Proceedings+of+the+7th+European+Conference+on+Computing+and+Philosophy&author=Shulman,+C.&publication_year=2009)]\n34. Hanson, R. The Age of Em: Work, Love, and Life When Robots Rule the Earth; Oxford University Press: Oxford, UK, 2016. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+Age+of+Em:+Work,+Love,+and+Life+When+Robots+Rule+the+Earth&author=Hanson,+R.&publication_year=2016)]\n35. Prime Minister of Canada. Mandate for the International Panel on Artificial Intelligence. Canada. 6 December 2018. Available online: (accessed on 2 February 2020).\n36. Kohler, K.; Oberholzer, P.; Zahn, N. 
Making Sense of Artificial Intelligence: Why Switzerland Should Support a Scientific UN Panel to Assess the Rise of AI; Swiss Forum on Foreign Policy: Geneva, Switzerland, 2019; Available online: (accessed on 2 February 2020).\n37. Miailhe, N. AI & Global Governance: Why We Need an Intergovernmental Panel for Artificial Intelligence. Centre for Policy Research, United Nations University. 20 December 2018. Available online: (accessed on 2 February 2020).\n38. The AI Initiative. The Future Society. 2018. Available online: (accessed on 2 February 2020).\n39. Cave, S.; Ó hÉigeartaigh, S.S. An AI race for strategic advantage: Rhetoric and risks. In Proceedings of the AAAI/ACM Annual Conference on AI, Ethics, and Society, New Orleans, LA, USA, 2–3 February 2018. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=An+AI+race+for+strategic+advantage:+Rhetoric+and+risks&conference=Proceedings+of+the+AAAI/ACM+Annual+Conference+on+AI,+Ethics,+and+Society&author=Cave,+S.&author=%C3%93+h%C3%89igeartaigh,+S.S.&publication_year=2018)]\n40. Geist, E.M. It’s already too late to stop the AI arms race—We must manage it instead. Bull. At. Sci. **2016**, 72, 318–321. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=It%E2%80%99s+already+too+late+to+stop+the+AI+arms+race%E2%80%94We+must+manage+it+instead&author=Geist,+E.M.&publication_year=2016&journal=Bull.+At.+Sci.&volume=72&pages=318%E2%80%93321&doi=10.1080/00963402.2016.1216672)] [[CrossRef](https://doi.org/10.1080/00963402.2016.1216672)]\n41. Roff, H.M. The frame problem: The AI “arms race” isn’t one. Bull. At. Sci. **2019**, 75, 95–98. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+frame+problem:+The+AI+%E2%80%9Carms+race%E2%80%9D+isn%E2%80%99t+one&author=Roff,+H.M.&publication_year=2019&journal=Bull.+At.+Sci.&volume=75&pages=95%E2%80%9398&doi=10.1080/00963402.2019.1604836)] [[CrossRef](https://doi.org/10.1080/00963402.2019.1604836)]\n42. Ostrom, E. Governing the Commons: The Evolution of Institutions for Collective Action; Cambridge University Press: Cambridge, UK, 1990. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Governing+the+Commons:+The+Evolution+of+Institutions+for+Collective+Action&author=Ostrom,+E.&publication_year=1990)]\n43. Scherer, M.U. Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harv. J. Law Technol. **2016**, 29, 353–400. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Regulating+artificial+intelligence+systems:+Risks,+challenges,+competencies,+and+strategies&author=Scherer,+M.U.&publication_year=2016&journal=Harv.+J.+Law+Technol.&volume=29&pages=353%E2%80%93400&doi=10.2139/ssrn.2609777)] [[CrossRef](https://doi.org/10.2139/ssrn.2609777)]\n44. Wilson, G. Minimizing global catastrophic and existential risks from emerging technologies through international law. Va. Environ. Law J. **2013**, 31, 307–364. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Minimizing+global+catastrophic+and+existential+risks+from+emerging+technologies+through+international+law&author=Wilson,+G.&publication_year=2013&journal=Va.+Environ.+Law+J.&volume=31&pages=307%E2%80%93364)]\n45. Bostrom, N. The vulnerable world hypothesis. Glob. Policy **2019**, 10, 455–476. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+vulnerable+world+hypothesis&author=Bostrom,+N.&publication_year=2019&journal=Glob.+Policy&volume=10&pages=455%E2%80%93476&doi=10.1111/1758-5899.12718)] [[CrossRef](https://doi.org/10.1111/1758-5899.12718)][[Green Version](https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1758-5899.12718)]\n46. Caplan, B. The totalitarian threat. In Global Catastrophic Risks; Bostrom, N., Ćirković, M.M., Eds.; Oxford University Press: Oxford, UK, 2008; pp. 504–519. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+totalitarian+threat&author=Caplan,+B.&publication_year=2008&pages=504%E2%80%93519)]\n47. Picker, C.B. A view from 40,000 feet: International law and the invisible hand of technology. Cardozo Law Rev. **2001**, 23, 149–219. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=A+view+from+40,000+feet:+International+law+and+the+invisible+hand+of+technology&author=Picker,+C.B.&publication_year=2001&journal=Cardozo+Law+Rev.&volume=23&pages=149%E2%80%93219)]\n48. Hwang, T. Computational Power and the Social Impact of Artificial Intelligence. 2019. Available online: (accessed on 2 February 2020).\n49. List of Semiconductor Fabrication Plants. Wikipedia. Available online: (accessed on 2 February 2020).\n50. Introducing Our First Chinese Member to the Partnership on AI. Partnership on AI. 16 October 2018. Available online: (accessed on 2 February 2020).\n51. Oreskes, N.; Conway, E.M. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming; Bloomsbury: New York, NY, USA, 2010. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Merchants+of+Doubt:+How+a+Handful+of+Scientists+Obscured+the+Truth+on+Issues+from+Tobacco+Smoke+to+Global+Warming&author=Oreskes,+N.&author=Conway,+E.M.&publication_year=2010)]\n52. Grandjean, P. Only One Chance: How Environmental Pollution Impairs Brain Development—And How to Protect. the Brains of the Next Generation; Oxford University Press: Oxford, UK, 2013. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Only+One+Chance:+How+Environmental+Pollution+Impairs+Brain+Development%E2%80%94And+How+to+Protect.+the+Brains+of+the+Next+Generation&author=Grandjean,+P.&publication_year=2013)]\n53. Baum, S.D. Superintelligence skepticism as a political tool. Information **2018**, 9, 209. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Superintelligence+skepticism+as+a+political+tool&author=Baum,+S.D.&publication_year=2018&journal=Information&volume=9&pages=209&doi=10.3390/info9090209)] [[CrossRef](https://doi.org/10.3390/info9090209)][[Green Version](https://www.mdpi.com/2078-2489/9/9/209/pdf)]\n54. Castro, D. The U.S. May Lose the AI Race Because of an Unchecked Techno-Panic. Center for Data Innovation. 5 March 2019. Available online: (accessed on 2 February 2020).\n55. Potember, R. Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD; The MITRE Corporation: McLean, VA, USA, 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Perspectives+on+Research+in+Artificial+Intelligence+and+Artificial+General+Intelligence+Relevant+to+DoD&author=Potember,+R.&publication_year=2017)]\n56. Danzig, R. Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority. Center for a New American Security. 30 May 2018. 
Available online: (accessed on 2 February 2020).\n\n\n**Figure 1.**\nEstimates for when AI will reach superhuman capability (Baum et al.) [[23](#B23-information-11-00290)] and human-level capability (Sandberg and Bostrom, Müller and Bostrom, and Grace et al.) [[24](#B24-information-11-00290),[25](#B25-information-11-00290),[26](#B26-information-11-00290)]. Shown are estimates for when the probability that the milestone is reached is 10% (lower mark), 50% (square), and 90% (upper mark). For each study, the median estimates across the survey participants are plotted.\n\n\n**Figure 2.**\nIllustrative sketches of presentist and futurist interest in the near, medium, and long term. (**a**) shows overlapping interest: the medium-term AI hypothesis holds; (**b**) shows a dead zone with no overlapping interest: the medium-term AI hypothesis does not hold. The sketches are strictly for illustrative purposes only. The phrase “new forms of AI built” is defined with reference to the definition of near-term AI in the main text.\n\n© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ().", "url": "https://www.mdpi.com/2078-2489/11/6/290", "title": "Medium-Term Artificial Intelligence and Society", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2020-05-31T22:00:00Z", "authors": ["Seth D. Baum"], "summary": [], "id": "39e542fee9baabd88215af6da99b6f86"} {"text": "Abstract\n--------\n\nCorporations play a major role in artificial intelligence (AI) research, development, and deployment, with profound consequences for society. This paper surveys opportunities to improve how corporations govern their AI activities so as to better advance the public interest. The paper focuses on the roles of and opportunities for a wide range of actors inside the corporation—managers, workers, and investors—and outside the corporation—corporate partners and competitors, industry consortia, nonprofit organizations, the public, the media, and governments. Whereas prior work on multistakeholder AI governance has proposed dedicated institutions to bring together diverse actors and stakeholders, this paper explores the opportunities they have even in the absence of dedicated multistakeholder institutions.
The paper illustrates these opportunities with many cases, including the participation of Google in the U.S. Department of Defense Project Maven; the publication of potentially harmful AI research by OpenAI, with input from the Partnership on AI; and the sale of facial recognition technology to law enforcement by corporations including Amazon, IBM, and Microsoft. These and other cases demonstrate the wide range of mechanisms to advance AI corporate governance in the public interest, especially when diverse actors work together.\n\n\nKeywords: [artificial intelligence](/search?q=artificial+intelligence); [corporate governance](/search?q=corporate+governance); [public interest](/search?q=public+interest); [technology governance](/search?q=technology+governance); [multistakeholderism](/search?q=multistakeholderism)\n\n1. Introduction\n----------------\n\nThe corporate governance of artificial intelligence (AI) can benefit from input and activity from a range of stakeholders, including those both within and outside of the corporation. Several recent initiatives call for multistakeholder governance institutions that bring diverse stakeholders together to inform AI governance. Examples include activities of the Global Partnership on AI [[1](#B1-information-12-00275)], the European Commission’s High-level Expert Group on AI [[2](#B2-information-12-00275)], and research by Cath et al. [[3](#B3-information-12-00275)]. To date, less attention has been paid to the important opportunities for different stakeholders to contribute to AI corporate governance in their own right—outside the context of dedicated multistakeholder institutions. Those opportunities are the focus of this paper.\n\nThe importance of AI corporate governance is clear. Corporations play a major—perhaps the primary—role in AI research, development, and deployment. Corporate-affiliated researchers published over 50% more AI research papers than academics in the United States in 2018 [[4](#B4-information-12-00275)]. Corporate applications of AI touch on many important public issues, including social justice, economic vitality, and international security. Looking ahead, some have proposed that AI could displace large portions of the human labor pool, resulting in chronic unemployment for many people as well as massive profits for AI companies [[5](#B5-information-12-00275)]. Corporations are also active in the research and development of artificial general intelligence, a technology that some believe could transform the world in ways that are either radically beneficial or catastrophic [[6](#B6-information-12-00275)]. How AI is governed within corporations is therefore of profound societal importance.\n\nTo the best of our knowledge, this paper is the first survey of the corporate governance of AI. As reviewed below, prior publications have focused on specific aspects of the topic. This paper offers a broad introduction to the topic and a resource for a wide range of scholarship and initiatives to improve AI corporate governance.\n\nAlthough we aim for this paper to be a broad overview of opportunities in AI corporate governance, it does have some areas of focus. One is on select large corporations at the forefront of AI research and development, in particular Alphabet (the parent company of Google), Amazon, Facebook, and Microsoft. These corporations merit attention because they exercise significant influence on both technological developments and emerging regulatory methods.
Additionally, within our discussion of government activities, there is a particular focus on the European Union, which has arguably the most mature regulatory landscape for AI to date. Finally, because this paper is focused on the practical mechanics of AI corporate governance, it mostly focuses on machine learning, the dominant AI paradigm today. These areas of focus are important in their own right; they also serve as examples to illustrate more general points about AI corporate governance that are applicable to other companies, political jurisdictions, and AI paradigms.\n\nThree running examples illustrate how different actors can influence AI corporate governance. The first is Google’s involvement in Project Maven, a U.S. Department of Defense project to classify the content of drone video. In 2018, Google management pulled Google out of Project Maven following media coverage and worker protests. The second example concerns the open publication of potentially harmful AI research. In 2019, OpenAI announced its new strategy for publishing such research [[7](#B7-information-12-00275)], sparking further debate by, among others, the Partnership on AI [[8](#B8-information-12-00275)]. The third example concerns facial recognition for law enforcement. In 2020, a nexus of activity from nonprofits, the public, governments, and corporate management prompted several companies, including Amazon, IBM, and Microsoft, to stop providing facial recognition technology to law enforcement agencies. Although the paper also discusses other examples, these three run throughout the text and highlight the interconnected influence of different actors on AI corporate governance.\n\nThe paper is organized as follows. [Section 2](#sec2-information-12-00275) presents the definitions of key terms. [Section 3](#sec3-information-12-00275) reviews the extant literature. [Section 4](#sec4-information-12-00275) assesses opportunities to improve AI corporate governance for a variety of actors: management, workers, investors, corporate partners and competitors, industry consortia, nonprofit organizations, the public, the media, and government. [Section 5](#sec5-information-12-00275) concludes.\n\n2. Definitions\n---------------\n\nBroadly speaking, corporate governance refers to the ways in which corporations are managed, operated, regulated, and financed. Important elements of corporate governance include the legal status of corporations in a political jurisdiction, the relationship between investors and executives, information flows within and outside of the corporation, and specific operational decisions made throughout the corporation [[9](#B9-information-12-00275)]. Many people within and outside of a corporation can influence how the corporation is governed. For this reason, we take a broad view of the range of actors relevant for the corporate governance of AI.\n\nOur specific focus in this paper is on how AI corporate governance can be improved so as to better advance the public interest. The public interest can be defined in many ways, such as in terms of costs and benefits, or voter preferences, or fundamental rights and duties. Exactly how the public interest is defined can be important for AI corporate governance, as is seen in a variety of controversies over AI applications.
This paper does not take sides on the most appropriate conception of the public interest, with one exception: we reject the view that corporations’ sole aim should be to maximize shareholder profits [[10](#B10-information-12-00275)] and instead argue that corporations have obligations to a wider range of stakeholders. We recognize that this position is not universally held in the field of corporate governance; however, it does reflect support from many business leaders [[11](#B11-information-12-00275)]. Broadly, our aim is to clarify the mechanisms through which corporate governance can be improved to better advance the public interest.\n\nThe concept of stakeholders is also central to this paper. Stakeholders have been defined as “any group or individual who can affect or is affected by the achievement of the organization’s objectives” [[12](#B12-information-12-00275)] (p. 46). Our focus is specifically on those who can affect how a corporation governs AI. Those who are affected by AI but cannot act to affect it, such as members of future generations, are outside the scope of this paper, except insofar as their interests are part of the overall public interest. It is therefore perhaps more precise to say that we focus on actors, i.e., those who can act to affect AI corporate governance. Likewise, our approach also parallels, but ultimately differs from, the phenomenon of multistakeholderism, which refers to governance activities conducted with participation from multiple types of stakeholders such as governments, corporations, academia, and nonprofits [[13](#B13-information-12-00275)]. Multistakeholderism commonly manifests via dedicated institutions that invite participation from multiple types of stakeholders. Cath et al. call for new multistakeholder AI governance institutions [[3](#B3-information-12-00275)]. Existing examples include the Partnership on AI (PAI) and the OECD Network of Experts on AI, which bring together people from government, industry, academia, and civil society to advance the understanding and practice of AI governance. These are important institutions, and they share this paper’s interest in participation from a wide range of actors. This paper diverges from multistakeholderism by focusing on the full range of opportunities available to different actors and not just the opportunities afforded by dedicated multistakeholder institutions. The paper’s approach is perhaps more similar to the concept of stakeholder capitalism, which calls for corporations to be attentive to stakeholder actions and responsive to stakeholder interests [[14](#B14-information-12-00275)].\n\nArtificial intelligence has been defined in many ways. One prominent definition states that AI is an artificial agent that can “achieve goals in a wide range of environments” [[15](#B15-information-12-00275)]. However, current AI systems only perform well in certain settings, especially simpler environments for which there are ample data [[16](#B16-information-12-00275)]. For this paper, it suffices to employ a social definition: AI is what people generally consider to be AI. This is a bit of a moving target: as the technology has progressed, people’s minimum standards for what they consider AI have risen [[17](#B17-information-12-00275)].
This paper focuses on the computer techniques currently considered to be AI, which, in practice, are largely machine learning, as well as more advanced forms of AI that may be developed in the future.\n\nFor AI corporate governance, it is also helpful to define AI activities in terms of the AI system lifecycle, i.e., the sequence of activities that take an AI system from its initial conception to its final use. Attention to the lifecycle can help identify and clarify opportunities to improve AI corporate governance. Different actors and activities will have varying influence over different phases of the AI system lifecycle within a corporation. This paper uses the AI system lifecycle to more precisely describe the influence of these actors and activities. In general, efforts to improve AI corporate governance must affect at least one phase of the lifecycle—otherwise, there is no effect on any actual AI systems.\n\nThis paper uses an AI system lifecycle framework developed by the OECD Expert Group on AI [[18](#B18-information-12-00275)]. [Figure 1](#information-12-00275-f001) illustrates the four phases of the framework. Phase 1 concerns research and design of the AI system. Researchers identify a task for their system, choose a style of model, define performance measures, and select relevant data or other input. This phase includes data collection, cleaning, quality (including bias) checks, and documentation. Phase 2 tests the system to assess performance. This includes testing that covers regression (speed slowdowns), the comparison of previous model behavior to new behavior, and performance across many metrics, e.g., accuracy and calibration measures. Phase 3 puts the system into production. This may include launch testing for real-world use cases, checking compliance with relevant regulations, checking compatibility with legacy software, and assigning responsibilities for managing the AI system. Once the system is deployed, this phase also includes evaluating initial user experience. Phase 4 operates and monitors the AI system in deployment, assessing its outputs and impacts based on the designers’ initial intentions and performance metrics as well as ethical considerations. Problems are identified and addressed by reverting to other phases or eliminating the AI system.\n\n3. Prior Work\n--------------\n\nThis paper sits at the intersection of literatures on corporate governance and AI governance. Corporate governance is a large field of scholarship with a long history. Good introductions are offered by Monks and Minow [[19](#B19-information-12-00275)] and Gordon and Ringe [[20](#B20-information-12-00275)]. AI governance is a smaller and relatively new field of study. For more work in this area, see, for example, work from the AI Now Institute [[21](#B21-information-12-00275)], Data & Society [[22](#B22-information-12-00275)], the World Economic Forum [[23](#B23-information-12-00275)], and the Future of Humanity Institute [[24](#B24-information-12-00275)], as well as Calo [[25](#B25-information-12-00275)].\n\nOne body of literature on AI corporate governance studies public policy proposals, primarily for new, dedicated governance bodies. Calo [[26](#B26-information-12-00275)] calls for a federal body to address robotics policy. A similar proposal has been discussed in Europe by Floridi et al. [[27](#B27-information-12-00275)]. The European Commission has recently proposed to establish a European Artificial Intelligence Board [[28](#B28-information-12-00275)].
Scherer [[29](#B29-information-12-00275)] outlines a proposal for a dedicated government agency and a voluntary certification scheme that incentivizes companies to submit to agency oversight in return for limited legal liability. Wallach and Marchant [[30](#B30-information-12-00275)] propose a governance coordinating committee to support soft law governance that keeps pace with new and emerging AI. Erdélyi and Goldsmith [[31](#B31-information-12-00275)] call for an international regulatory agency to address international AI challenges; Cihon et al. [[32](#B32-information-12-00275)] argue that it is too soon to establish such an international structure and that further debate is first needed. Clark and Hadfield [[33](#B33-information-12-00275)] propose a markets-based approach to AI safety regulation.\n\nSome literature has analyzed existing regulations as they pertain to corporate AI. The E.U. General Data Protection Regulation (GDPR) has been of particular interest in this respect. For example, Wachter et al. [[34](#B34-information-12-00275)] argue that the GDPR does not afford a “right to explanation” of automated decision-making, whereas Goodman and Flaxman [[35](#B35-information-12-00275)] argue that it does. Another body of literature analyzes the European approach to AI regulation. Smuha [[36](#B36-information-12-00275)] analyzes the emerging regulatory environment for making AI trustworthy. Thelisson et al. [[37](#B37-information-12-00275)] and Stix [[38](#B38-information-12-00275)] survey the broader regulatory landscape in the E.U. There is also some literature on a number of national regulations. For example, Wagner et al. [[39](#B39-information-12-00275)] analyze different corporate strategies for complying with algorithmic transparency requirements imposed by the German Network Enforcement Act.\n\nThere is also an extensive body of literature on specific policy instruments and governance approaches. For example, Senden [[40](#B40-information-12-00275)] and Marsden [[41](#B41-information-12-00275)] disentangle the concepts of soft law, self-regulation, and co-regulation. Kaminski [[42](#B42-information-12-00275)] conceptualizes approaches between command and control regulation and self-regulation as “binary governance”, while Pagallo [[43](#B43-information-12-00275)] uses the framing of a “middle-out approach”. Zeitlin [[44](#B44-information-12-00275)] discusses the current state of transnational regulation within and beyond the E.U.\n\nThe legal liability of corporations for harms caused by AI systems has been another major focus. Broadly speaking, liability regimes aim to compensate victims of harms caused by products and, in turn, encourage producers to avoid the harms in the first place. AI liability regimes are generally not written specifically for the corporate sector, but in practice mainly affect commercial products. The exact form of liability regimes can vary substantially across jurisdictions and circumstances. Separate literatures discuss AI and robotics under liability law in different jurisdictions, including the United States [[29](#B29-information-12-00275),[45](#B45-information-12-00275),[46](#B46-information-12-00275),[47](#B47-information-12-00275),[48](#B48-information-12-00275)], the E.U. [[49](#B49-information-12-00275),[50](#B50-information-12-00275),[51](#B51-information-12-00275),[52](#B52-information-12-00275),[53](#B53-information-12-00275)], and Germany [[54](#B54-information-12-00275),[55](#B55-information-12-00275),[56](#B56-information-12-00275)].
Additionally, a more theoretical approach considers whether liability regimes could handle extreme catastrophic risks from AI, such as those of potential long-term artificial general intelligence [[57](#B57-information-12-00275)].\n\nA variety of other AI corporate governance topics have also been studied. Buolamwini and Gebru [[58](#B58-information-12-00275)] assess the efficacy of targeted audits and public shaming of AI companies in addressing biases in facial recognition systems. More generally, Baum [[59](#B59-information-12-00275)] explores the social psychology of AI developers as a factor in efforts to steer their work in pro-social directions. Belfield [[60](#B60-information-12-00275)] details recent employee activism within the AI community and its impact on AI firms and technological development. Askell et al. [[61](#B61-information-12-00275)] analyze competitive pressures on AI firms in terms of their societal impacts. Solaiman et al. [[62](#B62-information-12-00275)] examine the societal implications of deciding to publicly disclose AI models, focusing on the case of OpenAI. Cihon [[63](#B63-information-12-00275)] reviews the role of international technical standards in governing AI research and development. Baum [[64](#B64-information-12-00275),[65](#B65-information-12-00275)] analyzes potential corporate efforts to manipulate public debate about AI risks. O’Keefe et al. [[66](#B66-information-12-00275)] propose a novel method of corporate social responsibility that sees AI firms contribute to the public benefit. Avin et al. [[67](#B67-information-12-00275)] and Ballard and Calo [[68](#B68-information-12-00275)] use forecasting and roleplay methods to study potential future behaviors of actors affecting corporate AI.\n\nA large number of articles published in magazines such as the Harvard Business Review and MIT Technology Review offer practical insights for managers on governing both AI development and adoption within firms. Hume and LaPlante [[69](#B69-information-12-00275)] analyze how companies can manage biases and risks along the AI building process. Tiell [[70](#B70-information-12-00275)] recommends that corporations establish an ethics committee. Chamorro-Premuzic et al. [[71](#B71-information-12-00275)] offer a step-by-step approach on how companies can build ethical AI for human resources. Fountaine et al. [[72](#B72-information-12-00275)] outline how management should build AI-powered organizations. Abbasi et al. [[73](#B73-information-12-00275)] analyze how companies can mitigate the risks of automated machine learning. Hao [[74](#B74-information-12-00275),[75](#B75-information-12-00275)] urges AI companies to actually implement their ethical guidelines, while also emphasizing how difficult this will be.\n\nConsulting firms have also published reports on AI corporate governance. Burkhardt et al. [[76](#B76-information-12-00275)] of McKinsey describe how Chief Executive Officers (CEOs) can guide employees to build and use AI responsibly. Cheatham et al. [[77](#B77-information-12-00275)], also of McKinsey, discuss how managers can mitigate AI risks. Ransbotham et al. [[78](#B78-information-12-00275)] of the Boston Consulting Group survey more than 2500 corporate executives on AI topics, including how companies are managing AI risks. Several major accounting firms have developed governance frameworks to promote ethical AI [[79](#B79-information-12-00275),[80](#B80-information-12-00275),[81](#B81-information-12-00275)].
Deloitte [[82](#B82-information-12-00275)] reports on AI risk management in the financial industry. Accenture [[83](#B83-information-12-00275)] covers corporate AI ethics committees.\n\n4. Actor-Specific Opportunities to Improve AI Corporate Governance\n-------------------------------------------------------------------\n\nA variety of actors can improve AI corporate governance so as to better advance the public interest. This section considers nine types of actors. Three are internal to the corporation: managers, workers, and investors. Six are external: corporate partners and competitors, industry consortia, nonprofit organizations, the public, the media, and governments.\n\nAlthough presented separately for clarity, these actors interact, overlap, and co-exist in practice, and their interactions are important. For example, the media can channel worker influence within firms, facilitate public pressure, and precipitate government action. Actors have the potential to overlap, for example, with governments publishing media reports or taking over management of a company. Ultimately, all actors co-exist within political cultures, which may vary by country and over time [[84](#B84-information-12-00275)]. Although the following sections describe actions that each actor could take to improve AI corporate governance, we do not analyze the feasibility or desirability of such actions within their political and cultural contexts.\n\n#### 4.1. Management\n\nManagement, as the term is used in this paper, includes all personnel with authority and oversight over other personnel, from top C-suite executives to mid- and lower-level managers. Management is an important—perhaps the most important—type of actor in corporate governance. Management establishes policies, implements processes, creates structures, and influences culture, all of which impact AI development.\n\nOne way management can advance AI in the public interest is by establishing corporate policies. One type of policy is strategic objectives. For example, management could establish the objectives of pursuing AI development where it is clearly in the public interest and avoiding contentious settings such as law enforcement. Another type of policy is ethics guidelines that specify how the corporation should develop and use AI and related technologies. Recently, management at many companies, including Google, IBM, Microsoft, and OpenAI, has established AI ethics guidelines [[85](#B85-information-12-00275)]. An ongoing challenge is to translate ethics guidelines into AI practice [[86](#B86-information-12-00275)]. The translation process can include more operational policies on the details of how a company should develop and use specific AI techniques. Likewise, a concern is that published principles could create the appearance of AI corporations acting in the public interest without them actually doing so [[87](#B87-information-12-00275)].\n\nManagement can also enact processes that translate policies into practice. These processes can take many forms. For example, management can establish new review processes for AI or augment existing review processes, such as those conducted by compliance and risk management teams. Additionally, management can encourage or require the use of documentation methods that generate and distribute information needed to ensure compliance with AI principles [[88](#B88-information-12-00275)].
Notable examples include Datasheets for Datasets [[89](#B89-information-12-00275)], a standardized reporting document for dataset features, and Model Cards [[90](#B90-information-12-00275)], an approach to consistently describing an AI model and its intended use case. These processes could be improved if management were to review their efficacy and publicly share best practices.\n\nManagement activity on policies and processes is seen, for example, in the caution of OpenAI on publishing potentially harmful AI work. In 2019, OpenAI released its GPT-2 language model in phases out of concern about its potential harmful applications [[7](#B7-information-12-00275),[62](#B62-information-12-00275)]. OpenAI created a review process to evaluate the social impacts of earlier releases before determining if and how to release more advanced versions of GPT-2. OpenAI’s discussion of its phased release [[7](#B7-information-12-00275)] references the OpenAI charter [[91](#B91-information-12-00275)], a policy document that expresses the principle of factoring safety and security concerns into decisions of what work to publish. (Note: the charter was published in 2018, when OpenAI was a nonprofit.) Although authorship of the charter is attributed to “OpenAI”, it is likely that OpenAI management played a central role in drafting and approving the document, which anchors the organization’s “primary fiduciary obligation” [[92](#B92-information-12-00275)]. Additionally, the OpenAI GPT-2 team includes a mix of workers and management, including OpenAI co-founder and Chief Scientist Ilya Sutskever; thus, it can be inferred that management was likely involved in the review process. In summary, the GPT-2 release appears to demonstrate how management may translate policies into processes to support AI development and use in the public interest.\n\nManagement can also create structures within the company dedicated to advancing AI in the public interest. Such structures can perform oversight, make recommendations, and provide expertise to people throughout the company. They can consist of company staff and often interact with numerous teams across the organization. Prominent examples include the Microsoft advisory committee AI, Ethics, and Effects in Engineering and Research, the compliance-oriented Microsoft Office of Responsible AI, the Google Responsible Innovation Team, the AI Principles working group within the Google Cloud division, and a Facebook team of policy managers that work with product teams on fairness and explainability problems. Alternatively, the groups can consist of external advisors, such as the Google DeepMind Ethics & Society division’s group of external advisors, the Axon AI and Policing Technologies Ethics Board, and the short-lived Advanced Technology External Advisory Council at Google.\n\nThus far, these dedicated structures have had mixed success. One success came at Axon. Its ethics board advised against the company selling facial recognition to law enforcement; management followed this advice [[93](#B93-information-12-00275)]. A failure occurred at Google, which disbanded its Advanced Technology External Advisory Council soon after its launch amid outcry about its membership [[94](#B94-information-12-00275)]. Overall, these structures are relatively new and not yet in wide use, and much is still being learned about them. Nonetheless, one early lesson is that AI governance teams ought to be interdisciplinary.
Regardless of where such a team may be within the reporting structure, it may be expected to include lawyers, ethicists, data scientists, engineers, program managers, and people from other diverse occupational backgrounds.

Management can also build AI governance functions into pre-existing structures. Important areas for this may be compliance and risk management. Current compliance and risk management teams may focus less on AI and more on established risks such as computer security [[95](#B95-information-12-00275)]. However, as governments increase their policy attention to AI, the need for corresponding activity within corporations will increase. It is likely that pre-existing corporate structures could build expertise in AI risks over time, as the field of AI corporate governance matures, as standards are published, and as regulations enter into force. Forward-thinking management can advance this process by laying the groundwork, such as by building AI expertise into pre-existing structures.

Finally, management can help cultivate a corporate culture that supports AI development in the public interest. Corporate culture can play a major role in how a company develops and uses AI [[59](#B59-information-12-00275),[96](#B96-information-12-00275)]. Employee onboarding and training could include a focus on responsible AI development [[97](#B97-information-12-00275)]. Recruiting efforts could select for, or aim to instill, knowledge of responsible AI development methods. Employee performance reviews and metrics could incentivize these methods’ use, from concretely assessing bias in training data at the design phase to more broadly upholding a culture of responsible development. On the latter, OpenAI has tied compensation levels to adherence to its charter [[98](#B98-information-12-00275)]. However, it is unclear what additional steps dominant AI firms are now taking to instill their AI principles into corporate culture.

#### 4.2. Workers

We use the term workers to refer specifically to people who work at the company and do not have managerial authority. In their subordinate relationship to top management, this also includes lower- and mid-level managers. Workers include employees and contractors, both of which are common at AI companies. A wide range of workers affect AI, including researchers, engineers, and product developers. Despite being expected to follow directions from management, workers at AI firms have considerable power to shape corporate governance. Workers are often left with significant latitude to determine corporate activity within their areas of focus, and management is often (although certainly not always) influenced by worker suggestions.

Workers can influence AI corporate governance both directly, through their actions affecting AI systems, and indirectly, by influencing management. While management makes many governance decisions, especially high-level decisions for the corporation and its divisions, many other decisions are left to workers, especially on the specifics of AI design and implementation. Worker opportunities for direct influence may be especially robust at earlier stages of the AI system lifecycle and at corporations and divisions whose management offers workers wide latitude for decision-making.
Likewise, worker opportunities for indirect influence may be greatest at corporations and divisions whose management is especially receptive to worker input.

Some of the best opportunities for direct worker influence may be for workers in groups conducting fundamental research, such as Facebook AI Research, Google Brain and DeepMind, Microsoft Research, and OpenAI. Workers in these groups may have significant autonomy from management to pursue their work as they see fit. Indeed, these workers may be influenced less by management and more by academic norms, research fashions, reputational concerns, conference requirements, journal expectations, and their own personal values. Likewise, those seeking to influence corporate AI researchers may find good opportunities via the broader field of AI, such as at leading conferences. For example, as of 2020, the NeurIPS conference uses an ethics review process and requires papers to include a social impact statement [[99](#B99-information-12-00275)]. These activities can be important for AI corporate governance due to the significant autonomy of corporate AI researchers.

Workers also have significant opportunities to indirectly affect AI corporate governance by influencing management. However, these opportunities can be risky for workers because of management’s control over—or at least considerable influence on—workers’ employment and advancement within the corporation. In general, activities that involve a higher degree of worker commitment and risk of reprisal by management will tend to have a greater effect on corporate governance. Low-commitment, low-risk activities can be as simple as raising concerns in project meetings over issues of ethical AI development. These activities tend to be low-profile and not well-documented; colleagues at AI companies inform us that these activities are nonetheless common. More ambitious and risky activities tend to be less common but more visible and better documented. These activities can include circulating letters critiquing corporate activity and calling for change, whistleblowing, organizing walkouts, forming unions, and more [[60](#B60-information-12-00275)].

Likewise, the extent of indirect worker influence is shaped by management’s receptiveness to worker input and by related management decisions regarding corporate policy and culture. In extreme cases, management can fire workers who push back against management’s AI corporate governance decisions. For example, Google fired its Ethical AI team co-lead, Timnit Gebru, following a disagreement over the publication of a paper critical of the company’s research on large AI language models [[100](#B100-information-12-00275)]. Additionally, several Google employees claim to have been fired as retribution for labor organizing, in possible violation of U.S. labor law [[101](#B101-information-12-00275)]. Subtler dynamics include such matters as whether workers have dedicated spaces to organize and articulate their views. For example, Google has internal discussion forums for workers, although management recently hired a team to moderate them [[102](#B102-information-12-00275)]. Google management also recently eliminated its regular meetings where employees could address executives [[103](#B103-information-12-00275)]. In general, workers will have greater indirect influence on AI corporate governance when they can organize and express views to management without fear of retaliation.

The size of the labor pool also affects both direct and indirect worker influence.
Some governance goals may benefit from a large labor pool, such as the goal of solving difficult technical problems in orienting AI toward the public interest. Greater availability of worker talent may make these problems easier to solve. On the other hand, a larger labor pool can make it difficult for workers to self-organize and reach consensus. Likewise, a large labor pool relative to the demand for labor reduces indirect worker influence on AI systems via their influence on management [[60](#B60-information-12-00275)].

At present, there is a shortage of talent in the computer science and engineering dimensions of AI, giving workers in these areas considerable indirect influence. These workers are hard to hire and to replace upon firing; therefore, management may be more inclined to accept their demands. This influence could fade if the labor market changes due to increased university enrollment in AI courses and the many government calls for training more people in AI [[104](#B104-information-12-00275)] (pp. 111–126); [[105](#B105-information-12-00275)]. Labor demand could also shrink if the applications of AI plateau, such as due to a failure to overcome limitations of current deep learning algorithms [[16](#B16-information-12-00275)] or due to the rejection of AI applications on moral, legal, or social grounds. For now, though, demand for AI is robust and growing, giving AI scientists and engineers substantial power.

One powerful pattern of indirect worker influence starts with whistleblowing and continues with widely signed open letters. Workers with access to information about controversial AI projects leak this information to media outlets. Subsequent media reports spark dialogue and raise awareness. The media reports may also make it easier for other workers to speak publicly on the matter, because the workers would no longer have to shoulder the burden of being the ones to make the story public. The open letters then provide a mechanism to channel mass worker concern into specific corporate governance actions to be taken by management. (See also [Section 4.1](#sec4dot1-information-12-00275) and [Section 4.8](#sec4dot8-information-12-00275) on the roles of management and the media.)

This pattern can be seen in several recent episodes at Google. In 2018, Google’s participation in Project Maven, a U.S. Department of Defense project to use AI to classify the content of drone videos, was anonymously leaked to Gizmodo [[106](#B106-information-12-00275)]. The Gizmodo report does not explicitly identify its sources as Google workers, but this is a likely explanation. Subsequently, over 3000 employees signed an open letter opposing Google’s work on Project Maven [[107](#B107-information-12-00275)]. Google management later announced it would leave Project Maven and publish principles to guide its future work on defense and intelligence projects [[108](#B108-information-12-00275)]. Additionally, in 2018, The Intercept reported on Google’s work on Project Dragonfly, a Chinese search engine with built-in censorship [[109](#B109-information-12-00275)]. The Intercept report was also based on an anonymous source that appears to be a Google worker. Subsequently, over 1000 employees signed a letter opposing the project [[110](#B110-information-12-00275)]. Google management later ended the project [[111](#B111-information-12-00275)].

A somewhat similar pattern is observed in a 2018 episode involving sexual harassment at Google.
A New York Times investigation of corporate and court documents and interviews with relevant people found that Google had made large payments to senior executives who left the company after being credibly accused of sexual harassment [[112](#B112-information-12-00275)]. Soon after, Google workers organized walkouts in which thousands of workers participated in support of corporate policy changes on harassment and diversity [[113](#B113-information-12-00275)]. The organizers referenced the New York Times report but did not specify the extent to which the walkout was motivated by the report. The organizers later wrote that Google management made some but not all of their requested policy changes [[114](#B114-information-12-00275)].

These sorts of worker initiatives are not always successful. In 2018, an unspecified number of Microsoft employees published an open letter calling on Microsoft to abandon its bid for the Joint Enterprise Defense Infrastructure contract, a U.S. Department of Defense cloud computing initiative [[115](#B115-information-12-00275)]. Microsoft did not abandon its bid, although Microsoft President Brad Smith did respond by articulating Microsoft policy on military contracts [[116](#B116-information-12-00275)]. Additionally, in 2018, hundreds of Amazon employees signed a letter demanding the company stop selling facial recognition services to law enforcement [[117](#B117-information-12-00275)]. Management did not stop. Again in 2018, approximately 6000 Amazon employees signed a letter calling on the company to stop using AI for oil extraction. The letter was accompanied by a shareholder resolution making the same argument—an example of investor activity ([Section 4.3](#sec4dot3-information-12-00275)). Again, management did not stop [[118](#B118-information-12-00275)].

#### 4.3. Investors

Corporations take investments in a variety of forms, including by selling shares of stock or issuing bonds. Investors are important because AI is often capital-intensive, requiring extensive funding for research, development, and deployment. Shareholders are the investors with the most capacity to influence corporate governance and are therefore the focus of this section. Indeed, a central theme in corporate governance is the principal–agent problem in which the principals (i.e., shareholders) seek to ensure that their agents (i.e., corporate management) act in the principals’ best interests rather than in those of the agents. In contrast, bondholders are generally less influential, in part because markets for bonds are highly competitive—a corporation can readily turn to other lenders instead of following one bondholder’s governance requests.

Investors can influence corporations in several ways. First, investors can voice their concerns to corporate management, including at the annual shareholder meetings required of U.S. companies. Investor concerns can, in turn, factor into management decisions. Second, shareholders can vote on shareholder resolutions, which offer guidance that is generally non-binding but often followed [[19](#B19-information-12-00275)] (p. 117). Indeed, even resolutions that fail to pass can still succeed at improving corporate governance; evidence for this has been documented in the context of environmental, social, and governance (ESG) issues [[119](#B119-information-12-00275)]. Third, shareholders can replace a corporation’s board of directors, which has ultimate responsibility to manage the corporation, determines strategic direction, and appoints the CEO.
For example, shareholders could seek to add more diversity to a board, noting that boards with increased diversity are associated with greater support for corporate social responsibility efforts [[120](#B120-information-12-00275)]. Fourth, investors can signal disapproval of corporate governance practices by selling off their investments and, perhaps, investing in a better-governed competitor. Fifth, shareholders can file lawsuits against the corporation for failing to meet certain obligations [[121](#B121-information-12-00275)]. These lawsuits are often settled in ways that improve corporate governance [[19](#B19-information-12-00275)] (p. 117).

In principle, investors can wield extensive power over a corporation via their control over the board of directors. If management does not follow investor concerns or shareholder resolutions, the shareholders can replace the board with people who will. In practice, however, investor power is often limited. Efforts to replace a board of directors are expensive and rare [[19](#B19-information-12-00275)] (p. 117). One study found that activist investors launched 205 campaigns in 2019 and won only 76 board seats [[122](#B122-information-12-00275)]. This reality gives management substantial latitude in corporate governance. Nonetheless, management often does respond to investor preferences, especially, but not exclusively, when their preferences affect the company’s stock price.

The power of investors is also influenced by the availability of alternative investment options. A diverse market of AI investment opportunities would provide investors with opportunities to shift their assets to companies that further the public interest. Current market prospects are mixed. On one hand, much of the sector is dominated by a few large companies, especially Google (Alphabet), Amazon, Facebook, and Microsoft. On the other hand, there is also a booming AI start-up scene today; one study identified 4403 AI-related companies that received a total of USD 55.7 billion in funding in the year ending July 2019 [[4](#B4-information-12-00275)] (p. 91). Companies such as H2O.ai and Fiddler specifically aim to advance explainable AI systems, creating additional opportunities for investors to promote AI in the public interest.

Investor initiatives should be well informed about the state of affairs in the company. This requires some corporate transparency. For example, the U.S. Securities and Exchange Commission (SEC) requires companies to disclose investor risk factors [[123](#B123-information-12-00275)]. In its disclosure, Google (Alphabet) lists AI-related “ethical, technological, legal, regulatory, and other challenges.” Amazon cites uncertainty about the potential government regulation of AI. Facebook mentions that AI creates a risk of inaccuracies in its community metrics. Microsoft lists AI as a risk of “reputational harm or liability”. However, at each company, AI is only a small portion of the overall disclosure, suggesting that the companies see AI as a minor area of risk. Investors could consider the possibility that the companies are not giving AI risks the attention they deserve.

The efficacy of investor initiatives as an approach to improving AI corporate governance depends on the willingness of investors to take the matter on and the investors’ degree of influence within the company. Investors are diverse and have many interests. Investors with an existing interest in ESG may be especially receptive to promoting AI in the public interest.
For example, Hermes, an ESG-oriented investment management business, has written on responsible AI [[124](#B124-information-12-00275)] and participated in an investor initiative to create a Societal Risk Oversight Committee of the Board at Alphabet [[125](#B125-information-12-00275)]. That initiative ultimately failed [[126](#B126-information-12-00275)], in part because it lacked the support of Alphabet’s founders. Alphabet is structured such that its founders retain a majority of shareholder voting power even though they do not own a majority of the shares; Facebook is structured similarly [[127](#B127-information-12-00275)]. Although Alphabet and Facebook are extreme cases, in general, investor initiatives will tend to be more successful when they are supported by investors who own a larger portion of shareholder voting power. This applies to public corporations that have issued stock. Investors in private corporations may be especially influential, particularly at smaller firms, which often have less access to capital markets. Venture capital firms seeking to promote the public interest may be especially successful in improving AI corporate governance among smaller firms.

The limited influence of shareholder resolutions can also be illustrated by the failed attempt to restrict Amazon’s sale of facial recognition technology to the government. In 2019, shareholders expressed their concern that Rekognition, Amazon’s facial recognition service, poses risks to civil and human rights, as well as to shareholder value. They requested that the Board of Directors prohibit sales of such technology to government agencies [[128](#B128-information-12-00275)] (pp. 18–19). In 2020, another resolution requested an independent study of Rekognition, including information about the extent to which such technology may endanger civil rights and is sold to authoritarian or repressive governments [[129](#B129-information-12-00275)] (pp. 25–26). Both resolutions failed. Even though they would have been non-binding, Amazon tried to block the vote. This unusual attempt was ultimately stopped by the SEC [[130](#B130-information-12-00275)]. Although these resolutions did not succeed in achieving reform, they demonstrate that shareholder activism has begun to focus on AI risks in particular.

In short, shareholders wield some influence in the corporate governance of AI. This influence is limited by the sheer volume and variety of risks that investors must weigh: AI is often not top of mind. Thus far, the activity shareholders have used most has been the resolution. Given that shareholder resolutions are difficult to pass and non-binding when passed, it is unclear if such activities will do much to change corporate governance aside from publicizing particular AI-related governance problems. Over time, as companies continue to emerge that seek competitive differentiation through responsible AI development and as shareholders, particularly institutional investors, continue to value ESG criteria and apply them to AI, the role of investors in responsible AI governance may continue to grow.

#### 4.4. Corporate Partners and Competitors

Other corporations exert influence on a corporation developing or using AI in important ways. These other corporations can be direct competitors, themselves developing or deploying AI systems. Alternatively, they can be corporate partners (or, for brevity, “partners”) that have contractual relationships with said company. Partners can be, among other things, suppliers, customers, or insurers.
Partners can use their relationship with the AI company to influence it to advance the public interest.

Competing AI corporations can influence each other through direct market competition and in other ways. As classic economic theory explains, competition can result in greater market share for corporations whose products better advance the public interest. There are exceptions, including where there are negative externalities, i.e., harms of market activity that are not captured by market prices, and monopolies, i.e., where a large market share can be used to exclude competition and set relatively high prices. For example, direct competition to develop more powerful machine learning systems can result in better performance for important applications such as healthcare and transportation, but it can also result in more energy consumption and the externalities of global warming via the use of large amounts of computer hardware [[131](#B131-information-12-00275)].

Competitors can also influence each other as peers in the overall field of AI [[132](#B132-information-12-00275)]. One AI corporation’s initiatives in the public interest can be adopted or adapted by other AI corporations. For example, in 2020, IBM announced that it would no longer sell facial recognition technology to law enforcement agencies [[133](#B133-information-12-00275)]. Amazon [[134](#B134-information-12-00275)] and then Microsoft [[135](#B135-information-12-00275)] did the same shortly after. These announcements came amidst heightened attention to police misconduct sparked by the killing of George Floyd by the Minneapolis Police Department and subsequent widespread Black Lives Matter protests; therefore, it is possible that each company would have changed its behavior on its own without the others doing the same. However, in an interview, Microsoft’s President explicitly recognized IBM’s and Amazon’s steps [[135](#B135-information-12-00275)]. To some extent, the companies may have been jockeying for market share in the face of shifting public opinion, but they may also have been motivated by each other’s example to advance the public interest.

Partners’ ability to influence AI companies can depend significantly on their relative market power. The AI sector is led by some of the largest companies in the world. These companies are often in a position to dictate the terms of their partner relationships; they have sometimes used this power to ensure that AI is used in the public interest. For example, Google has limited the use of its facial recognition services to a narrow customer base via its Celebrity Recognition API and uses extended terms of service to regulate how the technology is used [[136](#B136-information-12-00275)]. Similarly, Microsoft vets customers for its facial recognition services; before its blanket 2020 policy was implemented, it reviewed and denied a California law enforcement agency’s request to install the technology on body cameras and vehicle cameras [[137](#B137-information-12-00275)].

Partners can impact the reputation of AI companies and, in turn, influence actions to protect that reputation. For example, Article One Advisors, a consulting firm, worked with Microsoft to develop its human rights policies through external engagement, and then publicized this work [[138](#B138-information-12-00275)]. The publicity likely boosts Microsoft’s reputation and incentivizes Microsoft to follow through on its public commitments. Corporate partners can also harm AI corporations’ reputations.
For example, Google attracted widespread criticism when Randstad, a staffing contractor, allegedly collected facial scans of homeless African Americans in order to improve the performance of Google’s Pixel phone facial recognition features [[139](#B139-information-12-00275)]. Reputation is important for large, public-facing corporations such as Microsoft and Google, making this a valuable tool for their corporate partners.

Insurers have distinctive opportunities to influence AI companies. When AI is not in the public interest, that can create substantial risks that may require insurance payouts. Insurers therefore have both the vested interest and the contractual means to compel AI companies to act in the public interest. For comparison, insurers have mandated the adoption of cybersecurity governance and risk frameworks [[140](#B140-information-12-00275),[141](#B141-information-12-00275)]; they could do the same for AI. Doing so would improve corporate governance in the insured AI companies. Additionally, it could have further benefits by popularizing innovative practices for AI governance and risk management that are adopted by even uninsured companies. However, some AI risks are not readily handled by insurers, such as emerging risks that are difficult to quantify and price and risks that are too extreme, such as risks from long-term artificial general intelligence.

Finally, it should be noted that corporate partners and competitors consist of management, workers, and investors, whose influence parallels that of their counterparts in AI corporations as discussed in [Section 4.1](#sec4dot1-information-12-00275), [Section 4.2](#sec4dot2-information-12-00275), and [Section 4.3](#sec4dot3-information-12-00275). Workers, managers, and investors who seek to improve AI corporate governance may find additional opportunities at corporate partners and competitors. As an illustrative example, in 2020, FedEx investors pushed FedEx to call for the Washington Redskins American football team to change its name, given its racist connotations. FedEx is a major partner of the team. The initiative was successful: the team name will be changed [[142](#B142-information-12-00275)]. This example is not from the AI industry, but it nonetheless speaks to the capacity for actors in corporate partners and competitors to effect positive change.

#### 4.5. Industry Consortia

Industry consortia, as the term is used here, are entities in which multiple corporations come together for collective efforts related to AI governance. We define industry consortia broadly, to include entities whose membership extends beyond corporations. For example, PAI membership includes corporations, nonprofits, media outlets, and governmental bodies [[143](#B143-information-12-00275)]. PAI is perhaps most precisely described as a multistakeholder organization, but it is also an industry consortium. The same holds true for other entities, such as the IEEE Standards Association, whose members include corporations and individuals [[144](#B144-information-12-00275)].

Industry consortia can be instrumental in identifying and promoting best practices for AI in the public interest. AI corporations face many of the same challenges and issues. They likewise benefit from best practices being developed for the whole sector and then distributed to each corporation, instead of each corporation “reinventing the wheel”. Industry consortia are well-positioned to serve as the entity that develops best practices for the whole sector.
They can query member corporations on what has worked well—or poorly—for them, pooling their collective experience. They can also conduct in-house research on best practices, with researchers hired using the pooled funds of their member corporations. It may not be worthwhile for every AI corporation to hire its own in-house experts on various facets of AI in the public interest, but it may be worthwhile for the sector as a whole to do so. Industry consortia enable that to happen.

An illustration of these dynamics is seen in PAI’s recent work on best practices for publishing potentially harmful AI research. PAI’s work on this was prompted by the work of one of its members. Specifically, OpenAI released its language model GPT-2 in phases out of concern about its potentially harmful uses [[62](#B62-information-12-00275)]. Soon after, PAI hosted discussions with OpenAI and other organizations about best practices in publishing potentially harmful research [[145](#B145-information-12-00275)], launched a project to develop guidance [[8](#B8-information-12-00275)], and advised Salesforce on the release of its language model CTRL [[146](#B146-information-12-00275),[147](#B147-information-12-00275)]. PAI’s status as an industry consortium has enabled it to advance publishing practices across organizations.

As best practices are formulated, industry consortia can take the additional step of formalizing them as standards. For example, the Consumer Technology Association (CTA) convened over 50 companies, some but not all of which were CTA members, to develop a standard for the use of AI in healthcare [[148](#B148-information-12-00275)]. The IEEE Standards Association is also active on AI standards, as is the International Organization for Standardization [[63](#B63-information-12-00275)], although the latter is not an industry consortium. By formalizing best practices into standards, industry consortia can enable corporations across the sector to improve their practices.

Best practices and standards developed by industry consortia can go on to play a legal or quasi-legal role. Governments sometimes enact policies requiring corporations to adhere to certain broad principles of conduct without specifying which particular conduct does or does not meet these principles [[149](#B149-information-12-00275)]. The best practices and standards formulated by industry consortia can fill in the details of good conduct. Additionally, regulatory agencies and courts handling liability cases sometimes treat compliance with industry best practices and standards as satisfactory, such that corporations meeting these practices or standards avoid regulatory fines or court judgments of liability to which they would otherwise be subject [[150](#B150-information-12-00275)] (p. 17). This can dilute the public benefits of government action, but it also incentivizes corporations to meet or exceed these standards and practices, potentially bringing net gains for the public interest.

The above are examples of soft law, which can be defined as obligations that, although not legally binding themselves, are created with the expectation that they will be given some indirect legal effect through related binding obligations under either international or domestic law [[151](#B151-information-12-00275)]. Soft law has been advocated for AI corporate governance due to its flexibility and ease of adoption [[152](#B152-information-12-00275),[153](#B153-information-12-00275)].
In general, it is difficult for governments to create detailed and rigorous laws for complex issues such as those pertaining to AI. The dynamism of emerging technologies such as AI is especially challenging for the development and enactment of “hard” laws. Industry consortia are often better positioned than governments to master the details of the technology and its changes over time, due to the availability of expertise among consortium members. Furthermore, any more binding “hard law” measures enacted by governments are likely to draw on the particulars of pre-existing soft law instruments. These are additional reasons for industry consortia to pursue robust best practices and standards for AI corporate governance in the public interest.

Industry consortium activities do not necessarily advance the public interest. For example, they can pool the resources of member corporations to lobby governments for public policies and conduct information and public relations campaigns that advance industry interests at the public’s expense. In other sectors, such lobbying has often been a major impediment to good public policy [[154](#B154-information-12-00275)]. In the coming years, industry consortia could present similar challenges to the public interest.

#### 4.6. Nonprofit Organizations

Nonprofit organizations play several important roles in advancing AI corporate governance in the public interest, including research, advocacy, organizing coalitions, and education. Nonprofits can be advocacy organizations, labor unions, think tanks, political campaigns, professional societies, universities, and more. The distinction between these types of organizations is often blurry, with one organization playing multiple roles.

To date, research has been a primary focus of nonprofit organizations working on AI corporate governance. Research contributions from nonprofit universities and think tanks are too numerous to compile here; many are in [Section 3](#sec3-information-12-00275). What follows are some select examples of nonprofit research aimed at influencing corporate governance. Note that all universities mentioned in this section are nonprofit. Upturn, a nonprofit dedicated to advancing equity and justice in technology, worked with researchers at Northeastern University and the University of Southern California to produce evidence of previously suspected illegal discrimination in housing advertisements served by Facebook [[155](#B155-information-12-00275)]. Ranking Digital Rights (e.g., [[156](#B156-information-12-00275)]) reports on technology companies’ human rights records and encourages companies to improve their performance. The Electronic Frontier Foundation reports on companies’ cooperation with government demands for censorship and also encourages them to improve their performance [[157](#B157-information-12-00275)]. The AI Now Institute at New York University publishes reports to provide AI developers with suggestions to reduce bias and increase the public accountability of AI systems [[158](#B158-information-12-00275)]. Finally, this paper is another work of nonprofit research on AI corporate governance.

Nonprofit advocacy efforts can often draw on such research. For example, a 2018 advocacy campaign by the nonprofit American Civil Liberties Union (ACLU) opposed Amazon selling facial recognition software to governments [[159](#B159-information-12-00275)].
The campaign was supported by the ACLU’s research on biases in the software [[160](#B160-information-12-00275)] and by prior research from a pair of researchers at Microsoft and the Massachusetts Institute of Technology [[58](#B58-information-12-00275)]. The ACLU later reported that its campaign was unsuccessful [[161](#B161-information-12-00275)]. However, in 2020, following protests against police brutality, Amazon stopped selling facial recognition software to law enforcement agencies, as discussed in [Section 4.4](#sec4dot4-information-12-00275) and [Section 4.7](#sec4dot7-information-12-00275). The ACLU campaign and the research on which it drew may have laid the groundwork for Amazon’s action.

Finally, nonprofits have conducted some work to build the field of AI in directions beneficial to the public interest. Black in AI and AI4All are nonprofit organizations that promote diversity within the field of AI. Increased diversity could, in turn, help reduce bias in the design, interpretation, and implementation of AI systems. Additionally, the Future of Life Institute has hosted conferences on beneficial AI and built coalitions in support of open letters calling for AI in the public interest. These field-building initiatives are not specifically focused on corporate governance, but they have included people from AI corporations and are likely to have at least some effect on AI corporate governance.

One challenge facing nonprofit organizations is funding. This is a challenge for nonprofits working on all cause areas, and AI corporate governance is no exception. Firstly, nonprofits may struggle to raise the funds they need to advance their missions. Secondly, some AI nonprofits may turn to AI companies for funding, creating potential conflicts of interest [[162](#B162-information-12-00275),[163](#B163-information-12-00275)]. Thirdly, where companies disagree with the nonprofits’ aims, the companies can use their wealth to push back. Although such a dynamic has perhaps not been seen much to date in AI, it has been observed in other sectors, such as the tobacco industry pushing back on the link between cigarettes and cancer and the fossil fuel industry pushing back on the risks of global warming [[64](#B64-information-12-00275)]. The extreme wealth of some AI corporations makes the potential for conflict of interest and the imbalance in resources particularly acute. Where these issues arise, nonprofits may fail to advance the public interest.

With that in mind, it can be helpful to distinguish between adversarial and cooperative nonprofit activity. Adversarial activity pushes AI companies in ways that the companies do not want. Cooperative activity proceeds in ways that the companies are broadly supportive of. Cooperative activity may tend to have a narrower scope, bounded by what companies are willing to support. On the other hand, adversarial activity may struggle to effect change if the companies do not agree with the proposed changes, and in some cases could backfire by galvanizing opposition. Whether adversarial or cooperative approaches are warranted should be assessed on a case-by-case basis.

#### 4.7. The Public

The public, as the term is used here, refers to people acting in ways that are non-exclusive in the sense that broad populations can participate.
In the context of AI corporate governance, the primary roles of the public are (1) as users of AI technology, including when the users pay for it and when it is free to them, subsidized by advertising, and (2) as citizens who can vote and speak out on matters of public policy. Although not discussed in this section, members of the public can also impact AI corporate governance indirectly by exerting influence on other members of the public.

Users of AI technology can improve AI corporate governance by choosing goods and services that are in the public interest. In other (non-AI) industries, customers are often—although not always—willing to pay more for branded ethical standards [[164](#B164-information-12-00275)]. AI users may also be willing to pay more; or, when they are using the technology for free, they could accept a product that has higher ethical standards but is inferior in other respects. This effect can even determine which technologies become dominant, as users reject certain uses and prioritize others regardless of how they are marketed [[165](#B165-information-12-00275)]. One example in this direction is the gradual shift in social media platform popularity from public platforms such as MySpace, Facebook, and Twitter toward private messaging platforms, such as Snapchat and WhatsApp [[166](#B166-information-12-00275)].

Citizens can improve AI corporate governance by supporting good (and opposing bad) AI public policies. They can do this by speaking up, such as in protests against racism and police brutality that have influenced AI facial recognition practices, and by voting for politicians who will enact good policies. Public opinion has played an important role in regulatory responses to other advanced technologies such as genetically modified organisms [[167](#B167-information-12-00275),[168](#B168-information-12-00275)]. Growing public concern about digital technology, dubbed “techlash”, has prompted calls for antitrust and other policy responses. Thus far, AI has not been extensively regulated, although further shifts in public opinion could help to change this.

Public opinion has played an important role with regard to the sale of facial recognition software to law enforcement. As discussed in [Section 4.4](#sec4dot4-information-12-00275) and [Section 4.6](#sec4dot6-information-12-00275), following 2020 protests against racism and police brutality, several AI companies moved away from providing facial recognition tools for law enforcement. It is worth noting that the protests garnered broad public support [[169](#B169-information-12-00275)]. Therefore, the AI corporations’ responses show how public protests and changes in public opinion can advance AI corporate governance in the public interest.

Finally, members of the public can voice their views about AI, prompting changes. For example, the initial launch of Google Glass in 2014 was widely criticized for violations of privacy [[170](#B170-information-12-00275)]; it was discontinued and subsequently relaunched for industrial and professional users instead of for the general public [[171](#B171-information-12-00275)]. Google Photos, an AI photo classification system, sparked public outcry for labeling a person with dark skin as a gorilla, prompting Google to remove gorilla and other non-human primate terms from the service’s lexicon [[172](#B172-information-12-00275)].
In general, public pressure will tend to be more pronounced for highly visible brands [[173](#B173-information-12-00275),[174](#B174-information-12-00275)]; the same is likely to apply to AI companies.

The public faces several challenges to supporting the corporate governance of AI in the public interest. One is the complexity of AI issues, which makes it hard for people lacking specialized training to know what the issues are or what stances to take. This can be mitigated by public education efforts, such as Elements of AI [[175](#B175-information-12-00275)], an online course that aims to educate 1% of European citizens in the basics of AI. Despite such initiatives, public education remains a difficult challenge due to the complexity of the issues and the competition for public attention. Likewise, public education can take time, in which case it may be most valuable in the medium term. (On medium-term AI issues, see Ref. [[176](#B176-information-12-00275)].)

Furthermore, some AI issues are important but arcane and not conducive to media coverage or other means of capturing public attention. This holds in particular for low-visibility AI companies, including those that do not market to the public but instead sell their AI to governments or other companies.

In some cases, AI technology users may be relatively disinclined to opt for the more ethical option due to the difficulty of switching from one AI product to another. Impediments can include (1) significant learning curves, as is common for software in general, (2) transition costs, such as the need to re-upload one’s photos and other information to a new site, or the need to inform one’s contacts of a new email address, and (3) network effects, in which a product’s value to one user depends on its use by other users, as in the distinctive communities of people on specific social media platforms. Someone concerned about AI at one company may not have good alternatives, dissuading them from choosing a product more aligned with the public interest. Additionally, AI is often only part of consumer-facing products. Concern about AI may be outweighed by other concerns. For example, a user may appreciate the cultural and educational value of a media sharing site (such as YouTube or Instagram) even if they dislike its recommendation algorithm.

Finally, public action may tend to be primarily oriented toward how AI systems are deployed. Earlier phases of the AI system lifecycle have fewer direct ties to the public and are therefore less likely to garner public attention. For these phases of the AI lifecycle, other types of action may tend to be more important.

#### 4.8. The Media

The media, as the term is used in this paper, refers to both professional and amateur journalists together with their diverse means of distribution, from traditional newspapers to online social media platforms. The media can play an important role in improving AI corporate governance by researching, documenting, analyzing, and drawing attention to good practices, problems, and areas for improvement. The media serves as an important link between actors internal and external to the corporation, and it plays a vital role in distilling and explaining complex technological and business details in terms that can be understood and used by outside audiences, including the public and policymakers. Indeed, media reports have been essential for the insights contained in this paper.

AI corporate governance is often in the news.
Several newspapers, including The New York Times, The Wall Street Journal, and the Financial Times, have dedicated technology sections. Dedicated technology media sources include The Verge, Wired, and the MIT Technology Review. The Markup is specifically focused on the societal impacts of digital technology companies. All of these outlets devote extensive attention to AI corporate governance.

Media coverage has been instrumental in highlighting problems in AI corporate governance and mobilizing pressure for change. For example, the media has published several reports based on worker whistleblowing, presenting issues at AI companies that otherwise may have stayed internal to the companies (see [Section 4.2](#sec4dot2-information-12-00275)). In the case of Google’s participation in Project Maven, Gizmodo reported on the project based on leaked information [[106](#B106-information-12-00275)]. The media also covered the subsequent protests by Google workers, further amplifying their concerns [[107](#B107-information-12-00275),[177](#B177-information-12-00275),[178](#B178-information-12-00275)]. Google management later announced it would leave the project [[108](#B108-information-12-00275)]. This example demonstrates how broad media coverage combined with employee activism can have significant influence on corporate decision-making.

Other reporting focuses on adverse societal impacts of AI. One prominent example is a ProPublica investigative report on biased outcomes of the COMPAS algorithm for assessing the risk of a person committing a future crime [[179](#B179-information-12-00275)]. The report has been credited with focusing researchers’ attention on fairness in machine learning [[180](#B180-information-12-00275)] (p. 29). A more recent example is a New York Times exposé on the little-known facial recognition company Clearview [[181](#B181-information-12-00275)], which was scraping personal photos uploaded online to serve as training data. The Times report prompted technology companies to take legal action [[182](#B182-information-12-00275)] and nonprofit advocacy organizations to lobby the U.S. government to intervene [[183](#B183-information-12-00275)].

Media coverage appears to be most robust for later phases of the AI system lifecycle. Later phases, such as the deployment of AI products, tend to be more publicly visible and of more direct interest to the public and other outside stakeholders. In contrast, earlier phases, such as basic research and development, are less visible and less directly relevant to stakeholders, and therefore may tend to receive less coverage. Coverage of internal corporate activities may be impossible without whistleblowers; these activities can occur across the lifecycle but may be especially frequent during earlier phases.

Other factors can also shape trends in the media coverage of corporate AI. Coverage may tend to be greater where AI intersects with other topics of great public interest, such as racism or unemployment, or for prominent AI companies or celebrity entrepreneurs. Certain events can also draw new attention to existing practices, such as the longstanding privacy flaws of the Zoom videoconferencing platform gaining newfound attention due to the heavy use of Zoom during the COVID-19 pandemic [[184](#B184-information-12-00275)]. Risky AI practices may tend to receive the greatest attention in the immediate aftermath of an incident of that type of risk, unless such incidents are too commonplace to be considered newsworthy.
This can result in less coverage of emerging and extreme AI risks for which incidents have not previously occurred. (The tendency to overlook extreme risks has been dubbed “the tragedy of the uncommons” [[185](#B185-information-12-00275)].)

Finally, as with nonprofits, the media faces financial challenges. The business models of many media outlets have been harmed by the rise of the internet and other recent factors. This has resulted in less investigative journalism, meaning fewer resources to report on issues in AI corporate governance. Meanwhile, the AI industry is in the midst of a financial boom, making it difficult for the media to hire people with expertise in AI. There can even be conflicts of interest, as has been a concern since Amazon founder Jeff Bezos purchased the Washington Post (a concern that Bezos and the Post deny [[186](#B186-information-12-00275)]). The media clearly has an important role in advancing AI corporate governance in the public interest, making it vital that its various challenges be overcome.

#### 4.9. Government

Government, as the term is used here, refers to institutions with legal authority over some geographic jurisdiction, whether national (e.g., the United States), subnational (e.g., California), or supranational (e.g., the E.U.). Governments can influence AI corporate governance to promote the public interest by using various policy instruments. In the following, we consider four widely accepted categories of policy instruments: command and control regulation, market-based instruments, soft law, and information and education. We also consider procurement, which is important for the government use of AI. Our focus on these categories of instruments reflects current practices to influence the corporate governance of AI; however, more methods could be used in the future, including the role of state-owned enterprises and direct subsidies.

Command and control regulation uses binding legal rules to specify the required behavior, and enforcement measures to correct or halt non-compliant behavior [[187](#B187-information-12-00275)] (p. 107). Many existing regulations are applicable to AI, although they do not address AI specifically. For example, in the E.U., AI systems must comply with existing data protection, consumer protection, and anti-discrimination laws [[188](#B188-information-12-00275)] (p. 13). The GDPR contains detailed rules governing the processing of personal data using automated decision-making [[189](#B189-information-12-00275)]. In this case, the person whose data are being processed can request “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing,” although whether this entails a right to explanation is disputed [[34](#B34-information-12-00275),[35](#B35-information-12-00275)]. Although these rules are not explicitly about AI (automated decision-making as defined in the GDPR is not synonymous with AI), they are nonetheless applicable to many AI systems [[190](#B190-information-12-00275)]. Similarly, labor law is generally not written explicitly for work in AI, but it nonetheless affects how workers and management are allowed to act, as seen, for example, in allegations by former Google workers that they were fired because of their labor organizing, which may have been in violation of U.S.
labor law ([Section 4.2](#sec4dot2-information-12-00275)) [[101](#B101-information-12-00275)].

In recent years, governments around the world have started to work on AI-specific command and control regulations. Proposals have been published, among others, by the E.U. [[28](#B28-information-12-00275),[188](#B188-information-12-00275),[191](#B191-information-12-00275)], China [[192](#B192-information-12-00275)], and the United States [[193](#B193-information-12-00275),[194](#B194-information-12-00275)]. For example, in April 2021, the European Commission published a proposal for an Artificial Intelligence Act [[28](#B28-information-12-00275)], following its White Paper on AI [[188](#B188-information-12-00275)] and the High-Level Expert Group on AI’s Ethics Guidelines for Trustworthy AI [[191](#B191-information-12-00275)]. The new proposal follows a risk-based approach with different requirements for different levels of risk. It prohibits practices which pose unacceptable risks (e.g., social scoring by governments or systems that exploit vulnerabilities of children) and contains specific rules for high-risk systems (e.g., biometric identification systems). These rules include requirements regarding the quality of datasets used; technical documentation and record keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy, and cybersecurity. The proposed regulation contains very light and mostly voluntary provisions for AI systems with low or minimal risk. The vast majority of AI systems currently used in the E.U. fall into this category. The requirements for high-risk systems are command and control because they require specific behavior that will be enforced by supervisory authorities. It is worth noting that the proposal still has to be adopted by the European Parliament and the member states.

Market-based instruments affect corporate activity through economic incentives [[195](#B195-information-12-00275)] (p. 22); [[187](#B187-information-12-00275)] (p. 117). For example, taxes could be used to encourage safe behavior or discourage unsafe behavior, such as via tax discounts for AI systems that have been certified or audited by a third party. Civil liability could incentivize AI companies to mitigate risks from accidents or the misuse of AI systems. Subsidies could support corporate initiatives on AI in the public interest, for example, to support research and development on AI techniques that improve safety. These benefits may also support the public good insofar as they address a market failure or incentivize innovation with future public benefit. The DARPA Grand Challenge for Autonomous Vehicles is one such example, which helped catalyze private investment in the field in the early 2000s [[196](#B196-information-12-00275)].

Procurement refers to government purchases of goods and services [[197](#B197-information-12-00275),[198](#B198-information-12-00275)], including AI systems. For example, law enforcement agencies have recently sought to purchase facial recognition software. As discussed throughout this paper, this procurement is controversial, with many arguing that it is not in the public interest. This controversy speaks to the fact that governments face important decisions on which AI systems to procure and how to use them to best advance the public interest. Governments can additionally use procurement to influence AI corporate governance by procuring systems that meet high standards of safety and ethics.
This incentivizes industry to adopt and maintain such standards. Procurement is thus, in a sense, a demand-side market-based instrument in its potential to use market incentives to advance AI corporate governance in the public interest.

As discussed in [Section 4.5](#sec4dot5-information-12-00275), soft law is the non-binding expectation of behavior that has some indirect legal basis. One specific form of soft law in which governments play a central role is co-regulation. It is worth noting that co-regulation is not necessarily a form of soft law, but the concepts are at least interconnected [[40](#B40-information-12-00275),[41](#B41-information-12-00275)]. Co-regulation expands on corporate self-regulation to include some government involvement, typically to ensure enforcement [[195](#B195-information-12-00275)] (p. 35). For example, the U.S. Federal Trade Commission (FTC) encourages firms to declare privacy policies, and prosecutes firms that deviate from their statements [[199](#B199-information-12-00275)]. Conceivably, the FTC could also punish violations of companies’ self-stated AI principles [[30](#B30-information-12-00275)]. However, such enforcement has been ineffective. In 2011, Facebook agreed to a settlement with the FTC after being accused of violating its privacy policy [[200](#B200-information-12-00275)], but the violations continued, most notably with the 2018 Cambridge Analytica scandal [[201](#B201-information-12-00275)]. In 2019, Facebook again settled with the FTC, this time for an unprecedented USD 5 billion and stringent monitoring requirements [[202](#B202-information-12-00275)]. However, even the USD 5 billion fine could be seen as simply the cost of business, given that Facebook’s 2019 profit was nearly USD 18.5 billion [[203](#B203-information-12-00275)], and especially if the costs can be amortized across multiple years.

Governments can also lead public information and education campaigns [[187](#B187-information-12-00275)] (p. 116). A better educated public could incentivize AI companies to improve their governance, as detailed in [Section 4.7](#sec4dot7-information-12-00275). Education campaigns could also foster constructive public debate on AI ethics and safety [[191](#B191-information-12-00275)] (p. 23). Education takes time, and thus is unlikely to be effective in time-critical situations, but it is otherwise often found to be a cost-effective policy option [[195](#B195-information-12-00275)]. In addition to leading AI education campaigns, governments can also obtain information about AI corporations, such as by establishing information disclosure requirements, as discussed above.

The efficacy of policy instruments can depend on their enforcement. This applies to command and control regulation as well as certain soft law and market-based instruments. Where enforcement is applicable, it is used to ensure compliance. Noncompliance is commonly sanctioned by fines. In extreme cases, corporate activities can be shut down and noncompliant corporate personnel can be imprisoned. A lesser punishment could see companies added to published lists of noncompliant companies, as is the practice in European financial market regulation [[204](#B204-information-12-00275)]. However, in practice, governments do not always vigorously enforce compliance. Monitoring and enforcement can be expensive, and governments may not always have the resources or motivation to do so. Weak enforcement can limit the influence of government rules and regulations on AI corporate governance.
Additionally, if the people responsible for complying with AI regulations disagree with or resent them, and are sufficiently empowered to act on this disagreement, the regulations could prompt backlash: those people may decline to comply and may even engage in more of the disallowed or discouraged behavior than would occur without the regulation [[59](#B59-information-12-00275)].

When selecting a policy instrument to improve the corporate governance of AI, governments need to consider a number of factors. One of these factors is the underlying regulatory approach. Most AI-specific proposals follow a risk-based approach, which ensures that regulation does not exceed what is necessary to achieve the underlying policy objective, as required by the principle of proportionality in E.U. law. Governments also need to decide between regulation focused on a specific technology, such as AI, and regulation that addresses general issues applicable to multiple technologies, such as privacy. Finally, governments need to decide whether the regulation should apply across sectors or only to specific sectors, such as healthcare or transportation.

AI-specific policy instruments face a particular challenge in defining their scope of application [[29](#B29-information-12-00275)] (pp. 359–362). There is no generally accepted definition of the term AI, and existing definitions do not meet the specific requirements for legal definitions [[205](#B205-information-12-00275)]. A possible solution to this problem would be to avoid the term AI and instead define other properties of the system, such as certain use cases or technical approaches. This idea may be a worthy focus of future AI policy research and activity.

Policy shapes innovation [[206](#B206-information-12-00275)] (p. 249), and this will be no different with AI. Regulation can impose costs on, or outright prohibit, certain types of AI research and applications, thereby limiting innovation in particular areas while making others more attractive. Meanwhile, market mechanisms and procurement may subsidize or otherwise incentivize some types of AI research and development over others. For example, law enforcement procurement of facial recognition may already be stimulating innovation in that branch of AI. Taken as a whole, then, government regulation may be expected to shape AI innovation. Poorly constructed regulation may shape, or particularly limit, innovation in ways that undermine the public interest; this is a bug to be remedied, not a feature. For example, it may place a disproportionate compliance burden on smaller companies with fewer resources.

When a government uses policy instruments, the effects are not always limited to that government’s jurisdiction, a phenomenon known as regulatory diffusion. Corporations that operate across jurisdictions often follow the regulations of the most stringent jurisdiction everywhere they operate, in order to gain the efficiencies of a single standardized compliance operation. They may even lobby other jurisdictions to adopt similar rules so as to similarly bind local competitors. Influential jurisdictions in this regard include California and the European Union, whose influence is sometimes referred to as the “California Effect” [[207](#B207-information-12-00275)] and the “Brussels Effect” [[208](#B208-information-12-00275)], respectively.
Given that leading AI corporations are multinational, policy instruments from a wide range of jurisdictions could shape corporate governance in (their conception of) the public interest globally. Even if companies are not incentivized to comply globally, other jurisdictions may pass similar regulation, imitating the first mover. Regulations do not always diffuse, and corporations may shift operations to jurisdictions with relatively lax regulations. Nonetheless, regulatory diffusion can increase the impacts of policy innovation.

Regulation of law enforcement use of facial recognition technology has demonstrated such regulatory imitation. Municipalities seem to have taken the lead in the United States. Following research on gender and racial biases in AI [[58](#B58-information-12-00275),[209](#B209-information-12-00275)], some municipalities, including San Francisco [[210](#B210-information-12-00275)] and Boston [[211](#B211-information-12-00275)], have started to ban the use of facial recognition technology for law enforcement purposes. Even though municipal action has set the agenda for wider action, leading to multiple bills on this topic being introduced in the U.S. Congress [[212](#B212-information-12-00275),[213](#B213-information-12-00275)], there is not yet federal regulation of facial recognition technology. In the absence of federal regulation, the legal treatment of the technology currently varies widely across the United States. In keeping with its incentive for regulatory consistency across jurisdictions, Microsoft has repeatedly called for federal regulation of facial recognition technology [[135](#B135-information-12-00275),[214](#B214-information-12-00275),[215](#B215-information-12-00275)]. Under the proposed E.U. AI regulation, all remote biometric identification of persons would be considered high-risk and subject to third-party conformity assessment [[28](#B28-information-12-00275)]. Certain applications for the purpose of law enforcement would be prohibited in principle, with a few narrow exceptions.

Governments need not simply wait for AI policy instruments to passively diffuse; they may support institutionalized international coordination. For example, the OECD AI Principles have been adopted by 45 nations and informed similar principles agreed to by the G-20 countries [[216](#B216-information-12-00275)]. The OECD AI Policy Observatory is now developing implementation guidance for the principles, aimed at both government and corporate decision-makers. International coordination is further supported by several UN initiatives [[217](#B217-information-12-00275)]. Pre-existing international agreements and initiatives can also shape AI corporate governance. Prominent examples include the UN Guiding Principles on Business and Human Rights and the UN Global Compact, which offer guidance for business practices that promote the public interest. Already, Google has adapted some AI systems according to this UN guidance [[218](#B218-information-12-00275)]. Additionally, the OECD has published general corporate governance guidance that has been adopted into national regulation [[219](#B219-information-12-00275),[220](#B220-information-12-00275)].

5. Conclusions
---------------

A wide range of actors can help improve the corporate governance of AI so as to better advance the public interest. It is not, as one might think, a matter for the exclusive attention of a narrow mix of insider corporate elites.
To be sure, the opportunities may often be better for some types of actors than for others. However, significant opportunities can be found for many actors both within and outside of AI corporations. Importantly, these opportunities are available even in the absence of dedicated multistakeholder institutions designed to invite contributions from a more diverse group. Multistakeholder institutions have an important role to play, but they are only one of many means through which diverse stakeholders can improve AI corporate governance.

Often, progress depends on coordination and collaboration across different types of actors, as illustrated by the three primary cases used throughout the paper. First, the example of Google’s Project Maven shows that workers and the media can be particularly successful in influencing management when they collaborate. Second, the example of law enforcement use of facial recognition technology demonstrates that novel research, activism by nonprofits, and broad media coverage can build on each other to achieve change in corporate governance. Third, the example of the publication of potentially harmful research shows management, workers, and industry consortia interacting to establish, implement, and share best practices for AI in the public interest. People in each actor category would do well to understand not just their own opportunities, but also the broader ecosystem of actors and their interactions. Questions of how best to pursue coordination and collaboration across actors must be resolved on a case-by-case basis, in consideration of the particulars of the issues at play and the relative roles and capabilities of different actors.

Opportunities to improve AI corporate governance are likely to change over time. For example, workers’ influence may diminish over time if the expected increase in the supply of skilled AI workers outpaces the increase in demand for AI systems. Changes in the economic, cultural, and political significance of AI can alter the opportunities available to many types of actors, such as by shaping the political viability of government regulations. Changes in underlying AI technologies can also be impactful. For example, if there are major breakthroughs toward AI systems that can substitute for most forms of human labor or even approach artificial general intelligence, then the corporations developing them could end up with substantially greater economic and political clout. This could increase the importance of actions from within the companies, especially from management and investors. On the other hand, such breakthroughs could also increase public and policymaker interest in AI in ways that facilitate more extensive government activity over time. The delay in government responses to emerging technologies, often called the “pacing problem” [[221](#B221-information-12-00275)], creates a clear, even if interim, role for other actors in improving AI corporate governance in the public interest as AI research and development continues.

This paper has presented a broad survey of opportunities to improve AI corporate governance across a range of actors. It is, to the best of our knowledge, the first such survey to be published. As a first pass through a large and complex topic, the survey has been largely qualitative and not comprehensive in scope. We have sought to map out the overall terrain of AI corporate governance without necessarily identifying or measuring all of the hills and valleys.
We have focused on select larger corporations, with less attention to smaller ones. We have focused on the United States and Europe, with less attention to other parts of the world. Additionally, to at least some extent, we have covered a convenience sample of cases. The result is a broad but somewhat limited map of the AI corporate governance landscape.

One important area for future research is evaluating the quality of opportunities to improve AI corporate governance. Specific actors may benefit from guidance on how best to focus their activities. Some actors, such as researchers and philanthropists, have opportunities to bolster the efforts of other types of actors and would benefit from guidance on which other actors are most worth supporting. (These supporting actions fall outside the scope of this paper’s framework and are a further area for future research.) To a large extent, the quality of opportunities facing specific actors must be assessed on a case-by-case basis, accounting for context and technological particulars, and therefore falls outside the scope of broad surveys such as this one. An important activity would be to bridge the gap between the more general insights of this survey and the specific insights needed for corporate governance decision-making in the public interest for particular categories of AI policy problems and types of AI systems. Such work should include more specific conceptions of the public interest, because different conceptions can underlie different evaluative standards and generate different practical guidance.

Finally, further research could investigate how AI corporate governance may change over time, especially as companies develop increasingly capable AI systems. This paper has emphasized near-term cases in order to give its study of AI corporate governance a better empirical basis and more immediate practical value. Nonetheless, the potential for extremely large consequences from long-term AI makes it a worthy subject of attention. Important questions include how near-term actions could affect long-term AI corporate governance, such as through path dependence in governance regimes, and how future actors can best position themselves to influence long-term corporate AI for the better. One good starting point may be to look more closely at the earliest phases of the AI lifecycle, especially basic research and development, on the grounds that this may be where future advanced forms of AI first appear within corporations.

As the deployment of AI systems and the research and development of AI technologies continue, the role of AI corporate governance is expected to only increase over time, and with it the importance of experimentation and iteration in developing actors’ strategies to improve the corporate governance of AI companies in the public interest. This paper has surveyed the landscape with the aim of empowering practitioners and catalyzing necessary further research. It is only with this continued work, by the full range of actors considered here, that AI corporations may be expected to support the public interest today, tomorrow, and into the future.

Author Contributions
--------------------

Conceptualization, P.C., J.S. and S.D.B.; research, P.C., J.S. and S.D.B.; analysis, P.C., J.S. and S.D.B.; writing, P.C., J.S. and S.D.B.; funding acquisition, S.D.B. All authors have read and agreed to the published version of the manuscript.

Funding
-------

This research was funded by the Gordon R.
Irlam Charitable Foundation.

Institutional Review Board Statement
------------------------------------

Not applicable.

Informed Consent Statement
--------------------------

Not applicable.

Data Availability Statement
---------------------------

Not applicable.

Acknowledgments
---------------

For helpful input on this research, we thank participants in a seminar hosted by the Global Catastrophic Risk Institute. For input on an initial draft, we are grateful to Ramiro de Avila Peres, Rosie Campbell, Alexis Carlier, Sam Clarke, Jessica Cussins-Newman, Moritz Kleinaltenkamp, Yolanda Lannquist, Joel Lehman, Jeremy Nixon, Dakota Norris, and Cullen O’Keefe. We thank Oliver Couttolenc for research assistance and feedback and McKenna Fitzgerald for assistance in manuscript formatting. Any remaining errors are the authors’ alone. Views expressed here are those of the authors and do not necessarily represent the views of their employers.

Conflicts of Interest
---------------------

The authors declare no conflict of interest. The paper was researched and drafted prior to Peter Cihon joining his current employer, GitHub, a subsidiary of Microsoft.

References
----------

1. Government of France. Launch of the Global Partnership on Artificial Intelligence. 2020. Available online: (accessed on 11 September 2020).
2. European Commission High-Level Expert Group on AI. Policy and Investment Recommendations for Trustworthy Artificial Intelligence. 2019. European Commission Website. Available online: (accessed on 11 September 2020).
3. Cath, C.; Wachter, S.; Mittelstadt, B.; Taddeo, M.; Floridi, L. Artificial intelligence and the ‘good society’: The US, EU, and UK approach. Sci. Eng. Ethics **2017**, 24, 505–528. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Artificial+intelligence+and+the+%E2%80%98good+society%E2%80%99:+The+US,+EU,+and+UK+approach&author=Cath,+C.&author=Wachter,+S.&author=Mittelstadt,+B.&author=Taddeo,+M.&author=Floridi,+L.&publication_year=2017&journal=Sci.+Eng.+Ethics&volume=24&pages=505%E2%80%93528&doi=10.1007/s11948-017-9901-7)] [[CrossRef](https://doi.org/10.1007/s11948-017-9901-7)]
4. Perrault, R.; Shoham, Y.; Brynjolfsson, E.; Clark, J.; Etchemendy, J.; Grosz, B.; Lyons, T.; Manyika, J.; Mishra, S.; Niebles, J.C. The AI Index 2019 Annual Report; Human-Centered AI Institute, Stanford University: Stanford, CA, USA, 2019. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+AI+Index+2019+Annual+Report&author=Perrault,+R.&author=Shoham,+Y.&author=Brynjolfsson,+E.&author=Clark,+J.&author=Etchemendy,+J.&author=Grosz,+B.&author=Lyons,+T.&author=Manyika,+J.&author=Mishra,+S.&author=Niebles,+J.C.&publication_year=2019)]
5. Frey, C.B.; Osborne, M.A. The future of employment: How susceptible are jobs to computerisation? Technol. Forecast. Soc. Chang. **2017**, 114, 254–280. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+future+of+employment:+How+susceptible+are+jobs+to+computerisation?&author=Frey,+C.B.&author=Osborne,+M.A.&publication_year=2017&journal=Technol.+Forecast.+Soc.+Chang.&volume=114&pages=254%E2%80%93280&doi=10.1016/j.techfore.2016.08.019)] [[CrossRef](https://doi.org/10.1016/j.techfore.2016.08.019)]
6. Baum, S.D. A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy; Working Paper 17-1; 2017. Available online: (accessed on 24 June 2021).
7. Radford, A.; Wu, J.; Amodei, D.; Amodei, D.; Clark, J.; Brundage, M.; Sutskever, I. Better language models and their implications.
Available online: (accessed on 11 September 2020).\n8. Partnership on AI. Partnership on AI Publication Norms for Responsible AI. Available online: (accessed on 11 September 2020).\n9. Gilson, R.J. From corporate law to corporate governance. In The Oxford Handbook of Corporate Law and Governance; Gordon, J.N., Ringe, W.-G., Eds.; Oxford University Press: Oxford, UK, 2016; Volume 1, pp. 3–27. ISBN 9780198743682. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=From+corporate+law+to+corporate+governance&author=Gilson,+R.J.&publication_year=2016&pages=3%E2%80%9327)]\n10. Stout, L.A. The Shareholder Value Myth: How Putting Shareholders First Harms Investors, Corporations, and the Public; Berrett-Koehler Publishers: San Francisco, CA, USA, 2012; ISBN 9781605098135. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+Shareholder+Value+Myth:+How+Putting+Shareholders+First+Harms+Investors,+Corporations,+and+the+Public&author=Stout,+L.A.&publication_year=2012)]\n11. Business Roundtable. Business Roundtable Redefines the Purpose of a Corporation to Promote ‘An Economy That Serves All Americans’. 2019. Available online: (accessed on 11 September 2020).\n12. Freeman, R.E. Strategic Management: A Stakeholder Approach; Pitman: Boston, MA, USA, 1984; ISBN 9780273019138. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Strategic+Management:+A+Stakeholder+Approach&author=Freeman,+R.E.&publication_year=1984)]\n13. Raymond, M.; DeNardis, L. Multistakeholderism: Anatomy of an inchoate global institution. Int. Theory **2015**, 7, 572–616. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Multistakeholderism:+Anatomy+of+an+inchoate+global+institution&author=Raymond,+M.&author=DeNardis,+L.&publication_year=2015&journal=Int.+Theory&volume=7&pages=572%E2%80%93616&doi=10.1017/S1752971915000081)] [[CrossRef](https://doi.org/10.1017/S1752971915000081)][[Green Version](https://www.cambridge.org/core/services/aop-cambridge-core/content/view/B69E6361B5965C98CFD400F75AA8DC53/S1752971915000081a.pdf/div-class-title-multistakeholderism-anatomy-of-an-inchoate-global-institution-div.pdf)]\n14. Freeman, E.; Martin, K.; Parmar, B. Stakeholder capitalism. J. Bus. Ethics **2007**, 74, 303–314. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Stakeholder+capitalism&author=Freeman,+E.&author=Martin,+K.&author=Parmar,+B.&publication_year=2007&journal=J.+Bus.+Ethics&volume=74&pages=303%E2%80%93314&doi=10.1007/s10551-007-9517-y)] [[CrossRef](https://doi.org/10.1007/s10551-007-9517-y)]\n15. Legg, S.; Hutter, M. Universal intelligence: A definition of machine intelligence. Minds Mach. **2007**, 17, 391–444. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Universal+intelligence:+A+definition+of+machine+intelligence&author=Legg,+S.&author=Hutter,+M.&publication_year=2007&journal=Minds+Mach.&volume=17&pages=391%E2%80%93444&doi=10.1007/s11023-007-9079-x)] [[CrossRef](https://doi.org/10.1007/s11023-007-9079-x)][[Green Version](http://arxiv.org/pdf/0712.3329)]\n16. Marcus, G.; Davis, E. Rebooting AI: Building Artificial Intelligence We Can Trust; Pantheon Books: New York, NY, USA, 2019; ISBN 9780525566045. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Rebooting+AI:+Building+Artificial+Intelligence+We+Can+Trust&author=Marcus,+G.&author=Davis,+E.&publication_year=2019)]\n17. McCorduck, P. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence; 25th Anniversary Update; A.K. 
Peters Ltd.: Natick, MA, USA, 2004; ISBN 9781568812052. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Machines+Who+Think:+A+Personal+Inquiry+into+the+History+and+Prospects+of+Artificial+Intelligence&author=McCorduck,+P.&publication_year=2004)]\n18. OECD. Scoping the OECD AI Principles: Deliberations of the Expert Group on Artificial Intelligence at the OECD (AIGO); OECD Digital Economy Papers No. 291; OECD: Paris, France, 2015. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Scoping+the+OECD+AI+Principles:+Deliberations+of+the+Expert+Group+on+Artificial+Intelligence+at+the+OECD+(AIGO)&author=OECD&publication_year=2015)]\n19. Monks, R.A.G.; Minow, N. Corporate Governance, 5th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2011; ISBN 9780470972595. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Corporate+Governance&author=Monks,+R.A.G.&author=Minow,+N.&publication_year=2011)]\n20. Gordon, J.N.; Ringe, W.-G. The Oxford Handbook of Corporate Law and Governance; Oxford Handbooks, 1st ed.; Oxford University Press: Oxford, UK, 2018; ISBN 9780198743682. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+Oxford+Handbook+of+Corporate+Law+and+Governance;+Oxford+Handbooks&author=Gordon,+J.N.&author=Ringe,+W.-G.&publication_year=2018)]\n21. Crawford, K.; Dobbe, R.; Dryer, T.; Fried, G.; Green, B.; Kaziunas, E.; Kak, A.; Mathur, V.; McElroy, E.; Sánchez, A.N.; et al. AI Now 2019 Report; AI Now Institute: New York, NY, USA, 2019. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=AI+Now+2019+Report&author=Crawford,+K.&author=Dobbe,+R.&author=Dryer,+T.&author=Fried,+G.&author=Green,+B.&author=Kaziunas,+E.&author=Kak,+A.&author=Mathur,+V.&author=McElroy,+E.&author=S%C3%A1nchez,+A.N.&publication_year=2019)]\n22. Metcalf, J.; Moss, E.; Boyd, D. Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics. Soc. Res. Int. Q. **2019**, 82, 449–476. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Owning+Ethics:+Corporate+Logics,+Silicon+Valley,+and+the+Institutionalization+of+Ethics&author=Metcalf,+J.&author=Moss,+E.&author=Boyd,+D.&publication_year=2019&journal=Soc.+Res.+Int.+Q.&volume=82&pages=449%E2%80%93476)]\n23. World Economic Forum. Empowering AI Leadership. 2020. Available online: (accessed on 11 September 2020).\n24. Dafoe, A. AI Governance: A Research Agenda; Centre for the Governance of AI, Future of Humanity Institute, University of Oxford: Oxford, UK, 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=AI+Governance:+A+Research+Agenda&author=Dafoe,+A.&publication_year=2017)]\n25. Calo, R. Artificial intelligence policy: A primer and roadmap. UC Davis Law Rev. **2017**, 51, 399–435. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Artificial+intelligence+policy:+A+primer+and+roadmap&author=Calo,+R.&publication_year=2017&journal=UC+Davis+Law+Rev.&volume=51&pages=399%E2%80%93435)]\n26. Calo, R. The Case for a Federal Robotics Commission; Brookings Institute: Washington, DC, USA, 2014; Available online: (accessed on 11 September 2020).\n27. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds Mach. **2018**, 28, 689–707. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=AI4People%E2%80%94An+ethical+framework+for+a+good+AI+society:+Opportunities,+risks,+principles,+and+recommendations&author=Floridi,+L.&author=Cowls,+J.&author=Beltrametti,+M.&author=Chatila,+R.&author=Chazerand,+P.&author=Dignum,+V.&author=Luetge,+C.&author=Madelin,+R.&author=Pagallo,+U.&author=Rossi,+F.&publication_year=2018&journal=Minds+Mach.&volume=28&pages=689%E2%80%93707&doi=10.1007/s11023-018-9482-5)] [[CrossRef](https://doi.org/10.1007/s11023-018-9482-5)][[Green Version](http://mediatum.ub.tum.de/doc/1524463/file.pdf)]\n28. European Commission. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM(2021) 206 Final); European Commission: Brussels, Belgium, 2021. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Proposal+for+a+Regulation+of+the+European+Parliament+and+of+the+Council+Laying+Down+Harmonised+Rules+on+Artificial+Intelligence+(Artificial+Intelligence+Act)+and+Amending+Certain+Union+Legislative+Acts+(COM(2021)+206+Final)&author=European+Commission&publication_year=2021)]\n29. Scherer, M.U. Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harv. J. Law Technol. **2016**, 29, 354–400. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Regulating+artificial+intelligence+systems:+Risks,+challenges,+competencies,+and+strategies&author=Scherer,+M.U.&publication_year=2016&journal=Harv.+J.+Law+Technol.&volume=29&pages=354%E2%80%93400&doi=10.2139/ssrn.2609777)] [[CrossRef](https://doi.org/10.2139/ssrn.2609777)]\n30. Wallach, W.; Marchant, G.E. An agile ethical/legal model for the international and national governance of AI and robotics. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA, 2–3 February 2018; ACM: New York, NY, USA, 2018. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=An+agile+ethical/legal+model+for+the+international+and+national+governance+of+AI+and+robotics&conference=Proceedings+of+the+2018+AAAI/ACM+Conference+on+AI,+Ethics,+and+Society&author=Wallach,+W.&author=Marchant,+G.E.&publication_year=2018)]\n31. Erdelyi, O.J.; Goldsmith, J. Regulating artificial intelligence proposal for a global solution. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA, 2–3 February 2018; ACM: New York, NY, USA, 2018; pp. 95–101. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Regulating+artificial+intelligence+proposal+for+a+global+solution&conference=Proceedings+of+the+2018+AAAI/ACM+Conference+on+AI,+Ethics,+and+Society&author=Erdelyi,+O.J.&author=Goldsmith,+J.&publication_year=2018&pages=95%E2%80%93101)]\n32. Cihon, P.; Maas, M.M.; Kemp, L. Should artificial intelligence governance be centralised? Design lessons from history. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 7–9 February 2020; ACM: New York, NY USA, 2020; pp. 228–234. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Should+artificial+intelligence+governance+be+centralised?+Design+lessons+from+history&conference=Proceedings+of+the+AAAI/ACM+Conference+on+AI,+Ethics,+and+Society&author=Cihon,+P.&author=Maas,+M.M.&author=Kemp,+L.&publication_year=2020&pages=228%E2%80%93234)]\n33. Clark, J.; Hadfield, G.K. Regulatory markets for AI safety. 
In Proceedings of the 2019 Safe Machine Learning Workshop at ICLR, New Orleans, LA, USA, 6 May 2019; ICLR: La Jolla, CA, USA, 2019. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Regulatory+markets+for+AI+safety&conference=Proceedings+of+the+2019+Safe+Machine+Learning+Workshop+at+ICLR&author=Clark,+J.&author=Hadfield,+G.K.&publication_year=2019)]\n34. Wachter, S.; Mittelstadt, B.; Floridi, L. Why a right to explanation of automated decision-making does not exist in the general data protection regulation. Int. Data Priv. Law **2017**, 7, 76–99. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Why+a+right+to+explanation+of+automated+decision-making+does+not+exist+in+the+general+data+protection+regulation&author=Wachter,+S.&author=Mittelstadt,+B.&author=Floridi,+L.&publication_year=2017&journal=Int.+Data+Priv.+Law&volume=7&pages=76%E2%80%9399&doi=10.1093/idpl/ipx005)] [[CrossRef](https://doi.org/10.1093/idpl/ipx005)][[Green Version](https://academic.oup.com/idpl/article-pdf/7/2/76/17932196/ipx005.pdf)]\n35. Goodman, B.; Flaxman, S. European Union regulations on algorithmic decision-making and a “right to explanation”. AI Mag. **2017**, 38, 50–57. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=European+Union+regulations+on+algorithmic+decision-making+and+a+%E2%80%9Cright+to+explanation%E2%80%9D&author=Goodman,+B.&author=Flaxman,+S.&publication_year=2017&journal=AI+Mag.&volume=38&pages=50%E2%80%9357&doi=10.1609/aimag.v38i3.2741)] [[CrossRef](https://doi.org/10.1609/aimag.v38i3.2741)][[Green Version](http://arxiv.org/pdf/1606.08813)]\n36. Smuha, N.A. From a “Race to AI” to a “Race to AI Regulation”—Regulatory Competition for Artificial Intelligence. 2019. Available online: (accessed on 11 September 2020).\n37. Thelisson, E.; Padh, K.; Celis, E.L. Regulatory Mechanisms and Algorithms towards Trust in AI/ML. 2017. Available online: (accessed on 11 September 2020).\n38. Stix, C. A Survey of the European Union’s Artificial Intelligence Ecosystem; Lverhulme Centre for the Future of Intelligence, University of Cambridge: Cambridge, UK, 2019. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=A+Survey+of+the+European+Union%E2%80%99s+Artificial+Intelligence+Ecosystem&author=Stix,+C.&publication_year=2019)]\n39. Wagner, B.; Rozgonyi, K.; Sekwenz, M.-T.; Cobbe, J.; Singh, J. Regulating Transparency? Facebook, Twitter and the German Network Enforcement Act. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT\\* ’20), Barcelona, Spain, 27–30 January 2020; ACM: New York, NY, USA, 2020; pp. 261–271. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Regulating+Transparency?+Facebook,+Twitter+and+the+German+Network+Enforcement+Act&conference=Proceedings+of+the+Conference+on+Fairness,+Accountability,+and+Transparency+(FAT*+%E2%80%9920)&author=Wagner,+B.&author=Rozgonyi,+K.&author=Sekwenz,+M.-T.&author=Cobbe,+J.&author=Singh,+J.&publication_year=2020&pages=261%E2%80%93271)]\n40. Senden, L. Soft law, self-regulation and co-regulation in European law: Where do they meet? EJCL **2005**, 9, 1–27. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Soft+law,+self-regulation+and+co-regulation+in+European+law:+Where+do+they+meet?&author=Senden,+L.&publication_year=2005&journal=EJCL&volume=9&pages=1%E2%80%9327)]\n41. Marsden, C.T. Internet Co-Regulation European Law, Regulatory Governance and Legitimacy in Cyberspace; Cambridge University Press: Cambridge, UK, 2011. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Internet+Co-Regulation+European+Law,+Regulatory+Governance+and+Legitimacy+in+Cyberspace&author=Marsden,+C.T.&publication_year=2011)] [[CrossRef](https://doi.org/10.1017/CBO9780511763410)]\n42. Kaminski, M.E. Binary governance: Lessons from the GDPR’s approach to algorithmic accountability. South. Calif. Law Rev. **2019**, 92, 1529–1616. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Binary+governance:+Lessons+from+the+GDPR%E2%80%99s+approach+to+algorithmic+accountability&author=Kaminski,+M.E.&publication_year=2019&journal=South.+Calif.+Law+Rev.&volume=92&pages=1529%E2%80%931616&doi=10.2139/ssrn.3351404)] [[CrossRef](https://doi.org/10.2139/ssrn.3351404)]\n43. Pagallo, U. The middle-out approach: Assessing models of legal governance in data protection, artificial intelligence, and the web of data. Theory Pract. Legis. **2019**, 7, 1–25. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+middle-out+approach:+Assessing+models+of+legal+governance+in+data+protection,+artificial+intelligence,+and+the+web+of+data&author=Pagallo,+U.&publication_year=2019&journal=Theory+Pract.+Legis.&volume=7&pages=1%E2%80%9325&doi=10.1080/20508840.2019.1664543)] [[CrossRef](https://doi.org/10.1080/20508840.2019.1664543)]\n44. Zeitlin, J. Extending Experimentalist Governance? The European Union and Transnational Regulation; Oxford University Press: Oxford, UK, 2015; ISBN 9780198724506. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Extending+Experimentalist+Governance?+The+European+Union+and+Transnational+Regulation&author=Zeitlin,+J.&publication_year=2015)]\n45. Marchant, G.; Lindor, R. The coming collision between autonomous vehicles and the liability System. St. Clara Law Rev. **2012**, 52, 1321–1340. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+coming+collision+between+autonomous+vehicles+and+the+liability+System&author=Marchant,+G.&author=Lindor,+R.&publication_year=2012&journal=St.+Clara+Law+Rev.&volume=52&pages=1321%E2%80%931340)]\n46. LeValley, D. Autonomous vehicle liability—Application of common carrier liability. Seattle Univ. Law Rev. Supra **2013**, 36, 5–26. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Autonomous+vehicle+liability%E2%80%94Application+of+common+carrier+liability&author=LeValley,+D.&publication_year=2013&journal=Seattle+Univ.+Law+Rev.+Supra&volume=36&pages=5%E2%80%9326)]\n47. Zohn, J.R. When robots attack: How should the law handle self-driving cars that cause damages. J. Law Technol. Policy **2015**, 2015, 461–485. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=When+robots+attack:+How+should+the+law+handle+self-driving+cars+that+cause+damages&author=Zohn,+J.R.&publication_year=2015&journal=J.+Law+Technol.+Policy&volume=2015&pages=461%E2%80%93485)]\n48. Bathaee, Y. The artificial intelligence black box and the failure of intent and causation. Harv. J. Law Technol. **2018**, 31, 889–938. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+artificial+intelligence+black+box+and+the+failure+of+intent+and+causation&author=Bathaee,+Y.&publication_year=2018&journal=Harv.+J.+Law+Technol.&volume=31&pages=889%E2%80%93938)]\n49. Lohmann, M.F. Ein europäisches Roboterrecht—Überfällig oder überflüssig? ZRP **2017**, 6, 168–171. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Ein+europ%C3%A4isches+Roboterrecht%E2%80%94%C3%9Cberf%C3%A4llig+oder+%C3%BCberfl%C3%BCssig?&author=Lohmann,+M.F.&publication_year=2017&journal=ZRP&volume=6&pages=168%E2%80%93171)]\n50. Cauffman, C. Robo-liability: The European Union in search of the best way to deal with liability for damage caused by artificial intelligence. Maastricht J. Eur. Comp. Law **2018**, 25, 527–532. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Robo-liability:+The+European+Union+in+search+of+the+best+way+to+deal+with+liability+for+damage+caused+by+artificial+intelligence&author=Cauffman,+C.&publication_year=2018&journal=Maastricht+J.+Eur.+Comp.+Law&volume=25&pages=527%E2%80%93532&doi=10.1177/1023263X18812333)] [[CrossRef](https://doi.org/10.1177/1023263X18812333)][[Green Version](https://journals.sagepub.com/doi/pdf/10.1177/1023263X18812333)]\n51. European Parliament. European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). 2017. Available online: (accessed on 11 September 2020).\n52. Expert Group on Liability and New Technologies—New Technologies Formation. Liability for Artificial Intelligence and Other Emerging Digital Technologies; European Commission: Brussels, Belgium, 2019; ISBN 9789276129592. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Liability+for+Artificial+Intelligence+and+Other+Emerging+Digital+Technologies&author=Expert+Group+on+Liability+and+New+Technologies%E2%80%94New+Technologies+Formation&publication_year=2019)]\n53. European Commission. Report from the Commission to the European Parliament, the Council, and the European Economic and Social Committee (COM(2020) 324 final); European Commission: Brussels, Belgium, 2020. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Report+from+the+Commission+to+the+European+Parliament,+the+Council,+and+the+European+Economic+and+Social+Committee+(COM(2020)+324+final)&author=European+Commission&publication_year=2020)]\n54. Denga, M. Deliktische Haftung für künstliche Intelligenz—Warum die Verschuldenshaftung des BGB auch künftig die bessere Schadensausgleichsordnung bedeutet. CR **2018**, 69–78. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Deliktische+Haftung+f%C3%BCr+k%C3%BCnstliche+Intelligenz%E2%80%94Warum+die+Verschuldenshaftung+des+BGB+auch+k%C3%BCnftig+die+bessere+Schadensausgleichsordnung+bedeutet&author=Denga,+M.&publication_year=2018&journal=CR&pages=69%E2%80%9378&doi=10.9785/cr-2018-0203)] [[CrossRef](https://doi.org/10.9785/cr-2018-0203)]\n55. Borges, G. Rechtliche Rahmenbedingungen für autonome Systeme. NJW **2018**, 40, 977–982. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Rechtliche+Rahmenbedingungen+f%C3%BCr+autonome+Systeme&author=Borges,+G.&publication_year=2018&journal=NJW&volume=40&pages=977%E2%80%93982)]\n56. Graf von Westphalen, F. Haftungsfragen beim Einsatz Künstlicher Intelligenz in Ergänzung der Produkthaftungs-RL 85/374/EWG. ZIP **2020**, 40, 889–895. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Haftungsfragen+beim+Einsatz+K%C3%BCnstlicher+Intelligenz+in+Erg%C3%A4nzung+der+Produkthaftungs-RL+85/374/EWG&author=Graf+von+Westphalen,+F.&publication_year=2020&journal=ZIP&volume=40&pages=889%E2%80%93895)]\n57. White, T.N.; Baum, S.D. Liability for present and future robotics technology. 
In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence; Lin, P., Abney, K., Jenkins, R., Eds.; Oxford University Press: Oxford, UK, 2017; Volume 1, pp. 66–79. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Liability+for+present+and+future+robotics+technology&author=White,+T.N.&author=Baum,+S.D.&publication_year=2017&pages=66%E2%80%9379)]\n58. Buolamwini, J.; Gebru, T. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT\\* ’18), New York, NY, USA, 23–24 February 2018; ACM: New York, NY, USA, 2018; pp. 77–91. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Gender+shades:+Intersectional+accuracy+disparities+in+commercial+gender+classification&conference=Proceedings+of+the+Conference+on+Fairness,+Accountability,+and+Transparency+(FAT*+%E2%80%9918)&author=Buolamwini,+J.&author=Gebru,+T.&publication_year=2018&pages=77%E2%80%9391)]\n59. Baum, S.D. On the promotion of safe and socially beneficial artificial intelligence. AI Soc. **2017**, 32, 543–551. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=On+the+promotion+of+safe+and+socially+beneficial+artificial+intelligence&author=Baum,+S.D.&publication_year=2017&journal=AI+Soc.&volume=32&pages=543%E2%80%93551&doi=10.1007/s00146-016-0677-0)] [[CrossRef](https://doi.org/10.1007/s00146-016-0677-0)]\n60. Belfield, H. Activism by the AI community: Analysing recent achievements and future prospects. In Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics and Society, New York, NY, USA, 7–8 February 2020; ACM: New York, NY, USA, 2020; pp. 15–21. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Activism+by+the+AI+community:+Analysing+recent+achievements+and+future+prospects&conference=Proceedings+of+the+2020+AAAI/ACM+Conference+on+AI,+Ethics+and+Society&author=Belfield,+H.&publication_year=2020&pages=15%E2%80%9321)]\n61. Askell, A.; Brundage, M.; Hadfield, G. The Role of Cooperation in Responsible AI Development. 2019. Available online: (accessed on 11 September 2020).\n62. Solaiman, I.; Brundage, M.; Clark, J.; Askell, A.; Herbert-Voss, A.; Wu, J.; Radford, A.; Krueger, G.; Kim, J.W.; Kreps, S.; et al. Release Strategies and the Social Impacts of Language Models. OpenAI. 2019. Available online: (accessed on 11 September 2020).\n63. Cihon, P. Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development; Future of Humanity Institute, University of Oxford: Oxford, UK, 2019. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Standards+for+AI+Governance:+International+Standards+to+Enable+Global+Coordination+in+AI+Research+&+Development&author=Cihon,+P.&publication_year=2019)]\n64. Baum, S.D. Superintelligence skepticism as a political tool. Information **2018**, 9, 209. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Superintelligence+skepticism+as+a+political+tool&author=Baum,+S.D.&publication_year=2018&journal=Information&volume=9&pages=209&doi=10.3390/info9090209)] [[CrossRef](https://doi.org/10.3390/info9090209)][[Green Version](https://www.mdpi.com/2078-2489/9/9/209/pdf)]\n65. Baum, S.D. Countering Superintelligence Misinformation. Information **2018**, 9, 244. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Countering+Superintelligence+Misinformation&author=Baum,+S.D.&publication_year=2018&journal=Information&volume=9&pages=244&doi=10.3390/info9100244)] [[CrossRef](https://doi.org/10.3390/info9100244)][[Green Version](https://www.mdpi.com/2078-2489/9/10/244/pdf)]\n66. O’Keefe, C.; Cihon, P.; Garfinkel, B.; Flynn, C.; Leung, J.; Dafoe, A. The Windfall Clause: Distributing the benefits of AI for the common good. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 7–8 February 2020; ACM: New York, NY, USA, 2020; pp. 327–331. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+Windfall+Clause:+Distributing+the+benefits+of+AI+for+the+common+good&conference=Proceedings+of+the+AAAI/ACM+Conference+on+AI,+Ethics,+and+Society&author=O%E2%80%99Keefe,+C.&author=Cihon,+P.&author=Garfinkel,+B.&author=Flynn,+C.&author=Leung,+J.&author=Dafoe,+A.&publication_year=2020&pages=327%E2%80%93331)]\n67. Avin, S.; Gruetzemacher, R.; Fox, J. Exploring AI futures through role play. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA, 2–3 February 2018; ACM: New York, NY, USA, 2018; pp. 8–14. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Exploring+AI+futures+through+role+play&conference=Proceedings+of+the+2018+AAAI/ACM+Conference+on+AI,+Ethics,+and+Society&author=Avin,+S.&author=Gruetzemacher,+R.&author=Fox,+J.&publication_year=2018&pages=8%E2%80%9314)]\n68. Ballard, S.; Calo, R. Taking futures seriously: Forecasting as method in robotics law and policy. In Proceedings of the 2019 We Robot Conference, We Robot, Miami, FL, USA, 12–13 April 2019. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Taking+futures+seriously:+Forecasting+as+method+in+robotics+law+and+policy&conference=Proceedings+of+the+2019+We+Robot+Conference,+We+Robot&author=Ballard,+S.&author=Calo,+R.&publication_year=2019)]\n69. Hume, K.; LaPlante, A. Managing Bias and Risk at Every Step of the AI-Building Process. Harvard Business Review. 30 October 2019. Available online: (accessed on 11 September 2020).\n70. Tiell, S. Create an ethics committee to keep your AI Initiative in check. Harvard Business Review. 15 November 2019. Available online: (accessed on 11 September 2020).\n71. Chamorro-Premuzic, T.; Polli, F.; Dattner, B. Building ethical AI for talent management. Harvard Business Review. 21 November 2019. Available online: (accessed on 11 September 2020).\n72. Fountaine, T.; McCarthy, B.; Saleh, T. Building the AI-powered organization. Harvard Business Review. 1 July 2019. Available online: (accessed on 11 September 2020).\n73. Abbasi, A.; Kitchens, B.; Ahmad, F. The risks of AutoML and how to avoid them. Harvard Business Review. 24 October 2019. Available online: (accessed on 11 September 2020).\n74. Hao, K. Establishing an AI code of ethics will be harder than people think. MIT Technology Review. 21 October 2018. Available online: (accessed on 11 September 2020).\n75. Hao, K. In 2020, let’s stop AI ethics-washing and actually do something. MIT Technology Review. 27 December 2019. Available online: (accessed on 11 September 2020).\n76. Burkhardt, R.; Hohn, N.; Wigley, C. Leading your organization to responsible AI. McKinsey Co. 2 May 2019. Available online: (accessed on 11 September 2020).\n77. Cheatham, B.; Javanmardian, K.; Samandari, H. Confronting the risks of artificial intelligence. McKinsey Co. 26 April 2019. 
Available online: (accessed on 11 September 2020).\n78. Ransbotham, S.; Khodabandeh, S.; Fehling, R.; LaFountain, B.; Kiron, D. Winning with AI: Pioneers Combine Strategy, Organizational Behavior, and Technology; MIT Sloan Management Review and Boston Consulting Group: Boston, MA, USA, 2019. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Winning+with+AI:+Pioneers+Combine+Strategy,+Organizational+Behavior,+and+Technology&author=Ransbotham,+S.&author=Khodabandeh,+S.&author=Fehling,+R.&author=LaFountain,+B.&author=Kiron,+D.&publication_year=2019)]\n79. PWC. A practical guide to Responsible Artificial Intelligence (AI). 2019. Available online: (accessed on 11 September 2020).\n80. Ernst & Young Global Limited. How Do You Teach AI the Value of Trust? Report No. 03880-183Gbl; Ernst & Young Global Limited: London, UK, 2018; Available online: (accessed on 11 September 2020).\n81. KPMG. Controlling AI: The Imperative for Transparency and Explainability. 2019. Available online: (accessed on 11 September 2020).\n82. Deloitte. AI and Risk Management. Available online: (accessed on 11 September 2020).\n83. Accenture. Building Data and Ethics Committees. 2019. Available online: (accessed on 11 September 2020).\n84. Pye, L.W.; Verba, S. Political Culture and Political Development; Princeton University Press: Princeton, NJ, USA, 1965; p. 7. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Political+Culture+and+Political+Development&author=Pye,+L.W.&author=Verba,+S.&publication_year=1965)]\n85. Jobin, A.; Ienca, M.; Vayena, E. The global landscape of AI ethics guidelines. Nat. Mach. Intell. **2019**, 1, 389–399. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+global+landscape+of+AI+ethics+guidelines&author=Jobin,+A.&author=Ienca,+M.&author=Vayena,+E.&publication_year=2019&journal=Nat.+Mach.+Intell.&volume=1&pages=389%E2%80%93399&doi=10.1038/s42256-019-0088-2)] [[CrossRef](https://doi.org/10.1038/s42256-019-0088-2)]\n86. Morley, J.; Floridi, L.; Kinsey, L.; Elhalal, A. From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci. Eng. Ethics **2020**, 26, 2141–2168. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=From+what+to+how:+An+initial+review+of+publicly+available+AI+ethics+tools,+methods+and+research+to+translate+principles+into+practices&author=Morley,+J.&author=Floridi,+L.&author=Kinsey,+L.&author=Elhalal,+A.&publication_year=2020&journal=Sci.+Eng.+Ethics&volume=26&pages=2141%E2%80%932168&doi=10.1007/s11948-019-00165-5)] [[CrossRef](https://doi.org/10.1007/s11948-019-00165-5)][[Green Version](https://link.springer.com/content/pdf/10.1007/s11948-019-00165-5.pdf)]\n87. Gibney, E. The battle for ethical AI at the world’s biggest machine-learning conference. Nature **2020**, 577, 609. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+battle+for+ethical+AI+at+the+world%E2%80%99s+biggest+machine-learning+conference&author=Gibney,+E.&publication_year=2020&journal=Nature&volume=577&pages=609&doi=10.1038/d41586-020-00160-y)] [[CrossRef](https://doi.org/10.1038/d41586-020-00160-y)]\n88. Raji, I.D.; Smart, A.; White, R.N.; Mitchell, M.; Gebru, T.; Hutchinson, B.; Smith-Loud, J.; Theron, D.; Barnes, P. Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. 
In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT\\* ’19), Atlanta, GA, USA, 29–31 January 2019; ACM: New York, NY, USA, 2019; pp. 220–229. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Closing+the+AI+accountability+gap:+Defining+an+end-to-end+framework+for+internal+algorithmic+auditing&conference=Proceedings+of+the+Conference+on+Fairness,+Accountability,+and+Transparency+(FAT*+%E2%80%9919)&author=Raji,+I.D.&author=Smart,+A.&author=White,+R.N.&author=Mitchell,+M.&author=Gebru,+T.&author=Hutchinson,+B.&author=Smith-Loud,+J.&author=Theron,+D.&author=Barnes,+P.&publication_year=2019&pages=220%E2%80%93229)]\n89. Gebru, T.; Morgenstern, J.; Vecchione, B.; Vaughan, J.W.; Wallach, H.; Daumé, H., III; Crawford, K. Datasheets for Datasets. 2020. Available online: (accessed on 11 September 2020).\n90. Mitchell, M.; Wu, S.; Zaldivar, A.; Barnes, P.; Vasserman, L.; Hutchinson, B.; Spitzer, E.; Raji, I.D.; Gebru, T. Model Cards for Model Reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT\\* ’19), Atlanta, GA, USA, 29–31 January; ACM: New York, NY, USA, 2019; pp. 220–229. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Model+Cards+for+Model+Reporting&conference=Proceedings+of+the+Conference+on+Fairness,+Accountability,+and+Transparency+(FAT*+%E2%80%9919)&author=Mitchell,+M.&author=Wu,+S.&author=Zaldivar,+A.&author=Barnes,+P.&author=Vasserman,+L.&author=Hutchinson,+B.&author=Spitzer,+E.&author=Raji,+I.D.&author=Gebru,+T.&publication_year=2019&pages=220%E2%80%93229)]\n91. OpenAI Charter. OpenAI. Available online: (accessed on 11 September 2020).\n92. Brockman, G.; Sutskever, I.; OpenAI LP. OpenAI. 11 March 2019. Available online: (accessed on 11 September 2020).\n93. Smith, R. The future of Face Matching at Axon and AI Ethics Board Report. Axon. 27 June 2019. Available online: (accessed on 11 September 2020).\n94. Piper, K. Exclusive: Google cancels AI ethics board in response to outcry. Vox. 4 April 2019. Available online: (accessed on 11 September 2020).\n95. Google. Google’s Approach to IT Security: A Google White Paper; Google: Mountain View, CA, USA, 2012; Available online: (accessed on 11 September 2020).\n96. Cooper, D. Towards a model of safety culture. Saf. Sci. **2000**, 36, 111–136. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Towards+a+model+of+safety+culture&author=Cooper,+D.&publication_year=2000&journal=Saf.+Sci.&volume=36&pages=111%E2%80%93136&doi=10.1016/S0925-7535(00)00035-7)] [[CrossRef](https://doi.org/10.1016/S0925-7535(00)00035-7)]\n97. Kinstler, L. Ethicists were hired to save tech’s soul. Will anyone let them? Protocol. 5 February 2020. Available online: (accessed on 11 September 2020).\n98. Hao, K. The messy, secretive reality behind OpenAI’s bid to save the world. MIT Technology Review. 17 February 2020. Available online: (accessed on 11 September 2020).\n99. Johnson, K. NeurIPS requires AI researchers to account for societal impact and financial conflicts of interest. VentureBeat. 24 February 2020. Available online: (accessed on 11 September 2020).\n100. Simonite, T. What really happened when Google ousted Timnit Gebru. Wired. 8 June 2021. Available online: (accessed on 15 June 2021).\n101. De Vynck, G.; Bergen, M.; Gallagher, R.; Barr, A. Google fires four employees, citing data-security violations. Bloomberg Law. 25 November 2019. Available online: (accessed on 11 September 2020).\n102. Nicas, J. 
**Figure 1.** AI system lifecycle.\n\n© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ().", "url": "https://www.mdpi.com/2078-2489/12/7/275", "title": "Corporate Governance of Artificial Intelligence in the Public Interest", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2021-07-04T22:00:00Z", "authors": ["Peter Cihon", "Jonas Schuett", "Seth D. Baum"], "summary": [], "id": "be162f0165be8745fc3f5503cabce21c"} {"text": "Abstract\n--------\n\nSuperintelligence is a potential type of future artificial intelligence (AI) that is significantly more intelligent than humans in all major respects. If built, superintelligence could be a transformative event, with potential consequences that are massively beneficial or catastrophic. Meanwhile, the prospect of superintelligence is the subject of major ongoing debate, which includes a significant amount of misinformation. Superintelligence misinformation is potentially dangerous, ultimately leading to bad decisions by the would-be developers of superintelligence and those who influence them. This paper surveys strategies to counter superintelligence misinformation. Two types of strategies are examined: strategies to prevent the spread of superintelligence misinformation and strategies to correct it after it has spread. In general, misinformation can be difficult to correct, suggesting a high value of strategies to prevent it. This paper is the first extended study of superintelligence misinformation. It draws heavily on the study of misinformation in psychology, political science, and related fields, especially misinformation about global warming. The strategies proposed can be applied to lay public attention to superintelligence, AI education programs, and efforts to build expert consensus.\n\nKeywords: [artificial intelligence](/search?q=artificial+intelligence); [superintelligence](/search?q=superintelligence); [misinformation](/search?q=misinformation)\n\n1. Introduction\n----------------\n\nAt present, there is an active scholarly and public debate regarding the future prospect of artificial superintelligence (henceforth just superintelligence), which is artificial intelligence (AI) that is significantly more intelligent than humans in all major respects. While much of the issue remains unsettled, some specific arguments are clearly incorrect, and as such can qualify as misinformation. (As is elaborated below, arguments can qualify as misinformation even when the issues are unsettled.)
More generally, misinformation can be defined as “false or inaccurate information” [[1](#B1-information-09-00244)], or as “information that is initially presented as true but later found to be false” [[2](#B2-information-09-00244)] (p. 1). This paper addresses the question of what can be done to reduce the spread of and belief in superintelligence misinformation.\n\nWhile any misinformation is problematic, superintelligence misinformation is especially worrisome due to the high stakes involved. If built, superintelligence could have transformative consequences, which could be either massively beneficial or catastrophic. Catastrophe is more likely to come from a superintelligence built based on the wrong ideas—and it could also come from not building a superintelligence that would have been based on the right ideas, because a well-designed superintelligence could prevent other types of catastrophe, such that abstaining from building such a superintelligence could result in catastrophe. Thus, the very survival of the human species could depend on avoiding or rejecting superintelligence misinformation. Furthermore, the high stakes of superintelligence have the potential to motivate major efforts to attempt to build it or to prevent others from doing so. Such efforts could include massive investments or restrictive regulations on research and development (R&D), or plausibly even international conflict. It is important for these sorts of efforts to be based on the best available understanding of superintelligence.\n\nSuperintelligence is also an issue that attracts a substantial amount of misinformation. The abundance of misinformation may be due to the many high-profile portrayals of superintelligence in science fiction, the tendency for popular media to circulate casual comments about superintelligence made by various celebrities, and the relatively low profile of more careful scholarly analyses. Whatever the cause, experts and others often find themselves responding to some common misunderstandings [[3](#B3-information-09-00244),[4](#B4-information-09-00244),[5](#B5-information-09-00244),[6](#B6-information-09-00244),[7](#B7-information-09-00244),[8](#B8-information-09-00244),[9](#B9-information-09-00244)].\n\nThere is also potential for superintelligence disinformation: misinformation with the intent to deceive. There is a decades-long history of private industry and anti-regulation ideologues promulgating falsehoods about socio-technological issues in order to avoid government regulations. This practice was pioneered by the tobacco industry in the 1950s and has since been adopted by other industries including fossil fuels and industrial chemicals [[10](#B10-information-09-00244),[11](#B11-information-09-00244)]. AI is increasingly important for corporate profits and thus could be a new area of anti-regulatory disinformation [[12](#B12-information-09-00244)]. The history of corporate disinformation and the massive amounts of profit potentially at stake suggest that superintelligence disinformation campaigns could be funded at a large scale and could be a major factor in the overall issue. Superintelligence disinformation could potentially come from other sources as well, such as governments or even concerned citizens seeking to steer superintelligence debates and practices in particular directions.\n\nFinally, there is the subtler matter of the information that has not yet been established as misinformation, but is nonetheless incorrect. This misinformation is the subject of ongoing scholarly debates.
Active superintelligence debates consider whether superintelligence will or will not be built, whether it will or will not be dangerous, and a number of other conflicting possibilities. Clearly, some of these positions are false and thus can qualify as misinformation. For example, claims that superintelligence will be built and that it will not be built cannot both be correct. However, it is not presently known which positions are false, and there is often no expert consensus on which positions are likely to be false. While the concept of misinformation is typically associated with information that is more obviously false, it nonetheless applies to these subtler cases, which can indeed be “information that is initially presented as true but later found to be false”. Likewise, countering misinformation presents a similar challenge regardless of whether the misinformation is spread before or after expert consensus is reached (though, as discussed below, expert consensus can be an important factor).\n\nIn practical terms, the question then is what to do about it. There have been a number of attempts to reply to superintelligence misinformation in order to set the record straight [[3](#B3-information-09-00244),[4](#B4-information-09-00244),[5](#B5-information-09-00244),[6](#B6-information-09-00244),[7](#B7-information-09-00244),[8](#B8-information-09-00244),[9](#B9-information-09-00244)]. However, to the best of the present author’s knowledge, aside from a brief discussion in [[12](#B12-information-09-00244)], there have been no efforts to examine the most effective ways of countering superintelligence misinformation. Given the potential importance of the matter, a more careful examination is warranted. That is the purpose of this paper. The paper’s discussion is relevant to public debates about superintelligence, to AI education programs (e.g., in university computer science departments), and to efforts to build expert consensus about superintelligence.\n\nIn the absence of dedicated literature on superintelligence misinformation, this paper draws heavily on the more extensive research literature studying misinformation about other topics, especially global warming (e.g., [[10](#B10-information-09-00244),[13](#B13-information-09-00244),[14](#B14-information-09-00244)]), as well as the general literature on misinformation in psychology, cognitive science, political science, sociology, and related fields (for reviews, see [[2](#B2-information-09-00244),[15](#B15-information-09-00244)]). This paper synthesizes insights from these literatures and applies them to the particular circumstances of superintelligence. The paper is part of a broader effort to develop the social science of superintelligence by leveraging insights from other issues [[12](#B12-information-09-00244),[16](#B16-information-09-00244)].\n\nThe paper is organized as follows. [Section 2](#sec2-information-09-00244) presents some examples of superintelligence misinformation, in order to further motivate the overall discussion. [Section 3](#sec3-information-09-00244) surveys the major actors and audiences (i.e., the senders and receivers) of superintelligence misinformation, in order to provide some strategic guidance. [Section 4](#sec4-information-09-00244) presents several approaches for preventing the spread of superintelligence misinformation. [Section 5](#sec5-information-09-00244) presents approaches for countering superintelligence misinformation that has already spread. [Section 6](#sec6-information-09-00244) concludes.\n
2. Examples of Superintelligence Misinformation\n------------------------------------------------\n\nIt is often difficult to evaluate which information about superintelligence is false. This is because superintelligence is a possible future technology that may be substantially different from anything that currently exists, and because it is the subject of a relatively small amount of study. For comparison, other studies of misinformation have looked at such matters as whether Barack Obama was born in the United States, whether childhood vaccines cause autism, and whether Listerine prevents colds and sore throats [[17](#B17-information-09-00244)]. In each of these cases, there is clear and compelling evidence pointing in one direction or the other (the evidence clearly indicates that Obama was born in the US, that vaccines do not cause autism, and that Listerine does not prevent colds or sore throats, despite many claims to the contrary in all three cases). Therefore, an extra degree of caution is warranted when considering whether a particular claim about superintelligence qualifies as misinformation.\n\nThat said, some statements about superintelligence are clearly false. For example, this statement from Steven Pinker: “As far as I know, there are no projects to build an AGI, not just because it would be commercially dubious, but also because the concept is barely coherent” [[18](#B18-information-09-00244)]. The acronym AGI stands for artificial general intelligence, which is a form of AI closely associated with superintelligence. Essentially, AGI is AI that is capable of reasoning across a wide range of domains. AGI may be difficult to build, but the concept is very much coherent. Indeed, it has a substantial intellectual history and ongoing study [[19](#B19-information-09-00244)], including a dedicated research journal (Journal of Artificial General Intelligence) and professional society (the Artificial General Intelligence Society). Furthermore, there are indeed projects to build AGI—one recent survey identifies 45, spread across many countries and institutions, including many for-profit corporations. The largest include DeepMind, acquired by Google in 2014 for £400 million; the Human Brain Project, an international project with $1 billion in funding from the European Commission; and OpenAI, a nonprofit with $1 billion in pledged funding [[20](#B20-information-09-00244)]. (DeepMind and OpenAI explicitly identify as working on AGI. The Human Brain Project does not, but it is working on simulating the human brain, which is considered to be a subfield of AGI [[19](#B19-information-09-00244)].) There is even an AGI project at Pinker’s own university. (Pinker and the AGI project MicroPsi [[21](#B21-information-09-00244)] are both at Harvard University.) Therefore, in the quoted statement, the “as far as I know” part may well be true, but the rest is clearly false. This particular point of misinformation is significant because it conveys the false impression that AGI (and superintelligence) is a nonissue, when in fact it is a very real and ongoing subject of R&D.\n\nA more controversial matter is the debate on the importance of consciousness to superintelligence. Searle [[22](#B22-information-09-00244)] argues that computers cannot be conscious and therefore, at least in a sense, cannot be intelligent, and likewise cannot have motivation to destroy humanity. Similar arguments have been made by Logan [[23](#B23-information-09-00244)], for example.
A counterargument is that the important part is not the consciousness of a computer but its capacity to affect the world [[4](#B4-information-09-00244),[24](#B24-information-09-00244),[25](#B25-information-09-00244)]. It has also been argued that AI could be harmful to humanity even if it is not specifically motivated to do so, because the AI could assess humanity as being in the way of it achieving some other goal [[25](#B25-information-09-00244),[26](#B26-information-09-00244)]. The fact that AI has already shown the capacity to outperform humans in some domains is suggestive of the possibility for it to outperform humans in a wider range of domains, regardless of whether the AI is conscious. However, this is an ongoing area of debate, and indeed Chalmers [[24](#B24-information-09-00244)] (p. 16) writes “I do not think the matter can be regarded as entirely settled”. Regardless, there must be misinformation on one side or the other: computers either can be conscious or they cannot, and consciousness either matters for superintelligence or it does not. Additionally, many parties to the debate maintain that those who believe that consciousness or conscious motivation matter are misinformed [[4](#B4-information-09-00244),[5](#B5-information-09-00244),[7](#B7-information-09-00244),[8](#B8-information-09-00244),[9](#B9-information-09-00244)], though it is not the purpose of this paper to referee this debate.\n\nThere are even subtler debates among experts who believe in the prospect of superintelligence. For example, Bostrom [[25](#B25-information-09-00244)] worries that it would be difficult to test the safety of a superintelligence because it could trick its human safety testers into believing it is safe (the “treacherous turn”), while Goertzel [[27](#B27-information-09-00244)] proposes that the safety testing for a superintelligence would not be so difficult because the AI could be tested before it becomes superintelligent (the “sordid stumble”; the term is from [[28](#B28-information-09-00244)]). Essentially, Bostrom argues that an AI would become capable of deceiving humans before humans realize it is unsafe, whereas Goertzel argues the opposite. Only one of these views can be correct; the other would qualify as misinformation. More precisely, only one of these views can be correct for a given AI system—it is possible that some AI systems could execute a treacherous turn while others would make a sordid stumble. Which view is more plausible is a matter of ongoing study [[28](#B28-information-09-00244),[29](#B29-information-09-00244)]. This debate is important because it factors significantly into the riskiness of attempting to build a superintelligence.\n\nMany additional examples could be presented, such as on the dimensionality of intelligence [[3](#B3-information-09-00244)], the rate of progress in AI [[7](#B7-information-09-00244),[8](#B8-information-09-00244)], the structure of AI goals [[6](#B6-information-09-00244),[7](#B7-information-09-00244),[8](#B8-information-09-00244)], and the relationship between human and AI styles of thinking [[6](#B6-information-09-00244),[8](#B8-information-09-00244)]. However, this is not the space for a detailed survey. Instead, the focus of this paper is on what to do about the misinformation. Likewise, this paper does not wish to take positions on open debates about superintelligence. Some positions may be more compelling, but arguments for or against them are tangential to this paper’s aim of reducing the preponderance of misinformation.
In other words, this paper strives to be largely neutral on which information about superintelligence happens to be true or false. The above remarks by Pinker will occasionally be used as an example of superintelligence misinformation because they are so clearly false, whereas the falsity of other claims is more ambiguous.\n\nThe above examples suggest two types of superintelligence misinformation: information that is already clearly false and information that may later be found to be false. In practice, there may be more of a continuum of how clearly true or false a piece of information is. Nonetheless, this distinction can be a useful construct for efforts to address superintelligence misinformation. The clearly false information can be addressed with the same techniques that are used for standard cases of misinformation, such as Obama’s place of birth. The not-yet-resolved information requires more careful analysis, including basic research about superintelligence, but it can nonetheless leverage some insights from the misinformation literature.\n\nThe fact that superintelligence is full of not-yet-resolved information is important in its own right, and it has broader implications for superintelligence misinformation. Specifically, the extent of expert consensus is an important factor in the wider salience of misinformation. This matter is discussed in more detail below. Therefore, while this paper is mainly concerned with the type of misinformation that is clearly false, it will consider both types. With that in mind, the paper now starts to examine strategies for countering superintelligence misinformation.\n\n3. Actors and Audiences\n------------------------\n\nSome purveyors of superintelligence misinformation can be more consequential than others. Ditto for the audiences for superintelligence misinformation. This is important to bear in mind because it provides strategic direction to any efforts to counter the misinformation. Therefore, this section reviews who the important actors and audiences may be.\n\nAmong the most important are the R&D groups that may be building superintelligence. While they can be influential sources of ideas about superintelligence, they may be especially important as audiences. For example, if they are misinformed regarding the treacherous turn vs. the sordid stumble, then they could fail to correctly assess the riskiness of their AI system.\n\nAlso important are the institutions that support the R&D. At present, most AGI R&D groups are based in either for-profit corporations or universities, and some also receive government funding [[20](#B20-information-09-00244)]. Regulatory bodies within these institutions could ensure that R&D projects are proceeding safely, such as via university research review boards [[30](#B30-information-09-00244),[31](#B31-information-09-00244)]. Successful regulation depends on being well-informed about the nature of AGI and superintelligence and their prospects and risks. The same applies to R&D funding decisions by institutional funders, private donors, and others.
Additionally, while governments are not presently major developers of AGI, except indirectly as funders, they could become important developers should they later decide to do so, and they can meanwhile play important roles in regulation and in facilitating discussion across R&D groups.\n\nCorporations are of particular note due to their long history of spreading misinformation about their own technologies, in particular to convey the impression that the technologies are safer than they actually are [[10](#B10-information-09-00244)]. These corporate actors often wield enormous resources and have a correspondingly large effect on the overall issue, either directly or by sponsoring industry-aligned think tanks, writers, and other intermediaries. At this time, there are only hints of such behavior by AI corporations, but the profitability of AI and other factors suggest the potential for much more [[12](#B12-information-09-00244)].\n\nThought leaders on superintelligence are another significant group. In addition to the aforementioned groups, this also includes people working on other aspects of superintelligence, such as safety and policy issues, as well as people working on other (non-superintelligence) forms of AI, and public intellectuals and celebrities. These are all people who can have outsized influence when they comment on superintelligence. That influence can be on the broader public, as well as in quieter conversations with AGI/superintelligence R&D groups, would-be regulators, and other major decision-makers.\n\nFinally, there is the lay public. The role of the public in superintelligence may be reduced due to the issue being driven by technology R&D that (for now at least) occurs primarily in the private sector. However, the public can play roles as citizens of governments that might regulate the R&D and as consumers of products of the corporations that host the R&D. The significance of the public for superintelligence is not well established at this time.\n\nWhile the above groups are presented in approximate order of importance, it would not be appropriate to formally rank them. What matters is not the importance of the group but the quality of the opportunity that one has to reduce misinformation. This will tend to vary heavily by the circumstances of whoever is seeking to reduce the extent of superintelligence misinformation.\n\nWith that in mind, the paper now turns to strategies for reducing superintelligence misinformation.\n\n4. Preventing Superintelligence Misinformation\n-----------------------------------------------\n\nThe cliché “an ounce of prevention is worth a pound of cure” may well be an understatement for misinformation. An extensive empirical literature finds that once misinformation enters into someone’s mind, it can be very difficult to remove.\n\nEarly experiments showed that people can even make use of information that they acknowledge to be false. In these experiments, people were told a story and were then told that some information in the story was false. When asked, subjects would correctly acknowledge the information to be false, but they would also use it in retelling the story as if it were true. For example, the story could be about a fire caused by volatile chemicals, and then it is later explained that there were no volatile chemicals present. Subjects would acknowledge that the volatile chemicals were absent but then cite them as the cause of the fire. This is logically incoherent.
The fact that people do this speaks to the cognitive durability that misinformation can have [[32](#B32-information-09-00244),[33](#B33-information-09-00244)].\n\nThe root of the matter appears to be that human memory does not simply write and overwrite like computer memory. Corrected misinformation does not vanish. Ecker et al. [[15](#B15-information-09-00244)] trace this to the conflicting needs for memory stability and flexibility:\n\n> Human memory is faced with the conundrum of maintaining stable memory representations (which is the whole point of having a memory in the first place) while also allowing for flexible modulation of memory representations to keep up-to-date with reality. Memory has evolved to achieve both of these aims, and hence it does not work like a blackboard: Outdated things are rarely actually wiped out and over-written; instead, they tend to linger in the background, and access to them is only gradually lost.[[15](#B15-information-09-00244)] (p. 15)\n\nThere are some techniques for reducing the cognitive salience of misinformation; these are discussed in detail below. However, in many cases, it would be highly desirable to simply avoid the misinformation in the first place. Therefore, this section presents some strategies for preventing superintelligence misinformation.\n\nThe ideas for preventing superintelligence misinformation are inevitably more speculative than those for correcting it. There are two reasons for this. One is that the correction of misinformation has been the subject of a relatively extensive literature, while the prevention of misinformation has received fairly little scholarly attention. (Rare examples of studies on preventing misinformation are [[34](#B34-information-09-00244),[35](#B35-information-09-00244)].) The other reason is that the correction of misinformation is largely cognitive and thus conducive to simple laboratory experiments, whereas the prevention of misinformation is largely sociological and thus requires a more complex and case-specific analysis. Nonetheless, given the importance of preventing superintelligence misinformation, it is important to consider potential strategies for doing so.\n\n#### 4.1. Educate Prominent Voices about Superintelligence\n\nPerhaps the most straightforward approach to preventing superintelligence misinformation is to educate people who have prominent voices in discussions about superintelligence. The aim here is to give them a more accurate understanding of superintelligence so that they can pass that along to their respective audiences. Prominent voices about superintelligence can include select scholars, celebrities, or journalists, among others.\n\nEducating the prominent may be easier said than done. For starters, they can be difficult to access, due to busy schedules and multitudes of other voices competing for their attention. Additionally, some of them may already believe superintelligence misinformation, especially those who are already spreading it. Misinformation is difficult to correct in general, and may be even more difficult to correct for busy people who lack the mental attention to revise their thinking. (See [Section 5.4](#sec5dot4-information-09-00244) for further discussion of this point.) People already spreading misinformation may seem to be ideal candidates for educational efforts, in order to persuade them to change their tune, but it may actually be more productive to engage with people who have not yet made up their minds.
Regardless, there is no universal formula for this sort of engagement, and the best opportunities may often be a matter of particular circumstance.\n\nOne model that may be of some value is the effort to improve the understanding of global warming among broadcast meteorologists. Broadcast meteorologists are, for many people, the primary messengers of environmental science. Furthermore, as a group, meteorologists (broadcast and non-broadcast) have traditionally been more skeptical about global warming than most of their peers in other Earth sciences [[36](#B36-information-09-00244),[37](#B37-information-09-00244)]. In light of this, several efforts have been made to provide broadcast meteorologists with a better understanding of climate science, in hopes that they would pass this on to their audiences (e.g., [[38](#B38-information-09-00244),[39](#B39-information-09-00244)]).\n\nThe case of broadcast meteorologists has important parallels to the many AI computer scientists who do not specialize in AGI or superintelligence. Both groups have expertise on a topic that is closely related to, but not quite the same as, the topic at hand. Broadcast meteorologists’ expertise is weather, whereas global warming is about climate. (Weather concerns the day-to-day fluctuations in meteorological conditions, whereas climate concerns the long-term trends. An important distinction is that while weather can only be forecast a few days in advance, climate can be forecast years or decades in advance.) Similarly, most AI computer scientists focus on AI that has “narrow” intelligence (intelligence in a limited range of domains), not AGI. Additionally, broadcast meteorologists and narrow AI computer scientists are often asked to voice their views on climate change and AGI, respectively.\n\n#### 4.2. Create Reputational Costs for Misinformers\n\nWhen prominent voices cannot be persuaded to change their minds, they can at least be punished for getting it wrong. Legal punishment is possible in select cases ([Section 4.5](#sec4dot5-information-09-00244)). However, reputational punishment is almost always possible and has the potential to be quite effective, especially for public intellectuals whose brands depend on a good intellectual reputation.\n\nIn an analysis of US healthcare policy debates, Nyhan [[40](#B40-information-09-00244)] concludes that correcting misinformation is extremely difficult and that increasing reputational costs may be more effective. Nyhan [[40](#B40-information-09-00244)] identifies misinformation that was critical to two healthcare debates: in the 1990s, the false claim that the policy proposed by President Bill Clinton would prevent people from keeping their current doctors, and in the 2000s, the false claim that the policy proposed by President Barack Obama would have established government “death panels” to deny life-sustaining coverage to the elderly. Nyhan [[40](#B40-information-09-00244)] traces this misinformation to Betsy McCaughey, a scholar and politician generally allied with US conservative politics and opposed to these healthcare policy proposals:\n\n> “Until the media stops giving so much attention to misinformers, elites on both sides will often succeed in creating misperceptions, especially among sympathetic partisans. And once such beliefs take hold, few good options exist to counter them—correcting misperceptions is simply too difficult.
The most effective approach may therefore be for concerned scholars, citizens, and journalists to (a) create negative publicity for the elites who are promoting misinformation, increasing the costs of making false claims in the public sphere, and (b) pressure the media to stop providing coverage to serial dissemblers”.[[40](#B40-information-09-00244)] (p. 16)\n\nNyhan [[40](#B40-information-09-00244)] further notes that while McCaughey’s false claims were widely praised in the 1990s, including with a National Magazine Award, they were heavily criticized in the 2000s, damaging her reputation and likely reducing the spread of the misinformation.\n\nThere is some evidence indicating the possibility that reputational threats can succeed at reducing misinformation. Nyhan and Reifler [[34](#B34-information-09-00244)] sent a randomized group of US state legislators a series of letters warning them about the reputational and electoral harms that the legislators could face if an independent fact checker (specifically, PolitiFact) finds them to make false statements. The study found that the legislators receiving the warnings were significantly less likely to make false statements. This finding is especially applicable to superintelligence misinformation spread by politicians, whose statements are more likely to be evaluated by a fact checker like PolitiFact. Conceivably, similar fact checking systems could be developed for other types of public figures, or even for more low-profile professional discourse such as occurs among scientists and other technical experts. Similarly, Tsipursky and Morford [[41](#B41-information-09-00244)] and Tsipursky et al. [[35](#B35-information-09-00244)] describe a Pro-Truth Pledge aimed at committing people to refrain from spreading misinformation and to ask other people to retract misinformation, which can serve as a reputational punishment for misinformers, as well as a reputational benefit for those who present accurate information. Initial evaluations provide at least anecdotal support for the pledge having a positive effect on the information landscape.\n\nFor superintelligence misinformation, creating reputational costs has the potential to be highly effective. A significant portion of influential voices in the debate have scholarly backgrounds and reputations that they likely wish to protect. For example, many of Steven Pinker’s remarks about superintelligence are clearly misinformed, including the one discussed in [Section 2](#sec2-information-09-00244) and several in his recent book Enlightenment Now [[42](#B42-information-09-00244)]. (For detailed analysis of Enlightenment Now, see Torres [[9](#B9-information-09-00244)].) Given Pinker’s scholarly reputation, it may be productive to spread a message such as ‘Steven Pinker is unenlightened about AI’.\n\nAt the same time, it is important to recognize the potential downsides of imposing reputational costs. Criticizing a person can damage one’s relationship with them, reducing other sorts of opportunities. For example, criticizing people who may be building superintelligence could make them less receptive to other efforts to make their work safer. (Or, it could make them more receptive—this can be highly specific to individual personalities and contexts.) Additionally, it can impose reputational costs on the critic, such as a reputation of negativity or of seeking to restrict free speech.
Caution is especially warranted for cases in which the misinformation comes from a professional contrarian, who may actually benefit from and relish the criticism. For example, Marshall [[43](#B43-information-09-00244)] (pp. 72–73) warns climate scientists against debating professional climate deniers, since the latter tend to be more skilled at debate, especially televised debate, even though the arguments of the former are more sound. The same could apply to superintelligence, if it ever comes to have a similar class of professional debaters. Thus, the imposition of reputational costs is a strategy to pursue selectively in certain instances of superintelligence misinformation.

#### 4.3. Mobilize against Institutional Misinformation

The most likely institutional sources of superintelligence misinformation are the corporations involved in AI R&D, especially R&D for AGI and superintelligence. These companies have a vested interest in cultivating the impression that their technologies are safe and good for the world.

For these companies, reputational costs can also be significant. Corporate reputation can be important for consumer interest in the companies’ products, citizen and government interest in imposing regulations on the companies, investor expectations of future profits, and employee interest in working for the companies. Therefore, one potential strategy is to incentivize companies to align their reputations with accurate information about superintelligence.

A helpful point of comparison is corporate messaging about environmental issues, in particular the distinction between “greenwashing” and “brownwashing” [[44](#B44-information-09-00244)]. Greenwashing is when a company portrays itself as protecting the environment when it is actually causing much environmental harm. For example, a fossil fuel company may publicize the greenhouse gas emissions reductions from solar panels it installs on its headquarters building while downplaying the fact that its core business model is a major driver of greenhouse gas emissions. In contrast, brownwashing is when a company declines to publicize its efforts towards environmental protection, perhaps because it has customers who oppose environmental protection or investors who worry that it reduces profitability. In short, greenwashing is aimed at audiences that value environmental protection, while brownwashing is aimed at audiences that disvalue it.

Greenwashing is often criticized for giving companies a better environmental reputation than they deserve. In many cases that criticism may be fair. However, from an environmental communication standpoint, greenwashing does have the benefit of promoting a pro-environmental message. At a minimum, audiences of greenwashing are told that environmental protection is important. Audiences may also be given accurate information about environmental issues—for example, an advertisement that touts a fossil fuel company’s greenhouse gas emissions reductions may also correctly explain that global warming is real and is caused by human action.

Similarly, there may be value in motivating AI companies to present accurate messages about superintelligence. This could be accomplished by cultivating demand for accurate messages among the companies’ audiences. For example, if the public wants to hear accurate messages about superintelligence, then corporate advertising may be designed accordingly.
The advertising might overstate the company’s positive role, which would be analogous to greenwashing and could likewise be harmful by reducing accountability for bad corporate actors, but even then it would at least be spreading an accurate message about superintelligence.

Another strategy is for the employees of AI companies to mobilize against the companies supporting superintelligence misinformation, or against misinformation in general. At present, this may be a particularly promising strategy. There is a notable recent precedent for this in the successful employee action against Google’s participation in Project Maven, a defense application of AI [[45](#B45-information-09-00244)]. While not specifically focused on misinformation, this incident demonstrates the potential for employee action to change the practices of AI companies, including when those practices would otherwise be profitable for the company.

#### 4.4. Focus Media Attention on Constructive Debates

Public media can inadvertently spread misinformation via the journalistic norm of balance. For the sake of objectivity, journalists often aim to cover “both sides” of an issue. While this can be constructive for some issues, it can also spread misinformation. For example, media coverage has often presented “both sides” of the “debate” over whether tobacco causes cancer or whether human activity causes global warming, even when one side is clearly correct and the other side has a clear conflict of interest [[10](#B10-information-09-00244),[13](#B13-information-09-00244)].

One potential response to this is to attempt to focus media attention on legitimate open questions about a given issue, questions for which there are two meaningful sides to cover. For global warming, this could be a debate over the appropriate role of nuclear power or the merits of carbon taxes. For superintelligence, it could be a debate over the appropriate role of government regulations, or over the values that superintelligence (or AI in general) should be designed to promote. These sorts of debates satisfy the journalistic interest in covering two sides of an issue and provide a dramatic tension that can make for a better story, all while drawing attention to important open questions and affirming basic information about the topic.

#### 4.5. Establish Legal Requirements

Finally, there may be some potential to legally require certain actors, especially corporations, to refrain from spreading misinformation. A notable precedent is the court decision in United States v. Philip Morris, in which nine tobacco companies and two tobacco trade organizations were found guilty of conspiring to deceive the public about the link between tobacco and cancer. Such legal decisions can have a powerful effect.

However, legal requirements may be poorly suited to superintelligence misinformation. First, legal requirements can be slow to develop. The court case United States v. Philip Morris began in 1999, an initial ruling was reached in 2006, and that ruling was upheld in 2009. Furthermore, United States v. Philip Morris came only after several decades of tobacco industry misinformation. Given the evolving nature of AI technology, it could be difficult to pin down which information is correct over such long time periods. Second, superintelligence is a future technology for which much of the correct information cannot be established with the same degree of rigor.
Furthermore, if and when superintelligence is built, it could be so transformative as to render current legal systems irrelevant. (For more general discussion of the applicability of legal mechanisms to superintelligence, see [[46](#B46-information-09-00244),[47](#B47-information-09-00244),[48](#B48-information-09-00244)].) For these reasons, legal requirements are less likely to play a significant role in preventing superintelligence misinformation.

5. Correcting Superintelligence Misinformation
-----------------------------------------------

Correcting misinformation is sufficiently difficult that it will often be better to prevent it from spreading in the first place. However, when superintelligence misinformation cannot be prevented, there are strategies available for correcting it in the minds of those who are exposed to it. Correcting misinformation is the subject of a fairly extensive literature in psychology, political science, and related fields [[2](#B2-information-09-00244),[15](#B15-information-09-00244),[33](#B33-information-09-00244),[49](#B49-information-09-00244)]. For readers unfamiliar with this literature, Cook et al. [[2](#B2-information-09-00244)] provide an introductory overview accessible to an interdisciplinary readership, while Ecker et al. [[15](#B15-information-09-00244)] provide a more detailed and technical survey. This section applies this literature to the correction of superintelligence misinformation.

#### 5.1. Build Expert Consensus and the Perception Thereof

At present, there exists substantial expert disagreement about a wide range of aspects of superintelligence, from basic matters such as whether superintelligence is possible [[50](#B50-information-09-00244),[51](#B51-information-09-00244),[52](#B52-information-09-00244)] and when it might occur if it does [[53](#B53-information-09-00244),[54](#B54-information-09-00244),[55](#B55-information-09-00244)] to subtler matters such as the treacherous turn vs. the sordid stumble. The situation stands in contrast to the extensive expert consensus on other issues such as global warming [[56](#B56-information-09-00244)]. (Experts lack consensus on some important details about global warming, such as how severe the damage is likely to be, but they have a high degree of consensus on the basic contours of the issue.)

The case of global warming shows that expert consensus on its own does not counteract misinformation. On the contrary, misinformation about global warming continues to thrive despite the existence of consensus. However, there is reason to believe that the consensus helps. For starters, much of the misinformation is specifically oriented towards creating the false perception that there is no consensus [[10](#B10-information-09-00244)]. The scientific consensus is a target of misinformation because it is believed to be an important factor in people’s overall beliefs. Indeed, several studies have documented a strong correlation among the lay public between rejection of the science of global warming and belief that there is no consensus [[57](#B57-information-09-00244),[58](#B58-information-09-00244)]. Further studies find that presenting messages describing the consensus increases belief in climate science and support for policy to reduce greenhouse gas emissions [[14](#B14-information-09-00244),[59](#B59-information-09-00244)]. Notably, this effect is observed for people across the political spectrum, including those who would have political motivation to doubt the science.
(Such motivations are discussed further in [Section 5.2](#sec5dot2-information-09-00244).) All of this indicates an important role for expert consensus in broader beliefs about global warming.

For superintelligence, at present there is no need to spread misinformation about the existence of consensus because there is rather little consensus. Therefore, a first step is to work towards consensus. (This of course should be consensus grounded in the best possible analysis, not consensus for the sake of consensus.) This may be difficult for superintelligence because of the inherent challenge of understanding future technologies and the complexity of advanced AI. Global warming has its own complexities, but the core science is relatively simple: increased atmospheric greenhouse gas concentrations trap heat and raise temperatures. However, at least some aspects of superintelligence should be easy enough to reach consensus on, starting with the fact that there are a number of R&D groups attempting to build AGI. Other aspects may be more difficult to build consensus on, but this consensus is at least something that can be pursued via the normal channels of expert communication: research articles, conference symposia, private correspondence, and so on.

Given the existence of consensus, it is also important to raise awareness about it. The consensus cannot counteract misinformation if nobody knows about it. The global warming literature provides good models for documenting expert consensus [[56](#B56-information-09-00244)], and such findings of consensus can likewise be publicized.

#### 5.2. Address Pre-Existing Motivations for Believing Misinformation

The human mind tends not to process new information in isolation, but instead processes it in relation to wider beliefs and understandings of the world. This can be very valuable, enabling us to understand the context behind new information and relate it to existing knowledge. For example, people would typically react with surprise and confusion upon seeing an object rise up to the ceiling instead of fall down to the floor. This new information is related to a wider understanding of the fact that objects fall downwards. People may even struggle to believe their own eyes unless there is a compelling explanation. (For example, perhaps the object and the ceiling are both magnetized.) Additionally, if people did not see it with their own eyes, but instead heard it reported by someone else, they may be even less likely to believe it. In other words, they are motivated to believe that the story is false, even if it is true. This phenomenon is known as motivated reasoning.

While generally useful, motivated reasoning can be counterproductive in the context of misinformation, prompting people to selectively believe misinformation over correct information. This occurs in particular when the misinformation accords better with preexisting beliefs than the correct information. In the above example, the misinformation could be that the object fell down to the floor instead of rising to the ceiling.

Motivated reasoning is a major factor in the belief of misinformation about politically contentious issues such as climate change. The climate science consensus is rejected mainly by people who believe that government regulation of industry is generally a bad thing [[14](#B14-information-09-00244),[59](#B59-information-09-00244)]. In principle, belief that humans are warming the planet should have nothing to do with belief that government regulations are harmful.
It is logically coherent to believe in global warming yet argue that carbon emissions should not be regulated. However, in practice, the science of global warming often threatens people’s wider beliefs about regulations, and so they find themselves motivated to reject the science.

Motivated reasoning can also be a powerful factor in beliefs about superintelligence. A basic worldview is that humans are in control. Per this worldview, human technology is a tool; the idea that it could rise up against humanity is a trope for science fiction, not something to be taken seriously. The prospect of superintelligence threatens this worldview, predisposing people not to take superintelligence seriously. In this context, it may not help that media portrayals of the scholarly debate about superintelligence commonly include references to science fiction, such as by using pictures of the Terminator. As one expert who is concerned about superintelligence states, “I think that at this point all of us on all sides of this issue are annoyed with the journalists who insist on putting a picture of the Terminator on every single article they publish of this topic” [[60](#B60-information-09-00244)].

Motivated reasoning has been found to be linked to people’s sense of self-worth. As one study puts it, “the need for self-integrity—to see oneself as good, virtuous, and efficacious—is a basic human motivation” [[61](#B61-information-09-00244)] (p. 415). When correct information threatens people’s self-worth, they are more motivated to instead believe misinformation, so as to preserve their self-worth. Furthermore, motivated reasoning can be reduced by having people consciously reaffirm their own self-worth, such as by recalling to themselves ways in which they successfully live up to their personal values [[61](#B61-information-09-00244)]. Essentially, with their sense of self-worth firmed up, they become more receptive to information that would otherwise threaten their self-worth.

As a technology that could outperform humans, superintelligence could pose an especially pronounced threat to people’s sense of self-worth. It may be difficult for people to feel good and efficacious if they would soon be superseded by computers. For at least some people, this could be a significant reason to reject information about the prospect of superintelligence, even if that information is true. For this reason, it may be valuable for messages about superintelligence to be paired with messages of affirmation.

Another important set of motivations comes from the people active in superintelligence debates. Many people in the broader computer science field of AI have been skeptical of claims about superintelligence. These people may be motivated by a desire to protect the reputation and funding of the field of AI, and in turn protect their self-worth as AI researchers. AI has a long history of boom-bust cycles in which hype about superintelligence and related advanced AI falls flat and contributes to an “AI winter”. Peter Bentley, an AI computer scientist who has spoken out against contemporary claims about superintelligence, is explicit about this:

> “Large claims lead to big publicity, which leads to big investment, and new regulations. And then the inevitable reality hits home. AI does not live up to the hype. The investment dries up. The regulation stifles innovation. And AI becomes a dirty phrase that no-one dares speak. Another AI Winter destroys progress” [[62](#B62-information-09-00244)] (p. 10).
> “Do not be fearful of AI—marvel at the persistence and skill of those human specialists who are dedicating their lives to help create it. And appreciate that AI is helping to improve our lives every day” (p. 11).

While someone’s internal motivations can only be inferred from such text, the text is at least suggestive of motivations to protect self-worth and livelihood as an AI researcher, as well as a worldview in which AI is a positive force for society.

To take another example, Torres [[9](#B9-information-09-00244)] proposes that Pinker’s dismissal of AGI and superintelligence is motivated by Pinker’s interest in promoting a narrative in which science and technology bring progress—a narrative that could be threatened by the potential catastrophic risk from superintelligence.

Conversely, some people involved in superintelligence debates may be motivated to believe in the prospect of superintelligence. For example, researcher Jürgen Schmidhuber writes on his website that “since age 15 or so, the main goal of professor Jürgen Schmidhuber has been to build a self-improving Artificial Intelligence (AI) smarter than himself, then retire” [[63](#B63-information-09-00244)]. Superintelligence is also sometimes considered the “grand dream” of AI [[64](#B64-information-09-00244)]. Other common motivations include a deep interest in transformative future outcomes [[65](#B65-information-09-00244)] and a deep concern about extreme catastrophic risks [[4](#B4-information-09-00244),[66](#B66-information-09-00244),[67](#B67-information-09-00244)]. People with these worldviews may be predisposed to believe certain types of claims about superintelligence. If it turns out that superintelligence will not be built, or would not have transformative or catastrophic effects, then this can undercut people’s deeply held beliefs in the importance of superintelligence, transformative futures, and/or catastrophic risks.

For each of these motivations for interest in superintelligence, there can be information that is rejected because it cuts against the motivations and misinformation that is accepted because it supports the motivations. Therefore, in order to advance superintelligence debates, it can be valuable to affirm people’s motivations when presenting conflicting information. For example, one could affirm that AI computer scientists are making impressive and important contributions to the world, and then explain reasons why superintelligence may nonetheless be a possibility worth considering. One could affirm that science and technology are bringing a great deal of progress, and then explain reasons why some technologies could nonetheless be dangerous. One could affirm that superintelligence is indeed a worthy dream, or that transformative futures are indeed important to pay attention to, and then explain reasons why superintelligence might not be built. Finally, one could affirm that extreme catastrophic risks are indeed an important priority for human society, and then explain reasons why superintelligence may not be such a large risk after all. These affirming messaging strategies could predispose participants in superintelligence debates to consider a wider range of possibilities and make more progress on the issue, including progress towards expert consensus.

Another strategy is to align motivations with accurate beliefs about superintelligence. For example, some AI computer scientists may worry that belief in the possibility of superintelligence could damage reputation and funding.
However, if belief in the possibility of superintelligence would bring reputational and funding benefits, then the same people may be more comfortable expressing such belief. Reputational benefits could be created, for example, via slots in high-profile conferences and journals, or by association with a critical mass of reputable computer scientists who also believe in the possibility of superintelligence. Funding could likewise be made available. Noting that funding and space in conferences and journals are often scarce resources, it could be advantageous to target these resources at least in part toward shifting the motivations of important actors in superintelligence debates. This example of course assumes that it is correct to believe in the possibility of superintelligence. The same general strategy of aligning motivations may likewise be feasible for other beliefs about superintelligence.

The above examples—concerning the reputations and funding of AI computer scientists, the possibility of building superintelligence, and the importance of transformative futures and catastrophic risks—all involve experts or other communities that are relatively attentive to the prospect of superintelligence. Other motivations could be significant for the lay public, policy makers, and other important actors. Research on the public understanding of science finds that cultural factors, such as political ideology, can factor significantly in the interpretation of scientific information [[68](#B68-information-09-00244),[69](#B69-information-09-00244)]. Kahan et al. [[69](#B69-information-09-00244)] (p. 79) propose to “shield” scientific evidence and related information “from antagonistic cultural information”. For superintelligence, this could mean attempting to frame superintelligence (or, more generally, AI) as a nonpartisan social issue. At least in the US, if an issue becomes politically partisan, legislation typically becomes substantially more difficult to pass. Likewise, discussions of AI and superintelligence should, where reasonably feasible, attempt to avoid close association with polarizing ideologies and cultural divisions.

The fact that early US legislation on AI has been bipartisan is encouraging. For example, H.R.4625, the FUTURE of Artificial Intelligence Act of 2017, was sponsored by John Delaney (Democrat) and co-sponsored by Pete Olson (Republican), and H.R.5356, the National Security Commission Artificial Intelligence Act of 2018, was sponsored by Elise Stefanik (Republican) and co-sponsored by James Langevin (Democrat). This is a trend that should be praised and encouraged to continue.

#### 5.3. Inoculate with Advance Warnings

The misinformation literature has developed the concept of inoculation, in which people are preemptively educated about a piece of misinformation so that they will not believe it if and when they later hear it. For example, someone might be told that there is a false rumor that vaccines cause autism, such that when they later hear the rumor, they know to recognize it as false. The aim is to get people to correctly understand the truth about a piece of misinformation from the beginning, so that their minds never falsely encode it. Inoculation has been found to work better than simply telling people the correct information [[70](#B70-information-09-00244)].

Inoculation messages can include why a piece of misinformation is incorrect as well as why it is being spread [[71](#B71-information-09-00244)].
For example, misinformation casting doubt on the idea that global temperatures are rising could be inoculated against with an explanation of how scientists have established that global temperatures are rising. The inoculation could also explain that industries are intentionally casting doubt about global temperature increases in order to avoid regulations and increase profits. Likewise, for superintelligence, misinformation claiming that there are no projects seeking to build AGI could be inoculated against by explanations of the existence of AGI R&D projects, and perhaps also explanations of the motivations of people who claim that there are no such projects. For example, Torres [[9](#B9-information-09-00244)] proposes that Pinker’s dismissal of AGI and superintelligence is motivated by Pinker’s interest in promoting a narrative in which science and technology bring progress—a narrative that could be threatened by the potential catastrophic risk from superintelligence.

#### 5.4. Explain Misinformation and Corrections

When people are exposed to misinformation, it can be difficult to correct, as first explained in [Section 4](#sec4-information-09-00244). This phenomenon has been studied in great depth, with the terms “continued influence” and “belief perseverance” used for cases in which debunked information continues to influence people’s thinking [[72](#B72-information-09-00244),[73](#B73-information-09-00244)]. There is also an “illusion of truth”, in which information explained to be false is later misremembered as true—essentially, the mind remembers the information but forgets its falsity [[74](#B74-information-09-00244)]. The difficulty of correcting misinformation is why this paper has emphasized strategies to prevent misinformation from spreading in the first place.

Adding to the challenge is the fact that attempts to debunk misinformation can inadvertently reinforce it. This phenomenon is known as the “backfire effect” [[74](#B74-information-09-00244)]. Essentially, when someone hears “X is false”, it can strengthen their mental representation of X, thereby reinforcing the misinformation. This effect has been found to be especially pronounced among the elderly [[74](#B74-information-09-00244)]. One explanation is that correcting the misinformation (i.e., successfully processing “X is false”) requires the use of strategic memory, but strategic memory requires dedicated mental effort and is less efficient among the elderly [[15](#B15-information-09-00244)]. Unless enough strategic memory is allocated to processing “X is false”, the statement can end up reinforcing belief in X.

These findings about the backfire effect have important consequences for superintelligence misinformation. Fortunately, many important audiences for superintelligence misinformation are likely to have strong strategic memories. Among the prominent actors in superintelligence debates, relatively few are elderly, and many of them have intellectual pedigrees that may endow them with strong strategic memories. On the other hand, many of the prominent actors are busy people with limited mental energy available for processing corrections of superintelligence misinformation.
As a practical matter, people attempting to debunk superintelligence misinformation should generally avoid “X is false” messages, especially when their audience may be paying limited attention.

One technique that has been particularly successful at correcting misinformation is the use of refutational text, which provides detailed explanations of why the misinformation is incorrect, what the correct information is, and why it is correct. Refutational text has been used mainly as a classroom tool for helping students overcome false preexisting beliefs about course topics [[75](#B75-information-09-00244),[76](#B76-information-09-00244)]. Refutational text has even been used to turn misinformation into a valuable teaching tool [[77](#B77-information-09-00244)]. A meta-analysis found refutational text to be the most effective technique for correcting misinformation in the context of science education—that is, for enabling students to overcome preexisting misconceptions about science topics [[78](#B78-information-09-00244)].

A drawback of refutational text is that it can require more effort and attention than simpler techniques. Refutational text may be a valuable option in classrooms or other settings in which one has an audience’s extended attention. Such settings include many venues of scholarly communication, which can be important for superintelligence debates. However, refutational texts may be less viable in other settings, such as social media and television news program interviews, in which one can often only get in a short sound bite. Therefore, refutational text may be relatively well-suited for interactions with experts and other highly engaged participants in superintelligence debates, and relatively poorly suited for much of the lay public and others who may only hear occasional passing comments about superintelligence. That said, it may still be worth producing and disseminating extended refutations for lay public audiences, such as in long-format videos and articles for television, magazines, and online. These may tend to only reach the most motivated segments of the lay public, but they can nonetheless be worthwhile.

6. Conclusions
---------------

Superintelligence is a high-stakes potential future technology as well as a highly contested socio-technological issue. It is also fertile terrain for misinformation. Making progress on the issue requires identifying and rejecting misinformation and accepting accurate information. Some progress will require technical research to clarify the nature of superintelligence. However, a lot of progress will likely also require the sorts of sociological and psychological strategies outlined in this paper. The most progress may come from interdisciplinary projects connecting computer science, social science, and other relevant fields. Computer science is a highly technical field, but as with all fields, it is ultimately composed of human beings. By appreciating the nuances of the human dimensions of the field, it may be possible to make better progress towards understanding superintelligence and acting responsibly about it.

As the first dedicated study of strategies for countering superintelligence misinformation, this paper has taken a broad view, surveying a range of options. Despite this breadth, there may still be additional options worth further attention. Indeed, this paper has only mined a portion of the insights contained within the existing literature on misinformation.
There may also be compelling options that go beyond the literature. Likewise, because of this paper’s breadth, it has given relatively shallow treatment to each of the options. More detailed attention to the various options would be another worthy focus of future research.

An especially valuable focus would be the proposed strategies for preventing superintelligence misinformation. Because misinformation can be so difficult to correct, preventing it may be the more effective strategy. There is also less prior research on the prevention of misinformation. For these reasons, there is likely to be an abundance of important research opportunities on the prevention of misinformation, certainly for superintelligence misinformation and perhaps also for misinformation in general.

For the prevention of superintelligence misinformation, a strategy that may be particularly important to study further is dissuading AI corporations from using their substantial resources to spread superintelligence misinformation. The long history of corporations engaging in such tactics, with a major impact on the surrounding debates, suggests that this could be a highly important factor for superintelligence [[12](#B12-information-09-00244)]. It may be especially valuable to study this at an early stage, before such tactics are adopted.

For the correction of superintelligence misinformation, a particularly promising direction concerns the motivations and worldviews of prominent actors and audiences in superintelligence debates. Essentially, what are people’s motivations with respect to superintelligence? Are AI experts indeed motivated to protect their field? Are superintelligence developers motivated by the “grand dream”? Are others who believe in the prospect of superintelligence motivated by beliefs about transformative futures or catastrophic risks? Can attention to these sorts of motivations help these actors overcome their divergent worldviews and make progress towards consensus on the topic? Finally, are people in general motivated to retain their sense of self-worth in the face of a technology that could render them inferior?

Most important, however, is not the research on superintelligence misinformation, but the efforts to prevent and correct it. It can often be stressful and thankless work, especially amidst the heated debates, but it is essential to ensuring positive outcomes. This paper is one effort towards helping this work succeed. Given the exceptionally high potential stakes, it is vital that decisions about superintelligence be well-informed.

Funding
-------

This research received no external funding.

Acknowledgments
---------------

Olle Häggström, Tony Barrett, Brendan Nyhan, Maurizio Tinnirello, Stephan Lewandowsky, Michael Laakasuo, Phil Torres, and three anonymous reviewers provided helpful feedback on earlier versions of this paper. All remaining errors are the author’s alone. The views expressed in this paper are the author’s and not necessarily the views of the Global Catastrophic Risk Institute.

Conflicts of Interest
---------------------

The author declares no conflict of interest.

References
----------

1. Definition of Misinformation in English by Oxford Dictionaries. Available online: (accessed on 9 September 2018).
2. Cook, J.; Ecker, U.; Lewandowsky, S. Misinformation and how to correct it. In Emerging Trends in the Social and Behavioral Sciences; John Wiley & Sons: Hoboken, NJ, USA, 2015; pp. 1–17.
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Misinformation+and+how+to+correct+it&author=Cook,+J.&author=Ecker,+U.&author=Lewandowsky,+S.&publication_year=2015&pages=1%E2%80%9317)]\n3. Kelly, K. The Myth of a Superhuman AI. Available online: (accessed on 24 September 2018).\n4. Häggström, O. Here Be Dragons: Science, Technology and the Future of Humanity; Oxford University Press: Oxford, UK, 2016. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Here+Be+Dragons:+Science,+Technology+and+the+Future+of+Humanity&author=H%C3%A4ggstr%C3%B6m,+O.&publication_year=2016)]\n5. Häggström, O. Michael Shermer Fails in His Attempt to Argue That AI Is Not an Existential Threat. Häggström Hävdar. 19 September 2017. Available online: [http://haggstrom.blogspot.com/2017/09/michael-shermer-fails-in-his-attempt-to.html](https://haggstrom.blogspot.com/2017/09/michael-shermer-fails-in-his-attempt-to.html) (accessed on 24 September 2018).\n6. Häggström, O. The AI meeting in Brussels Last Week. Häggström Hävdar. 23 October 2017. Available online: [http://haggstrom.blogspot.com/2017/10/the-ai-meeting-in-brussels-last-week.html](https://haggstrom.blogspot.com/2017/10/the-ai-meeting-in-brussels-last-week.html) (accessed on 9 September 2018).\n7. Muehlhauser, L. Three Misconceptions in Edge.org’s Conversation on “The Myth of AI”; Machine Intelligence Research Institute: Berkeley, CA, USA, 18 November 2014; Available online: (accessed on 24 September 2018).\n8. Torres, P. Why Superintelligence Is a Threat That Should Be Taken Seriously. Bulletin of the Atomic Scientists. 24 October 2017. Available online: (accessed on 24 September 2018).\n9. Torres, P. A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now. Project for Future Human Flourishing Technical Report 2, Version 1.2. 2018. Available online: (accessed on 9 September 2018).\n10. Oreskes, N.; Conway, E.M. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming; Bloomsbury: New York, NY, USA, 2010. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Merchants+of+Doubt:+How+a+Handful+of+Scientists+Obscured+the+Truth+on+Issues+from+Tobacco+Smoke+to+Global+Warming&author=Oreskes,+N.&author=Conway,+E.M.&publication_year=2010)]\n11. Grandjean, P. Only One Chance: How Environmental Pollution Impairs Brain Development—And How to Protect the Brains of the Next Generation; Oxford University Press: Oxford, UK, 2013. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Only+One+Chance:+How+Environmental+Pollution+Impairs+Brain+Development%E2%80%94And+How+to+Protect+the+Brains+of+the+Next+Generation&author=Grandjean,+P.&publication_year=2013)]\n12. Baum, S.D. Superintelligence skepticism as a political tool. Information **2018**, 9, 209. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Superintelligence+skepticism+as+a+political+tool&author=Baum,+S.D.&publication_year=2018&journal=Information&volume=9&pages=209&doi=10.3390/info9090209)] [[CrossRef](https://doi.org/10.3390/info9090209)]\n13. Boykoff, M.T.; Boykoff, J.M. Balance as bias: Global warming and the US prestige press. Glob. Environ. Chang. **2004**, 14, 125–136. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Balance+as+bias:+Global+warming+and+the+US+prestige+press&author=Boykoff,+M.T.&author=Boykoff,+J.M.&publication_year=2004&journal=Glob.+Environ.+Chang.&volume=14&pages=125%E2%80%93136&doi=10.1016/j.gloenvcha.2003.10.001)] [[CrossRef](https://doi.org/10.1016/j.gloenvcha.2003.10.001)]\n14. Lewandowsky, S.; Gignac, G.E.; Vaughan, S. The pivotal role of perceived scientific consensus in acceptance of science. Nat. Clim. Chang. **2013**, 3, 399–404. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+pivotal+role+of+perceived+scientific+consensus+in+acceptance+of+science&author=Lewandowsky,+S.&author=Gignac,+G.E.&author=Vaughan,+S.&publication_year=2013&journal=Nat.+Clim.+Chang.&volume=3&pages=399%E2%80%93404&doi=10.1038/nclimate1720)] [[CrossRef](https://doi.org/10.1038/nclimate1720)]\n15. Ecker, U.K.H.; Swire, B.; Lewandowsky, S. Correcting misinformation—A challenge for education and cognitive science. In Processing Inaccurate Information: Theoretical and Applied Perspectives from Cognitive Science and the Educational Sciences; Rapp, D.N., Braasch, J.L.G., Eds.; MIT Press: Cambridge, MA, USA, 2014; pp. 13–38. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Correcting+misinformation%E2%80%94A+challenge+for+education+and+cognitive+science&author=Ecker,+U.K.H.&author=Swire,+B.&author=Lewandowsky,+S.&publication_year=2014&pages=13%E2%80%9338)]\n16. Baum, S.D. On the promotion of safe and socially beneficial artificial intelligence. AI Soc. **2017**, 32, 543–551. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=On+the+promotion+of+safe+and+socially+beneficial+artificial+intelligence&author=Baum,+S.D.&publication_year=2017&journal=AI+Soc.&volume=32&pages=543%E2%80%93551&doi=10.1007/s00146-016-0677-0)] [[CrossRef](https://doi.org/10.1007/s00146-016-0677-0)]\n17. Lewandowsky, S.; Ecker, U.K.H.; Seifert, C.M.; Schwarz, N.; Cook, J. Misinformation and its correction: Continued influence and successful debiasing. Psychol. Sci. Public Interest **2012**, 13, 106–131. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Misinformation+and+its+correction:+Continued+influence+and+successful+debiasing&author=Lewandowsky,+S.&author=Ecker,+U.K.H.&author=Seifert,+C.M.&author=Schwarz,+N.&author=Cook,+J.&publication_year=2012&journal=Psychol.+Sci.+Public+Interest&volume=13&pages=106%E2%80%93131&doi=10.1177/1529100612451018&pmid=26173286)] [[CrossRef](https://doi.org/10.1177/1529100612451018)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/26173286)]\n18. Pinker, S. We’re Told to Fear Robots. But Why Do We Think They’ll Turn on Us?”. Popular Science. 13 February 2018. Available online: (accessed on 9 September 2018).\n19. Goertzel, B. Artificial general intelligence: Concept, state of the art, and future prospects. J. Artif. Gen. Intell. **2014**, 5, 1–48. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Artificial+general+intelligence:+Concept,+state+of+the+art,+and+future+prospects&author=Goertzel,+B.&publication_year=2014&journal=J.+Artif.+Gen.+Intell.&volume=5&pages=1%E2%80%9348&doi=10.2478/jagi-2014-0001)] [[CrossRef](https://doi.org/10.2478/jagi-2014-0001)]\n20. Baum, S.D. A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy. Global Catastrophic Risk Institute Working Paper 17-1. 2017. Available online: (accessed on 9 September 2018).\n21. Cognitive Artificial Intelligence: The MicroPsi Project. 
Available online: (accessed on 9 September 2018).\n22. Searle, J.R. What Your Computer Can’t Know. New York Review of Books, 9 October 2014. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=What+Your+Computer+Can%E2%80%99t+Know&author=Searle,+J.R.&publication_year=2014)]\n23. Logan, R.K. Can computers become conscious, an essential condition for the Singularity? Information **2017**, 8, 161. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Can+computers+become+conscious,+an+essential+condition+for+the+Singularity?&author=Logan,+R.K.&publication_year=2017&journal=Information&volume=8&pages=161&doi=10.3390/info8040161)] [[CrossRef](https://doi.org/10.3390/info8040161)]\n24. Chalmers, D.J. The singularity: A philosophical analysis. J. Conscious. Stud. **2010**, 17, 7–65. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+singularity:+A+philosophical+analysis&author=Chalmers,+D.J.&publication_year=2010&journal=J.+Conscious.+Stud.&volume=17&pages=7%E2%80%9365)]\n25. Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, UK, 2014. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Superintelligence:+Paths,+Dangers,+Strategies&author=Bostrom,+N.&publication_year=2014)]\n26. Omohundro, S.M. The basic AI drives. In Artificial General Intelligence 2008: Proceedings of the First AGI Conference; Wang, P., Goertzel, B., Franklin, S., Eds.; IOS: Amsterdam, The Netherlands, 2008; pp. 483–492. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+basic+AI+drives&author=Omohundro,+S.M.&publication_year=2008&pages=483%E2%80%93492)]\n27. Goertzel, B. Infusing advanced AGIs with human-like value systems: Two theses. J. Evol. Technol. **2016**, 26, 50–72. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Infusing+advanced+AGIs+with+human-like+value+systems:+Two+theses&author=Goertzel,+B.&publication_year=2016&journal=J.+Evol.+Technol.&volume=26&pages=50%E2%80%9372)]\n28. Baum, S.D.; Barrett, A.M.; Yampolskiy, R.V. Modeling and interpreting expert disagreement about artificial superintelligence. Informatica **2017**, 41, 419–428. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Modeling+and+interpreting+expert+disagreement+about+artificial+superintelligence&author=Baum,+S.D.&author=Barrett,+A.M.&author=Yampolskiy,+R.V.&publication_year=2017&journal=Informatica&volume=41&pages=419%E2%80%93428)]\n29. Danaher, J. Why AI doomsayers are like sceptical theists and why it matters. Minds Mach. **2015**, 25, 231–246. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Why+AI+doomsayers+are+like+sceptical+theists+and+why+it+matters&author=Danaher,+J.&publication_year=2015&journal=Minds+Mach.&volume=25&pages=231%E2%80%93246&doi=10.1007/s11023-015-9365-y)] [[CrossRef](https://doi.org/10.1007/s11023-015-9365-y)]\n30. Hughes, J.J. Global technology regulation and potentially apocalyptic technological threats. In Nanoethics: The Ethical and Social Implications of Nanotechnology; Allhof, F., Ed.; Wiley: Hoboken, NJ, USA, 2007; pp. 201–214. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Global+technology+regulation+and+potentially+apocalyptic+technological+threats&author=Hughes,+J.J.&publication_year=2007&pages=201%E2%80%93214)]\n31. Yampolskiy, R.; Fox, J. Safety engineering for artificial general intelligence. Topoi **2013**, 32, 217–226. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Safety+engineering+for+artificial+general+intelligence&author=Yampolskiy,+R.&author=Fox,+J.&publication_year=2013&journal=Topoi&volume=32&pages=217%E2%80%93226&doi=10.1007/s11245-012-9128-9)] [[CrossRef](https://doi.org/10.1007/s11245-012-9128-9)]\n32. Wilkes, A.L.; Leatherbarrow, M. Editing episodic memory following the identification of error. Q. J. Exp. Psychol. **1988**, 40A, 361–387. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Editing+episodic+memory+following+the+identification+of+error&author=Wilkes,+A.L.&author=Leatherbarrow,+M.&publication_year=1988&journal=Q.+J.+Exp.+Psychol.&volume=40A&pages=361%E2%80%93387&doi=10.1080/02724988843000168)] [[CrossRef](https://doi.org/10.1080/02724988843000168)]\n33. Johnson, H.M.; Seifert, C.M. Sources of the continued influence effect: When misinformation in memory affects later inferences. J. Exp. Psychol. Learn. Mem. Cognit. **1994**, 20, 1420–1436. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Sources+of+the+continued+influence+effect:+When+misinformation+in+memory+affects+later+inferences&author=Johnson,+H.M.&author=Seifert,+C.M.&publication_year=1994&journal=J.+Exp.+Psychol.+Learn.+Mem.+Cognit.&volume=20&pages=1420%E2%80%931436&doi=10.1037/0278-7393.20.6.1420)] [[CrossRef](https://doi.org/10.1037/0278-7393.20.6.1420)]\n34. Nyhan, B.; Reifler, J. The effect of fact-checking on elites: A field experiment on U.S. state legislators. Am. J. Political Sci. **2015**, 59, 628–640. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+effect+of+fact-checking+on+elites:+A+field+experiment+on+U.S.+state+legislators&author=Nyhan,+B.&author=Reifler,+J.&publication_year=2015&journal=Am.+J.+Political+Sci.&volume=59&pages=628%E2%80%93640&doi=10.1111/ajps.12162)] [[CrossRef](https://doi.org/10.1111/ajps.12162)]\n35. Tsipursky, G.; Votta, F.; Roose, K.M. Fighting fake news and post-truth politics with behavioral science: The pro-truth pledge. Behav. Soc. Issues **2018**, 27, 47–70. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Fighting+fake+news+and+post-truth+politics+with+behavioral+science:+The+pro-truth+pledge&author=Tsipursky,+G.&author=Votta,+F.&author=Roose,+K.M.&publication_year=2018&journal=Behav.+Soc.+Issues&volume=27&pages=47%E2%80%9370&doi=10.2139/ssrn.3138238)] [[CrossRef](https://doi.org/10.2139/ssrn.3138238)]\n36. Doran, P.T.; Zimmerman, M.K. Examining the scientific consensus on climate change. Eos **2009**, 90, 22–23. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Examining+the+scientific+consensus+on+climate+change&author=Doran,+P.T.&author=Zimmerman,+M.K.&publication_year=2009&journal=Eos&volume=90&pages=22%E2%80%9323&doi=10.1029/2009EO030002)] [[CrossRef](https://doi.org/10.1029/2009EO030002)]\n37. Stenhouse, N.; Maibach, E.; Cobb, S.; Ban, R.; Bleistein, A.; Croft, P.; Bierly, E.; Seitter, K.; Rasmussen, G.; Leiserowitz, A. Meteorologists’ views about global warming: A survey of American Meteorological Society professional members. Bull. Am. Meteorol. Soc. **2014**, 95, 1029–1040. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Meteorologists%E2%80%99+views+about+global+warming:+A+survey+of+American+Meteorological+Society+professional+members&author=Stenhouse,+N.&author=Maibach,+E.&author=Cobb,+S.&author=Ban,+R.&author=Bleistein,+A.&author=Croft,+P.&author=Bierly,+E.&author=Seitter,+K.&author=Rasmussen,+G.&author=Leiserowitz,+A.&publication_year=2014&journal=Bull.+Am.+Meteorol.+Soc.&volume=95&pages=1029%E2%80%931040&doi=10.1175/BAMS-D-13-00091.1)] [[CrossRef](https://doi.org/10.1175/BAMS-D-13-00091.1)]\n38. De La Harpe, J. TV Meteorologists, Weathercasters Briefed by Climate Experts at AMS Short Course. Yale Climate Connnections. 9 July 2009. Available online: (accessed on 9 September 2018).\n39. Ward, B. 15 Midwest TV Meteorologists, Weathercasters Weigh Climate Science at Chicago’s Field Museum Climate Science for Meteorologists. Yale Climate Connnections. 5 May 2009. Available online: (accessed on 9 September 2018).\n40. Nyhan, B. Why the ‘death panel’ myth wouldn’t die: Misinformation in the health care reform debate. The Forum **2010**, 8. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Why+the+%E2%80%98death+panel%E2%80%99+myth+wouldn%E2%80%99t+die:+Misinformation+in+the+health+care+reform+debate&author=Nyhan,+B.&publication_year=2010&journal=The+Forum&volume=8&doi=10.2202/1540-8884.1354)] [[CrossRef](https://doi.org/10.2202/1540-8884.1354)]\n41. Tsipursky, G.; Morford, Z. Addressing behaviors that lead to sharing fake news. Behav. Soc. Issues **2018**, 27, AA6–AA10. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Addressing+behaviors+that+lead+to+sharing+fake+news&author=Tsipursky,+G.&author=Morford,+Z.&publication_year=2018&journal=Behav.+Soc.+Issues&volume=27&pages=AA6%E2%80%93AA10)]\n42. Pinker, S. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress; Penguin: New York, NY, USA, 2018. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Enlightenment+Now:+The+Case+for+Reason,+Science,+Humanism,+and+Progress&author=Pinker,+S.&publication_year=2018)]\n43. Marshall, G. Don’t Even Think about It: Why Our Brains Are Wired to Ignore Climate Change; Bloomsbury: New York, NY, USA, 2014. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Don%E2%80%99t+Even+Think+about+It:+Why+Our+Brains+Are+Wired+to+Ignore+Climate+Change&author=Marshall,+G.&publication_year=2014)]\n44. Kim, E.-H.; Lyon, T.P. Greenwash vs. brownwash: Exaggeration and undue modesty in corporate sustainability disclosure. Organ. Sci. **2014**, 26, 705–723. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Greenwash+vs.+brownwash:+Exaggeration+and+undue+modesty+in+corporate+sustainability+disclosure&author=Kim,+E.-H.&author=Lyon,+T.P.&publication_year=2014&journal=Organ.+Sci.&volume=26&pages=705%E2%80%93723&doi=10.1287/orsc.2014.0949)] [[CrossRef](https://doi.org/10.1287/orsc.2014.0949)]\n45. BBC. Google ‘to end’ Pentagon Artificial Intelligence Project. BBC. 2 June 2018. Available online: (accessed on 9 September 2018).\n46. McGinnis, J.O. Accelerating AI. Northwest. Univ. Law Rev. **2010**, 104, 366–381. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Accelerating+AI&author=McGinnis,+J.O.&publication_year=2010&journal=Northwest.+Univ.+Law+Rev.&volume=104&pages=366%E2%80%93381)]\n47. Wilson, G. Minimizing global catastrophic and existential risks from emerging technologies through international law. Va. Environ. Law J. **2013**, 31, 307–364. 
Abstract\n--------\n\nThis paper explores the potential for skepticism about artificial superintelligence to be used as a tool for political ends. Superintelligence is AI that is much smarter than humans. Superintelligence does not currently exist, but it has been proposed that it could someday be built, with massive and potentially catastrophic consequences. There is substantial skepticism about superintelligence, including whether it will be built, whether it would be catastrophic, and whether it is worth current attention. To date, superintelligence skepticism appears to be mostly honest intellectual debate, though some of it may be politicized. This paper finds substantial potential for superintelligence skepticism to be (further) politicized, due mainly to the potential for major corporations to have a strong profit motive to downplay concerns about superintelligence and avoid government regulation. 
Furthermore, politicized superintelligence skepticism is likely to be quite successful, due to several factors including the inherent uncertainty of the topic and the abundance of skeptics. The paper’s analysis is based on characteristics of superintelligence and the broader AI sector, as well as the history and ongoing practice of politicized skepticism on other science and technology issues, including tobacco, global warming, and industrial chemicals. The paper contributes to literatures on politicized skepticism and superintelligence governance.\n\nKeywords: artificial intelligence; superintelligence; skepticism\n\n1. Introduction\n----------------\n\nThe purpose of this paper is to explore the potential for skepticism about artificial superintelligence to be used for political ends. Artificial superintelligence (for brevity, henceforth just superintelligence) refers to AI that is much smarter than humans. Current AI is not superintelligent, but the prospect of superintelligence is a topic of much discussion in scholarly and public spheres. Some believe that superintelligence could someday be built, and that, if it is built, it would have massive and potentially catastrophic consequences. Others are skeptical of these beliefs. While much of the existing skepticism appears to be honest intellectual debate, there is potential for it to be politicized for other purposes.\n\nIn simple terms (to be refined below), politicized skepticism can be defined as public articulation of skepticism that is intended to achieve some outcome other than an improved understanding of the topic at hand. Politicized skepticism can be contrasted with intellectual skepticism, which seeks an improved understanding. Intellectual skepticism is essential to scholarly inquiry; politicized skepticism is not. The distinction between the two is not always clear; statements of skepticism may have both intellectual and political motivations. The two concepts can nonetheless be useful for understanding debates over issues such as superintelligence.\n\nThere is substantial precedent for politicized skepticism. Of particular relevance for superintelligence is politicized skepticism about technologies and products that are risky but profitable, henceforth risk–profit politicized skepticism. This practice dates to 1950s debates over the link between tobacco and cancer and has since been dubbed the tobacco strategy [[1](#B1-information-09-00209)]. More recently, the strategy has been applied to other issues including the link between fossil fuels and acid rain, the link between fossil fuels and global warming, and the link between industrial chemicals and neurological disease [[1](#B1-information-09-00209),[2](#B2-information-09-00209)]. The essence of the strategy is to promote the idea that the science underlying certain risks is unresolved, and therefore the implicated technologies should not be regulated. The strategy is typically employed by an interconnected mix of industry interests and ideological opponents of regulation. The target audience is typically a mix of government officials and the general public, and not the scientific community.\n\nAs is discussed in more detail below, certain factors suggest the potential for superintelligence to be a focus of risk–profit politicized skepticism. 
First and foremost, superintelligence could be developed by major corporations with a strong financial incentive to avoid regulation. Second, there already exists a lot of skepticism about superintelligence, which could be exploited for political purposes. Third, as an unprecedented class of technology, superintelligence is inherently uncertain, which suggests that superintelligence skepticism may be especially durable, even within apolitical scholarly communities. These and other factors do not guarantee that superintelligence skepticism will be politicized, or that its politicization would follow the same risk–profit patterns as the tobacco strategy. However, these factors are at least suggestive of the possibility.\n\nSuperintelligence skepticism may also be politicized in a different way: to protect the reputations and funding of the broader AI field. This form of politicized skepticism is less well-documented than the tobacco strategy, and appears to be less common. However, there are at least hints of it for fields of technology involving both grandiose future predictions and more mundane near-term work. AI is one such field of technology, in which grandiose predictions of superintelligence and other future AI breakthroughs contrast with more modest forms of near-term AI. Another example is nanotechnology, in which grandiose predictions of molecular machines contrast with near-term nanoscale science and technology [[3](#B3-information-09-00209)].\n\nThe basis of the paper’s analysis is twofold. First, the paper draws on the long history of risk–profit politicized skepticism. This history suggests certain general themes that may also apply to superintelligence. Second, the paper examines characteristics of superintelligence development to assess the prospect of skepticism being used politically in this context. To that end, the paper draws on the current state of affairs in the AI sector, especially for artificial general intelligence, which is a type of AI closely related to superintelligence. The paper further seeks to inform efforts to avoid any potential harmful effects from politicized superintelligence skepticism. The effects would not necessarily be harmful, but the history of risk–profit politicized skepticism suggests that they could be.\n\nThis paper contributes to literatures on politicized skepticism and superintelligence governance. Whereas most literature on politicized skepticism (and similar concepts such as denial) is backward-looking, consisting of historical analysis of skepticisms that have already occurred [[1](#B1-information-09-00209),[2](#B2-information-09-00209),[4](#B4-information-09-00209),[5](#B5-information-09-00209),[6](#B6-information-09-00209),[7](#B7-information-09-00209)], this paper is largely (but not exclusively) forward-looking, consisting of prospective analysis of skepticisms that could occur at some point in the future. Meanwhile, the superintelligence governance literature has looked mainly at institutional regulations to prevent research groups from building dangerous superintelligence and at support for research on safety measures [[8](#B8-information-09-00209),[9](#B9-information-09-00209),[10](#B10-information-09-00209),[11](#B11-information-09-00209)]. 
This paper contributes to a smaller literature on the role of corporations in superintelligence development [[12](#B12-information-09-00209)] and on social and psychological aspects of superintelligence governance [[13](#B13-information-09-00209)].This paper does not intend to take sides on which beliefs about superintelligence are most likely to be correct. Its interest is in the potential political implications of superintelligence skepticism, not in the underlying merits of the skepticism. The sole claim here is that the possibility of politicized superintelligence skepticism is a worthy topic of study. It is worth studying due to: (1) the potential for large consequences if superintelligence is built; and (2) the potential for superintelligence to be an important political phenomenon regardless of whether it is built. Finally, the topic is also of inherent intellectual interest as an exercise in prospective socio-political analysis on a possible future technology.The paper is organized as follows. [Section 2](#sec2-information-09-00209) presents a brief overview of superintelligence concerns and skepticisms. [Section 3](#sec3-information-09-00209) further develops the concept of politicized skepticism and surveys the history of risk–profit politicized skepticism, from its roots in tobacco to the present day. [Section 4](#sec4-information-09-00209) discusses prospects for politicized superintelligence skepticism. [Section 5](#sec5-information-09-00209) discusses opportunities for constructive action. [Section 6](#sec6-information-09-00209) concludes. 2. Superintelligence and Its Skeptics\n--------------------------------------\n\nThe idea of humans being supplanted by their machines dates to at least the 1863 work of Butler [[14](#B14-information-09-00209)]. In 1965, Good presented an early exposition on the topic within the modern field of computer science [[15](#B15-information-09-00209)]. Good specifically proposed an “intelligence explosion” in which intelligent machines make successively more intelligent machines until they are much smarter than humans, which would be “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control” [[15](#B15-information-09-00209)] (p. 33). This intelligence explosion is one use of the term technological singularity, though the term can also refer to wider forms of radical technological change [[16](#B16-information-09-00209)]. The term superintelligence refers specifically to AI that is much more intelligent than humans and dates to at least the 1998 work of Bostrom [[17](#B17-information-09-00209)]. A related term is artificial general intelligence, which is AI capable of reasoning across many intellectual domains. A superintelligent AI is likely to have general intelligence, and the development of artificial general intelligence could be a major precursor to superintelligence. Artificial general intelligence is also an active subfield of AI [[18](#B18-information-09-00209),[19](#B19-information-09-00209)].Superintelligence is notable as a potential technological accomplishment with massive societal implications. The effects of superintelligence could include anything from solving a significant portion of the world’s problems (if superintelligence is designed well) to causing the extinction of humans and other species (if it is designed poorly). Much of the interest in superintelligence derives from these high stakes. 
Superintelligence is also of intellectual interest as perhaps the ultimate accomplishment within the field of AI, sometimes referred to as the “grand dream” of AI [[20](#B20-information-09-00209)] (p. 125).Currently, most AI research is on narrow AI that is not oriented towards this grand dream. The focus on narrow AI dates to early struggles in the field to make progress towards general AI or superintelligence. After an initial period of hype fell short, the field went through an “AI winter” marked by diminished interest and more modest expectations [[21](#B21-information-09-00209),[22](#B22-information-09-00209)] This prompted a focus on smaller, incremental progress on narrow AI. It should be noted that the term AI winter most commonly refers to a lull in AI in the mid-to-late 1980s and early 1990s. A similar lull occurred in the 1970s, and concerns about a new winter can be found as recently as 2008 [[23](#B23-information-09-00209)].With most of the field focused on narrow AI, artificial general intelligence has persisted only as a small subfield of AI [[18](#B18-information-09-00209)]. The AI winter also caused many AI computer scientists to be skeptical of superintelligence, on grounds that superintelligence has turned out to be much more difficult than initially expected, and likewise to be averse to attention to superintelligence, on grounds that such hype could again fall short and induce another AI winter. This is an important historical note because it indicates that superintelligence skepticism has wide salience across the AI computer science community and may already be politicized towards the goal of protecting the reputation of and funding for AI. (More on this below.)Traces of superintelligence skepticism predate AI winter. Early AI skepticism dates to 1965 work by Dreyfus [[24](#B24-information-09-00209)]. Dreyfus [[24](#B24-information-09-00209)] critiqued the overall field of AI, with some attention to human-level AI though not to superintelligence. Dreyfus traced this skepticism of machines matching human intelligence to a passage in Descartes’ 1637 Discourse On Method [[25](#B25-information-09-00209)]: “it must be morally impossible that there should exist in any machine a diversity of organs sufficient to enable it to act in all the occurrences of life, in the way in which our reason enables us to act.”In recent years, superintelligence has attracted considerable attention. This has likely been prompted by several factors, including a growing scholarly literature (e.g., [[9](#B9-information-09-00209),[19](#B19-information-09-00209),[26](#B26-information-09-00209),[27](#B27-information-09-00209),[28](#B28-information-09-00209),[29](#B29-information-09-00209)]), highly publicized remarks by several major science and technology celebrities (e.g., Bill Gates [[30](#B30-information-09-00209)], Stephen Hawking [[31](#B31-information-09-00209)], and Elon Musk [[32](#B32-information-09-00209)]), and breakthroughs in the broader field of AI, which draw attention to AI and may make the prospect of superintelligence seem more plausible (e.g., [[33](#B33-information-09-00209),[34](#B34-information-09-00209)]). This attention to superintelligence has likewise prompted some more outspoken skepticism. The following is a brief overview of the debate, including both the arguments of the debate and some biographical information about the debaters. 
(Biographical details are taken from personal and institutional webpages and are accurate as of the time of this writing, May 2018; they are not necessarily accurate as of the time of the publication of the cited literature.) The biographies can be politically significant because, in public debates, some people’s words carry more weight than others’. The examples presented below are intended to be illustrative and at least moderately representative of the arguments made in existing superintelligence skepticism (some additional examples are presented in [Section 4](#sec4-information-09-00209)). A comprehensive survey of superintelligence skepticism is beyond the scope of this paper.\n\n#### 2.1. Superintelligence Cannot Be Built\n\nBringsjord [[35](#B35-information-09-00209)] argued, based on reasoning from computational theory, that superintelligence cannot be built. Essentially, the argument is that superintelligence requires a more advanced class of computing, which cannot be produced by humans or existing AI. Bringsjord is Professor of Cognitive Science at Rensselaer Polytechnic Institute and Director of the Rensselaer AI and Reasoning Lab. Chalmers [[36](#B36-information-09-00209)] countered that superintelligence does not necessarily require a more advanced class of computing. Chalmers is University Professor of Philosophy and Neural Science at New York University and co-director of the NYU Center for Mind, Brain, and Consciousness.\n\nMcDermott [[37](#B37-information-09-00209)] argued that advances in hardware and algorithms may be sufficient to exceed human intelligence, but not to massively exceed it. McDermott is Professor of Computer Science at Yale University. Chalmers [[36](#B36-information-09-00209)] countered that, while there may be limits to the potential advances in hardware and software, these limits may not be so restrictive as to preclude superintelligence.\n\n#### 2.2. Superintelligence Is Not Imminent Enough to Merit Attention\n\nCrawford [[38](#B38-information-09-00209)] argued that superintelligence is a distraction from issues with existing AI, especially AI that worsens inequalities. Crawford is co-founder and co-director of the AI Now Research Institute at New York University, a Senior Fellow at the NYU Information Law Institute, and a Principal Researcher at Microsoft Research.\n\nNg argued that superintelligence may be possible, but that it is premature to worry about it, in particular because it is too different from existing AI systems. Ng memorably likened worrying about superintelligence to worrying about “overpopulation on Mars” [[39](#B39-information-09-00209)]. Ng is Vice President and Chief Scientist of Baidu, Co-Chairman and Co-Founder of Coursera, and an Adjunct Professor of Computer Science at Stanford University.\n\nEtzioni [[40](#B40-information-09-00209)] argued that superintelligence is unlikely to be built within the next 25 years and is thus not worth current attention. Etzioni is Chief Executive Officer of the Allen Institute for Artificial Intelligence and Professor of Computer Science at the University of Washington. Dafoe and Russell [[41](#B41-information-09-00209)] countered that superintelligence is worth current attention even if it would take more than 25 years to build. Dafoe is Assistant Professor of Political Science at Yale University and Co-Director of the Governance of AI Program at the University of Oxford. Russell is Professor of Computer Science at the University of California, Berkeley. 
(An alternative counter is that some measures to improve AI outcomes apply to both near-term AI and superintelligence, and thus it is not essential to debate which of the two types of AI should be prioritized [[42](#B42-information-09-00209)].)\n\n#### 2.3. Superintelligence Would (Probably) Not Be Catastrophic\n\nGoertzel [[43](#B43-information-09-00209)] argued that superintelligence could be built and is worth paying attention to, but also that superintelligence is less likely to result in catastrophe than is sometimes suggested. Specifically, Goertzel argued that it may be somewhat difficult, but not very difficult, to build superintelligence with values that are considered desirable, and that the human builders of superintelligence would have good opportunities to check that the superintelligence has the right values. Goertzel is the lead for the OpenCog and SingularityNET projects for developing artificial general intelligence. Goertzel [[43](#B43-information-09-00209)] wrote in response to Bostrom [[28](#B28-information-09-00209)], who suggested that, if built, superintelligence is likely to result in catastrophe. Bostrom is Professor of Applied Ethics at the University of Oxford and Director of the Oxford Future of Humanity Institute. (For a more detailed analysis of this debate, see [[44](#B44-information-09-00209)].)\n\nViews similar to Goertzel [[43](#B43-information-09-00209)] were also presented by Bieger et al. [[45](#B45-information-09-00209)], in particular that the AI that is the precursor to superintelligence could be trained by its human developers to have safe and desirable values. Co-authors Bieger and Thórisson are, respectively, a Ph.D. student and a Professor of Computer Science at Reykjavik University; co-author Wang is Associate Professor of Computer and Information Sciences at Temple University.\n\nSearle [[46](#B46-information-09-00209)] argued that superintelligence is unlikely to be catastrophic, because it would be an unconscious machine incapable of deciding for itself to attack humanity, and thus humans would need to explicitly program it to cause harm. Searle is Professor Emeritus of the Philosophy of Mind and Language at the University of California, Berkeley. Searle [[46](#B46-information-09-00209)] wrote in response to Bostrom [[28](#B28-information-09-00209)], who argued that superintelligence could be dangerous to humans regardless of whether it is conscious.\n\n3. Skepticism as a Political Tool\n----------------------------------\n\n#### 3.1. The Concept of Politicized Skepticism\n\nThere is a sense in which any stated skepticism can be political, insofar as it seeks to achieve certain desired changes within a group. Even the most honest intellectual skepticism can be said to achieve the political aim of advancing a certain form of intellectual inquiry. However, this paper uses the term “politicized skepticism” more narrowly to refer to skepticism with other, non-intellectual aims.\n\nEven with this narrower conception, the distinction between intellectual and politicized skepticism can in practice be blurry. The same skeptical remark can serve both intellectual and (non-intellectual) political aims. People can also have intellectual skepticism that is shaped, perhaps subconsciously, by political factors, as well as politicized skepticism that is rooted in honest intellectual beliefs. 
For example, intellectuals (academics and the like) commonly have both intellectual and non-intellectual aims, the latter including advancing their careers or making the world a better place per whatever notion of “better” they subscribe to. This can be significant for superintelligence skepticism aimed at protecting the reputations and funding of AI researchers.It should be stressed that the entanglement of intellectual inquiry and (non-intellectual) political aims does not destroy the merits of intellectual inquiry. This is important to bear in mind at a time when trust in science and other forms of expertise is dangerously low [[47](#B47-information-09-00209),[48](#B48-information-09-00209)]. Scholarship can be a social and political process, but, when performed well, it can nonetheless deliver important insights about the world. For all people, scholars included, improving one’s understanding of the world takes mental effort, especially when one is predisposed to believe otherwise. Unfortunately, many people are not inclined to make the effort, and other people are making efforts to manipulate ideas for their own aims. An understanding of politicized skepticism is essential for addressing major issues in this rather less-than-ideal epistemic era.Much of this paper is focused on risk–profit politicized skepticism, i.e., skepticism about concerns about risky and profitable technologies and products. Risk–profit politicized skepticism is a major social force, as discussed throughout this paper, although it is not the only form of politicized skepticism. Other forms include politicized skepticism by concerned citizens, such as skepticism about scientific claims that vaccines or nuclear power plants are safe; by religious activists and institutions, expressing skepticism about claims that humans evolved from other species; by politicians and governments, expressing skepticism about events that cast them in an unfavorable light; and by intellectuals as discussed above. Thus, while this paper largely focuses on skepticism aimed at casting doubt about concerns about risky and profitable technologies and products, it should be understood that this is not the only type of politicized skepticism.#### 3.2. Tobacco Roots\n\nAs mentioned above, risk–profit politicized skepticism traces to 1950s debates on the link between tobacco and cancer. Specifically, in 1954, the tobacco industry formed the Tobacco Industry Research Committee, an “effort to foster the impression of debate, primarily by promoting the work of scientists whose views might be useful to the industry” [[1](#B1-information-09-00209)] (p. 17). The committee was led by C. C. Little, who was a decorated genetics researcher and past president of the University of Michigan, as well as a eugenics advocate who believed cancer was due to genetic weakness and not to smoking.In the 1950s, there was substantial evidence linking tobacco to cancer, but it was not as conclusive of a link as is now available. The tobacco industry exploited this uncertainty in public discussions of the issue. It succeeded in getting major media to often present the issue as a debate between scientists who agreed vs. disagreed in the tobacco–cancer link. Among the media figures to do this was the acclaimed journalist Edward Murrow, himself a smoker who, in tragic irony, later died from lung cancer. 
Oreskes and Conway speculated that, “Perhaps, being a smoker, he was reluctant to admit that his daily habit was deadly and reassured to hear that the allegations were unproven” [[1](#B1-information-09-00209)] (pp. 19–20).Over subsequent decades, the tobacco industry continued to fund work that questioned the tobacco–cancer link, enabling it to dodge lawsuits and regulations. Then, in 1999, the United States Department of Justice filed a lawsuit against nine tobacco companies and two tobacco trade organizations (United States v. Philip Morris). The US argued that the tobacco industry conspired over several decades to deceive the public, in violation of the Racketeer Influenced and Corrupt Organizations (RICO) Act, which covers organized crime. In 2006, the US District Court for the District of Columbia found the tobacco industry guilty, upheld unanimously in 2009 by the US Court of Appeals. This ruling and other measures have helped to protect people from lung cancer, but many more could have also avoided lung cancer were it not for the tobacco industry’s politicized skepticism.#### 3.3. The Character and Methods of Risk–Profit Politicized Skepticism\n\nThe tobacco case provided a blueprint for risk–profit politicized skepticism that has since been used for other issues. Writing in the context of politicized environmental skepticism, Jacques et al. [[4](#B4-information-09-00209)] (pp. 353–354) listed four overarching themes: (1) rejection of scientific findings of environmental problems; (2) de-prioritization of environmental problems relative to other issues; (3) rejection of government regulation of corporations and corporate liability; and (4) portrayal of environmentalism as a threat to progress and development. The net effect is to reduce interest in government regulation of corporate activities that may pose harms to society.The two primary motivations of risk–profit politicized skepticism are the protection of corporate profits and the advancement of anti-regulatory political ideology. The protection of profits is straightforward: from the corporation’s financial perspective, the investment in politicized skepticism can bring a substantial return. The anti-regulatory ideology is only slightly subtler. Risk–profit politicized skepticism is often associated with pro-capitalist, anti-socialist, and anti-communist politics. For example, some political skeptics liken environmentalists to watermelons: “green on the outside, red on the inside” [[1](#B1-information-09-00209)] (p. 248), while one feared that the Earth Summit was a socialist plot to establish a “World Government with central planning by the United Nations” [[1](#B1-information-09-00209)] (p. 252). For these people, politicized skepticism is a way to counter discourses that could harm their political agenda.Notably, both the financial and the ideological motivations are not inherently about science. Instead, the science is manipulated towards other ends. This indicates that the skepticism is primarily political and not intellectual. It may still be intellectually honest in the sense that the people stating the skepticism are actually skeptical. That would be consistent with author Upton Sinclair’s saying that “It is difficult to get a man to understand something when his salary depends upon his not understanding it.” The skepticism may nonetheless violate that essential intellectual virtue of letting conclusions follow from analysis, and not the other way around. 
For risk–profit politicized skepticism, the desired conclusion is typically the avoidance of government regulation of corporate activity, and the skepticism is crafted accordingly.To achieve this end, the skeptics will often engage in tactics that clearly go beyond honest intellectual skepticism and ordinary intellectual exchange. For example, ExxonMobil has been found to express extensive skepticism about climate change in its public communications (such as newspaper advertisements), but much less skepticism in its internal communications and peer-reviewed publications [[7](#B7-information-09-00209)]. This finding suggests that ExxonMobil was aware of the risks of climate change and misled the public about the risks. ExxonMobil reportedly used its peer-reviewed publications for “the credentials required to speak with authority in this area”, including in its conversations with government officials [[7](#B7-information-09-00209)] (p. 15), even though these communications may have presented climate change risk differently than the peer-reviewed publications did. (As an aside, it may be noted that the ExxonMobil study [[7](#B7-information-09-00209)], published in 2017, has already attracted a skeptic critique by Stirling [[49](#B49-information-09-00209)]. Stirling is Communications Manager of the Canadian nonprofit Friends of Science. Both Stirling and Friends of Science are frequent climate change skeptics [[50](#B50-information-09-00209)].)While the skeptics do not publicly confess dishonesty, there are reports that some of them have privately done so. For example, Marshall [[51](#B51-information-09-00209)] (p. 180) described five energy corporation presidents who believed that climate change was a problem and “admitted, off the record, that the competitive environment forced them to suppress the truth about climate change” to avoid government regulations. Similarly, US Senator Sheldon Whitehouse, an advocate of climate policy to reduce greenhouse gas emissions, reported that some of his colleagues publicly oppose climate policy but privately support it, with one even saying “Let’s keep talking—but don’t tell my staff. Nobody else can know” [[52](#B52-information-09-00209)] (p. 176). Needless to say, any instance in which skepticism is professed by someone who is not actually skeptical is a clear break from the intellectual skepticism of ordinary scholarly inquiry.One particularly distasteful tactic is to target individual scientists, seeking to discredit their work or even intimidate them. For example, Philippe Grandjean, a distinguished environmental health researcher, reported that the tuna industry once waged a $25 million advertising campaign criticizing work by himself and others who have documented links between tuna, mercury, and neurological disease. Grandjean noted that $25 million is a small sum for the tuna industry but more than the entire sum of grant funding he received for mercury research over his career, indicating a highly uneven financial playing field [[2](#B2-information-09-00209)] (pp. 119–120). In another example, climate scientists accused a climate skeptic of bullying and intimidation and reported receiving “a torrent of abusive and threatening e-mails after being featured on” the skeptic’s blog, which calls for climate scientists “to be publicly flogged” [[51](#B51-information-09-00209)] (p. 151).Much of the work, however, is far subtler than this. 
Often, it involves placing select individuals in conferences, committees, or hearings, where they can ensure that the skeptical message is heard in the right places. For example, Grandjean [[2](#B2-information-09-00209)] (p. 129) recounted a conference sponsored by the Electric Power Research Institute, which gave disproportionate floor time to research questioning the health effects of mercury. In another episode, the tobacco industry hired a recently retired World Health Organization committee chair to “volunteer” as an advisor to the same committee, which then concluded to not restrict use of a tobacco pesticide [[2](#B2-information-09-00209)] (p. 125).Another common tactic is to use outside organizations as the public face of the messaging. This tactic is accused of conveying the impression that the skepticism is done in the interest of the public and not of private industry. Grandjean [[2](#B2-information-09-00209)] (p. 121) wrote that “organizations, such as the Center for Science and Public Policy the Center for Indoor Air Research or the Citizens for Fire Safety Institute, may sound like neutral and honest establishments, but they turned out to be ‘front groups’ for financial interests.” Often, the work is done by think tanks. Jacques et al. [[4](#B4-information-09-00209)] found that over 90% of books exhibiting environmental skepticism are linked to conservative think tanks, and 90% of conservative think tanks are active in environmental skepticism. This finding is consistent with recent emphasis in US conservatism on unregulated markets. (Earlier strands of US conservatism were more supportive of environmental protection, such as the pioneering American conservative Russell Kirk, who wrote that “There is nothing more conservative than conservation” [[53](#B53-information-09-00209)].)#### 3.4. The Effectiveness of Politicized Skepticism\n\nSeveral broader phenomena help make politicized skepticism so potent, especially for risk–profit politicized skepticism. One is the enormous amounts of corporate money at stake with certain government regulations. When corporations use even a tiny fraction of this for politicized skepticism, it can easily dwarf other efforts. Similarly, US campaign finance laws are highly permissive. Whitehouse [[52](#B52-information-09-00209)] traced the decline in bipartisan Congressional support for climate change policy to the Supreme Court’s 2010 Citizens United ruling, which allows unlimited corporate spending in elections. However, even without election spending, corporate assets tilt the playing field substantially in the skeptics’ favor.Another important factor is the common journalistic norm of balance, in which journalists seek to present “both sides” of an issue. This can put partisan voices on equal footing with independent science, as seen in early media coverage of tobacco. It can also amplify a small minority of dissenting voices, seen more recently in media coverage of climate change. Whereas the scientific community has overwhelming consensus that climate change is happening, that it is caused primarily by human activity, and that the effects will be mainly harmful, public media features climate change skepticism much more than its scientific salience would suggest [[54](#B54-information-09-00209)]. 
(For an overview of the scientific issues related to climate change skepticism, see [[55](#B55-information-09-00209)]; for documentation of the scientific consensus, see [[56](#B56-information-09-00209)].)A third factor is the tendency of scientists to be cautious with respect to uncertainty. Scientists often aspire to avoid stating anything incorrect and to focus on what can be rigorously established instead of discussing more speculative possibilities. Scientists will also often highlight remaining uncertainties even when basic trends are clear. “More research is needed” is likely the most ubiquitous conclusion of any scientific research. This tendency makes it easier for other parties to make the state of the science appear less certain than it actually is. Speaking to this point in a report on climate change and national security, former US Army Chief of Staff Gordon Sullivan states “We seem to be standing by and, frankly, asking for perfectness in science… We never have 100 percent certainty. We never have it. If you wait until you have 100 percent certainty, something bad is going to happen on the battlefield” [[57](#B57-information-09-00209)] (p. 10).A fourth factor is the standard, found in some (but not all) policy contexts, of requiring robust evidence of harm before pursuing regulation. In other words, the burden of proof is on those who wish to regulate, and the potentially harmful product is presumed innocent until proven guilty. Grandjean [[2](#B2-information-09-00209)] cited this as the most important factor preventing the regulation of toxic chemicals in the US. Such a protocol makes regulation very difficult, especially for complex risks that resist precise characterization. In these policy contexts, the amplification of uncertainty can be particularly impactful.To sum up, risk–profit politicized skepticism is a longstanding and significant tool used to promote certain political goals. It has been used heavily by corporations seeking to protect profits and people with anti-regulatory ideologies, and it has proven to be a powerful tool. In at least one case, the skeptics were found guilty in a court of law of conspiracy to deceive the public. The skeptics use a range of tactics that deviate from standard intellectual practice, and they exploit several broader societal phenomena that make the skepticism more potent. 4. Politicized Superintelligence Skepticism\n--------------------------------------------\n\n#### 4.1. Is Superintelligence Skepticism Already Politicized?\n\nAt this time, there does not appear to be any superintelligence skepticism that has been politicized to the extent that has occurred for other issues such as tobacco–cancer and fossil fuels–global warming. Superintelligence skeptics are not running ad campaigns or other major dollar operations. For the most part, they are not attacking the scholars who express concern about superintelligence. Much of the discussion appears in peer-reviewed journals, and has the tone of constructive intellectual discourse. An exception that proves the rule is Etzioni [[40](#B40-information-09-00209)], who included a quotation comparing Nick Bostrom (who is concerned about superintelligence) to Donald Trump. In a postscript on the matter, Etzioni [[40](#B40-information-09-00209)] wrote that “we should refrain from ad hominem attacks. Here, I have to offer an apology”. 
In contrast, the character attacks of the most heated politicized skepticism are made without apology.\n\nHowever, there are already at least some hints of politicized superintelligence skepticism. Perhaps the most significant comes from AI academics downplaying hype to protect their field’s reputation and funding. The early field of AI made some rather grandiose predictions, which soon fell flat, fueling criticisms as early as 1965 [[24](#B24-information-09-00209)]. Some of these criticisms prompted major funding cuts, such as the 1973 Lighthill report [[58](#B58-information-09-00209)], which led the British Science Research Council to slash its support for AI. Similarly, Menzies [[59](#B59-information-09-00209)] described AI as going through a “peak of inflated expectations” in the 1980s followed by a “trough of disillusionment” in the late 1980s and early 1990s. Most recently, writing in 2018, Bentley [[60](#B60-information-09-00209)] (p. 11) derided beliefs about superintelligence and instead urged: “Do not be fearful of AI—marvel at the persistence and skill of those human specialists who are dedicating their lives to help create it. And appreciate that AI is helping to improve our lives every day.” (For criticism of Bentley [[60](#B60-information-09-00209)], see [[61](#B61-information-09-00209)].) This suggests that some superintelligence skepticism may serve the political goal of protecting the broader field of AI.\n\nSuperintelligence skepticism that is aimed at protecting the field of AI may be less of a factor during the current period of intense interest in AI. At least for now, the field of AI does not need to defend its value: its value is rather obvious, and AI researchers are not lacking for job security. Importantly, the current AI boom is largely based on actual accomplishments, not hype. Therefore, while today’s AI researchers may view superintelligence as a distraction, they are less likely to view it as a threat to their livelihood. However, some may nonetheless view superintelligence in this way, especially those who have been in the field long enough to witness previous boom-and-bust cycles. Likewise, the present situation could change if the current AI boom eventually cycles into another bust, another winter. Despite the success of current AI, there are arguments that it is fundamentally limited [[62](#B62-information-09-00209)]. The prospect of a new AI winter could be a significant factor in politicized superintelligence skepticism.\n\nA different type of example comes from public intellectuals who profess superintelligence skepticism based on questionable reasoning. A notable case of this is the psychologist and public intellectual Steven Pinker. Pinker recently articulated a superintelligence skepticism that some observers have likened to politicized climate skepticism [[63](#B63-information-09-00209),[64](#B64-information-09-00209)]. Pinker does resemble some notable political skeptics: a senior scholar with an academic background in an unrelated topic who is able to use his (and it is typically a he) platform to advance his skeptical views. Additionally, a close analysis of Pinker’s comments on superintelligence finds them to be flawed and poorly informed by existing research [[65](#B65-information-09-00209)]. Pinker’s superintelligence skepticism appears to be advancing a broader narrative of human progress, and may be committing the intellectual sin of putting this conclusion before the analysis of superintelligence. 
However, his particular motivations are, to the present author’s knowledge, not documented. (It would be especially ironic for Pinker to politicize skepticism based on flawed intellectual reasoning, since he otherwise preaches a message of intellectual virtue.)\n\nA third type of example of potential politicized superintelligence skepticism comes from the corporate sector. Several people in leadership positions at technology corporations have expressed superintelligence skepticism, including Eric Schmidt (Executive Chairman of Alphabet, the parent company of Google) [[66](#B66-information-09-00209)] and Mark Zuckerberg (CEO of Facebook) [[67](#B67-information-09-00209)]. Since this skepticism comes from the corporate sector, it has some resemblance to risk–profit politicized skepticism and may likewise have the most potential to shape public discourse and policy. One observer postulated that Zuckerberg professes superintelligence skepticism to project the idea that “software is always friendly and tame” and avoid the idea “that computers are intrinsically risky”, the latter of which “has potentially dire consequences for Zuckerberg’s business and personal future” [[67](#B67-information-09-00209)]. While this may just be conjecture, it does come at a time in which Facebook is under considerable public pressure for its role in propagating fake news and influencing elections, which, although unrelated to superintelligence, nonetheless provides an anti-regulatory motivation to downplay risks associated with computers.\n\nTo summarize, there may already be some politicized superintelligence skepticism, coming from AI academics seeking to protect their field, public intellectuals seeking to advance a certain narrative about the world, and corporate leaders seeking to avoid regulation. However, it is not clear how much superintelligence skepticism is already politicized, and there are indications that it may be limited, especially compared to what has occurred for other issues. On the other hand, superintelligence is a relatively new public issue (with a longer history in academia), so perhaps its politicization is just beginning.\n\nFinally, it is worth noting that while superintelligence has not been politicized to the extent that climate change has, there is at least one instance of superintelligence being cited in the context of climate skepticism. Cass [[68](#B68-information-09-00209),[69](#B69-information-09-00209)] cited the prospect of superintelligence as a reason not to be concerned about climate change. A counter to this argument is that, even if superintelligence is a larger risk, addressing climate change can still reduce the overall risk faced by humanity. Superintelligence could also be a solution to climate change, and thus may be worth building despite the risks it poses. At the same time, if climate change is addressed independently, then this reduces the need to take risks in building superintelligence [[70](#B70-information-09-00209)].\n\n#### 4.2. Prospects for Politicized Superintelligence Skepticism\n\nWill superintelligence skepticism be (further) politicized? Noting the close historical association between politicized skepticism and corporate profits (at least for risk–profit politicized skepticism), an important question is whether superintelligence could prompt profit-threatening regulations. AI is now being developed by some of the largest corporations in the world. 
Furthermore, a recent survey found artificial general intelligence projects at several large corporations, including Baidu, Facebook, Google, Microsoft, Tencent, and Uber [[19](#B19-information-09-00209)]. These corporations have the assets to conduct politicized skepticism that is every bit as large as that of the tobacco, fossil fuel, and industrial chemicals industries.It should be noted that the artificial general intelligence projects at these corporations were not found to indicate substantial skepticism. Indeed, some of them are outspoken in concern about superintelligence. Moreover, out of 45 artificial general intelligence projects surveyed, only two were found to be dismissive of concerns about the risks posed by the technology [[19](#B19-information-09-00209)]. However, even if the AI projects themselves do not exhibit skepticism, the corporations that host them still could. Such a scenario would be comparable to that of ExxonMobil, whose scientists confirmed the science of climate change even while corporate publicity campaigns professed skepticism [[7](#B7-information-09-00209)].The history shows that risk–profit politicized skepticism is not inherent to corporate activity—it is generally only found when profits are at stake. The preponderance of corporate research on artificial general intelligence suggests at least a degree of profitability, but, at this time, it is unclear how profitable it will be. If it is profitable, then corporations are likely to become highly motivated to protect it against outside restrictions. This is an important factor to monitor as the technology progresses.In public corporations, the pressure to maximize shareholder returns can motivate risk–profit politicized skepticism. However, this may be less of a factor for some corporations in the AI sector. In particular, voting shares constituting a majority of voting power at both Facebook and Alphabet (the parent company of Google) are controlled by the companies’ founders: Mark Zuckerberg at Facebook [[71](#B71-information-09-00209)] and Larry Page and Sergey Brin at Alphabet [[72](#B72-information-09-00209)]. Given their majority stakes, the founders may be able to resist shareholder pressure for politicized skepticism, although it is not certain that they would, especially since leadership at both companies already display superintelligence skepticism.Another factor is the political ideologies of those involved in superintelligence. As discussed above, risk–profit politicized skepticism of other issues is commonly driven by people with pro-capitalist, anti-socialist, and anti-communist political ideologies. Superintelligence skepticism may be more likely to be politicized by people with similar ideologies. Some insight into this matter can be obtained from a recent survey of 600 technology entrepreneurs [[73](#B73-information-09-00209)], which is a highly relevant demographic. The study finds that, contrary to some conventional wisdom, this demographic tends not to hold libertarian ideologies. Instead, technology entrepreneurs tend to hold views consistent with American liberalism, but with one important exception: technology entrepreneurs tend to oppose government regulation. This finding suggests some prospect for politicizing superintelligence skepticism, although perhaps not as much as may exist in other industries.Further insight can be found from the current political activities of AI corporations. 
In the US, the corporations’ employees donate mainly to the Democratic Party, which is the predominant party of American liberalism and is more pro-regulation. However, the corporations themselves have recently shifted donations to the Republican Party, which is the predominant party of American conservatism and is more anti-regulation. Edsall [[74](#B74-information-09-00209)] proposed that this divergence between employees and employers is rooted in corporations’ pursuit of financial self-interest. A potential implication is that, even if the individuals who develop AI oppose risk–profit politicized skepticism, the corporations that they work for may support it. Additionally, the corporations have recently been accused of using their assets to influence academic and think tank research on regulations that the corporations could face [[75](#B75-information-09-00209),[76](#B76-information-09-00209)], although at least some of the accusations have been disputed [[77](#B77-information-09-00209)]. While the veracity of these accusations is beyond the scope of this paper, they are at least suggestive of the potential for these corporations to politicize superintelligence skepticism.

AI corporations would not necessarily politicize superintelligence skepticism, even if profits are at stake. Alternatively, they could express concern about superintelligence to portray themselves as responsible actors and likewise avoid regulation. This would be analogous to the strategy of “greenwashing” employed by companies seeking to bolster their reputation for environmental stewardship [[78](#B78-information-09-00209)]. Indeed, there have already been some expressions of concern about superintelligence by AI technologists, and likewise some suspicion that the stated concern has this sort of ulterior motive [[79](#B79-information-09-00209)].

To the extent that corporations do politicize superintelligence skepticism, they are likely to mainly emphasize doubt about the risks of superintelligence. Insofar as superintelligence could be beneficial, corporations may promote this, just as they promote the benefits of fossil fuels (for transportation, heating, etc.) and other risky products. Or, AI corporations may promote the benefits of their own safety design and sow doubt about the safety of their rivals’ designs, analogous to the marketing of products whose riskiness can vary from company to company, such as automobiles. Alternatively, AI corporations may seek to sow doubt about the possibility of superintelligence, calculating that this would be their best play for avoiding regulation. As with politicized skepticism about other technologies and products, there is no one standard formula that every company always adopts.

For their part, academic superintelligence skeptics may be more likely to emphasize doubt about the mere possibility of superintelligence, regardless of whether it would be beneficial or harmful, due to reputational concerns. Or, they could focus skepticism on the risks, for similar reasons as corporations: academic research can also be regulated, and researchers do not always welcome this. Of course, there are also academics who do not exhibit superintelligence skepticism. Again, there is no one standard formula.
#### 4.3. Potential Effectiveness of Politicized Superintelligence Skepticism

If superintelligence skepticism is politicized, several factors point to it being highly effective, even more so than for the other issues in which skepticism has been politicized.

First, some of the experts best positioned to resolve the debate are also deeply implicated in it. To the extent that superintelligence is a risk, the risk is driven by the computer scientists who would build superintelligence. These individuals have intimate knowledge of the technology and thus have an essential voice in the public debate (though not the only essential voice). This is distinct from issues such as tobacco or climate change, in which the risk is mainly assessed by outside experts. It would be as if the effect of tobacco on cancer were studied by the agronomists who cultivate tobacco crops, or if the science of climate change were studied by the geologists who map deposits of fossil fuels. With superintelligence, a substantial portion of the relevant experts have a direct incentive to avoid any restrictions on the technology, as do their employers. This could create a deep and enduring pool of highly persuasive skeptics.

Second, superintelligence skepticism has deep roots in the mainstream AI computer science community. As noted above, this dates to the days of AI winter. Thus, skeptics may be abundant even where they are not funded by industry. Indeed, most of the skeptics described above do not appear to be speaking out of any industry ties, and thus would not have an industry conflict of interest. They could still have a conflict of interest arising from their desire to protect the reputation of their field, but this is a subtler matter. Insofar as they are perceived to not have a conflict of interest, they could be especially persuasive. Furthermore, even if their skepticism is honest and not intended for any political purposes, it could be used by others in dishonest and political ways.

Third, superintelligence is a topic for which the uncertainty is inherently difficult to resolve. It is a hypothetical future technology that is qualitatively different from anything that currently exists. Furthermore, there is concern that its mere existence could be catastrophic, which could preclude certain forms of safety testing. It is thus a risk that defies normal scientific study. In this regard, it is similar to climate change: moderate climate change can already be observed, as can moderate forms of AI, but the potentially catastrophic forms have not yet materialized and possibly never will. However, climate projections can rely on some relatively simple physics—at its core, climate change largely reduces to basic physical chemistry and thermodynamics. (The physical chemistry covers the nature of greenhouse gases, which are more transparent to some wavelengths of electromagnetic radiation than to others. The thermodynamics covers the heat transfer expected from greenhouse gas buildup. Both effects can be demonstrated in simple laboratory experiments. Climate change also involves indirect feedback effects on much of the Earth system, including clouds, ice, oceans, and ecosystems, which are often more complex and difficult to resolve and contribute to ongoing scientific uncertainty.) In contrast, AI projections must rely on notions of intelligence, which is not so simple at all.
For this reason, it is less likely that scholarly communities will converge on any consensus position on superintelligence in the way that they have on other risks such as climate change.

Fourth, some corporations that could develop superintelligence may be uniquely well positioned to influence public opinion. The corporations currently involved in artificial general intelligence research include some corporations that also play major roles in public media. As a leading social media platform, Facebook in particular has been found to be especially consequential for public opinion [[80](#B80-information-09-00209)]. Corporations that serve as information gateways, such as Baidu, Google, and Microsoft, also have unusual potential for influence. These corporations have opportunities to shape public opinion in ways that the tobacco, fossil fuel, and industrial chemicals industries cannot. While the AI corporations would not necessarily exploit these opportunities, it is an important factor to track.

In summary, while it remains to be seen whether superintelligence skepticism will be politicized, there are some reasons for believing it will be, and that superintelligence would be an especially potent case of politicized skepticism.

5. Opportunities for Constructive Action
-----------------------------------------

Politicized superintelligence skepticism would not necessarily be harmful. As far as this paper is concerned, it is possible that, for superintelligence, skepticism is the correct view, meaning that superintelligence may not be built, may not be dangerous, or may not merit certain forms of imminent attention. (The paper of course assumes that superintelligence is worth some imminent attention, or otherwise it would not have been written.) It is also possible that, even if superintelligence is a major risk, government regulations could nonetheless be counterproductive, and politicized skepticism could help avoid that. That said, the history of politicized skepticism (especially risk–profit politicized skepticism) shows a tendency for harm, which suggests that politicized superintelligence skepticism could be harmful as well.

With this in mind, one basic opportunity is to raise awareness about politicized skepticism within communities that discuss superintelligence. Superintelligence skeptics who are motivated by honest intellectual norms may not wish for their skepticism to be used politically. They can likewise be cautious about how to engage with potential political skeptics, such as by avoiding certain speaking opportunities in which their remarks would be used as a political tool instead of as a constructive intellectual contribution. Additionally, all people involved in superintelligence debates can insist on basic intellectual standards, above all by putting analysis before conclusions and not the other way around. These are the sorts of things that an awareness of politicized skepticism can help with.

Another opportunity is to redouble efforts to build scientific consensus on superintelligence, and then to draw attention to it. Currently, there is no consensus. As noted above, superintelligence is an inherently uncertain topic and difficult to build consensus on. However, with some effort, it should be possible to at least make progress towards consensus. Of course, scientific consensus does not preclude politicized skepticism—ongoing climate skepticism attests to this. However, it can at least dampen the politicized skepticism.
Indeed, recent research has found that the perception of scientific consensus increases acceptance of the underlying science [[81](#B81-information-09-00209)].

A third opportunity is to engage with AI corporations to encourage them to avoid politicizing skepticism about superintelligence or other forms of AI. Politicized skepticism is not inevitable, and while corporate leaders may sometimes feel as though they have no choice, there may nonetheless be options. Furthermore, the options may be especially effective at this early stage in superintelligence research, in which corporations may not yet have established internal policies or practices.

A fourth opportunity is to follow best practices in debunking misinformation in the event that superintelligence skepticism is politicized. There is a substantial literature on the psychology of debunking [[81](#B81-information-09-00209),[82](#B82-information-09-00209),[83](#B83-information-09-00209)]. A debunking handbook written for a general readership [[82](#B82-information-09-00209)] recommends: (1) focusing on the correct information to avoid cognitively reinforcing the false information; (2) preceding any discussion of the false information with a warning that it is false; and (3) when debunking false information, also giving the correct information so that people are not left with a gap in their understanding of the topic. The handbook further cautions against using the information deficit model of human cognition, which proposes that mistaken beliefs can be corrected simply by providing the correct information. The information deficit model is widely used in science communication, but it has been repeatedly found to work poorly, especially in situations of contested science. This sort of advice could be helpful to efforts to counter superintelligence misinformation.

Finally, the entire AI community should insist that policy be made based on an honest and balanced read of the current state of knowledge. Burden of proof requirements should not be abused for private gain. As with climate change and other global risks, the world cannot afford to wait for proof that superintelligence would be catastrophic. By the time uncertainty is eliminated, it could be too late.

6. Conclusions
---------------

Some people believe that superintelligence could be a highly consequential technology, potentially even a transformative event in the course of human history, with either profoundly beneficial or extremely catastrophic effects. Insofar as this belief is plausible, superintelligence may be worth careful advance consideration, to ensure that the technology is handled successfully. Importantly, this advance attention should include social science and policy analysis, and not just computer science. Furthermore, even if belief in superintelligence is mistaken, it can nonetheless be significant as a social and political phenomenon. This is another reason for social science and policy analysis. This paper is a contribution to the social science and policy analysis of superintelligence. Furthermore, despite the unprecedented nature of superintelligence, this paper shows that there are important historical and contemporary analogs that can shed light on the issue. Much of what could occur for the development of superintelligence has already occurred for other technologies. Politicized skepticism is one example of this.

One topic not covered in this paper is the prospect that beliefs that superintelligence will occur and/or will be harmful could themselves be politicized.
Such a phenomenon could be analogous to, for example, belief in large medical harms from nuclear power, or, phrased differently, skepticism about claims that nuclear power plants are medically safe. The scientific literature on nuclear power finds medical harms to be substantially lower than is commonly believed [[84](#B84-information-09-00209)]. Overstated concern (or “alarmism”) about nuclear power can likewise be harmful, for example by increasing use of fossil fuels. Similarly, the fossil fuel industry could politicize this belief for its own benefit. By the same logic, belief in superintelligence could also be politicized. This prospect is left for future research, although much of this paper’s analysis may be applicable.

Perhaps the most important lesson of this paper is that the development of superintelligence could be a contentious political process. It could involve aggressive efforts by powerful actors—efforts that not only are inconsistent with basic intellectual ideals, but that also actively subvert those ideals for narrow, self-interested gain. This poses a fundamental challenge to those who seek to advance a constructive study of superintelligence.

Funding
-------

This research received no external funding.

Acknowledgments
---------------

Tony Barrett, Phil Torres, Olle Häggström, Maurizio Tinnirello, Matthijs Maas, Roman Yampolskiy, and participants in a seminar hosted by the Center for Human-Compatible AI at UC Berkeley provided helpful feedback on an earlier version of this paper. All remaining errors are the author’s alone. The views expressed in this paper are the author’s and not necessarily the views of the Global Catastrophic Risk Institute.

Conflicts of Interest
---------------------

The author declares no conflict of interest.

References
----------

1. Oreskes, N.; Conway, E.M. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming; Bloomsbury: New York, NY, USA, 2010. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Merchants+of+Doubt:+How+a+Handful+of+Scientists+Obscured+the+Truth+on+Issues+from+Tobacco+Smoke+to+Global+Warming&author=Oreskes,+N.&author=Conway,+E.M.&publication_year=2010)]
2. Grandjean, P. Only One Chance: How Environmental Pollution Impairs Brain Development—And How to Protect the Brains of the Next Generation; Oxford University Press: Oxford, UK, 2013. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Only+One+Chance:+How+Environmental+Pollution+Impairs+Brain+Development%E2%80%94And+How+to+Protect+the+Brains+of+the+Next+Generation&author=Grandjean,+P.&publication_year=2013)]
3. Selin, C. Expectations and the emergence of nanotechnology. Sci. Technol. Hum. Values **2007**, 32, 196–220. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Expectations+and+the+emergence+of+nanotechnology&author=Selin,+C.&publication_year=2007&journal=Sci.+Technol.+Hum.+Values&volume=32&pages=196%E2%80%93220&doi=10.1177/0162243906296918)] [[CrossRef](https://doi.org/10.1177/0162243906296918)]
4. Jacques, P.J.; Dunlap, R.E.; Freeman, M. The organisation of denial: Conservative think tanks and environmental skepticism. Environ. Politics **2008**, 17, 349–385.
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+organisation+of+denial:+Conservative+think+tanks+and+environmental+skepticism&author=Jacques,+P.J.&author=Dunlap,+R.E.&author=Freeman,+M.&publication_year=2008&journal=Environ.+Politics&volume=17&pages=349%E2%80%93385&doi=10.1080/09644010802055576)] [[CrossRef](https://doi.org/10.1080/09644010802055576)]\n5. Lewandowsky, S.; Oberauer, K. Motivated rejection of science. Curr. Dir. Psychol. Sci. **2016**, 25, 217–222. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Motivated+rejection+of+science&author=Lewandowsky,+S.&author=Oberauer,+K.&publication_year=2016&journal=Curr.+Dir.+Psychol.+Sci.&volume=25&pages=217%E2%80%93222&doi=10.1177/0963721416654436)] [[CrossRef](https://doi.org/10.1177/0963721416654436)]\n6. Lewandowsky, S.; Mann, M.E.; Brown, N.J.; Friedman, H. Science and the public: Debate, denial, and skepticism. J. Soc. Polit. Psychol. **2016**, 4, 537–553. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Science+and+the+public:+Debate,+denial,+and+skepticism&author=Lewandowsky,+S.&author=Mann,+M.E.&author=Brown,+N.J.&author=Friedman,+H.&publication_year=2016&journal=J.+Soc.+Polit.+Psychol.&volume=4&pages=537%E2%80%93553&doi=10.5964/jspp.v4i2.604)] [[CrossRef](https://doi.org/10.5964/jspp.v4i2.604)][[Green Version](http://jspp.psychopen.eu/article/download/604/pdf)]\n7. Supran, G.; Oreskes, N. Assessing ExxonMobil’s climate change communications (1977–2014). Environ. Res. Lett. **2017**, 12, 084019. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Assessing+ExxonMobil%E2%80%99s+climate+change+communications+(1977%E2%80%932014)&author=Supran,+G.&author=Oreskes,+N.&publication_year=2017&journal=Environ.+Res.+Lett.&volume=12&pages=084019&doi=10.1088/1748-9326/aa815f)] [[CrossRef](https://doi.org/10.1088/1748-9326/aa815f)][[Green Version](http://iopscience.iop.org/article/10.1088/1748-9326/aa815f/pdf)]\n8. McGinnis, J.O. Accelerating Ai. Northwest. Univ. Law Rev. **2010**, 104, 366–381. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Accelerating+Ai&author=McGinnis,+J.O.&publication_year=2010&journal=Northwest.+Univ.+Law+Rev.&volume=104&pages=366%E2%80%93381&doi=10.2139/ssrn.1593851)] [[CrossRef](https://doi.org/10.2139/ssrn.1593851)]\n9. Sotala, K.; Yampolskiy, R.V. Responses to catastrophic AGI risk: A survey. Phys. Scr. **2014**, 90, 018001. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Responses+to+catastrophic+AGI+risk:+A+survey&author=Sotala,+K.&author=Yampolskiy,+R.V.&publication_year=2014&journal=Phys.+Scr.&volume=90&pages=018001&doi=10.1088/0031-8949/90/1/018001)] [[CrossRef](https://doi.org/10.1088/0031-8949/90/1/018001)]\n10. Wilson, G. Minimizing global catastrophic and existential risks from emerging technologies through international law. VA Environ. Law J. **2013**, 31, 307–364. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Minimizing+global+catastrophic+and+existential+risks+from+emerging+technologies+through+international+law&author=Wilson,+G.&publication_year=2013&journal=VA+Environ.+Law+J.&volume=31&pages=307%E2%80%93364)]\n11. Yampolskiy, R.; Fox, J. Safety engineering for artificial general intelligence. Topoi **2013**, 32, 217–226. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Safety+engineering+for+artificial+general+intelligence&author=Yampolskiy,+R.&author=Fox,+J.&publication_year=2013&journal=Topoi&volume=32&pages=217%E2%80%93226&doi=10.1007/s11245-012-9128-9)] [[CrossRef](https://doi.org/10.1007/s11245-012-9128-9)]\n12. Goertzel, B. The Corporatization of AI Is a Major Threat to Humanity. H+ Magazine, 21 July 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+Corporatization+of+AI+Is+a+Major+Threat+to+Humanity&author=Goertzel,+B.&publication_year=2017)]\n13. Baum, S.D. On the promotion of safe and socially beneficial artificial intelligence. AI Soc. **2017**, 32, 543–551. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=On+the+promotion+of+safe+and+socially+beneficial+artificial+intelligence&author=Baum,+S.D.&publication_year=2017&journal=AI+Soc.&volume=32&pages=543%E2%80%93551&doi=10.1007/s00146-016-0677-0)] [[CrossRef](https://doi.org/10.1007/s00146-016-0677-0)]\n14. Butler, S. Darwin among the Machines. The Press, 13 June 1863. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Darwin+among+the+Machines&author=Butler,+S.&publication_year=1863)]\n15. Good, I.J. Speculations concerning the first ultraintelligent machine. Adv. Comput. **1965**, 6, 31–88. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Speculations+concerning+the+first+ultraintelligent+machine&author=Good,+I.J.&publication_year=1965&journal=Adv.+Comput.&volume=6&pages=31%E2%80%9388)]\n16. Sandberg, A. An overview of models of technological singularity. In The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future; More, M., Vita-More, N., Eds.; Wiley: New York, NY, USA, 2010; pp. 376–394. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=An+overview+of+models+of+technological+singularity&author=Sandberg,+A.&publication_year=2010&pages=376%E2%80%93394)]\n17. Bostrom, N. How Long before Superintelligence? 1998. Available online: (accessed on 18 August 2018).\n18. Goertzel, B. Artificial general intelligence: Concept, state of the art, and future prospects. J. Artif. Gen. Intell. **2014**, 5, 1–48. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Artificial+general+intelligence:+Concept,+state+of+the+art,+and+future+prospects&author=Goertzel,+B.&publication_year=2014&journal=J.+Artif.+Gen.+Intell.&volume=5&pages=1%E2%80%9348&doi=10.2478/jagi-2014-0001)] [[CrossRef](https://doi.org/10.2478/jagi-2014-0001)]\n19. Baum, S.D. A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy. Global Catastrophic Risk Institute Working Paper 17-1. 2017. Available online: (accessed on 18 August 2018).\n20. Legg, S. Machine Super Intelligence. Ph.D. Thesis, University of Lugano, Lugano, Switzerland, 2008. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Machine+Super+Intelligence&author=Legg,+S.&publication_year=2008)]\n21. Crevier, D. AI: The Tumultuous History of the Search for Artificial Intelligence; Basic Books: New York, NY, USA, 1993. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=AI:+The+Tumultuous+History+of+the+Search+for+Artificial+Intelligence&author=Crevier,+D.&publication_year=1993)]\n22. McCorduck, P. Machines Who Think: 25th Anniversary Edition; A.K. Peters: Natick, MA, USA, 2004. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Machines+Who+Think:+25th+Anniversary+Edition&author=McCorduck,+P.&publication_year=2004)]\n23. Hendler, J. Avoiding another AI winter. IEEE Intell. Syst. **2008**, 23, 2–4. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Avoiding+another+AI+winter&author=Hendler,+J.&publication_year=2008&journal=IEEE+Intell.+Syst.&volume=23&pages=2%E2%80%934&doi=10.1109/MIS.2008.20)] [[CrossRef](https://doi.org/10.1109/MIS.2008.20)]\n24. Dreyfus, H. Alchemy and AI. RAND Corporation Document P-3244. 1965. Available online: (accessed on 18 August 2018).\n25. Descartes, R. A Discourse on Method. Project Gutenberg eBook. 1637. Available online: (accessed on 18 August 2018).\n26. Chalmers, D.J. The singularity: A philosophical analysis. J. Conscious. Stud. **2010**, 17, 7–65. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+singularity:+A+philosophical+analysis&author=Chalmers,+D.J.&publication_year=2010&journal=J.+Conscious.+Stud.&volume=17&pages=7%E2%80%9365)]\n27. Eden, A.H.; Moor, J.H.; Soraker, J.H.; Steinhart, E. (Eds.) Singularity Hypotheses: A Scientific and Philosophical Assessment; Springer: Berlin, Germany, 2013. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Singularity+Hypotheses:+A+Scientific+and+Philosophical+Assessment&author=Eden,+A.H.&author=Moor,+J.H.&author=Soraker,+J.H.&author=Steinhart,+E.&publication_year=2013)]\n28. Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, UK, 2014. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Superintelligence:+Paths,+Dangers,+Strategies&author=Bostrom,+N.&publication_year=2014)]\n29. Callaghan, V.; Miller, J.; Yampolskiy, R.; Armstrong, S. (Eds.) The Technological Singularity: Managing the Journey; Springer: Berlin, Germany, 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+Technological+Singularity:+Managing+the+Journey&author=Callaghan,+V.&author=Miller,+J.&author=Yampolskiy,+R.&author=Armstrong,+S.&publication_year=2017)]\n30. Rawlinson, K. Microsoft’s Bill Gates Insists AI Is a Threat. BBC, 29 January 2015. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Microsoft%E2%80%99s+Bill+Gates+Insists+AI+Is+a+Threat&author=Rawlinson,+K.&publication_year=2015)]\n31. Cellan-Jones, R. Stephen Hawking Warns Artificial Intelligence Could End Mankind. BBC, 2 December 2014. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Stephen+Hawking+Warns+Artificial+Intelligence+Could+End+Mankind&author=Cellan-Jones,+R.&publication_year=2014)]\n32. Dowd, M. Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse. Vanity Fair, 26 March 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Elon+Musk%E2%80%99s+Billion-Dollar+Crusade+to+Stop+the+A.I.+Apocalypse&author=Dowd,+M.&publication_year=2017)]\n33. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature **2015**, 521, 436–444. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Deep+learning&author=LeCun,+Y.&author=Bengio,+Y.&author=Hinton,+G.&publication_year=2015&journal=Nature&volume=521&pages=436%E2%80%93444&doi=10.1038/nature14539&pmid=26017442)] [[CrossRef](https://doi.org/10.1038/nature14539)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/26017442)]\n34. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. 
Mastering the game of Go with deep neural networks and tree search. Nature **2016**, 529, 484–489. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Mastering+the+game+of+Go+with+deep+neural+networks+and+tree+search&author=Silver,+D.&author=Huang,+A.&author=Maddison,+C.J.&author=Guez,+A.&author=Sifre,+L.&author=Van+Den+Driessche,+G.&author=Schrittwieser,+J.&author=Antonoglou,+I.&author=Panneershelvam,+V.&author=Lanctot,+M.&publication_year=2016&journal=Nature&volume=529&pages=484%E2%80%93489&doi=10.1038/nature16961&pmid=26819042)] [[CrossRef](https://doi.org/10.1038/nature16961)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/26819042)]\n35. Bringsjord, S. Belief in the singularity is logically brittle. J. Conscious. Stud. **2012**, 19, 14–20. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Belief+in+the+singularity+is+logically+brittle&author=Bringsjord,+S.&publication_year=2012&journal=J.+Conscious.+Stud.&volume=19&pages=14%E2%80%9320)]\n36. Chalmers, D. The Singularity: A reply. J. Conscious. Stud. **2012**, 19, 141–167. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+Singularity:+A+reply&author=Chalmers,+D.&publication_year=2012&journal=J.+Conscious.+Stud.&volume=19&pages=141%E2%80%93167)]\n37. McDermott, D. Response to the singularity by David Chalmers. J. Conscious. Stud. **2012**, 19, 167–172. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Response+to+the+singularity+by+David+Chalmers&author=McDermott,+D.&publication_year=2012&journal=J.+Conscious.+Stud.&volume=19&pages=167%E2%80%93172)]\n38. Crawford, K. Artificial Intelligence’s White Guy Problem. New York Times, 25 June 2016. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Artificial+Intelligence%E2%80%99s+White+Guy+Problem&author=Crawford,+K.&publication_year=2016)]\n39. Garling, C. Andrew Ng: Why ‘Deep Learning’ Is a Mandate for Humans, not just Machines. Wired, May 2015. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Andrew+Ng:+Why+%E2%80%98Deep+Learning%E2%80%99+Is+a+Mandate+for+Humans,+not+just+Machines&author=Garling,+C.&publication_year=2015)]\n40. Etzioni, O. No, the Experts Don’t Think Superintelligent AI Is a Threat to Humanity. MIT Technology Review, 20 September 2016. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=No,+the+Experts+Don%E2%80%99t+Think+Superintelligent+AI+Is+a+Threat+to+Humanity&author=Etzioni,+O.&publication_year=2016)]\n41. Dafoe, A.; Russell, S. Yes, We Are Worried about the Existential Risk of Artificial Intelligence. MIT Technology Review, 2 November 2016. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Yes,+We+Are+Worried+about+the+Existential+Risk+of+Artificial+Intelligence&author=Dafoe,+A.&author=Russell,+S.&publication_year=2016)]\n42. Baum, S.D. Reconciliation between factions focused on near-term and long-term artificial intelligence. AI Soc. **2017**. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Reconciliation+between+factions+focused+on+near-term+and+long-term+artificial+intelligence&author=Baum,+S.D.&publication_year=2017&journal=AI+Soc.&doi=10.1007/s00146-017-0734-3)] [[CrossRef](https://doi.org/10.1007/s00146-017-0734-3)]\n43. Goertzel, B. Superintelligence: Fears, promises and potentials. J. Evol. Technol. **2015**, 25, 55–87. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Superintelligence:+Fears,+promises+and+potentials&author=Goertzel,+B.&publication_year=2015&journal=J.+Evol.+Technol.&volume=25&pages=55%E2%80%9387)]\n44. Baum, S.D.; Barrett, A.M.; Yampolskiy, R.V. Modeling and interpreting expert disagreement about artificial superintelligence. Informatica **2017**, 41, 419–428. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Modeling+and+interpreting+expert+disagreement+about+artificial+superintelligence&author=Baum,+S.D.&author=Barrett,+A.M.&author=Yampolskiy,+R.V.&publication_year=2017&journal=Informatica&volume=41&pages=419%E2%80%93428)]\n45. Bieger, J.; Thórisson, K.R.; Wang, P. Safe baby AGI. In Proceedings of the 8th International Conference on Artificial General Intelligence (AGI), Berlin, Germany, 22–25 July 2015; Bieger, J., Goertzel, B., Potapov, A., Eds.; Springer: Cham, Switzerland, 2015; pp. 46–49. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Safe+baby+AGI&conference=Proceedings+of+the+8th+International+Conference+on+Artificial+General+Intelligence+(AGI)&author=Bieger,+J.&author=Th%C3%B3risson,+K.R.&author=Wang,+P.&publication_year=2015&pages=46%E2%80%9349)]\n46. Searle, J.R. What your computer can’t know. The New York Review of Books, 9 October 2014. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=What+your+computer+can%E2%80%99t+know&author=Searle,+J.R.&publication_year=2014)]\n47. Nichols, T. The Death of Expertise: The Campaign against Established Knowledge and Why It Matters; Oxford University Press: New York, NY, USA, 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+Death+of+Expertise:+The+Campaign+against+Established+Knowledge+and+Why+It+Matters&author=Nichols,+T.&publication_year=2017)]\n48. De Vrieze, J. ‘Science wars’ veteran has a new mission. Science **2017**, 358, 159. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=%E2%80%98Science+wars%E2%80%99+veteran+has+a+new+mission&author=De+Vrieze,+J.&publication_year=2017&journal=Science&volume=358&pages=159&doi=10.1126/science.358.6360.159&pmid=29026024)] [[CrossRef](https://doi.org/10.1126/science.358.6360.159)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/29026024)]\n49. Stirling, M. Merchants of Consensus: A Public Battle against Exxon. 2017. Available online: (accessed on 18 August 2018).\n50. Hampshire, G. Alberta Government Cool on Controversial Climate Change Speaker. CBC News, 19 January 2018. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Alberta+Government+Cool+on+Controversial+Climate+Change+Speaker&author=Hampshire,+G.&publication_year=2018)]\n51. Marshall, G. Don’t Even Think About It: Why Our Brains Are Wired to Ignore Climate Change; Bloomsbury: New York, NY, USA, 2014. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Don%E2%80%99t+Even+Think+About+It:+Why+Our+Brains+Are+Wired+to+Ignore+Climate+Change&author=Marshall,+G.&publication_year=2014)]\n52. Whitehouse, S. Captured: The Corporate Infiltration of American Democracy; The New Press: New York, NY, USA, 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Captured:+The+Corporate+Infiltration+of+American+Democracy&author=Whitehouse,+S.&publication_year=2017)]\n53. Kirk, R. Conservation activism is a healthy sign. Baltimore Sun, 4 May 1970. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Conservation+activism+is+a+healthy+sign&author=Kirk,+R.&publication_year=1970)]\n54. 
Boykoff, M.T.; Boykoff, J.M. Balance as bias: Global warming and the US prestige press. Glob. Environ. Chang. **2004**, 14, 125–136. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Balance+as+bias:+Global+warming+and+the+US+prestige+press&author=Boykoff,+M.T.&author=Boykoff,+J.M.&publication_year=2004&journal=Glob.+Environ.+Chang.&volume=14&pages=125%E2%80%93136&doi=10.1016/j.gloenvcha.2003.10.001)] [[CrossRef](https://doi.org/10.1016/j.gloenvcha.2003.10.001)]\n55. Baum, S.D.; Haqq-Misra, J.D.; Karmosky, C. Climate change: Evidence of human causes and arguments for emissions reduction. Sci. Eng. Ethics **2012**, 18, 393–410. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Climate+change:+Evidence+of+human+causes+and+arguments+for+emissions+reduction&author=Baum,+S.D.&author=Haqq-Misra,+J.D.&author=Karmosky,+C.&publication_year=2012&journal=Sci.+Eng.+Ethics&volume=18&pages=393%E2%80%93410&doi=10.1007/s11948-011-9270-6&pmid=21516371)] [[CrossRef](https://doi.org/10.1007/s11948-011-9270-6)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/21516371)]\n56. Oreskes, N. The scientific consensus on climate change. Science **2004**, 306, 1686. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+scientific+consensus+on+climate+change&author=Oreskes,+N.&publication_year=2004&journal=Science&volume=306&pages=1686&doi=10.1126/science.1103618&pmid=15576594)] [[CrossRef](https://doi.org/10.1126/science.1103618)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/15576594)]\n57. CNA Military Advisory Board. National Security and the Threat of Climate Change; The CNA Corporation: Alexandria, VA, USA, 2007. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=National+Security+and+the+Threat+of+Climate+Change&author=CNA+Military+Advisory+Board&publication_year=2007)]\n58. Lighthill, J. Artificial Intelligence: A Paper Symposium; Science Research Council: Swindon, UK, 1973. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Artificial+Intelligence:+A+Paper+Symposium&author=Lighthill,+J.&publication_year=1973)]\n59. Menzies, T. 21st-century AI: Proud, not smug. IEEE Intell. Syst. **2003**, 18, 18–24. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=21st-century+AI:+Proud,+not+smug&author=Menzies,+T.&publication_year=2003&journal=IEEE+Intell.+Syst.&volume=18&pages=18%E2%80%9324&doi=10.1109/MIS.2003.1200723)] [[CrossRef](https://doi.org/10.1109/MIS.2003.1200723)]\n60. Bentley, P.J. The three laws of artificial intelligence: Dispelling common myths. In Should We Fear Artificial Intelligence? In-Depth Analysis; Boucher, P., Ed.; European Parliamentary Research Service, Strategic Foresight Unit: Brussels, Belgium, 2018; pp. 6–12. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+three+laws+of+artificial+intelligence:+Dispelling+common+myths&author=Bentley,+P.J.&publication_year=2018&pages=6%E2%80%9312)]\n61. Häggström, O. A spectacularly uneven AI report. Häggström Hävdar, 30 March 2018. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=A+spectacularly+uneven+AI+report&author=H%C3%A4ggstr%C3%B6m,+O.&publication_year=2018)]\n62. Marcus, G. Artificial intelligence is stuck. Here’s how to move it forward. New York Times, 29 July 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Artificial+intelligence+is+stuck.+Here%E2%80%99s+how+to+move+it+forward&author=Marcus,+G.&publication_year=2017)]\n63. Bengtsson, B. Pinker is dangerous. Jag är Här, 22 October 2017. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Pinker+is+dangerous&author=Bengtsson,+B.&publication_year=2017)]\n64. Häggström, O. The AI meeting in Brussels last week. Häggström Hävdar, 23 October 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+AI+meeting+in+Brussels+last+week&author=H%C3%A4ggstr%C3%B6m,+O.&publication_year=2017)]\n65. Torres, P. A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now. Project for Future Human Flourishing Technical Report 2, Version 1.2. 2018. Available online: (accessed on 21 August 2018).\n66. Clifford, C. Google billionaire Eric Schmidt: Elon Musk is ‘exactly wrong’ about A.I. because he ‘doesn’t understand’. CNBC, 29 May 2018. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Google+billionaire+Eric+Schmidt:+Elon+Musk+is+%E2%80%98exactly+wrong%E2%80%99+about+A.I.+because+he+%E2%80%98doesn%E2%80%99t+understand%E2%80%99&author=Clifford,+C.&publication_year=2018)]\n67. Bogost, I. Why Zuckerberg and Musk are fighting about the robot future. The Atlantic, 27 July 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Why+Zuckerberg+and+Musk+are+fighting+about+the+robot+future&author=Bogost,+I.&publication_year=2017)]\n68. Cass, O. The problem with climate catastrophizing. Foreign Affairs, 21 March 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+problem+with+climate+catastrophizing&author=Cass,+O.&publication_year=2017)]\n69. Cass, O. How to worry about climate change. National Affairs, Winter2017; 115–131. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=How+to+worry+about+climate+change&author=Cass,+O.&publication_year=2017)]\n70. Baum, S.D. The great downside dilemma for risky emerging technologies. Phys. Scr. **2014**, 89, 128004. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+great+downside+dilemma+for+risky+emerging+technologies&author=Baum,+S.D.&publication_year=2014&journal=Phys.+Scr.&volume=89&pages=128004&doi=10.1088/0031-8949/89/12/128004)] [[CrossRef](https://doi.org/10.1088/0031-8949/89/12/128004)][[Green Version](http://iopscience.iop.org/article/10.1088/0031-8949/89/12/128004/pdf)]\n71. Heath, A. Mark Zuckerberg’s plan to create non-voting Facebook shares is going to trial in September. Business Insider, 4 May 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Mark+Zuckerberg%E2%80%99s+plan+to+create+non-voting+Facebook+shares+is+going+to+trial+in+September&author=Heath,+A.&publication_year=2017)]\n72. Ingram, M. At Alphabet, there are only two shareholders who matter. Fortune, 7 June 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=At+Alphabet,+there+are+only+two+shareholders+who+matter&author=Ingram,+M.&publication_year=2017)]\n73. Broockman, D.; Ferenstein, G.F.; Malhotra, N. The Political Behavior of Wealthy Americans: Evidence from Technology Entrepreneurs; Stanford Graduate School of Business Working Paper, No. 3581; Stanford Graduate School of Business: Stanford, CA, USA, 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+Political+Behavior+of+Wealthy+Americans:+Evidence+from+Technology+Entrepreneurs&author=Broockman,+D.&author=Ferenstein,+G.F.&author=Malhotra,+N.&publication_year=2017)]\n74. Edsall, T.B. Silicon Valley takes a right turn. New York Times, 12 January 2017. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Silicon+Valley+takes+a+right+turn&author=Edsall,+T.B.&publication_year=2017)]\n75. Mullins, B. Paying professors: Inside Google’s academic influence campaign. Wall Street Journal, 15 July 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Paying+professors:+Inside+Google%E2%80%99s+academic+influence+campaign&author=Mullins,+B.&publication_year=2017)]\n76. Taplinaug, J. Google’s disturbing influence over think tanks. New York Times, 30 August 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Google%E2%80%99s+disturbing+influence+over+think+tanks&author=Taplinaug,+J.&publication_year=2017)]\n77. Tiku, N. New America chair says Google didn’t prompt critic’s ouster. Wired, 6 September 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=New+America+chair+says+Google+didn%E2%80%99t+prompt+critic%E2%80%99s+ouster&author=Tiku,+N.&publication_year=2017)]\n78. Marquis, C.; Toffel, M.W.; Zhou, Y. Scrutiny, norms, and selective disclosure: A global study of greenwashing. Organ. Sci. **2016**, 27, 483–504. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Scrutiny,+norms,+and+selective+disclosure:+A+global+study+of+greenwashing&author=Marquis,+C.&author=Toffel,+M.W.&author=Zhou,+Y.&publication_year=2016&journal=Organ.+Sci.&volume=27&pages=483%E2%80%93504&doi=10.1287/orsc.2015.1039)] [[CrossRef](https://doi.org/10.1287/orsc.2015.1039)]\n79. Mack, E. Why Elon Musk spent $10 million to keep artificial intelligence friendly. Forbes, 15 January 2015. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Why+Elon+Musk+spent+$10+million+to+keep+artificial+intelligence+friendly&author=Mack,+E.&publication_year=2015)]\n80. Pickard, V. Media failures in the age of Trump. Political Econ. Commun. **2017**, 4, 118–122. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Media+failures+in+the+age+of+Trump&author=Pickard,+V.&publication_year=2017&journal=Political+Econ.+Commun.&volume=4&pages=118%E2%80%93122)]\n81. Lewandowsky, S.; Gignac, G.E.; Vaughan, S. The pivotal role of perceived scientific consensus in acceptance of science. Nat. Clim. Chang. **2013**, 3, 399–404. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+pivotal+role+of+perceived+scientific+consensus+in+acceptance+of+science&author=Lewandowsky,+S.&author=Gignac,+G.E.&author=Vaughan,+S.&publication_year=2013&journal=Nat.+Clim.+Chang.&volume=3&pages=399%E2%80%93404&doi=10.1038/nclimate1720)] [[CrossRef](https://doi.org/10.1038/nclimate1720)]\n82. Cook, J.; Lewandowsky, S. The Debunking Handbook. St. Lucia, Australia: University of Queensland. 2011. Available online: (accessed on 18 August 2018).\n83. Chan, M.P.; Jones, C.R.; Hall Jamieson, K.; Albarracín, D. Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychol. Sci. **2017**, 28, 1531–1546. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Debunking:+A+meta-analysis+of+the+psychological+efficacy+of+messages+countering+misinformation&author=Chan,+M.P.&author=Jones,+C.R.&author=Hall+Jamieson,+K.&author=Albarrac%C3%ADn,+D.&publication_year=2017&journal=Psychol.+Sci.&volume=28&pages=1531%E2%80%931546&doi=10.1177/0956797617714579&pmid=28895452)] [[CrossRef](https://doi.org/10.1177/0956797617714579)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/28895452)]\n84. Slovic, P. The perception gap: Radiation and risk. Bull. At. Sci. **2012**, 68, 67–75. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+perception+gap:+Radiation+and+risk&author=Slovic,+P.&publication_year=2012&journal=Bull.+At.+Sci.&volume=68&pages=67%E2%80%9375&doi=10.1177/0096340212444870)] [[CrossRef](https://doi.org/10.1177/0096340212444870)]

© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ().

Abstract
--------

An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart’s or Campbell’s law. This paper presents additional failure modes for interactions within multi-agent systems that are closely related. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and are also already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. Following this, the paper categorizes failure modes, provides definitions, and cites examples for each of the modes: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. The paper then discusses how extant literature on multi-agent AI fails to address these failure modes, and identifies work which may be useful for the mitigation of these failure modes.

Keywords: multi-agent systems; specification gaming; artificial intelligence safety; Goodhart’s Law

MSC: 91E45; 91A06

JEL Classification: C79; D74

1. Background, Motivation and Contribution
-------------------------------------------

When complex systems are optimized by a single agent, the representation of the system and of the goal used for optimization often leads to failures that can be surprising to the agent’s designers. These failure modes go by a variety of names: Amodei and Clark called them faulty reward functions [[1](#B1-BDCC-03-00021)], but similar failures have been referred to as Goodhart’s law [[2](#B2-BDCC-03-00021),[3](#B3-BDCC-03-00021)], Campbell’s law [[4](#B4-BDCC-03-00021)], distributional shift [[5](#B5-BDCC-03-00021)], strategic behavior [[6](#B6-BDCC-03-00021)], reward hacking [[7](#B7-BDCC-03-00021)], Proxyeconomics [[8](#B8-BDCC-03-00021)], and other terms.

Examples of these failures in the single-agent case are shown by Victoria Krakovna’s extensive list of concrete examples of “generating a solution that literally satisfies the stated objective but fails to solve the problem according to the human designer’s intent.” [[9](#B9-BDCC-03-00021)] Liu et al. suggest that “a complex activity can often be performed in several different ways,” [[10](#B10-BDCC-03-00021)] but not all these ways should be considered valid.
To understand why, Krakovna’s list includes examples of “achieving a goal” by finding and exploiting bugs in a simulation engine [[11](#B11-BDCC-03-00021),[12](#B12-BDCC-03-00021),[13](#B13-BDCC-03-00021)]; by physical manipulation of objects in unanticipated ways, such as moving a table instead of the item on the table [[14](#B14-BDCC-03-00021)], or flipping instead of lifting a block [[15](#B15-BDCC-03-00021)]; and even by exploiting the problem structure or evaluation, such as returning an empty list as being sorted [[16](#B16-BDCC-03-00021)], or deleting the file containing the target output [[16](#B16-BDCC-03-00021)].

#### 1.1. Motivation

This forms only a part of the broader set of concerns in AI safety [[5](#B5-BDCC-03-00021),[17](#B17-BDCC-03-00021),[18](#B18-BDCC-03-00021),[19](#B19-BDCC-03-00021)], but these failure modes are the focus of a significant body of work in AI safety discussed later in the paper. However, as the systems become more capable and more widely used, Danzig and others have noted that this will “increase rather than reduce collateral risks of loss of control.” [[20](#B20-BDCC-03-00021)] The speed of such systems is almost certainly beyond the point of feasible human control, and as they become more complex, the systems are also individually likely to fail in ways that are harder to understand.

While some progress has been made in the single-agent case, the systems have continued to become more capable, and corporations, governments, and other actors have developed and deployed machine learning systems. These systems are not only largely autonomous, but also interact with each other. This allows a new set of failures, and these are not yet a focus of safety-focused research—but they are critical.

#### 1.2. Contribution

The analogues of the earlier-mentioned classes of failure for multi-agent systems are more complex, potentially harder to mitigate, and unfortunately not the subject of a significant focus among AI safety researchers. In this paper, we introduce a classification of failures involving multiple agents that are not yet well addressed in the literature. These failures can occur even when system designers do not intend to build conflicting AI or ML systems. The current paper contributes to the literature by outlining how and why these multi-agent failures can occur, and by providing an overview of approaches that could be developed for mitigating them. In doing so, the paper will hopefully help spur system designers to explicitly consider these failure modes in designing systems, and urge caution on the part of policymakers.

As a secondary contribution, the link between ongoing work on AI safety and potential work mitigating these multi-agent failures incidentally answers an objection raised by AI risk skeptics that AI safety is ”not worth current attention” and that the issues are “premature to worry about” [[21](#B21-BDCC-03-00021)]. This paper instead shows how failures due to multi-agent dynamics are critical in the present, as ML and superhuman narrow AI are being widely deployed, even given the (valid) arguments put forward by Yudkowsky [[22](#B22-BDCC-03-00021)] and Bostrom [[7](#B7-BDCC-03-00021)] for why a singleton AI is a more important source of existential risk.

#### 1.3. Extending Single-Agent Optimization Failures

Systems which are optimized using an imperfect system model have several important failure modes, categorized in work by Manheim and Garrabrant [[3](#B3-BDCC-03-00021)].
First, imperfect correlates of the goal will be less correlated in the tails of the distribution, as discussed by Lewis [[23](#B23-BDCC-03-00021)]. Heavily optimized systems will end up in those regions, and even well-designed metrics do not account for every possible source of variance. Second, there are several context failures [[24](#B24-BDCC-03-00021)], where the optimization is well behaved in the training set (the “ancestral environment”) but fails as optimization pressure is applied. For example, it may drift towards an “edge instantiation”, where the system may optimize all the variables that relate to the true goal, but further gain on the metric is found by unexpected means. Alternatively, the optimizer may properly obey constraints in the initial stage, but find some “nearest unblocked strategy” [[24](#B24-BDCC-03-00021)] allowing it to circumvent designed limits when given more optimization power. These can all occur in single-agent scenarios.

The types of failure in multi-agent systems presented in this paper can be related to Manheim and Garrabrant’s classification of single-agent metric optimization failures. The four single-agent overoptimization failure modes outlined there are:

* Tails Fall Apart, or Regressional inaccuracy, where the relationship between the modeled goal and the true goal is inexact due to noise (for example, measurement error), so that the bias grows as the system is optimized.
* Extremal Model Insufficiency, where the approximate model omits factors which dominate the system’s behavior after optimization.
* Extremal Regime Change, where the model does not include a regime change that occurs under certain (unobserved) conditions that optimization creates.
* Causal Model Failure, where the agent’s actions are based on a model which incorrectly represents causal relationships, and the optimization involves interventions that break the causal structure the model implicitly relies on.

Despite the completeness of the above categorization, the way in which these failures occur can differ greatly even when only a single agent is present. In a multi-agent scenario, agents can stumble into or intentionally exploit model overoptimization failures in even more complex ways. Despite this complexity, the different multi-agent failure modes can be understood based on understanding the way in which the implicit or explicit system models used by agents fail. (A minimal numerical sketch of the first of these failure modes is given at the end of Section 1.4, below.)

#### 1.4. Defining Multi-Agent Failures

In this paper, a multi-agent optimization failure is when one (or more) of the agents which can achieve positive outcomes in some scenarios exhibits behaviors that negatively affect its own outcome due to the actions of one or more agents other than itself. This occurs either when the objective function of the agent no longer aligns with the goal, as occurs in the Regressional and both Extremal cases, or when the learned relationship between the action(s), the metric(s), and the goal has changed, as in the Causal failure case.

This definition does not require the failure to be due to malicious behavior on the part of any agent, nor does it forbid it. Note also that the definition does not require failure of the system, as in Behzadan and Munir’s categorization of adversarial attacks [[25](#B25-BDCC-03-00021)], nor does it make any assumptions about the type of the agents, such as the type of learning or optimization system used. (The multi-agent cases implicitly preclude agents from being either strongly boxed, as Drexler proposed [[26](#B26-BDCC-03-00021)], or oracular, as discussed by Armstrong [[27](#B27-BDCC-03-00021)].)
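To make the first failure mode concrete, the following minimal simulation is an illustrative sketch (assuming, for concreteness, Gaussian noise and a simple pick-the-highest-proxy selection rule rather than any particular system from the literature; the function name and parameters are invented for illustration). It selects the candidate that scores best on a noisy proxy of a true goal and measures how far the proxy overstates the true value of whatever was selected.

```python
import random

random.seed(0)


def mean_proxy_gap(num_candidates: int, trials: int = 2000) -> float:
    """Average amount by which the proxy overstates the true value of the
    candidate chosen for having the highest proxy score."""
    gaps = []
    for _ in range(trials):
        # True values, and proxies that are the true values plus independent noise.
        true_values = [random.gauss(0.0, 1.0) for _ in range(num_candidates)]
        proxies = [v + random.gauss(0.0, 1.0) for v in true_values]
        # Optimization pressure: choose whatever looks best on the proxy metric.
        chosen = max(range(num_candidates), key=lambda i: proxies[i])
        gaps.append(proxies[chosen] - true_values[chosen])
    return sum(gaps) / trials


for n in (2, 10, 100, 1000):
    print(f"candidates screened: {n:4d}   mean proxy overstatement: {mean_proxy_gap(n):.2f}")
```

Under these assumptions, the printed overstatement rises steadily with the number of candidates screened, even though the proxy is an unbiased estimate of the true value for any single candidate considered in isolation; this is the sense in which the tails “fall apart” as optimization pressure increases.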
2. Multi-Agent Failures: Context and Categorization
----------------------------------------------------

Several relatively straightforward failure modes involving interactions between an agent and a regulator were referred to in Manheim and Garrabrant as adversarial Goodhart [[3](#B3-BDCC-03-00021)]. These occur where one AI system opportunistically alters or optimizes the system and uses the expected optimization of a different victim agent to hijack the overall system. For example, “smart market” electrical grids use systems that optimize producer actions and prices with a linear optimization system using known criteria. If power lines or power plants have strategically planned maintenance schedules, an owner can manipulate the resulting prices to its own advantage, as occurred (legally) in the case of Enron [[28](#B28-BDCC-03-00021)]. This is possible because the manipulator can plan in the presence of a known optimization regime.

This class of manipulation by an agent frustrating a regulator’s goals is an important case, but more complex dynamics can also exist, and Manheim and Garrabrant noted that there are “clearly further dynamics worth exploring.” [[3](#B3-BDCC-03-00021)] This involves not only multiple heterogeneous agents, which Kleinberg and Raghavan suggest as an avenue for investigation, but also interaction between those agents [[6](#B6-BDCC-03-00021)]. An example of a well-understood multi-agent system, the game of poker, clarifies why the complexity is far greater in the interaction case.

#### 2.1. Texas Hold’em and the Complexity of Multi-Agent Dynamics

In many-agent systems, simple interactions can become complex adaptive systems due to agent behavior, as the game of poker shows. Solutions to simplified models of two-player poker predate game theory as a field [[29](#B29-BDCC-03-00021)], and for simplified variants, two-player draw poker has a fairly simple optimal strategy [[30](#B30-BDCC-03-00021)]. These early, manually computed solutions were made possible both by limiting the complexity of the cards and, more importantly, by limiting interaction to a single bet size, with no raising or interaction between the players. In the more general case of heads-up limit Texas Hold’em, significantly more work was needed, given the multiplicity of card combinations, the existence of hidden information, and player interaction, but this multi-stage interactive game is “now essentially weakly solved” [[31](#B31-BDCC-03-00021)]. Still, this game involves only two players. In the no-limit version of the game, Brown and Sandholm recently unveiled a superhuman AI [[32](#B32-BDCC-03-00021)], but it restricts the game to heads-up play, which involves only two players per game, and it still falls far short of a full solution to the game.

The complex adaptive nature of multi-agent systems means that each agent needs to model not only the system itself, but also the actions of the other player(s). The multiplicity of potential outcomes, betting strategies, and different outcomes rapidly becomes infeasible to represent other than heuristically. In limit Texas Hold’em poker, for example, the number of card combinations is immense, but the branching possibilities for betting are the more difficult challenge.
In a no-betting game of Hold’em with P players, there are $52!/((52-2P-5)! \cdot 2P \cdot 5!)$ possible situations. This is $2.8 \cdot 10^{12}$ hands in the two-player case, $3.3 \cdot 10^{15}$ in the three-player case, and growing by a similar factor when expanded to the four-, five-, or six-player case. The probability of winning is the probability that the five cards on the table plus the two unknown other cards from the deck are a better hand than any that another player holds. In Texas Hold’em, there are four betting stages, one after each stage of cards is revealed. Billings et al. use a reduced-complexity game (limiting betting to three rounds per stage) and find a complexity of $O(10^{18})$ in the two-hand case [[33](#B33-BDCC-03-00021)]. That means the two-player, three-round game complexity is comparable in size to a no-betting four-player game, with $4.1 \cdot 10^{18}$ card combinations possible.\n\nUnlike a no-betting game, however, a player must consider much more than the simple probability that the hand held is better than those held by other players. That calculation is unmodified by the additional branching due to player choices. The somewhat more difficult issue is that the additional branching requires Bayesian updates to estimate the probable distribution of hand strengths held by other players based on their decisions, which significantly increases the complexity of solving the game. The most critical challenge, however, is that each player bets based not only on the hidden information provided by their own cards, but also on the betting behavior of other players. Opponents make betting decisions based on non-public information (in Texas Hold’em, their hole cards), and a betting strategy requires a meta-update taking advantage of the information the other player reveals by betting. The players must also update based on potential strategic betting by other players, which occurs when a player bets in a way calculated to deceive. To deal with this, poker players need to model not just the cards, but also the strategic decisions of other players. This complex model of strategic decisions must be re-run for all the possible combinations at each decision point to arrive at a conclusion about what other players are doing. Even after this is complete, an advanced poker player, or an effective AI, must then decide not just how likely they are to win, but also how to play strategically, optimizing based on how other players will react to the different choices available.\n\nBehaviors such as bluffing and slow play are based on these dynamics, which become much more complex as the number of rounds of betting and the number of players increases. For example, slow play involves underbetting relative to the strength of one’s hand. It requires that the players will later be able to raise the stakes, and allows a player to lure others into committing additional money. The complexity of the required modeling of other agents’ decision processes grows as a function of the number of choices and stages at which each agent makes a decision. This type of complexity is common in multi-agent systems. In general, however, the problem is much broader in scope than what can be illustrated by a rigidly structured game such as poker.\n\n#### 2.2. 
Limited Complexity Models versus the Real World\n\nIn machine learning systems, the underlying system is approximated by implicitly or explicitly learning a multidimensional transformation between inputs and outputs. This transformation approximates a combination of the relationships between the inputs and the underlying system, and between the system state and the outputs. The complexity of the model learned is limited by the computational complexity of the underlying structure, and while the number of possible states for the input is large, it is typically dwarfed by the number of possible states of the system.\n\nThe critical feature of machine learning that allows such systems to be successful is that most relationships can be approximated without inspecting every available state. (All models simplify the systems they represent.) The implicit simplification done by machine learning is often quite impressive, picking up on clues present in the input that humans might not notice, but it comes at the cost of producing implicit models of the system that are difficult to understand and difficult to interpret.\n\nAny intelligence, whether machine learning-based, human, or AI, requires similar implicit simplification, since the branching complexity of even a relatively simple game such as Go dwarfs the number of atoms in the universe. Because even moderately complex systems cannot be fully represented, as discussed by Soares [[34](#B34-BDCC-03-00021)], the types of optimization failures discussed above are inevitable. The contrapositive to Conant and Ashby’s theorem [[35](#B35-BDCC-03-00021)] is that if a system is more complex than the model, any attempt to control the system will be imperfect. Learning, whether human or machine, builds approximate models based on observations, or input data. This implies that the behavior of the approximation in regions far from those covered by the training data is more likely to differ markedly from reality. The more systems change over time, the more difficult prediction becomes, and the more optimization is performed on a system, the more it will change. Worsening this problem, the learning that occurs in ML systems fails to account for the embedded agency issues discussed by Demski and Garrabrant [[36](#B36-BDCC-03-00021)], and interaction between agents with implicit models of each other and themselves amplifies many of these concerns.\n\n#### 2.3. Failure Modes\n\nBecause opponent modeling is an essential part of modeling multi-agent dynamic systems, opponent models are a central part of any machine learning model used in such settings. These opponent models may be implicit in the overall model, or they may be explicitly represented, but either way they are approximate. In many cases, opponent behavior is simply ignored, either by implicitly simplifying other agents’ behavior to noise or by assuming no adversarial agents exist. Because these models are imperfect, they will be vulnerable to the overoptimization failures discussed above.\n\nThe list below is conceptually complete, but limited in at least three ways. First, the examples given in this list primarily discuss failures that occur between two parties, such as a malicious actor and a victim, or failures induced by multiple individually benign agents. This excludes strategies where agents manipulate others indirectly, and those where coordinated interaction between agents is used to manipulate the system.
It is possible that when more agents are involved, more specific classes of failure will be relevant.\n\nSecond, the below list does not include how other factors can compound metric failures. These are critical, but may involve overoptimization, or multiple-agent interaction, only indirectly. For example, O’Neil discusses a class of failure involving the interaction between the system, the inputs, and the validation of outputs [[37](#B37-BDCC-03-00021)]. These failures occur when a system’s metrics are validated in part based on outputs it contributes towards. For example, a system predicting greater crime rates in areas with high minority concentrations leads to more police presence, which in turn leads to a higher rate of crime being found. This higher rate of detected crime in those areas is then used to train the model, leading it to reinforce the earlier unjustified assumption. Such cases are both likely to occur, and especially hard to recognize, when the interaction between multiple systems is complex and it is unclear whether the system’s effects are due in part to its own actions. (This class of failure seems particularly likely in systems that are trained via “self-play,” where failures in the model of the system are reinforced by incorrect feedback based on those models, which is also a case of model insufficiency failure.)\n\nThird and finally, the failure modes exclude cases that do not directly involve metric overoptimization, such as systems learning unacceptable behavior implicitly due to training data that contains unanticipated biases, or failing to attempt to optimize for social preferences such as fairness. These are again important, but they are more basic failures of system design.\n\nWith those caveats, we propose the following classes of multi-agent overoptimization failures. For each, a general definition is provided, followed by one or more toy models that demonstrate the failure mode. Each agent attempts to achieve its goal by optimizing for the metric, but the optimization is performed by different agents without any explicit coordination or a priori knowledge about the other agents. The specifics of the strategies that can be constructed and the structure of the system can be arbitrarily complex, but as explored below, the ways in which these models fail can still be understood generally.\n\nThese models are deliberately simplified, but where possible, real-world examples of the failures exhibited in the model are suggested. These examples come both from human systems where parallel dynamics exist, and from extant systems with automated agents. In the toy models, $M_i$ and $G_i$ stand for the metric and goal, respectively, of agent i. The metric is an imperfect proxy for the goal, and will typically be defined in relation to the goal. (The goal itself is often left unspecified, since the model applies to arbitrary systems and agent goals.) In some cases, the failure is non-adversarial, but where relevant, there is a victim agent V and an opponent agent O that attempts to exploit it.
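To make the metric-and-goal notation concrete, the following minimal sketch (the distributions and sample size are illustrative assumptions, not anything specified in this paper) shows how a metric that is a noisy proxy for a goal comes apart from that goal under strong optimization: the state with the best metric value is typically far from the state with the best goal value.

```python
import random

random.seed(0)

# Toy world: the goal G is the true value of a state; the metric M is a
# noisy proxy, M = G + noise. (Illustrative assumption only.)
states = []
for _ in range(100_000):
    g = random.gauss(0, 1)       # true goal value G
    m = g + random.gauss(0, 1)   # observed metric M
    states.append((m, g))

states.sort(reverse=True)        # rank states by the metric, best first
top_decile = states[: len(states) // 10]

print("mean G among top 10% by M:", sum(g for _, g in top_decile) / len(top_decile))
print("G of the single best state by M:", states[0][1])
print("best achievable G:", max(g for _, g in states))
```

Under mild selection the proxy still tracks the goal reasonably well, but the metric-maximizing state usually falls well short of the goal-maximizing state, which is the regressional, tails-come-apart behavior described above; the multi-agent models below layer additional agents on top of this basic divergence.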
Please note that the failures could also be shown with examples formulated in game-theoretic notation, but doing so would require more complex specifications of the system and interactions than the simpler characterization of agent goals and systems used below.\n\n**Failure Mode** **1.** **Accidental Steering** is when multiple agents alter the system in ways not anticipated by at least one agent, creating one of the above-mentioned single-party overoptimization failures.\n\n**Remark** **1.** This failure mode manifests similarly to the single-agent case and differs only in that agents do not anticipate the actions of other agents. When agents have closely related goals, even if those goals are aligned, this can exacerbate the types of failures that occur in single-agent cases.\n\nBecause the failing agent alone does not (or cannot) trigger the failure, this differs from the single-agent case. The distributional shift can occur due to a combination of actors’ otherwise potentially positive influences, either by putting the system in an extremal state where the previously learned relationship decays, or by triggering a regime change where previously beneficial actions become harmful.\n\n**Model.** **1.1—Group Overoptimization.** A set of agents each have goals which affect the system in related ways, and the metric-goal relationship changes in the extremal region where the metric exceeds a. As noted above, $M_i$ and $G_i$ stand for the metric and goal, respectively, of agent i. This extremal region is one where single-agent failure modes will occur for some or all agents. Each agent i can influence the metric by an amount $\alpha_i$, where $\sum \alpha_i > a$, but $\forall i,\ \alpha_i < a$. In the extremal subspace where $M_i > a$, the metric reverses direction, making further optimization of the metric harm the agent’s goal.\n\n$$M_i = \begin{cases} G_i, & \text{where } M_i \le a \\ M_i(a) - G_i, & \text{where } M_i > a \end{cases} \quad (1)$$\n\n**Remark** **2.** In the presence of multiple agents without coordination, manipulation of factors not already being manipulated by other agents is likely to be easier and more rewarding, potentially leading to inadvertent steering due to model inadequacy, as discussed in Manheim and Garrabrant’s categorization of single-agent cases [[3](#B3-BDCC-03-00021)]. As shown there, overoptimization can lead to perverse outcomes, and the failing agent(s) can hurt their own goals and, in similar ways, negatively impact the goals of other agents.\n\n**Model.** **1.2—Catastrophic Threshold Failure.**\n\n$$M_i = x_i, \qquad G_i = \begin{cases} a + \sum_{\forall i} x_i, & \text{where } \sum_{\forall i} x_i \le T \\ a - \sum_{\forall i} x_i, & \text{where } \sum_{\forall i} x_i > T \end{cases} \quad (2)$$\n\nEach agent manipulates its own variable, unaware of the overall impact. Even though the agents are collaborating, because they cannot see other agents’ variables, there is no obvious way to limit the combined impact on the system to stay below the catastrophic threshold T. Because each agent is exploring a different variable, each is potentially optimizing a different part of the system.\n\n**Remark** **3.** This type of catastrophic threshold is commonly discussed in relation to complex adaptive systems, but can occur even in systems where the catastrophic threshold is simple.
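The following is a minimal simulation sketch of the threshold dynamic in Equation (2) as reconstructed above; the number of agents, baseline a, threshold T, and step size are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of Model 1.2 (Catastrophic Threshold Failure).
# All parameter values below are illustrative assumptions.

N_AGENTS = 5
A = 1.0      # baseline level "a"
T = 10.0     # catastrophic threshold on the combined contribution
STEP = 0.5   # how much each agent raises its own metric x_i per round

def shared_goal(total):
    """G_i as a function of the combined contribution, following Eq. (2)."""
    return A + total if total <= T else A - total

x = [0.0] * N_AGENTS  # each agent's own variable x_i (its metric M_i)

for round_number in range(6):
    for i in range(N_AGENTS):
        x[i] += STEP  # each agent pushes its own metric upward
    total = sum(x)
    print(f"round {round_number}: each x_i = {x[0]:.1f}, "
          f"total = {total:.1f}, goal = {shared_goal(total):.1f}")
```

No individual contribution ever approaches the threshold, so each agent's local view looks harmless, but the combined total crosses T and further optimization of the individual metrics then reduces the shared goal, as the examples that follow illustrate in real systems.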
The case discussed by Michael Eisen, in which a pair of deterministic linear price-setting bots on Amazon interacted to set the price of an otherwise unremarkable biology book at tens of millions of dollars, shows that runaway dynamics are possible even in the simplest cases [[38](#B38-BDCC-03-00021)]. This phenomenon is also expected whenever exceeding some constraint breaks the system, and such constraints are often not identified until a failure occurs.\n\n**Example** **1.** This type of coordination failure can occur in situations such as overfishing across multiple regions, where each group catches local fish, which they can see, but at a given threshold across regions the fish population collapses, and recovery is very slow. (In this case, the groups typically are selfish rather than collaborating, making the dynamics even more extreme.)\n\n**Example** **2.** Smaldino and McElreath [[39](#B39-BDCC-03-00021)] show this failure mode specifically occurring with statistical methodology in academia, where academics find novel ways to degrade statistical rigor. The more general “Mutable Practices” model presented by Braganza [[8](#B8-BDCC-03-00021)], based in part on Smaldino and McElreath, has each agent attempting both to outperform the other agents on a metric and to fulfill a shared societal goal; it allows agents to evolve and find new strategies that combine to subvert the societal goal.\n\n**Failure Mode** **2.** **Coordination Failure** occurs when multiple agents clash despite having potentially compatible goals.\n\n**Remark** **4.** Coordination is an inherently difficult task, and can in general be considered impossible [[40](#B40-BDCC-03-00021)]. In practice, coordination is especially difficult when the goals of other agents are incompletely known or not fully understood. Coordination failures such as Yudkowsky’s “inadequate equilibria” are stable, and coordination to escape from such an equilibrium can be problematic even when agents share goals [[41](#B41-BDCC-03-00021)].\n\n**Model.** **2.1—Unintended Resource Contention.** A fixed resource R is split between uses $R_n$ by different agents. Each agent has limited funds $f_i$, and $R_i$ is allocated to agent i for exploitation in proportion to its bid for the resources $c_{R_i}$. The agents choose amounts to spend on acquiring resources, and then choose amounts $s_{n_i}$ to exploit each resource, resulting in utility $U(s_n, R_n)$. The agent goals are based on the overall exploitation of the resources by all agents.\n\n$$R_i = \frac{c_{R_i}}{\sum_{\forall i} c_{R_i}}, \qquad G_i = \sum_{\forall i} U_{i,n}(s_{n_i}, R_n) \quad (3)$$\n\nIn this case, we see that conflicting instrumental goals that neither side anticipates will cause wasted funds due to contention. The more funds spent on resource capture, which is zero-sum, the less remains for exploitation, which can be positive-sum. Above-nominal spending on capturing resources from aligned competitor-agents will reduce the funds available for exploiting those resources, even though less resource contention would benefit all agents.\n\n**Remark** **5.** Even where preferences and gains from different uses are homogeneous, so that no agent has any marginal gain from affecting the allocation, funds will be wasted on resource contention.
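As a minimal sketch of Model 2.1 (the utility form, budgets, and bid levels are illustrative assumptions), two agents with a shared goal split a fixed budget between bidding for a share of the resource and actually exploiting their share; symmetric escalation of the bids leaves the allocation unchanged while reducing the total value produced.

```python
# Minimal sketch of Model 2.1 (Unintended Resource Contention).
# The utility form, budgets, and bid levels are illustrative assumptions.

R_TOTAL = 1.0   # the fixed resource R
FUNDS = 10.0    # each agent's budget f_i

def total_utility(bids):
    """Allocate R in proportion to bids, spend the remaining budget on
    exploitation, and sum a simple share-times-spending utility across agents."""
    total_bid = sum(bids)
    utility = 0.0
    for bid in bids:
        share = R_TOTAL * bid / total_bid  # R_i, proportional to the bid c_{R_i}
        spending = FUNDS - bid             # funds left for exploitation
        utility += share * spending        # U_i, assumed multiplicative for illustration
    return utility

for bids in [(1.0, 1.0), (4.0, 4.0), (8.0, 8.0)]:
    print(f"bids {bids}: total utility = {total_utility(bids):.2f}")
```

The resource split is identical in every case, so the escalating bids are pure waste from the standpoint of the shared goal; contention consumes funds that exploitation could otherwise have used, which is the dynamic described above.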
More generally, heterogeneous preferences can lead to contention to control the allocation, with sub-optimal individual outcomes, and heterogeneous abilities can lead to less-capable agents harming their goals by capturing and then ineffectively exploiting resources.\n\n**Example** **3.** Different forms of scientific research benefit different goals differently. Even if spending in every area benefits everyone, a fixed pool of resources implies that, with different preferences, contention between projects with different positive impacts will occur. To the extent that effort must be directed towards grant-seeking instead of scientific work, the resources available for the projects themselves are reduced, sometimes enough to cause a net loss.\n\n**Remark** **6.** Coordination limiting overuse of public goods is a major area of research in economics. Ostrom explains how such coordination is only possible when conflicts are anticipated or noticed and where a reliable mechanism can be devised [[42](#B42-BDCC-03-00021)].\n\n**Model.** **2.2—Unnecessary Resource Contention.** As above, but each agent has an identical reward function of $f_{i,n}$. Even though all goals are shared, a lack of coordination in the above case leads to overspending, as shown in simple systems and for specified algebraic objective functions in the welfare economics literature. This literature shows many ways in which gains are possible; in the simplest examples, this occurs when agents coordinate to minimize overall spending on resource acquisition.\n\n**Remark** **7.** Coordination mechanisms themselves can be exploited by agents. The field of algorithmic game theory has several results on why such exploitation is only sometimes possible, and on how mechanisms can be built to avoid it [[43](#B43-BDCC-03-00021)].\n\n**Failure Mode** **3.** **Adversarial optimization** can occur when a victim agent has an incomplete model of how an opponent can influence the system. The opponent’s model of the victim allows it to intentionally select for cases where the victim’s model performs poorly and/or promotes the opponent’s goal [[3](#B3-BDCC-03-00021)].\n\n**Model.** **3.1—Adversarial Goal Poisoning.**\n\n$$G_V = x, \qquad G_O = -x, \qquad M_V = X : X \sim \mathrm{normal}(x, \sigma^2(y)), \qquad M_O = (X, y) \quad (4)$$\n\nIn this case, the opponent O can see the metric for the victim, and can select for cases where y is large and x is small, so that V chooses maximal values of X, to the marginal benefit of O.\n\n**Example** **4.** A victim’s model can be learned by “stealing” it, using techniques such as those explored by Tramèr et al. [[44](#B44-BDCC-03-00021)].
In such a case, the information gained can be used for model evasion and the other attacks mentioned there.\n\n**Example** **5.** Chess and other game engines may adaptively learn and choose openings or strategies for which the victim is weakest.\n\n**Example** **6.** Sophisticated financial actors can make trades to dupe victims into buying or selling an asset (“momentum ignition”) in order to exploit the resulting price changes [[45](#B45-BDCC-03-00021)], leading to a failure of the exploited agent due to an actual change in the system which it misinterprets.\n\n**Remark** **8.** The probability of exploitable reward functions increases with the complexity of the system the agents manipulate [[5](#B5-BDCC-03-00021)], and with the simplicity of the agent and its reward function. The potential for exploitation by other agents seems to follow the same pattern, where simple agents will be manipulated by agents with more accurate opponent models.\n\n**Model.** **3.2—Adversarial Optimization Theft.** An attacker can discover exploitable quirks in the goal function to make the victim agent optimize for a new goal, as in Manheim and Garrabrant’s Campbell’s law example, slightly adapted here [[3](#B3-BDCC-03-00021)].\n\n$$M_V = G_V + X, \qquad M_O = G_O \cdot X \quad (5)$$\n\nO selects $M_O$ after seeing V’s choice of metric. In this case, we can assume the opponent chooses a metric to maximize based on the system and the victim’s goal, which is known to the attacker. The opponent can choose its $M_O$ so that the victim’s later selection then induces a relationship between X and the opponent’s goal, especially at the extremes. Here, the opponent selects such that even weak selection on $M_O$ hijacks the victim’s selection on $M_V$ to achieve the opponent’s goal, because the states where $M_V$ is high have changed. In the example given, if $X \sim \mathrm{normal}(\mu, \sigma^2)$, the correlation between $G_O$ and $M_O$ is zero over the full set of states, but becomes positive on the subspace selected by the victim. (Please note that the opponent’s choice of metric is not itself a useful proxy for its goal absent the victim’s actions; it is a purely parasitic choice.)\n\n**Failure Mode** **4.** **Input spoofing and filtering**—Filtered evidence can be provided, or false evidence can be manufactured and put into the training data stream of a victim agent.\n\n**Model.** **4.1—Input Spoofing.** The victim agent receives public data $D(x_i|t)$ about the present world-state, and builds a model to choose actions which return rewards $f(x|t)$. The opponent can generate events $x_i$ to poison the victim’s learned model.\n\n**Remark** **9.** See the classes of data poisoning attacks explored by Wang and Chaudhuri [[46](#B46-BDCC-03-00021)] against online learning, and by Chen et al. [[47](#B47-BDCC-03-00021)] for creating backdoors in deep-learning verification systems.\n\n**Example** **7.** Financial market participants can (illegally) spoof by posting orders that will quickly be canceled, in a “momentum ignition” strategy to lure others into buying or selling, as has been alleged to be occurring in high-frequency trading [[45](#B45-BDCC-03-00021)].
This differs from the earlier example in that the transactions are not bona-fide transactions which fool other agents, but are actually false evidence.\n\n**Example** **8.** Rating systems can be attacked by inputting false reviews into a system, or by discouraging reviews by those likely to be the least or most satisfied reviewers.\n\n**Model.** **4.2—Active Input Spoofing.** As in (4.1), but where the victim agent employs active learning. In this case, the opponent can potentially fool the system into collecting data that seems very useful to the victim from crafted, poisoned sources.\n\n**Example** **9.** Honeypots can be placed, or Sybil attacks mounted, by opponents to fool victims into learning from examples that systematically differ from the true distribution.\n\n**Example** **10.** Comments by users “Max” and “Vincent DeBacco” on Eisen’s blog post about Amazon pricing suggested that it is very possible to abuse badly built linear pricing models on Amazon to receive discounts, if the algorithms choose prices based on other quoted prices [[38](#B38-BDCC-03-00021)].\n\n**Model.** **4.3—Input Filtering.** As in (4.1), but instead of generating false evidence, true evidence is hidden to systematically alter the distribution of events seen.\n\n**Example** **11.** Financial actors can filter the evidence available to other agents by performing transactions they do not want seen as private or dark-pool transactions.\n\n**Remark** **10.** There are classes of system where it is impossible to generate arbitrary false data points, but selective filtering can have similar effects.\n\n**Failure Mode** **5.** **Goal co-option** is when an opponent controls the system the victim runs on, or relies on, and can therefore make changes to affect the victim’s actions.\n\n**Remark** **11.** Whenever the computer systems running AI and ML systems are themselves insecure, they present a very tempting weak point that potentially requires much less effort than the earlier methods of fooling the system.\n\n**Model.** **5.1—External Reward Function Modification.** Opponent O directly modifies Victim V’s reward function to achieve a different objective than the one originally specified.\n\n**Remark** **12.** Slight changes in a reward function may have non-obvious impacts until after the system is deployed.\n\n**Model.** **5.2—Output Interception.** Opponent O intercepts and modifies Victim V’s output.\n\n**Model.** **5.3—Data or Label Interception.** Opponent O modifies externally stored scoring rules (labels) or data inputs provided to Victim V.\n\n**Example** **12.** Xiao, Xiao, and Eckert explore a “label flipping” attack against support vector machines [[48](#B48-BDCC-03-00021)], where modifying a limited number of labels used in the training set can cause performance to deteriorate severely.\n\n**Remark** **13.** As noted above, there are cases where generating false data may be impossible or easily detected. Modifying the inputs during training may leave less obvious traces that an attack has occurred. Where this is impossible, access can still allow pure observation which, while not itself an attack, can allow an opponent to engage in the various other exploits discussed earlier.\n\nTo conclude the list of failure modes, it is useful to note a few areas where the failures are induced or amplified. One such case is when agents explicitly incentivize certain behaviors on the part of other agents, perhaps by providing payments.
These public interactions and incentive payments are not fundamentally different from the other failure modes, but they can create or magnify any of them. This is discussed in the literature on the evolution of collusion, such as Dixon’s treatment [[49](#B49-BDCC-03-00021)]. Contra Dixon, however, the failure modes discussed here can prevent the collusion from being beneficial. A second, related case is the creation of incentives where an agent fails to anticipate either the ways in which the other agents can achieve the incentivized target, or the systemic changes that are induced. These so-called “Cobra effects” [[3](#B3-BDCC-03-00021)] can lead both to the simpler failures of the single-agent cases explored in Manheim and Garrabrant and to the failures above. Lastly, as noted by Sandberg [[50](#B50-BDCC-03-00021)], agents with different “speeds” (and, equivalently, processing power per unit time) can exacerbate victimization, since older and slower systems are more susceptible, and susceptibility to attacks only grows as new methods of exploitation are found.\n\n3. Discussion\n--------------\n\nMulti-agent systems can naturally give rise to cooperation instead of competition, as discussed in Leibo et al.’s 2017 paper [[51](#B51-BDCC-03-00021)]. The conditions under which there is exploitation rather than cooperation, however, are less well understood. A more recent paper by Leibo et al. proposes that the competition dynamic can be used to encourage the development of more complex models. That paper discusses coordination failures, but its discussion of the dynamics leading to the failures does not engage with the literature on safety or goal-alignment [[52](#B52-BDCC-03-00021)]. Leibo’s work, however, differs from most earlier work, in which multi-agent systems are trained together with a single goal, perforce leading to cooperative behavior, as in Lowe et al.’s heavily cited work, where “competitive” dynamics are dealt with by pre-programming explicit models of other agents’ behaviors [[53](#B53-BDCC-03-00021)].\n\nThe failure modes outlined (accidental steering, coordination failures, adversarial optimization, input spoofing or filtering, and goal co-option or direct hacking) are all due to models that do not fully account for other agents’ behavior. Because all models must simplify the systems they represent, the prerequisites for these failures are necessarily present in complex-enough systems where multiple non-coordinated agents interact. The problems of embedded agents discussed by Demski and Garrabrant [[36](#B36-BDCC-03-00021)] make it particularly clear that current approaches are fundamentally unable to fully represent these factors. For this and other reasons, mitigating the failure modes discussed here is not yet central to the work of building better ML or narrow AI systems. At the same time, some competitive domains such as finance are already experiencing some of these exploitative failures [[45](#B45-BDCC-03-00021)], and bots engaging in social network manipulation, or in various forms of more direct interstate competition, are likely employing similar strategies.\n\nThe failures seen so far have been minimally disruptive. At the same time, many of the outlined failures are more problematic for agents with a higher degree of sophistication, so they should be expected not to lead to catastrophic failures given the types of fairly rudimentary agents currently being deployed.
For this reason, specification gaming currently appears to be a mitigable problem, or, as Stuart Russell claimed, can be thought of as “errors in specifying the objective, period” [[54](#B54-BDCC-03-00021)]. This might be taken to imply that these failures are avoidable, but the current trajectory of these systems means that the problems will inevitably worsen as the systems become more complex and as more such systems are deployed, and the approaches currently used are fundamentally incapable of overcoming the obstacles discussed.\n\n#### Potential Avenues for Mitigation\n\nMitigations for these failures exist, but as long as the fundamental problems discussed by Demski and Garrabrant [[36](#B36-BDCC-03-00021)] are unaddressed, the dynamics driving these classes of failure seem unavoidable. Furthermore, such failures are likely to be surprising. They will emerge as multiple machine learning agents are deployed, and more sophisticated models will be more likely to trigger them. However, as argued above, these failures are fundamental to interaction between complex agents. This means that while it is unclear how quickly such failures will emerge, or whether they will be quickly recognized, it is unquestionable that they will continue to occur. System designers and policymakers should expect that these problems will become intractable if deferred, and they are therefore particularly critical to address now. It is to be expected that any solution will involve a combination of approaches [[17](#B17-BDCC-03-00021)], though the brief overview of safety approaches below shows that not all general approaches to AI safety are helpful for multi-agent failures.\n\nFirst, there are approaches that limit optimization. This can be done via satisficing, using approaches such as Taylor’s quantilizers, which pick actions at random from the top quantile of evaluated choices [[55](#B55-BDCC-03-00021)]. Satisficing approaches can help prevent the exploitation of other agents, or prevent accidental overoptimization, but they are not effective as a defense against exploitative agents or against systemic failures due to agent interaction. Another approach limiting optimization is explicit safety guarantees. In the extreme, this looks like an AI-box, preventing any interaction of the AI with the wider world and hence preventing agent interaction completely. This is effective if such boxes are not escaped, but it is unclear whether that is possible [[27](#B27-BDCC-03-00021)]. Less extreme versions of safety guarantees are sometimes possible, especially in domains where a formal model of safe behavior is possible and the system is sufficiently well understood. For example, Shalev-Shwartz et al. have such a model for self-driving cars, relying heavily on the fact that the physics involved in keeping cars from hitting one another, or other objects, is in effect perfectly understood [[56](#B56-BDCC-03-00021)]. Expanding this to less well understood domains seems possible, but is problematic for reasons discussed elsewhere [[57](#B57-BDCC-03-00021)].\n\nWithout limiting optimization explicitly, some approaches attempt to better define the goals, and thereby reduce the extent of unanticipated behaviors. These approaches involve some version of direct optimization safety. One promising direction for limiting the extent to which goal-directed optimization can be misdirected is to try to recognize actions rather than goals [[58](#B58-BDCC-03-00021)].
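To make the satisficing approach mentioned above concrete, the following is a minimal sketch of a quantilizer-style action selector in the spirit of Taylor [[55](#B55-BDCC-03-00021)]; the candidate actions, the noisy proxy score, and the quantile value are illustrative assumptions, not part of any cited implementation.

```python
import random

random.seed(0)

def quantilize(candidates, proxy_score, q=0.1):
    """Pick an action uniformly at random from the top fraction q of
    candidates, ranked by a (possibly misspecified) proxy score,
    instead of returning the single proxy-maximizing action."""
    ranked = sorted(candidates, key=proxy_score, reverse=True)
    cutoff = max(1, int(len(ranked) * q))
    return random.choice(ranked[:cutoff])

# Illustrative candidates and proxy: the proxy is a noisy view of each action's value.
actions = [random.uniform(-1.0, 1.0) for _ in range(1000)]
noisy_proxy = lambda a: a + random.gauss(0, 0.2)

print("quantilized choice:    ", round(quantilize(actions, noisy_proxy, q=0.1), 3))
print("argmax-by-proxy choice:", round(max(actions, key=noisy_proxy), 3))
```

Randomizing within the top quantile limits how hard the proxy is optimized, which is why such approaches can reduce accidental overoptimization even though, as noted above, they do not defend against a deliberately exploitative counterparty.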
Human-in-the-loop oversight is another direction for minimizing surprise and ensuring alignment, though this is already infeasible in many systems [[20](#B20-BDCC-03-00021)]. Neither approach is likely to be more effective than humans themselves are at preventing such exploitation. The primary forward-looking approach for safety is some version of ensuring that the goal is aligned, which is the bulk of what Yampolskiy and Fox refer to as AI safety engineering [[59](#B59-BDCC-03-00021)].\n\nIn multi-agent contexts, there is still a concern that, because human values are complex [[18](#B18-BDCC-03-00021)], exploitation is an intrinsically unavoidable pitfall in multi-agent systems. Paul Christiano’s “Distillation and Amplification” approach involves safe amplification using coordinated multi-agent systems [[60](#B60-BDCC-03-00021)]. This itself involves addressing some of the challenges of multi-agent approaches, and work on safe amplification in that context has begun [[61](#B61-BDCC-03-00021)]. In that work, the coordinating agents are predictive rather than agentic, so the failure modes are more restricted. The methods suggested can also be extended to agentic systems, where they may prove more worrisome, and solving those challenges potentially involves mitigating several of the failure modes outlined here.\n\nBetween optimization-limiting approaches and AI safety engineering, it is possible that many of the multi-agent failures discussed in this paper can be mitigated, though not eliminated. In addition, there will always be pressure to prioritize performance as opposed to safety, and safe systems are unlikely to perform as quickly as unsafe ones [[20](#B20-BDCC-03-00021)]. Even if the tradeoff resolves in favor of slower, safer systems, such systems can only be created if these approaches are further explored and the many challenges involved are solved before the widespread deployment of unsafe ML and AI. Once such systems are deployed, it seems infeasible that safer approaches could stop failures due to exploiting and exploitable systems, short of recalling them. This is not a concern for a far-off future in which misaligned superintelligent AI poses an existential risk. It is instead a present problem, and it is growing more serious along with the growth of research that does not address it.\n\n4. Conclusions: Model Failures and Policy Failures\n---------------------------------------------------\n\nWork addressing the failure modes outlined in this paper is potentially very valuable, in part because these failure modes are mitigable or avoidable if anticipated. AI and ML system designers and users should expect that many currently successful but naive agents will be exploited in the future. Because of this, the failure modes are likely to become more difficult to address if deferred, and it is therefore particularly critical to understand and address them preemptively. This may take the form of systemic changes such as redesigned financial market structures, or may involve ensuring that agents have built-in failsafes, or that they fail gracefully when exploited.\n\nAt present, it seems unlikely that failures will be large enough, and detected clearly enough, to slow the deployment of these systems.
It is possible that governmental actors, policymakers, and commercial entities will recognize the tremendous complexities of multiparty coordination among autonomous agents and address these failure modes, or slow deployment and work towards addressing these problems even before they become catastrophic. Alternatively, it is possible these challenges will become apparent via limited catastrophes that are so blatant that AI safety will be prioritized. This depends on how critical the failures are, how clearly they can be diagnosed, and whether the public demands they be addressed.Even if AI amplification remains wholly infeasible, humanity is already deploying autonomous systems with little regards to safety. The depth of complexity is significant but limited in current systems, and the strategic interactions of autonomous systems are therefore even more limited. However, just as AI for poker eventually became capable enough to understand multi-player interaction and engage in strategic play, AI in other systems should expect to be confronted with these challenges. We do not know when the card sharks will show up, or the extent to which they will make the games they play unsafe for others, but we should admit now that we are as-yet unprepared for them.\n\n\nFunding\n-------\n\nThis research was funded in large part by a grant from the Berkeley Existential Risk Initiative.Acknowledgments\n---------------\n\nI would like to thank a subset of the anonymous reviewers in both the first and second submission for very helpful comments, and thank Roman Yampolskiy for encouraging me to write and revise the paper, despite setbacks.Conflicts of Interest\n---------------------\n\nThe author declares no conflict of interest. The funders had no role in the writing of the manuscript, nor in the decision to publish the results.References\n----------\n\n1. Clark, J.; Amodei, D. Faulty Reward Functions in the Wild. 2016. Available online: (accessed on 12 March 2019).\n2. Goodhart, C.A.E. Problems of Monetary Management: The UK Experience; Papers in Monetary Economics; Reserve Bank of Australia: Sydney, Australia, 1975. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Problems+of+Monetary+Management:+The+UK+Experience&author=Goodhart,+C.A.E.&publication_year=1975)]\n3. Manheim, D.; Garrabrant, S. Categorizing Variants of Goodhart’s Law. arXiv, 2018; arXiv:1803.04585. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Categorizing+Variants+of+Goodhart%E2%80%99s+Law&author=Manheim,+D.&author=Garrabrant,+S.&publication_year=2018)]\n4. Campbell, D.T. Assessing the impact of planned social change. Eval. Program Plan. **1979**, 2, 67–90. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Assessing+the+impact+of+planned+social+change&author=Campbell,+D.T.&publication_year=1979&journal=Eval.+Program+Plan.&volume=2&pages=67%E2%80%9390&doi=10.1016/0149-7189(79)90048-X)] [[CrossRef](https://doi.org/10.1016/0149-7189(79)90048-X)]\n5. Amodei, D.; Olah, C.; Steinhardt, J.; Christiano, P.; Schulman, J.; Mané, D. Concrete problems in AI safety. arXiv, 2016; arXiv:1606.06565. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Concrete+problems+in+AI+safety&author=Amodei,+D.&author=Olah,+C.&author=Steinhardt,+J.&author=Christiano,+P.&author=Schulman,+J.&author=Man%C3%A9,+D.&publication_year=2016)]\n6. Kleinberg, J.; Raghavan, M. How Do Classifiers Induce Agents To Invest Effort Strategically? arXiv, 2018; arXiv:1807.05307. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=How+Do+Classifiers+Induce+Agents+To+Invest+Effort+Strategically?&author=Kleinberg,+J.&author=Raghavan,+M.&publication_year=2018)]\n7. Bostrom, N. Superintelligence; Oxford University Press: Oxford, UK, 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Superintelligence&author=Bostrom,+N.&publication_year=2017)]\n8. Braganza, O. Proxyeconomics, An agent based model of Campbell’s law in competitive societal systems. arXiv, 2018; arXiv:1803.00345. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Proxyeconomics,+An+agent+based+model+of+Campbell%E2%80%99s+law+in+competitive+societal+systems&author=Braganza,+O.&publication_year=2018)]\n9. Krakovna, V. Specification Gaming Examples in AI. 2018. Available online: (accessed on 12 March 2019).\n10. Liu, L.; Cheng, L.; Liu, Y.; Jia, Y.; Rosenblum, D.S. Recognizing Complex Activities by a Probabilistic Interval-based Model. In Proceedings of the National Conference on Artificial Intelligence (AAAI), Phoenix, AZ, USA, 12–17 February 2016. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Recognizing+Complex+Activities+by+a+Probabilistic+Interval-based+Model&conference=Proceedings+of+the+National+Conference+on+Artificial+Intelligence+(AAAI)&author=Liu,+L.&author=Cheng,+L.&author=Liu,+Y.&author=Jia,+Y.&author=Rosenblum,+D.S.&publication_year=2016)]\n11. Cheney, N.; MacCurdy, R.; Clune, J.; Lipson, H. Unshackling evolution: Evolving soft robots with multiple materials and a powerful generative encoding. ACM SIGEVOlution **2014**, 7, 11–23. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Unshackling+evolution:+Evolving+soft+robots+with+multiple+materials+and+a+powerful+generative+encoding&author=Cheney,+N.&author=MacCurdy,+R.&author=Clune,+J.&author=Lipson,+H.&publication_year=2014&journal=ACM+SIGEVOlution&volume=7&pages=11%E2%80%9323&doi=10.1145/2661735.2661737)] [[CrossRef](https://doi.org/10.1145/2661735.2661737)]\n12. Figueras, J. Genetic Algorithm Physics Exploiting. 2015. Available online: (accessed on 12 March 2019).\n13. Lehman, J.; Clune, J.; Misevic, D.; Adami, C.; Beaulieu, J.; Bentley, P.J.; Bernard, S.; Belson, G.; Bryson, D.M.; Cheney, N. The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities. arXiv, 2018; arXiv:1803.03453. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+surprising+creativity+of+digital+evolution:+A+collection+of+anecdotes+from+the+evolutionary+computation+and+artificial+life+research+communities&author=Lehman,+J.&author=Clune,+J.&author=Misevic,+D.&author=Adami,+C.&author=Beaulieu,+J.&author=Bentley,+P.J.&author=Bernard,+S.&author=Belson,+G.&author=Bryson,+D.M.&author=Cheney,+N.&publication_year=2018)]\n14. Chopra, J. GitHub issue for OpenAI gym environment FetchPush-v0. 2018. Available online: (accessed on 12 March 2019).\n15. Popov, I.; Heess, N.; Lillicrap, T.; Hafner, R.; Barth-Maron, G.; Vecerik, M.; Lampe, T.; Tassa, Y.; Erez, T.; Riedmiller, M. Data-efficient deep reinforcement learning for dexterous manipulation. arXiv, 2017; arXiv:1704.03073. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Data-efficient+deep+reinforcement+learning+for+dexterous+manipulation&author=Popov,+I.&author=Heess,+N.&author=Lillicrap,+T.&author=Hafner,+R.&author=Barth-Maron,+G.&author=Vecerik,+M.&author=Lampe,+T.&author=Tassa,+Y.&author=Erez,+T.&author=Riedmiller,+M.&publication_year=2017)]\n16. Weimer, W. Advances in Automated Program Repair and a Call to Arms. In Proceedings of the 5th International Symposium on Search Based Software Engineering—Volume 8084; Springer: Berlin/Heidelberg, Germany, 2013. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Advances+in+Automated+Program+Repair+and+a+Call+to+Arms&author=Weimer,+W.&publication_year=2013)]\n17. Sandberg, A. Friendly Superintelligence. Presentation at Extro 5 Conference. 2001. Available online: (accessed on 12 March 2019).\n18. Yudkowsky, E. Complex value systems in friendly AI. In Proceedings of the International Conference on Artificial General Intelligence, Mountain View, CA, USA, 3–6 August 2011; Springer: New York, NY, USA, 2011; pp. 388–393. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Complex+value+systems+in+friendly+AI&conference=Proceedings+of+the+International+Conference+on+Artificial+General+Intelligence&author=Yudkowsky,+E.&publication_year=2011&pages=388%E2%80%93393)]\n19. Worley, G.G., III. Robustness to fundamental uncertainty in AGI alignment. arXiv, 2018; arXiv:1807.09836. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Robustness+to+fundamental+uncertainty+in+AGI+alignment&author=Worley,+G.G.,+III&publication_year=2018)]\n20. Danzig, R. Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority; Technical Report; Center for a New American Security: Washington, DC, USA, 2018. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Technology+Roulette:+Managing+Loss+of+Control+as+Many+Militaries+Pursue+Technological+Superiority&author=Danzig,+R.&publication_year=2018)]\n21. Baum, S. Superintelligence skepticism as a political tool. Information **2018**, 9, 209. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Superintelligence+skepticism+as+a+political+tool&author=Baum,+S.&publication_year=2018&journal=Information&volume=9&pages=209&doi=10.3390/info9090209)] [[CrossRef](https://doi.org/10.3390/info9090209)]\n22. Yudkowsky, E. Intelligence explosion microeconomics. Mach. Intell. Res. **2013**, 23, 2015. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Intelligence+explosion+microeconomics&author=Yudkowsky,+E.&publication_year=2013&journal=Mach.+Intell.+Res.&volume=23&pages=2015)]\n23. Lewis, G.T. Why the Tails Come Apart Apart. Lesswrong. 2014. Available online: (accessed on 12 March 2019).\n24. Yudkowsky, E. The AI Alignment Problem: Why It’s Hard, and Where to Start; Stanford University: Stanford, CA, USA, 2016. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+AI+Alignment+Problem:+Why+It%E2%80%99s+Hard,+and+Where+to+Start&author=Yudkowsky,+E.&publication_year=2016)]\n25. Behzadan, V.; Munir, A. Models and Framework for Adversarial Attacks on Complex Adaptive Systems. arXiv, 2017; arXiv:1709.04137. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Models+and+Framework+for+Adversarial+Attacks+on+Complex+Adaptive+Systems&author=Behzadan,+V.&author=Munir,+A.&publication_year=2017)]\n26. Drexler, K.E. Engines of Creation; Anchor: New York, NY, USA, 1986. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Engines+of+Creation&author=Drexler,+K.E.&publication_year=1986)]\n27. Armstrong, S.; Sandberg, A.; Bostrom, N. Thinking inside the box: Controlling and using an oracle AI. Minds Mach. **2012**, 22, 299–324. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Thinking+inside+the+box:+Controlling+and+using+an+oracle+AI&author=Armstrong,+S.&author=Sandberg,+A.&author=Bostrom,+N.&publication_year=2012&journal=Minds+Mach.&volume=22&pages=299%E2%80%93324&doi=10.1007/s11023-012-9282-2)] [[CrossRef](https://doi.org/10.1007/s11023-012-9282-2)]\n28. Mulligan, T.S. How Enron Manipulated State’s Power Market. Los Angeles Times. 9 May 2002. Available online: (accessed on 9 March 2019).\n29. Borel, E.; Ville, J. Applications de la théorie des Probabilités aux jeux de Hasard; Gauthier-Villars: Paris, France, 1938. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Applications+de+la+th%C3%A9orie+des+Probabilit%C3%A9s+aux+jeux+de+Hasard&author=Borel,+E.&author=Ville,+J.&publication_year=1938)]\n30. Kuhn, H.W. A simplified two-person poker. Contrib. Theory Games **1950**, 1, 97–103. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=A+simplified+two-person+poker&author=Kuhn,+H.W.&publication_year=1950&journal=Contrib.+Theory+Games&volume=1&pages=97%E2%80%93103)]\n31. Bowling, M.; Burch, N.; Johanson, M.; Tammelin, O. Heads-up limit hold’em poker is solved. Science **2015**, 347, 145–149. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Heads-up+limit+hold%E2%80%99em+poker+is+solved&author=Bowling,+M.&author=Burch,+N.&author=Johanson,+M.&author=Tammelin,+O.&publication_year=2015&journal=Science&volume=347&pages=145%E2%80%93149&doi=10.1126/science.1259433&pmid=25574016)] [[CrossRef](https://doi.org/10.1126/science.1259433)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/25574016)]\n32. Brown, N.; Sandholm, T. Superhuman AI for heads-up no-limit poker: Libratus beats top professionals. Science **2018**, 359, 418–424. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Superhuman+AI+for+heads-up+no-limit+poker:+Libratus+beats+top+professionals&author=Brown,+N.&author=Sandholm,+T.&publication_year=2018&journal=Science&volume=359&pages=418%E2%80%93424&doi=10.1126/science.aao1733&pmid=29249696)] [[CrossRef](https://doi.org/10.1126/science.aao1733)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/29249696)]\n33. Billings, D.; Burch, N.; Davidson, A.; Holte, R.; Schaeffer, J.; Schauenberg, T.; Szafron, D. Approximating game-theoretic optimal strategies for full-scale poker. IJCAI **2003**, 3, 661. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Approximating+game-theoretic+optimal+strategies+for+full-scale+poker&author=Billings,+D.&author=Burch,+N.&author=Davidson,+A.&author=Holte,+R.&author=Schaeffer,+J.&author=Schauenberg,+T.&author=Szafron,+D.&publication_year=2003&journal=IJCAI&volume=3&pages=661)]\n34. Soares, N. Formalizing Two Problems of Realistic World-Models. Technical Report. Available online: (accessed on 9 March 2019).\n35. Conant, R.C.; Ross Ashby, W. Every good regulator of a system must be a model of that system. Int. J. Syst. Sci. **1970**, 1, 89–97. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Every+good+regulator+of+a+system+must+be+a+model+of+that+system&author=Conant,+R.C.&author=Ross+Ashby,+W.&publication_year=1970&journal=Int.+J.+Syst.+Sci.&volume=1&pages=89%E2%80%9397&doi=10.1080/00207727008920220)] [[CrossRef](https://doi.org/10.1080/00207727008920220)]\n36. Demski, A.; Garrabrant, S. Embedded Agency. arXiv, 2019; arXiv:1902.09469. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Embedded+Agency&author=Demski,+A.&author=Garrabrant,+S.&publication_year=2019)]\n37. O’Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; Broadway Books: New York City, NY, USA, 2016. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Weapons+of+Math+Destruction:+How+Big+Data+Increases+Inequality+and+Threatens+Democracy&author=O%E2%80%99Neil,+C.&publication_year=2016)]\n38. Eisen, M. Amazon’s $23,698,655.93 Book about Flies. 2011. Available online: (accessed on 9 March 2019).\n39. Smaldino, P.E.; McElreath, R. The natural selection of bad science. Open Sci. **2016**, 3, 160384. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+natural+selection+of+bad+science&author=Smaldino,+P.E.&author=McElreath,+R.&publication_year=2016&journal=Open+Sci.&volume=3&pages=160384&doi=10.1098/rsos.160384&pmid=27703703)] [[CrossRef](https://doi.org/10.1098/rsos.160384)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/27703703)][[Green Version](http://rsos.royalsocietypublishing.org/content/royopensci/3/9/160384.full.pdf)]\n40. Gibbard, A. Manipulation of Voting Schemes: A General Result. Econometrica **1973**, 41, 587–601. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Manipulation+of+Voting+Schemes:+A+General+Result&author=Gibbard,+A.&publication_year=1973&journal=Econometrica&volume=41&pages=587%E2%80%93601&doi=10.2307/1914083)] [[CrossRef](https://doi.org/10.2307/1914083)]\n41. Yudkowsky, E. Inadequate Equilibria: Where and How Civilizations Get Stuck; Machine Intelligence Research Institute: Berkeley, CA, USA, 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Inadequate+Equilibria:+Where+and+How+Civilizations+Get+Stuck&author=Yudkowsky,+E.&publication_year=2017)]\n42. Ostrom, E. Governing the Commons: The Evolution of Institutions for Collective Action; Cambridge University Press: Cambridge, UK, 1990. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Governing+the+Commons:+The+Evolution+of+Institutions+for+Collective+Action&author=Ostrom,+E.&publication_year=1990)]\n43. Nisan, N.; Roughgarden, T.; Tardos, E.; Vazirani, V.V. Algorithmic Game Theory; Cambridge University Press: Cambridge, UK, 2007. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Algorithmic+Game+Theory&author=Nisan,+N.&author=Roughgarden,+T.&author=Tardos,+E.&author=Vazirani,+V.V.&publication_year=2007)]\n44. Tramèr, F.; Zhang, F.; Juels, A.; Reiter, M.K.; Ristenpart, T. Stealing Machine Learning Models via Prediction APIs. In Proceedings of the USENIX Security Symposium, Vancouver, BC, Canada, 16–18 August 2016; pp. 601–618. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Stealing+Machine+Learning+Models+via+Prediction+APIs&conference=Proceedings+of+the+USENIX+Security+Symposium&author=Tram%C3%A8r,+F.&author=Zhang,+F.&author=Juels,+A.&author=Reiter,+M.K.&author=Ristenpart,+T.&publication_year=2016&pages=601%E2%80%93618)]\n45. Shorter, G.W.; Miller, R.S. 
High-Frequency Trading: Background, Concerns, and Regulatory Developments; Congressional Research Service: Washington, DC, USA, 2014; Volume 29. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=High-Frequency+Trading:+Background,+Concerns,+and+Regulatory+Developments&author=Shorter,+G.W.&author=Miller,+R.S.&publication_year=2014)]\n46. Wang, Y.; Chaudhuri, K. Data Poisoning Attacks against Online Learning. arXiv, 2018; arXiv:1808.08994. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Data+Poisoning+Attacks+against+Online+Learning&author=Wang,+Y.&author=Chaudhuri,+K.&publication_year=2018)]\n47. Chen, X.; Liu, C.; Li, B.; Lu, K.; Song, D. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv, 2017; arXiv:1712.05526. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Targeted+backdoor+attacks+on+deep+learning+systems+using+data+poisoning&author=Chen,+X.&author=Liu,+C.&author=Li,+B.&author=Lu,+K.&author=Song,+D.&publication_year=2017)]\n48. Xiao, H.; Xiao, H.; Eckert, C. Adversarial Label Flips Attack on Support Vector Machines. Front. Artif. Intell. Appl. **2012**, 242. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Adversarial+Label+Flips+Attack+on+Support+Vector+Machines&author=Xiao,+H.&author=Xiao,+H.&author=Eckert,+C.&publication_year=2012&journal=Front.+Artif.+Intell.+Appl.&volume=242&doi=10.3233/978-1-61499-098-7-870)] [[CrossRef](https://doi.org/10.3233/978-1-61499-098-7-870)]\n49. Dixon, H.D. Keeping up with the Joneses: Competition and the evolution of collusion. J. Econ. Behav. Organ. **2000**, 43, 223–238. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Keeping+up+with+the+Joneses:+Competition+and+the+evolution+of+collusion&author=Dixon,+H.D.&publication_year=2000&journal=J.+Econ.+Behav.+Organ.&volume=43&pages=223%E2%80%93238&doi=10.1016/S0167-2681(00)00117-7)] [[CrossRef](https://doi.org/10.1016/S0167-2681(00)00117-7)]\n50. Sandberg, A. There is plenty of time at the bottom: The economics, risk and ethics of time compression. Foresight **2018**, 21, 84–99. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=There+is+plenty+of+time+at+the+bottom:+The+economics,+risk+and+ethics+of+time+compression&author=Sandberg,+A.&publication_year=2018&journal=Foresight&volume=21&pages=84%E2%80%9399&doi=10.1108/FS-04-2018-0044)] [[CrossRef](https://doi.org/10.1108/FS-04-2018-0044)]\n51. Leibo, J.Z.; Zambaldi, V.; Lanctot, M.; Marecki, J.; Graepel, T. Multi-agent reinforcement learning in sequential social dilemmas. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, São Paulo, Brazil, 8–12 May 2017; pp. 464–473. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Multi-agent+reinforcement+learning+in+sequential+social+dilemmas&conference=Proceedings+of+the+16th+Conference+on+Autonomous+Agents+and+MultiAgent+Systems&author=Leibo,+J.Z.&author=Zambaldi,+V.&author=Lanctot,+M.&author=Marecki,+J.&author=Graepel,+T.&publication_year=2017&pages=464%E2%80%93473)]\n52. Leibo, J.Z.; Hughes, E.; Lanctot, M.; Graepel, T. Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research. arXiv, 2019; arXiv:1903.00742. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Autocurricula+and+the+Emergence+of+Innovation+from+Social+Interaction:+A+Manifesto+for+Multi-Agent+Intelligence+Research&author=Leibo,+J.Z.&author=Hughes,+E.&author=Lanctot,+M.&author=Graepel,+T.&publication_year=2019)]\n53. Lowe, R.; Wu, Y.; Tamar, A.; Harb, J.; Abbeel, O.P.; Mordatch, I. Multi-agent actor-critic for mixed cooperative-competitive environments. Adv. Neural Inf. Process. Syst. **2017**, 6379–6390. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Multi-agent+actor-critic+for+mixed+cooperative-competitive+environments&author=Lowe,+R.&author=Wu,+Y.&author=Tamar,+A.&author=Harb,+J.&author=Abbeel,+O.P.&author=Mordatch,+I.&publication_year=2017&journal=Adv.+Neural+Inf.+Process.+Syst.&pages=6379%E2%80%936390)]\n54. Russell, S. Comment to Victoria Krakovna, Specification Gaming Examples in AI. 2018. Available online: (accessed on 12 March 2019).\n55. Taylor, J. Quantilizers: A safer alternative to maximizers for limited optimization. In Proceedings of the Workshops at the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Quantilizers:+A+safer+alternative+to+maximizers+for+limited+optimization&conference=Proceedings+of+the+Workshops+at+the+Thirtieth+AAAI+Conference+on+Artificial+Intelligence&author=Taylor,+J.&publication_year=2016)]\n56. Shalev-Shwartz, S.; Shammah, S.; Shashua, A. On a formal model of safe and scalable self-driving cars. arXiv, 2017; arXiv:1708.06374. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=On+a+formal+model+of+safe+and+scalable+self-driving+cars&author=Shalev-Shwartz,+S.&author=Shammah,+S.&author=Shashua,+A.&publication_year=2017)]\n57. Manheim, D. Oversight of Unsafe Systems via Dynamic Safety Envelopes. arXiv, 2018; arXiv:1811.09246. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Oversight+of+Unsafe+Systems+via+Dynamic+Safety+Envelopes&author=Manheim,+D.&publication_year=2018)]\n58. Liu, Y.; Nie, L.; Liu, L.; Rosenblum, D.S. From action to activity: Sensor-based activity recognition. Neurocomputing **2016**, 181, 108–115. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=From+action+to+activity:+Sensor-based+activity+recognition&author=Liu,+Y.&author=Nie,+L.&author=Liu,+L.&author=Rosenblum,+D.S.&publication_year=2016&journal=Neurocomputing&volume=181&pages=108%E2%80%93115&doi=10.1016/j.neucom.2015.08.096)] [[CrossRef](https://doi.org/10.1016/j.neucom.2015.08.096)]\n59. Yampolskiy, R.; Fox, J. Safety engineering for artificial general intelligence. Topoi **2013**, 32, 217–226. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Safety+engineering+for+artificial+general+intelligence&author=Yampolskiy,+R.&author=Fox,+J.&publication_year=2013&journal=Topoi&volume=32&pages=217%E2%80%93226&doi=10.1007/s11245-012-9128-9)] [[CrossRef](https://doi.org/10.1007/s11245-012-9128-9)]\n60. Christiano, P.; Shlegeris, B.; Amodei, D. Supervising strong learners by amplifying weak experts. arXiv, 2018; arXiv:1810.08575. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Supervising+strong+learners+by+amplifying+weak+experts&author=Christiano,+P.&author=Shlegeris,+B.&author=Amodei,+D.&publication_year=2018)]\n61. Irving, G.; Christiano, P.; Amodei, D. AI safety via debate. arXiv, 2018; arXiv:1805.00899. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=AI+safety+via+debate&author=Irving,+G.&author=Christiano,+P.&author=Amodei,+D.&publication_year=2018)]\n\n \n© 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ().", "url": "https://www.mdpi.com/2504-2289/3/2/21", "title": "Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2019-05-31T22:00:00Z", "authors": ["David Manheim"], "summary": [], "id": "474cc31d371814d2bfb5447b3dc8572c"} {"text": "Abstract\n--------\n\n**:**\nThis essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies are created and implemented. This creates a new set of key considerations for the field of AI governance and should influence the action of future policymakers. This essay examines some of the theories of the policymaking process, how they compare to current work in AI governance, and their implications for the field at large and ends by identifying areas of future research.\n\n\nKeywords: [policymaking process](/search?q=policymaking+process); [AI risk](/search?q=AI+risk); [typologies of AI policy](/search?q=typologies+of+AI+policy); [AI governance](/search?q=AI+governance)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n 1. Introduction\n----------------\n\nArtificial intelligence, especially artificial general intelligence (AGI), has the ability to dramatically impact the future of humanity [[1](#B1-BDCC-03-00026)]. Notable researchers, such as Bostrom (2014), have expressed concern that advanced forms of artificial intelligence, if not aligned to humans values and wellbeing, could be potentially disastrous and pose an existential threat to our civilization [[2](#B2-BDCC-03-00026)]. The two main branches of research on risk from advanced AI are AI safety, which seeks to ensure that advanced AI is engineered in such a way that it will not pose a threat; and AI governance, which focuses on political and social dynamics (AI macrostrategy) and forecasting timelines for AI development [[3](#B3-BDCC-03-00026)]. Issues that AI governance looks at include arms race dynamics, social and economic inequality, public perceptions, issues in surveillance, and more.There has been a modest amount of work on developing policy solutions to AI risk, with a recent literature review by Baum (2017) [[4](#B4-BDCC-03-00026)] and Everitt (2016) [[5](#B5-BDCC-03-00026)] covering most of it. Some authors have focused on the development of AGI, with proposed solutions ranging from Joy (2000) [[6](#B6-BDCC-03-00026)] who calls for a complete moratorium on AGI research, to Hibbard (2002) [[7](#B7-BDCC-03-00026)] and Hughes (2007) [[8](#B8-BDCC-03-00026)], who advocate for regulatory regimes to prevent the emergence of harmful AGI, to McGinnis (2010), who advocates for the US to steeply accelerate friendly AGI research [[9](#B9-BDCC-03-00026)]. 
Everitt et al. (2017) [[5](#B5-BDCC-03-00026)] suggest that there should be an increase in AI safety funding. Scherer (2016) [[10](#B10-BDCC-03-00026)], however, at least in the context of narrow AI, argues that tort law and the existing legal structures, along with the concentration of AI R&D in large visible corporations like Google, will provide some incentives for the safe development of AI. Guihot et al. (2017) [[11](#B11-BDCC-03-00026)] also note that attempts to future-proof laws tend to fail, and that pre-emptive bans and regulation tend to hurt the long-term health of the field, arguing instead for a soft-law approach. Other authors have focused on the community of researchers, with Baum (2017) [[12](#B12-BDCC-03-00026)] promoting a social psychology approach to fostering community self-regulation and activism, and Yampolskiy and Fox (2013) [[13](#B13-BDCC-03-00026)] advocating for review boards at universities and other research organizations. Some authors have advocated for an international approach to resolving AI risk. Erdelyi and Goldsmith (2018) [[14](#B14-BDCC-03-00026)] advocated for an international soft-law regime that would serve as an “international forum for discussion and engage in international standard setting activities”. Erdelyi and Goldsmith’s proposal, however, is not targeted towards AGI risk, although it could scale up to AGI. Wilson (2013) [[15](#B15-BDCC-03-00026)] and Bostrom (2014) [[2](#B2-BDCC-03-00026)], on the other hand, call for some form of international agreement or control on AGI R&D, with the former advocating specifically for a treaty. These approaches are necessary given some of the risks, including states pursuing AGI for unprecedented military and economic strength with destabilizing effects (Shulman 2009) [[16](#B16-BDCC-03-00026)], and the concentration of wealth and political influence in large corporations (Goertzel 2017) [[17](#B17-BDCC-03-00026)]. Questions regarding whether AGI R&D should be open sourced have been explored by Goertzel (2017) [[17](#B17-BDCC-03-00026)] and Bostrom (2017) [[18](#B18-BDCC-03-00026)]. Shulman (2009) [[16](#B16-BDCC-03-00026)] and Dewey (2015) [[19](#B19-BDCC-03-00026)] follow a different approach and advocate for a global surveillance regime to monitor for rogue AGI projects, with Goertzel (2012) [[20](#B20-BDCC-03-00026)] suggesting that a limited form of AGI could do this. As far as current and future research goes, the Future of Humanity Institute has developed an extensive research agenda [[3](#B3-BDCC-03-00026)] for AI governance, with three main research areas: the technical landscape, which seeks to understand what artificial intelligence can do and its limits; AI politics, which looks at the political dynamics between firms, governments, publics, etc.; and ideal governance, which looks at possible ways and arrangements for stakeholders to cooperate. This research agenda highlights key issues such as security challenges, international political dynamics and distribution of wealth, and arms race dynamics. 
Other researchers have published reports dealing with issues such as dual use, similarity, and possible interactions with the cybersecurity community [[21](#B21-BDCC-03-00026)] the role and limits of principles for AI ethics [[22](#B22-BDCC-03-00026)], justice and equity [[23](#B23-BDCC-03-00026)], and AGI R&D community norms [[5](#B5-BDCC-03-00026)].Thus far, much of the literature on AI risk has discussed policy issues, but few studies have talked about how policies are made or how the dynamics of the policymaking process affect their work. Calo (2017) [[23](#B23-BDCC-03-00026)] touches upon the problem, noting that there is a lack of institutional expertise, policy tools, and flawed mental models of what AI is, which plague governments’ abilities to regulate AI. Scherer (2016) [[10](#B10-BDCC-03-00026)] cites certain aspects of the technology itself, such as its ability to be created without special equipment, as a hindrance to the ability to regulate it. Everitt et al. (2017) [[5](#B5-BDCC-03-00026)] also briefly discusses policy and political dynamics in the context of AGI researchers, suggesting that AGI researchers should work with other organizations to mitigate the negative dynamics of framing AGI development as an arms race [[24](#B24-BDCC-03-00026)]. Finally, the Future of Humanity Institute’s research agenda for AI governance [[3](#B3-BDCC-03-00026)] touches on policymaking in a few ways, noting that public opinion can have major impacts on technology policy and governance schemes can be subject to mission drift and asking how to facilitate the transition from the present state of affairs to our ideal vision for the future.This paper continues along the lines of facilitating the transition from the present state to “our ideal vision” by exploring the missing discussion on the role of policymaking in AI governance. Research thus far has largely focused on what problems are out there and what should be done to fix them. However, this paper does not only argue that proposal implementation that takes into account the features of the ‘policymaking cycle’ may be vital to success in reducing AI risk but that this model actually has massive implications for the research field as a whole. Proposals will be much more effective if they are informed by an understanding of the political and administrative considerations of consensus-building and implementation and could make the difference between making an impact or none at all.The goal of this paper is to attempt to create a clearer launching point for discussions on the key considerations of the policymaking process for AI governance and the political considerations underpinning policy solutions for AI risk. The policymaking process includes: Problem identification/agenda setting, policy formulation, policy adoption, implementation, and evaluation. Each step of the policymaking process will have different aspects that are critical for the creation of public policies that are able to effectively reduce AI risk. Each section covers a brief overview of the literature, assesses its implications for the greater AI governance field, and identifies different points where further research is needed. The papers we selected are the primary sources of these different theories of the policymaking process.The first section maps out and defines terms in the field of AI governance, to give readers a better understanding of how our paper contributes to the way AI governance is approached. 
We also created a typology for AI risk policies, to provide an understanding as to how AI governance has implications in a diverse range of policy communities and how that interplays with strategic considerations. The next section goes through each step of the policymaking cycle, with a basic overview of some of the literature and discussing its implications for AI governance. It should be noted that the literature covered in each field is not extensive, and further research may be necessary. The last sections cover some of the key implications and limitations. 2. Terms and Definitions\n-------------------------\n\nOn a broad level, the question of mitigating AI risk, or risks that stem from the development and use of artificial intelligence (such as global catastrophic risks from misaligned AI or military instability from adopting new types of weapons), is broken down into AI technical safety and AI governance. AI technical safety focuses on solving computer science problems around issues like misalignment and the control problem for AGI [[2](#B2-BDCC-03-00026)]. AI governance, on the other hand, studies how humanity can best navigate the transition to advanced AI systems [[3](#B3-BDCC-03-00026)]. This would include the political, military, economic, governance, and ethical considerations and aspects of the problem that advanced AI has on society.AI governance can be further broken down into other components, namely the technical landscape (how technical developments depends on inputs and constraints and affects rates or domains of capability improvement), ideal governance (what would we do ideally if we could cooperate), and AI politics (how AI will affect domestic politics, political economy, international relations, etc.) [[3](#B3-BDCC-03-00026)]. From these research areas, the problems and solutions necessary to discuss AI policy can be defined. This paper, however, refers to this as AI risk policy to differentiate policies intended to reduce catastrophic risk to society versus policies that apply to AI in any other circumstances.Policies, however, must be implemented into the legal statutes of government in order to work. Flynn (2017) [[25](#B25-BDCC-03-00026)], in the blog post that defines ‘AI strategy’ [[3](#B3-BDCC-03-00026)], also defines ‘AI policy implementation’, which is carrying out the activities necessary to safely navigate the transition to advanced AI systems. This definition implies it is action-oriented work done in government, policy, lobbying, funding, etc. As mentioned in the endnotes of Flynn (2017), however, there is an implicit gap between AI strategy (governance) research and policy implementation, with no AI policy research that identifies mechanisms for actualizing change.However, there is another gap that this paper intends to address, which is that the processes that create and implement policies (the policymaking process) often either distort the original policy, fall short of, or even work counter to the intended outcome, or render certain policy options unactionable. Similarly, The AI governance: A Research Agenda report has neither this consideration nor a definition of policy implementation. 
This paper intends to put forth a definition of AI policymaking strategy to fill this gap. AI Policymaking Strategy is defined as: a research field that analyzes the policymaking process and draws implications for policy design, advocacy, organizational strategy, and AI governance as a whole. This goes further than the concern listed in the endnotes and also develops an upstream approach to AI governance, where work on implementation in turn feeds back and can provide new insights to AI governance research. AI policymaking strategy would fit under the definition of AI governance as its own subfield, in the same way the technical landscape is, and would help to clarify questions and considerations in the other subfields. AI politics and ideal governance seem to ask questions about what risks humanity faces and what it ought to do about them, approaching the world as if from above and making corrections, whereas policymaking strategy asks questions about how and what can be done, given both present and future circumstances and the methods at hand. It approaches the world from the perspective of agents who individually influence the trajectory of the world. These two groups, when they work together, should ideally converge on a policy program that both works and is pragmatic, consisting of policies that both aim at the correct goals and can actually get there. An example of this would be the proposed solution by Goertzel (2012) [[20](#B20-BDCC-03-00026)] of creating a surveillance artificial narrow intelligence that monitors the world to prevent the development of superintelligence. Let us say that Policy X is written to do this. However, Policy X, like all other policies, is not simply a solution to the problem but a set of intended actions and procedures taken by the government that must first be passed by government [[26](#B26-BDCC-03-00026)]. This raises three questions: Can this policy realistically be implemented by government? How do policymakers ensure that Policy X results in the intended outputs and outcomes? And how can policymakers create policy and advocacy strategies to increase the chances of both of these happening? For example, while Policy X is intended to install a surveillance apparatus to prevent superintelligence, would Policy X still have that output and outcome after going through the legislature and executive branch? Is there a chance over time that it would result in mission creep? Policymakers can also develop strategies to ensure that Policy X has its intended outcomes, such as oversight mechanisms within the policy itself. Policymakers can go a step further and ask how the policymaking process itself creates implications for the AI governance field. For example, are there restrictions within the policymaking process that impact timelines for reducing risk, such as how fast governments can act or create new laws? Could some form of upstream innovation be achieved, where the policymaking process inspires or generates new ideas for AI governance [[27](#B27-BDCC-03-00026)]? 3. Typologies of AI Policy\n---------------------------\n\nBefore this paper can delve into the policymaking process, AI policy needs to be further refined to understand what kind of policies are being made. The point of this section is to show that AI risk policies are not monolithic; rather, there are multiple approaches to achieving the same goal, and each set of these policies carries a different set of political difficulties. 
It also begs the question in terms of AI governance as a whole as to which sets of policies should be implemented and when, and which policies should be considered relevant to AI risk. In the same way that Bostrom (2014) [[2](#B2-BDCC-03-00026)] argues that there may be a preferred order of technological development, there is a similar analog with AI risk policies where there is a strategic order to policies that should be attempted to be implemented, whether it is because their political-capital cost is lower, the cost of failure is lower, or because it helps with future efforts to implement policies (such as the creation of an advisory body).A typology of AI policies already has some previous explorative work to build on. Brundage (2016) [[28](#B28-BDCC-03-00026)] proposed the idea of De Facto AI policies. These are policies that already exist and are relevant to AI. These are further broken down into direct, indirect, and relevant policies. Direct policies are policies that specifically target AI, such as regulations on self-driving cars. Indirect policies are policies that do not specifically target AI but generally impact the development and diffusion of technologies (including AI), such as intellectual property laws and tort law. Relevant policies do not immediately impact AI but are still worth considering because of their impact, such as education policy or the use of electronic medical records.Brundage (2016) [[27](#B27-BDCC-03-00026)] in this paper, however, does not talk about AI risk policy but rather existing policies around AI as a whole. However, the classification used in this paper is useful overall and can be extended into AI risk policy. Instead of whether or not it directly or indirectly affects AI, AI risk policy can be classified into whether or not it directly or indirectly aims at reducing AI risk. Direct AI risk policies would explicitly govern the use, development, deployment, etc. of AI to reduce risk. Examples of direct AI risk policy could include funding for AI safety research, rules for the development of AGI, international agreements on AI, etc. Indirect AI risk policies would either affect AI but not explicitly govern it or address consequences of the use of advanced AI systems. This could include both policies that affect AI and those that are AI-agnostic. For example, a policy that puts in place stronger protections for privacy in general would reduce the amount of training data available, and thus the speed of AI development, and could be considered an indirect approach. An AI-agnostic policy, for example, would be basic minimum income to address technological unemployment, which could be considered a risk if it leads to societal destabilization. AI risk relevant policies would affect neither AI nor the consequences of it but would rather make it easier for sound AI risk policies to be developed and implemented, such as changing the rules and procedures of government itself to alleviate the pacing problem.There is another layer of classification that should be applied to AI risk policy based on Lowi’s Typology [[29](#B29-BDCC-03-00026)]. Lowi categorizes policies into regulatory, distributive, redistributive, and constituency categories. Regulatory policies regulate one’s behavior, restricting or incentivizing certain actions, such as the mandating of seat belts in cars. Distributive policies are policies that take money from the general treasury and use them for a specific project that directly benefits one group, such as a dam or research grants. 
Redistributive policies are those which fundamentally alter the distribution of wealth and resources in the whole of society, such as tax and welfare policies. Constituency policies are those that alter the composition and the rules and regulations of government, such as creating a new executive agency.Each one of these typologies has with it a certain set of political conditions, as they impact people, businesses, and members of government differently. For example, both basic minimum income and the creation of AI safety standards are policies that are intended to reduce existential risk. However, both of these policies will have a different set of political pressures. Basic minimum income is a redistributive policy, which would move substantial amounts of wealth between classes of society. This would mean that it would likely become a nationwide controversial issue with two opposing camps based largely on who benefits and who loses. By contrast, AI safety standards are a regulatory policy, and while there would be two groups opposed to each other on the issue (unless it comes in the form of voluntary self-regulation by the industry), the political factors around it would look different. Regulatory policies are not usually salient or popular to the general public, and thus, the political battle would be largely limited to regulators, experts, and the business class. This typology will help us to understand how the different policies will be treated in the policymaking process. In other words, policy creates politics. Further work on developing this might be useful for understanding the likelihood of policies being adopted and could shift strategies for which policies to pursue. 4. The Policymaking Cycle\n--------------------------\n\n#### 4.1. Problem Identification, Agenda Setting, and Policy Formulation\n\nThe first few steps of the policymaking process: Problem identification, agenda setting, and policy formulation, are usually tied together [[30](#B30-BDCC-03-00026)], including in a so-called ‘multiple streams framework’. The multiple streams framework attempts to explain how policies reach the agenda when policy entrepreneurs are able to couple the policy, politics, and problems streams to open up a policy window, the opportune time when all the conditions are right to get a policy on the agenda [[31](#B31-BDCC-03-00026)].#### 4.1.1. Problem Stream\n\nThere are many problems in society. However, the public does not seek government intervention for many of these problems. There are some basic requirements for an issue in society to become a policy problem, which is that it is something that the public finds to be intolerable, government can do something about, and is generally seen as a legitimate area for government to work on [[30](#B30-BDCC-03-00026)]. Policy problems can also arise when there are two or more identifiable groups who enter into conflict in a policy arena for resources or positions of power [[32](#B32-BDCC-03-00026)].The first condition for an issue to be considered a policy problem is that it is something that the public or a group finds to be intolerable. Indicators such as statistics can help to identify a problem. These can be used objectively, for understanding conditions in society, or politically, when they are used to justify a political position: for example, using gun violence statistics as an argument for gun control. What is considered an issue over time changes because of the evolution of society. 
Changes in values, distribution of resources, technology, etc. will change what issues are considered in society [[30](#B30-BDCC-03-00026)]. In AI governance, indicators such as the rate of technological progress or the proliferation of autonomous weapons could serve as examples. Creating a list of politically salient indicators or metrics could be useful for creating long-term strategies and goals. How the issue is framed is very important for whether or not it will be considered a policy problem [[30](#B30-BDCC-03-00026)]. Is mandating seatbelts in cars beneficial for public safety? Or is it paternalistic? Are these problems legitimate for government to handle? The framing of a problem can have an overwhelming impact on whether or not it is considered a problem appropriate for government to even formulate policy on. It can also impact the content of the policy. Whether access to transportation for handicapped people is defined as a transportation problem or a civil rights issue determines whether the acceptable solution involves buying special needs vans or costly upgrades to buses and subways to ensure equal access. Framing can also raise the priority of a policy problem by, for example, calling it a crisis and raising a sense of urgency. The question of framing is also incredibly important for AI governance. For example, would autonomous weapons make war more humane by removing humans? Or would they distance us from the violence and make us more willing to use them? The AI governance community needs to think about how these issues ought to be framed, and about the consequences of doing so. In order for an issue to become part of the system agenda, or what the public or specific communities are discussing, there must be a focusing event. Focusing events are specific events that draw attention to a problem in society and the reasons behind it. The Sandy Hook school shooting, for example, is a focusing event that drew attention to America’s gun laws. Moreover, events that occur outside of sector-specific focusing events [[31](#B31-BDCC-03-00026)], or past policies on these issues, can have a large impact, especially on the types of solutions used. For AI governance, a “Sputnik moment” such as AlphaGo beating Lee Sedol is an example: it drew considerable media attention and generated much discussion about the future of AI, especially in China [[33](#B33-BDCC-03-00026)]. Understanding how to exploit these events for the AI governance agenda will be key to generating support and getting policies on the agenda. It is also important to stay on top of these events to understand the direction society is heading in, and to pre-empt or avert less productive or dangerous framings that might feed into arms races [[31](#B31-BDCC-03-00026)]. For example, Yampolskiy (2018) details a list of past failures by AI-enabled products [[34](#B34-BDCC-03-00026)]. How could work like this be used to influence the problem-setting? Could other AI risk researchers expand on it and build that work into a more thorough project to be used to draw attention to AI risk? Or could attempts such as this backfire and cause pre-emptive stigmatization or ineffective policies?\n#### 4.1.2. Politics Stream\n\nThe politics stream is the combination of the national mood or public opinion, campaign groups, and administrative/legislative change. 
Decision-makers in government keep tabs on the swaying opinions of the masses and interest groups and act in ways that present them favorably, changing items on the agenda to stay relevant and popular and to obscure unpopular policy stances. Changes in administration, especially when there is a major shift in the ideological composition of the institution, have a strong impact on what is or is not included on the agenda [[31](#B31-BDCC-03-00026)]. In AI governance, and for people involved in advocating and implementing policies, maintaining a close eye on domestic and international politics will be key. Knowing when and what kind of policy to advocate for, and to whom, is crucial not only for saving time and energy but also for legitimacy. Trying to sell a nationalistic administration on greater UN involvement will probably not help someone further their policy proposals and may even damage their (and their coalition’s) political capital and cause. However, other forms of cooperation, such as bilateral cooperation for reducing the risk of accidents [[35](#B35-BDCC-03-00026)], may be more promising. AI governance researchers will need to consider how the political landscape should shape their recommendations or policy proposals. Not only would it determine whether their recommendations would ever be considered, but, if a recommendation were implemented, how would it affect the national mood? Would the next administration simply walk it back? How would other interest groups react and impact the long-term ability to reduce risk? If administration changes result in a flip-flop of ideology, what does that mean for AI risk policies associated with the past administration? Could an AI risk policy group maintain influence throughout changing administrations? All of these have implications for our ability to reduce AI risk, and this means that the policymaking strategy will not only have to be robust but also flexible enough to survive changing political conditions.\n#### 4.1.3. Policy Stream\n\nThe policy stream, which is in essence the policy formulation aspect of the policy cycle, is the “soup” of ideas generated by policymakers [[35](#B35-BDCC-03-00026)] when deciding what to do about a problem. Different policy networks create policies differently, with different levels of innovativeness and speed [[35](#B35-BDCC-03-00026)]. Understanding these differences and examining their implications for the AI governance field might be useful for understanding its long-term impact and the specific strategic routes it should take. In other words, how should the AI governance research field itself be organized so that it promotes useful and relevant solutions? Despite the staggering number of policy proposals coming out, only a handful will ever be accepted. These policies compete with one another and are selected on a set of criteria, which include technical feasibility, value compatibility [[35](#B35-BDCC-03-00026)], budgetary and political costs, and public acceptance. Policies that work will also be technically sound, with no major loopholes and a clear rationale for how their provisions would actually achieve the policy objectives [[30](#B30-BDCC-03-00026)]. This creates some key considerations for the field. 
It means that many ideas are functionally useless due to their political limitations, unlikely to be adopted in the face of easier or less politically costly options, lacking in viable policy mechanisms to achieve their goal, or otherwise intractable prospects for government. Even if all of the above conditions are resolved, loopholes and unintended consequences may neuter the policy or make conditions worse. This vastly reduces the space of possible solutions. Further, even though the feasibility of implementation or prevailing values might change over time, it is still a matter of how much and when. This raises the question: What problems can be solved when, how, and by whom? What does that mean for the big-picture strategic approach? Where should policies originate? While there are many policy ideas in circulation, only a few are ever seriously considered for adoption. Sources of these policies include (in the United States Federal Government, for example) the President along with the Executive Office of the President, Congressional leaders, government agencies (mostly small incremental changes and adjustments), temporary organizations or ‘adhocracies’ that serve to investigate specific topics, and interest groups whose topical expertise and political power can sometimes make them de facto policymakers. Each of these sources has a differing level of legitimacy, influence, and ability to make policy changes. A question to consider is not only where in the policy network AI risk policymakers should focus on making these policies, but also where they can best advocate for the creation of additional bodies like adhocracies to create additional policies, and what implications that has for the field at large. With regard to the policy formulation phase of policymaking, a continuum of political environments has been described such that on one extreme there are policies with publics and on the other there are policies without publics [[36](#B36-BDCC-03-00026)]. When policies are formulated, it is important to consider the political environments relevant to the issue. The term “publics” refers to groups who have more than a passing interest in an issue or are actively involved in it. It appears that AI risks are issues where there are limited incentives for publics to form because the problems are remote, costly, or even abstract and uncertain. What does this mean for the AI safety community? How can interest groups be created most effectively? How can these issues be best expressed so that they do not seem so remote, abstract, or uncertain?\n#### 4.1.4. Policy Windows and Policy Entrepreneurs\n\nThis framework assumes that policy decision-makers, the legislators and bureaucrats in government, exist in a state of ambiguity, where they do not have a clear set of preferences and each set of circumstances can be seen in more than one way. This cannot be resolved with more information, as it is not an issue of ignorance. 
The example that Zahariadis (2007) gives is that “more information can tell us how AIDS is spread, but it still will not tell us whether AIDS is a health, educational, political, or moral issue [[31](#B31-BDCC-03-00026)]”. Overall, the multiple streams framework describes government organizations as “organized anarchies”: institutional problems run rampant, goals are often unclear or underdefined, jurisdictions overlap, and a host of other problems force decision-makers to ration their time between problems, leaving too little time to form a clear set of preferences, make good use of information, or comprehend the problem well enough for sound policy decisions. In essence, decision-makers are far from rational. Instead, policy change depends on the ability of policy entrepreneurs to couple the three streams and manipulate decision-makers into advancing the entrepreneurs’ intended policy goals [[31](#B31-BDCC-03-00026)]. Policy entrepreneurs, the policymakers, advocates, interest groups, etc. who push to make specific legislative changes in their areas, only have a short window of time to have their proposals added to the formal agenda. That window opens when the right political environment, a timely problem, and a potentially acceptable solution all come together with a policy entrepreneur who can manipulate the situation to their advantage. Because decision-makers exist in a state of ambiguity, policy entrepreneurs are able to manipulate their interpretation of information to provide meaning, identity, and clarity. Policy entrepreneurs use different tools and tactics to manipulate the way decision-makers process information and to exploit their behavioral biases. Framing tactics, for example, can present a policy option as a loss relative to the status quo while downplaying the degree of loss it creates; this exploits loss-averse decision-makers and may push them towards more extreme options, like going to war to make up for small losses [[31](#B31-BDCC-03-00026)]. The manipulation of emotions through symbols, and the identity or social status of a decision-maker, can also pressure them to make certain choices; policies around flag-burning are a good example of this. Because decision-makers are under a great deal of stress and are time-constrained, the strategic ordering of decisions, or ‘salami tactics’, creates agreement in steps by reducing the total perceived risk of a policy [[31](#B31-BDCC-03-00026)]. The manipulation of symbols in the way that artificial intelligence is being framed today has already occurred. At first, anti-autonomous weapons advocates were describing ‘armed quadcopters’ as a serious problem, with little media attention [[37](#B37-BDCC-03-00026)]. These were rebranded as ‘slaughterbots’, and a short film was released with substantial media attention. However, what sort of long-run impact will this have on the field? While giving policymakers straight facts and solutions seems appealing, AI risk policymakers have to recognize that this is often impractical and may have to accept that tactics like framing are all but unavoidable for policy success. This raises the question: which tactics should they use, and how? Questions like these must be considered. 
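To keep the moving parts of the multiple streams framework straight, the sketch below restates its core claim in a few lines of Python: a policy window opens only when the problem, politics, and policy streams align and a policy entrepreneur is positioned to couple them. It is purely illustrative; the class names, attributes, and the all-or-nothing coupling rule are simplifying assumptions made for this example, not part of the framework's formal statement.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ProblemStream:
    """Conditions the public finds intolerable, tracked via indicators and focusing events."""
    salient_indicators: List[str] = field(default_factory=list)
    focusing_event: Optional[str] = None

    def ripe(self) -> bool:
        return bool(self.salient_indicators) and self.focusing_event is not None


@dataclass
class PoliticsStream:
    """National mood, organized interests, and the administration's receptivity."""
    administration_receptive: bool = False
    public_mood_supportive: bool = False

    def ripe(self) -> bool:
        return self.administration_receptive and self.public_mood_supportive


@dataclass
class PolicyStream:
    """A proposal from the 'soup' of ideas, judged on feasibility, value fit, and cost."""
    technically_feasible: bool = False
    value_compatible: bool = False
    budget_acceptable: bool = False

    def viable(self) -> bool:
        return self.technically_feasible and self.value_compatible and self.budget_acceptable


def window_opens(problem: ProblemStream, politics: PoliticsStream,
                 policy: PolicyStream, entrepreneur_present: bool) -> bool:
    """A policy window opens only when all three streams align and an
    entrepreneur is positioned to couple them (a deliberate simplification)."""
    return problem.ripe() and politics.ripe() and policy.viable() and entrepreneur_present


# Hypothetical scenario: an AI-incident focusing event, a receptive political moment,
# and a viable safety-standards proposal, with an entrepreneur ready to couple them.
if __name__ == "__main__":
    open_now = window_opens(
        ProblemStream(["rate of capability progress"], focusing_event="high-profile AI failure"),
        PoliticsStream(administration_receptive=True, public_mood_supportive=True),
        PolicyStream(technically_feasible=True, value_compatible=True, budget_acceptable=True),
        entrepreneur_present=True,
    )
    print("Policy window open:", open_now)
```

The fragility of the conjunction is the point: if any single input is missing, the window stays shut, which is why the questions below about how many windows AI risk policymakers will actually get matter so much.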
All of these dynamics deserve careful consideration. If there are some problems that can only be resolved through state action (such as an arms race), their solutions are dependent on the policymaking process and can therefore only be passed when policy windows open. How many of these opportunities do AI risk policymakers get? How many chances will they have to implement AI risk policies? These windows only open every once in a while, and they are often fragile. For example, Bill Clinton’s campaign in 1992 aimed to reform the healthcare system and made it a campaign priority, but his administration’s failure to pass the bill closed the window [[31](#B31-BDCC-03-00026)]. In other words, what impact does this have on AI governance and policy implementation timelines, and what does that mean for the field as a whole? However, in order for a policy entrepreneur to manipulate decision-makers, they must have access to them, which depends both on the legitimacy of their issue and on the legitimacy of the group itself and its interest. One of the ways that policy entrepreneurs increase their own influence is to create new decision points that they can exploit and to reduce the access of other groups [[32](#B32-BDCC-03-00026)]. AI risk policymakers and advocates will have to find some way to gain access to decision-makers. For example, working on near-term or non-existential AI risk issues might help someone build the social capital and network necessary to work on existential risk issues. This would not only make it easier for people in the field to implement their solutions but also make them gatekeepers to the decision-makers, which could help prevent policies that would increase existential risks (whether from AI or other sources) from getting through. This may be an area that needs further research. Aspects such as a group’s access to decision-makers, the advocating group’s legitimacy, the biases of the institution [[38](#B38-BDCC-03-00026)], and a group’s ability to mobilize resources will determine what gets added to the agenda, and the AI risk community will need to work on building all of these. AI policymakers will need to develop a strategy for getting the right people into the right places and for coordinating between different groups. Getting on the formal agenda is a competitive process, because there are fundamental limits to a decision-maker’s time and because the policy may be perceived to harm the interests of other groups. Opposing groups can use a variety of tactics to deny an issue agenda status, such as denying that the problem exists, arguing that it is not a problem for government, or arguing that the solution would have bad societal consequences. Other factors that could deny an issue agenda status include changing societal norms, political changes, or political leaders avoiding confrontation with an issue that hurts their interests. Thus, AI policymakers will need to know how to overcome and adapt to these changing situations and to other organizations trying to prevent their policies from being adopted. AI governance and policy experts will need to pay attention to the arguments being used for and against superintelligence, and to whether or not this will become a political issue. 
Baum (2018) notes that superintelligence is particularly vulnerable to what is known as politicized skepticism: skepticism that is not based on intellectual disagreement about the problem and good-faith attempts to understand the arguments, but rather on a desire to shut down concerns out of self-interest (or a conflict of interests). Some major AI companies, and even other academics, have criticized the idea of superintelligence out of what seems to be their own self-interest as opposed to genuine concern [[39](#B39-BDCC-03-00026)]. This would have a devastating impact on AI policy advocates, much as the tobacco industry significantly impeded scientific efforts to study the public health links between tobacco and cancer.\n#### 4.2. Policy Adoption\n\nThe next stage of the policy cycle is policy adoption, when decision-makers choose an option that adopts, modifies, or abandons a policy. This does not necessarily take the form of choosing from a buffet of completed pieces of policy, but rather of taking further action on the policy alternative that is preferable and more likely to win approval. At this point, after much bargaining and discussion, the policy choice may be only a formality, or there may be continued discussion and disagreement until a formal vote or decision is made. This is an important stage for AI policymakers to analyze, for the obvious reason that they will want their policy proposals to be chosen and so will need to understand the stage and design strategies accordingly. Further, as will be discussed later, when changes do occur, they can often bring with them wider changes in public policy [[40](#B40-BDCC-03-00026)], an implication that will need to be taken into account. The advocacy coalition framework is a theory of policy adoption, but it also incorporates every other aspect of the policy cycle. The theory describes the interactions of two or more ‘advocacy coalitions’: groups of people from a multitude of positions who coordinate to advocate for some belief or to implement some policy change (potentially across many fields) over an extended period of time [[41](#B41-BDCC-03-00026)]. These need not be single, explicitly delineated organizations like the National Rifle Association but could include loosely affiliated groups of organizations and/or individuals, all working towards the same goal. Building and maintaining coalitions will be one of the major tasks that AI policymakers will need to work on, and so examining this framework is highly valuable. What is it that binds a coalition together? All advocacy coalitions share some form of beliefs, and the advocacy coalition framework describes these with a hierarchical belief system. The deepest and broadest of these are deep core beliefs, which are normative positions on human nature, the hierarchy of value preferences (e.g., should we value liberty over equality?), the role of government, etc. Policy core beliefs are the next level of the hierarchy, which involves the extension of deep core beliefs into policy areas. Both of these levels are very difficult to change, as they involve fundamental values. This creates an issue: because differing fundamental and personal values lead to a lack of interaction, different coalitions often see the same information differently, which breeds distrust. 
Each may come to see the other side as “evil”, reducing the possibilities for cooperation and compromise [[41](#B41-BDCC-03-00026)]. The deeply held convictions about what a policy subsystem ought to look like are called policy core policy preferences and are the source of conflict between advocacy coalitions. They are the salient, long-running issues in that policy area. Policy core policy preferences shape the political landscape, dictating who allies with whom, who the enemies are, and what strategies coalitions take. The final level of the belief hierarchy is secondary beliefs, which cover procedures, rules, and matters of this nature. These are very narrow in scope and the easiest to change, requiring less evidence and little bargaining. Understanding the values and beliefs of different existing coalitions, groups, and individuals is key to building and maintaining new coalitions for AI policymakers. This brings up a few considerations. Since it is difficult for conflicting coalitions to work together, will AI policymakers have to choose certain coalitions to work with? What are the costs, benefits, and potential blowback of this? Since some policies related to AI risk are not in a mature policy field (and thus do not have established coalitions), what can be done to shape the field beforehand to their advantage and/or to promote cooperation among coalitions that are likely to form? Further, since secondary beliefs are relatively easy to change, what can be changed to help reduce existential risk? On a macro level, the advocacy coalition framework acts as a cycle. Relatively stable parameters exist in the status quo, since policy arenas usually come to some equilibrium where one coalition dominates the policy subsystem. Then, policy changes made by an advocacy coalition or an outside event (such as a mass shooting) create a fundamental change, whether in public opinion or in the rules and procedures governing a subsystem, which alters the initially stable parameters. This leads to a shift in power that allows another coalition to gain influence over the types of policies being adopted. However, especially in the case of controversial legislation, policies that must pass through multiple veto points will create access for multiple coalitions. This means that even a coalition that dominates a subsystem will not have the unilateral ability to dictate policies in some situations. Other subsystems, however, especially those with few decision-makers or an exceptionally influential decision-maker, can become highly monopolized. How to be resilient to these changes in conditions, how to steer conditions in directions beneficial to AI policymakers, and how to construct policy subsystems conducive to AI policymakers’ goals are all useful questions to consider. This theory describes policy adoption at a very broad level, but how do the decision-makers themselves decide which policies to move forward with? Different incentives and restrictions come into play at different levels of policymaking. For example, highly salient and popular issues are more likely to be influenced by popular opinion, whereas obscure technical issues will likely be determined by policy experts in that field. Different factors that affect both individual and group decision-makers also come into play, such as their personal, professional, organizational, and ideological values. 
For legislators, their political party and their constituency also play an overwhelming role in their decision-making. Understanding and mapping out these factors will be necessary for the successful implementation of AI risk policy. On top of these factors, decision-makers rarely have the time, the expertise, or even enough interest to come up with a fully rational approach to deciding most policies. In many cases, legislators will seek out the advice of other legislators and experts and follow their lead. Because this practice is widespread, a few key institutions and leaders often have disproportionate power. For those working in AI risk policy, it is necessary to understand these dynamics in order to craft the message for why policy change should occur and to know whom to target specifically to win widespread adoption from other decision-makers in the policy arena.\n#### 4.3. Policy Implementation\n\nPolicy implementation is a key step in the policymaking process. It is defined as “whatever is done to carry a law into effect, to apply it to the target population … and to achieve its goals” [[30](#B30-BDCC-03-00026)]. In other words, it is the activity through which adopted policies are carried into effect [[30](#B30-BDCC-03-00026)]. However, that is not to say that it is a distinct step that can be clearly separated from the others. Every implementation action can influence policy problems, resources, and objectives as the process evolves [[42](#B42-BDCC-03-00026)]; policy implementation can influence problem identification, policy adoption, etc. Two broad factors that have been offered to explain the success of a policy are local capacity and will [[42](#B42-BDCC-03-00026)]. In other words, is there enough training, money, and human resources, along with the right attitudes, motivation, and beliefs, to make something happen? It is suggested that the former can be influenced much more easily than the latter, since more money can be obtained and consultants can be hired. For AI risk, both questions are relevant: how to increase capacity and how to influence the influencers. With the former, it has been estimated that about $9-$20 million is currently spent on AI risk [[43](#B43-BDCC-03-00026),[44](#B44-BDCC-03-00026)]. With the latter, studying the opinions of the public as well as experts might be a useful approach. One survey [[45](#B45-BDCC-03-00026)] indicates that only 8% of top-cited authors in AI believe that human-level AI would be extremely bad (an existential risk) for humanity. A more recent survey [[46](#B46-BDCC-03-00026)] indicates that machine learning researchers assign, at the median, a 10% probability that human-level machine intelligence will result in a negative outcome and a 5% probability that it will have an extremely bad outcome (existential risk). The general public seems cautious, with a survey showing 82% of Americans believing that AI/robots should be managed carefully [[47](#B47-BDCC-03-00026)]. This part of the policymaking process is very difficult, as the literature is generally quite pessimistic about the ability of policies to bring social changes into effect [[48](#B48-BDCC-03-00026)]. However, the authors of the cited paper have identified conditions of effective implementation based on successful examples. 
These conditions are (a) the policy is based on a sound theory of getting the target group to behave in a desired way, (b) policy directives and structures for the target group are unambiguous, (c) the leaders implementing the policies are skillful with regard to management and politics and are committed to the goals, (d) the policy is supported by organized constituency groups and key legislators, as well as the courts, throughout the implementation process, and (e) the relative priority of the policy is not significantly undermined over time by other policies or socioeconomic changes (a simple checklist sketch at the end of this subsection illustrates these conditions). Additionally [[49](#B49-BDCC-03-00026)], a carefully drafted statute that incentivizes behavior change, provides adequate funds, expresses clearly ranked goals, structures the implementation process, and has few veto points is also vital to the success of a policy. With regard to AI governance, the ambiguity and complexity of the problem create a major hurdle to developing effective policies. These problems are nonlinear, very hard to predict, and may have the traits of wicked problems in the sense that solving one problem can create new problems. Breaking down AI risk policy into multiple domains, as discussed in the previous section, helps with creating somewhat less ambiguous objectives, such as changing the education system to be more conducive to technological growth. Even then, however, because many of the issues are either complex or have not happened yet, it is difficult to create concrete objectives and policies. AI risk is not like noise pollution, where there is an easily identifiable, manageable, and tractable problem. Further research could help to identify concrete and tractable issues that might lead to a reduction of risk. In addition, when trying to develop and implement policy, AI policymakers will need to keep in mind factors such as the extent of support in the executive branch and among outside organizations, how exactly the policy is written, and how these change throughout the policymaking cycle. Another key consideration for successful policy implementation identified in the literature is engaging with the community to increase readiness to accept and devote resources to policy-related problems. It has been acknowledged that there are no good evidence-based ways of achieving community buy-in. This is an area that might be useful to study in order to increase the chances of successfully reducing AI risk. There are different stages of community readiness, ranging from no awareness, denial, and vague awareness to the preplanning, preparation, initiation, and stabilization phases [[49](#B49-BDCC-03-00026)]. It is important to understand what counts as the community and what phases the different subcommunities of the AI safety field are in. Earlier, this paper mentioned surveys of AI experts suggesting that their readiness regarding AI risks is low. Other relevant experts, the public, and other types of subcommunities might have different levels of readiness. It has been suggested that “the more clearly the core components of an intervention program or practice are known and defined, the more readily the program or practice can be implemented successfully” [[49](#B49-BDCC-03-00026)]. In other words, policies and the steps for implementing them have to be very clearly expressed. What implications does this have for AI risk? Researchers and policymakers should evaluate how clearly core components have been expressed in this field and improve them as necessary.
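To make the checklist character of these conditions concrete, the short sketch below scores a hypothetical AI risk policy proposal against the five conditions of effective implementation listed above. It is an illustrative toy rather than an established assessment instrument: the field names paraphrase the conditions, and the example proposal (a mandatory safety-audit requirement) and its scores are invented for the example.

```python
from dataclasses import dataclass, fields


@dataclass
class ImplementationConditions:
    """The five conditions of effective implementation discussed above,
    recorded as simple booleans for a given policy proposal."""
    sound_causal_theory: bool        # (a) sound theory linking the policy to target-group behavior
    unambiguous_directives: bool     # (b) clear directives and structures for the target group
    skilled_committed_leaders: bool  # (c) implementing leaders are managerially and politically skilled
    sustained_outside_support: bool  # (d) constituency groups, legislators, and courts stay supportive
    priority_not_undermined: bool    # (e) priority not eroded by other policies or socioeconomic change


def readiness_report(name: str, conditions: ImplementationConditions) -> str:
    """Return a one-line report listing which conditions a proposal still fails."""
    unmet = [f.name for f in fields(conditions) if not getattr(conditions, f.name)]
    if not unmet:
        return f"{name}: all five implementation conditions met."
    return f"{name}: {len(unmet)} condition(s) unmet -> {', '.join(unmet)}"


# Hypothetical example: a mandatory safety-audit requirement for large AI systems.
if __name__ == "__main__":
    audit_policy = ImplementationConditions(
        sound_causal_theory=True,
        unambiguous_directives=False,     # e.g., "large AI system" is not yet precisely defined
        skilled_committed_leaders=True,
        sustained_outside_support=False,  # e.g., no organized constituency exists yet
        priority_not_undermined=True,
    )
    print(readiness_report("Mandatory safety audits (hypothetical)", audit_policy))
```

Even a toy checklist of this kind makes the subsection's point visible: several of the conditions, such as sustained outside support and protected priority, are political facts that careful policy drafting alone cannot guarantee.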
#### 4.4. Policy Evaluation\n\nThe final step in the policymaking cycle is policy evaluation. This includes activities related to determining the impact of the policy, whether it is achieving its goals, whether the rules and procedures it lays out are being followed, and what externalities or unintended consequences it has [[30](#B30-BDCC-03-00026)]. As explained before, policy evaluation does not have to occur only at this step. For example, the impact of a policy is already being estimated in the early stages. Anderson et al. highlighted different types of policy evaluation in their book but especially considered systematic evaluations of programs. This involves “the specification of goals or objectives; the collection of information and data on program inputs, outputs, and consequences; and their rigorous analysis, preferably through the use of quantitative or statistical techniques” [[30](#B30-BDCC-03-00026)]. Policy evaluation examines a policy to understand its impacts in multiple ways [[30](#B30-BDCC-03-00026)]. First, is the policy affecting the population that it intends to target? In AI risk policy, this could be anything from large tech companies, to AI researchers, to people affected by technological unemployment. Second, are there populations being affected that were not intended? These externalities could be positive or negative. Third, what are the benefits and costs associated with this policy? AI policymakers will want to ensure that their policies actually reduce risk and that the costs are not so astronomical that they become politically infeasible. Finally, what long-term costs and benefits does a policy have? This is especially important for AI risk policy, as decisions now could have a major impact on the long-term risk that AI poses. In AI governance and policymaking, research needs to be done on what sorts of indicators or metrics should be used to track the reduction of risk and on identifying what goals should be achieved. If the previous steps in the policymaking process have generated goals that are unclear or diverse, it is very difficult to evaluate the impact of the policy [[30](#B30-BDCC-03-00026)]. In that case, different decision-makers can more easily reach differing conclusions about the results of a program, or may not follow it at all [[30](#B30-BDCC-03-00026)]. How the goals of an AI risk program are defined is, therefore, very important. Another key consideration for policy evaluation is how to make sure that the results are objectively measured. Agency and program officials may be wary of the possible political consequences of the evaluation process [[30](#B30-BDCC-03-00026)]. If it turns out that the program was not useful or was even detrimental, this might have consequences for their influence and careers. Because of this, they might not be very interested in accurate evaluation studies, or they may hinder the process in some other way. There are many ways an evaluation of a policy might be ignored or attacked, such as claiming it was poorly done, the data were inadequate, or the findings inconclusive [[30](#B30-BDCC-03-00026)]. Thus, it is important that researchers are provided with high-quality, relevant, and accurate datasets. There is also the distinction between policy outputs and outcomes [[30](#B30-BDCC-03-00026)] to consider. Outputs are tangible actions taken or things produced, such as collecting taxes or building a dam. Outcomes, on the other hand, are the consequences for society, such as lower disposable income or cleaner air quality. 
Outputs do not always produce the intended outcomes, which is highly evident in areas such as social welfare policy, where policies may unintentionally trap people in poverty. For AI policymakers, it is very important to consider whether their policy outputs will have the intended consequences, and, if not, how to correct the policy.\n\nThe evaluation of a policy and the political responses to it can result in its termination [[30](#B30-BDCC-03-00026)]. Assuming that AI risk policymakers do not want their policies to be terminated or altered in a detrimental way, how can they make sure this does not happen? A policy being altered to become more effective might be a good thing, but termination can carry unpleasant and negative connotations. It might even have negative consequences for the community [[30](#B30-BDCC-03-00026)]. What exact consequences might it have politically? Further, it is important to remember that many policymakers’ time horizons extend only until the next election, so they often seek immediate results, sometimes before a policy’s returns come to fruition. While this may not affect all policies, as it mostly applies to salient policy areas like healthcare and education, AI policymakers should keep this in mind and try to understand how it might impact their work.\n\n5. Conclusions\n---------------\n\nThere are multiple policy options that could be chosen to either directly or indirectly reduce AI risk, as well as related policies that could support further efforts to reduce AI risk. Because different policy arenas have different political conditions, and the policymaking process itself poses a number of important challenges, questions arise as to which policies are chosen and in what order, what strategies are used to get these policies passed and implemented by the government, and what the larger impact of these choices is on AI governance and risk as a whole. This paper argues that a new subfield of AI governance research on AI policymaking strategies should be further investigated to draw implications for how these policies should be designed and advocated for, and for how organizations should approach solving this issue.\n\n6. Limitations and Future Research\n-----------------------------------\n\nThis paper is intended to be a broad overview and a conversation starter for future research into this area. Thus, the depth of research in this paper is necessarily limited. However, it is expected that future work will further refine the line of thinking laid out above, along with further in-depth study of the different theories and their applicability to AI risk.\n\nOne of the major limitations of this paper is that the stages heuristic presented here has been heavily criticized, and its effectiveness is subject to debate. Sabatier (2007) has criticized it for not being a causal theory and for having a strong top-down bias, among other critiques. However, he also notes that much is up for debate, with some scholars, such as Anderson (2010), advocating for it. There are also a number of other theories that were not discussed in this paper, such as Institutional Rational Choice, the punctuated equilibrium framework, the policy diffusion framework, and other lesser-known theories. Future research is expected to explore which policy frameworks should be focused on in AI risk research.\n\nThe other limitation of this paper is that its applicability to the international governance of AI was not discussed. 
Future research that looks at how much these theories apply to foreign policy and the international governance of AI in general would be useful. If these theories have very limited or no applicability to the international governance of AI, then their usefulness would depend on how much work can be done to reduce AI risk through domestic policy.\n\nThroughout the paper, a number of key considerations have been raised. For convenience, a list of them has been curated below.\n\n7. Summary\n-----------\n\nThis part of the paper summarizes and lists some of the key questions and considerations brought up in the discussion.\n\nThesis-level consideration:\n* How do the politics and administrative mechanisms of policymaking affect how policies to mitigate AI risk are created and implemented?\nConsiderations from Typologies of Policies:\n* Are there AI risk policies that should be implemented first? What are the methods to decide this?\n* What types of policies should AI risk policymakers try to get implemented? Why should those types be prioritized?\n* What are the political considerations surrounding different sets of policies, and how does that affect their ability to be implemented?\nConsiderations from Problem Identification, Agenda Setting, and Policy Formulation:\n* Is this issue or policy legitimate?\n* Would the policy be supported by the current administration and be able to be maintained through changing administrations?\n* Which policies out of different sets of potential solutions are politically feasible?\n* Are there less costly alternative policies that AI risk policymakers will have to compete with?\n* How does attention to problems by different communities affect AI risk policymakers’ actions?\n* What types of framing of policy issues are most beneficial? What types are most dangerous?\n* Is there a way to determine how framing will determine policy content?\n* What focusing events have occurred in the field of AI?\n* How can AI risk policymakers utilize focusing events to further policy agendas?\n* What effect do other organizations have on reducing the legitimacy of AI risk?\n* What can be done to respond to these counter-movements effectively? What kinds of responses to objections are most convincing?\n* How many policy windows will there be for a particular issue? What does this mean for AI risk policymakers’ overall strategy?\n* What role should AI risk policy entrepreneurs play in AI governance?\n* How and where should AI risk policy entrepreneurs gain access in government?\nConsiderations from Policy Adoption:\n* Which policy alternatives are more likely to win approval and thereby improve the odds of success for AI risk reduction?\n* What strategies can be used to improve the chances that a preferred policy is adopted?\n* Which groups or individuals could join AI risk coalitions, what criteria are used to decide this, and what costs would their joining the coalition entail?\n* What role can organizations outside of AI risk play in furthering AI risk policymakers’ agenda?\nConsiderations from Policy Implementation:\n* Is this solution technically feasible for governments to implement?\n* Are there enough resources, will, and support from leaders and constituency groups for implementation to succeed?\n* Is the policy crafted in a way that effectively structures incentives for the target group?\n* Is the policy unambiguous? 
If so, then how will that affect its ability to be implemented?\n* Are the goals of the policy in conflict with any other policy or changes in society?\n* Are there any veto points in the policy’s statutes to prevent effective implementation?\n* How will the contents of a policy or the political factors surrounding it be affected during implementation?\n* Do the relevant communities accept the issue, and are they willing to devote resources to resolve it?\nConsiderations from Policy Evaluation:\n* Are the policy outputs having the intended outcomes?\n* What are the consequences of any unintentional outcomes?\n* What are the political factors surrounding the metrics that are being used to evaluate the policy?\n* Do the political costs or benefits of the policy have an impact on its success?\n* If the policy is terminated, will there be any negative political consequences?\n* How can AI risk policymakers update the policy? How can they prevent changes by other groups that would be harmful?\n* How will the limited time horizons of lawmakers and other groups affect the evaluation of the policy?\n\n\n\nAuthor Contributions\n--------------------\n\nConceptualization, B.P. and R.U.; methodology, B.P. and R.U.; writing—original draft preparation, B.P. and R.U.; writing—review and editing, B.P. and R.U.; project administration, B.P. and R.U.\n\nFunding\n-------\n\nThis research received no external funding.\n\nAcknowledgments\n---------------\n\nThe authors thank Matthijs Maas, Seth Baum, Sabrina Kavanaugh, Max Daniel, and the organizers and participants of the AI Safety Camp for useful comments and feedback.\n\nConflicts of Interest\n---------------------\n\nThe authors declare no conflict of interest.\n\nReferences\n----------\n\n1. Tegmark, M. Life 3.0: Being Human in the Age of Artificial Intelligence; Knopf: New York, NY, USA, 2017. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Life+3.0:+Being+Human+in+the+Age+of+Artificial+Intelligence&author=Tegmark,+M.&publication_year=2017)]\n2. Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, UK; New York, NY, USA, 2014. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Superintelligence:+Paths,+Dangers,+Strategies&author=Bostrom,+N.&publication_year=2014)]\n3. Dafoe, A. AI Governance: A Research Agenda; Governance of AI Program, Future of Humanity Institute: Oxford, UK, 2018; Available online: (accessed on 17 December 2018).\n4. Baum, S.D. A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy. Global Catastrophic Risk Institute Working Paper 17-1. 2017. Available online: (accessed on 11 November 2019).\n5. Everitt, T.; Lea, G.; Hutter, M. AGI Safety Literature Review. arXiv **2018**, arXiv:1805.01109. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=AGI+Safety+Literature+Review&author=Everitt,+T.&author=Lea,+G.&author=Hutter,+M.&publication_year=2018&journal=arXiv)]\n6. Joy, B. Why the future doesn’t need us. Wired **2000**, 8, 238–263. Available online: (accessed on 6 January 2019).\n7. Hibbard, B. Super-Intelligent Machines; Springer: New York, NY, USA, 2002. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Super-Intelligent+Machines&author=Hibbard,+B.&publication_year=2002)]\n8. Hughes, J.J. Global technology regulation and potentially apocalyptic technological threats. In Nanoethics: The Ethical and Social Implications of Nanotechnology; Allhoff, F., Ed.; John Wiley: Hoboken, NJ, USA, 2007; pp. 201–214. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Global+technology+regulation+and+potentially+apocalyptic+technological+threats&author=Hughes,+J.J.&publication_year=2007&pages=201%E2%80%93214)]\n9. McGinnis, J.O. Accelerating AI. Northwest. Univ. Law Rev. **2010**, 104, 366–381. Available online: (accessed on 14 March 2019). [[CrossRef](https://doi.org/10.2139/ssrn.1593851)]\n10. Scherer, M.U. Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harv. J. Law Technol. **2016**, 29, 354–398. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Regulating+artificial+intelligence+systems:+Risks,+challenges,+competencies,+and+strategies&author=Scherer,+M.U.&publication_year=2016&journal=Harv.+J.+Law+Technol.&volume=29&pages=354%E2%80%93398&doi=10.2139/ssrn.2609777)] [[CrossRef](https://doi.org/10.2139/ssrn.2609777)]\n11. Guihot, M.; Matthew, A.F.; Suzor, N.P. Nudging robots: Innovative solutions to regulate artificial intelligence. Vanderbilt J. Entertain. Technol. Law **2017**, 20, 385–456. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Nudging+robots:+Innovative+solutions+to+regulate+artificial+intelligence&author=Guihot,+M.&author=Matthew,+A.F.&author=Suzor,+N.P.&publication_year=2017&journal=Vanderbilt+J.+Entertain.+Technol.+Law&volume=20&pages=385%E2%80%93456)]\n12. Baum, S.D. On the promotion of safe and socially beneficial artificial intelligence. AI Soc. **2017**, 32, 543–551. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=On+the+promotion+of+safe+and+socially+beneficial+artificial+intelligence&author=Baum,+S.D.&publication_year=2017&journal=AI+Soc.&volume=32&pages=543%E2%80%93551&doi=10.1007/s00146-016-0677-0)] [[CrossRef](https://doi.org/10.1007/s00146-016-0677-0)]\n13. Yampolskiy, R.; Fox, J. Safety Engineering for Artificial General Intelligence. Topoi **2013**, 32, 217–226. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Safety+Engineering+for+Artificial+General+Intelligence&author=Yampolskiy,+R.&author=Fox,+J.&publication_year=2013&journal=Topoi&volume=32&pages=217%E2%80%93226&doi=10.1007/s11245-012-9128-9)] [[CrossRef](https://doi.org/10.1007/s11245-012-9128-9)]\n14. Erdelyi, O.J.; Goldsmith, J. Regulating Artificial Intelligence: Proposal for a Global Solution. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18), New Orleans, LO, USA, 2–3 February 2018; Available online: (accessed on 6 January 2019).\n15. Wilson, G. Minimizing global catastrophic and existential risks from emerging technologies through international law. Va. Environ. Law J. **2013**, 31, 307–364. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Minimizing+global+catastrophic+and+existential+risks+from+emerging+technologies+through+international+law&author=Wilson,+G.&publication_year=2013&journal=Va.+Environ.+Law+J.&volume=31&pages=307%E2%80%93364)]\n16. Shulman, C. Arms control and intelligence explosions. In Proceedings of the 7th European Conference on Computing and Philosophy (ECAP), Bellaterra, Spain, 2–4 July 2009. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Arms+control+and+intelligence+explosions&conference=Proceedings+of+the+7th+European+Conference+on+Computing+and+Philosophy+(ECAP)&author=Shulman,+C.&publication_year=2009)]\n17. Goertzel, B. The Corporatization of AI is a Major Threat to Humanity. h+ Magazine. 2017. Available online: (accessed on 6 January 2019).\n18. Bostrom, N. 
Strategic Implications of Openness in AI Development. Glob. Policy **2017**, 8, 135–148. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Strategic+Implications+of+Openness+in+AI+Development&author=Bostrom,+N.&publication_year=2017&journal=Glob.+Policy&volume=8&pages=135%E2%80%93148&doi=10.1111/1758-5899.12403)] [[CrossRef](https://doi.org/10.1111/1758-5899.12403)][[Green Version](http://onlinelibrary.wiley.com/doi/10.1111/1758-5899.12403/pdf)]\n19. Dewey, D. Long-term strategies for ending existential risk from fast takeoff. In Risks of Artificial Intelligence; Müller, V.C., Ed.; CRC: Boca Raton, FL, USA, 2015; pp. 243–266. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Long-term+strategies+for+ending+existential+risk+from+fast+takeoff&author=Dewey,+D.&publication_year=2015&pages=243%E2%80%93266)]\n20. Goertzel, B. Should Humanity Build a Global AI Nanny to Delay the Singularity Until It’s Better Understood? J. Conscious. Stud. **2012**, 19, 96. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Should+Humanity+Build+a+Global+AI+Nanny+to+Delay+the+Singularity+Until+It%E2%80%99s+Better+Understood?&author=Goertzel,+B.&publication_year=2012&journal=J.+Conscious.+Stud.&volume=19&pages=96)]\n21. Brundage, M.; Avin, S.; Clark, J.; Toner, H.; Eckersley, P.; Garfinkel, B.; Dafoe, A.; Scharre, P.; Zeitzoff, T.; Filar, B.; et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Available online: (accessed on 6 January 2018).\n22. Whittlestone, J.; Nyrup, R.; Alexandrova, A.; Cave, S. The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions. In Proceedings of the AAAI/ACM Conference on AI Ethics and Society, Honolulu, HI, USA, 27–28 January 2019. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+Role+and+Limits+of+Principles+in+AI+Ethics:+Towards+a+Focus+on+Tensions&conference=Proceedings+of+the+AAAI/ACM+Conference+on+AI+Ethics+and+Society&author=Whittlestone,+J.&author=Nyrup,+R.&author=Alexandrova,+A.&author=Cave,+S.&publication_year=2019)]\n23. Calo, R. Artificial Intelligence Policy: A Primer and Roadmap. 2017. Available online: (accessed on 6 January 2019). It should also be noted that Calo is dismissive of the risk of artificial general intelligence.\n24. Cave, S.; ÓhÉigeartaigh, S.S. An AI Race for Strategic Advantage: Rhetoric and Risks. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019; Available online: (accessed on 14 March 2019).\n25. Flynn, C. Personal Thoughts on Careers in AI Policy and Strategy. Effective Altruism Forum. 2017. Available online: (accessed on 6 January 2019).\n26. The specifics issues will depend on the type of government. For example, the types of difficulties would be different in a democracy vs. a dictatorship. This paper however will focus on federal republics.\n27. Thank you to Sabrina Kavanagh for suggesting the idea that the policy process could inspire new ideas for AI governance researchers.\n28. Brundage, M.; Bryson, J. Smart Policies for Artificial Intelligence. arXiv **2016**, arXiv:1608.08196. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Smart+Policies+for+Artificial+Intelligence&author=Brundage,+M.&author=Bryson,+J.&publication_year=2016&journal=arXiv)]\n29. Lowi, T.J. Four Systems of Policy, Politics, and Choice. Public Adm. Rev. **1972**, 32, 298–310. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Four+Systems+of+Policy,+Politics,+and+Choice&author=Lowi,+T.J.&publication_year=1972&journal=Public+Adm.+Rev.&volume=32&pages=298%E2%80%93310&doi=10.2307/974990)] [[CrossRef](https://doi.org/10.2307/974990)]\n30. Anderson, J.E. Public Policymaking: An Introduction, 7th ed.; Cengage Learning: Boston, MA, USA, 2010. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Public+Policymaking:+An+Introduction&author=Anderson,+J.E.&publication_year=2010)]\n31. Zahariadis, N. The Multiple Streams Framework: Structure, Limitations, Prospects. In Theories of the Policy Process, 2nd ed.; Sabatier, P., Ed.; Westview Press: Boulder, CO, USA, 2007. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+Multiple+Streams+Framework:+Structure,+Limitations,+Prospects&author=Zahariadis,+N.&publication_year=2007)]\n32. Cobb, R.; Elder, C.D. What is an Issue? What Makes an Issue? In Participation in American Politics: The Dynamics of Agenda Building; Johns Hopkins University Press: Baltimore, MD, USA, 1983; pp. 82–93. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=What+is+an+Issue?+What+Makes+an+Issue?&author=Cobb,+R.&author=Elder,+C.D.&publication_year=1983&pages=82%E2%80%9393)]\n33. Allen, G. China’s Artificial Intelligence Strategy Poses a Credible Threat to U.S. Tech Leadership. Center for Foreign Affairs Blog. Available online: (accessed on 26 February 2019).\n34. Yampolskiy, R. Current State of Knowledge on Failures of AI Enabled Products. Report. Consortium for Safer AI. 2018. Available online: (accessed on 6 January 2018).\n35. Danzig, R. Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority; Center for New American Security: Washington, DC, USA, 2018; Available online: (accessed on 24 March 2019).\n36. May, P.J. Reconsidering Policy Design: Policies and Publics. J. Public Policy **1991**, 11, 187–206. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Reconsidering+Policy+Design:+Policies+and+Publics&author=May,+P.J.&publication_year=1991&journal=J.+Public+Policy&volume=11&pages=187%E2%80%93206&doi=10.1017/S0143814X0000619X)] [[CrossRef](https://doi.org/10.1017/S0143814X0000619X)]\n37. Russell, S.; Aguirre, A.; Conn, A.; Tegmark, M. Why You Should Fear “Slaughterbots”—A Response. IEEE Spectrum. 2018. Available online: (accessed on 9 January 2019).\n38. Yudkowsky, E. Cognitive Biases Potentially Affecting Judgment of Global Risks. In Global Catastrophic Risks; Bostrom, N., Ćirković, M.M., Eds.; Oxford University Press: New York, NY, USA, 2008; pp. 91–119. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Cognitive+Biases+Potentially+Affecting+Judgment+of+Global+Risks&author=Yudkowsky,+E.&publication_year=2008&pages=91%E2%80%93119)]\n39. Baum, S.D. Superintelligence Skepticism as a Political Tool. Information **2018**, 9, 209. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Superintelligence+Skepticism+as+a+Political+Tool&author=Baum,+S.D.&publication_year=2018&journal=Information&volume=9&pages=209&doi=10.3390/info9090209)] [[CrossRef](https://doi.org/10.3390/info9090209)]\n40. James, T.L.; Jones, B.D.; Baumgartner, F.R. Punctuated-Equilibrium Theory: Explaining Stability and Change in Public Policymaking. In Theories of the Policy Process, 2nd ed.; Sabatier, P.A., Ed.; Westview Press: Boulder, CO, USA, 2007; Chapter 6. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Punctuated-Equilibrium+Theory:+Explaining+Stability+and+Change+in+Public+Policymaking&author=James,+T.L.&author=Jones,+B.D.&author=Baumgartner,+F.R.&publication_year=2007)]\n41. Sabatier, P.; Weiblle, C.M. An Advocacy Coalition Framework. In Theories of the Policy Process, 2nd ed.; Sabatier, P.A., Ed.; Westview Press: Boulder, CO, USA, 2007; Chapter 7. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=An+Advocacy+Coalition+Framework&author=Sabatier,+P.&author=Weiblle,+C.M.&publication_year=2007)]\n42. McLaughlin, M.W. Learning From Experience: Lessons From Policy Implementation. Educ. Eval. Policy Anal. **1987**, 9, 171–178. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Learning+From+Experience:+Lessons+From+Policy+Implementation&author=McLaughlin,+M.W.&publication_year=1987&journal=Educ.+Eval.+Policy+Anal.&volume=9&pages=171%E2%80%93178&doi=10.3102/01623737009002171)] [[CrossRef](https://doi.org/10.3102/01623737009002171)][[Green Version](http://journals.sagepub.com/doi/pdf/10.3102/01623737009002171)]\n43. Farquhar, S. Changes in Funding in the AI Safety Field. 2017. Available online: (accessed on 6 January 2019).\n44. MacAskill, W. What Are the Most Important Moral Problems of Our Time? TED Talk. 2018. Available online: (accessed on 6 January 2019).\n45. Müller, V.; Bostrom, N. Future progress in artificial intelligence: A Survey of Expert Opinion. In Fundamental Issues of Artificial Intelligence; Müller, V.C., Ed.; Synthese Library; Springer: Berlin, Germany, 2014; Available online: (accessed on 6 January 2019).\n46. Grace, K.; Salvatier, J.; Dafoe, A.; Zhang, B.; Evans, O. When Will AI Exceed Human Performance? Evidence from AI Experts. arXiv **2017**, arXiv:1705.08807. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=When+Will+AI+Exceed+Human+Performance?+Evidence+from+AI+Experts&author=Grace,+K.&author=Salvatier,+J.&author=Dafoe,+A.&author=Zhang,+B.&author=Evans,+O.&publication_year=2017&journal=arXiv&doi=10.1613/jair.1.11222)] [[CrossRef](https://doi.org/10.1613/jair.1.11222)]\n47. Zhang, B.; Dafoe, A. Artificial Intelligence: American Attitudes and Trends. January 2019. Available online: (accessed on 3 January 2019).\n48. Sabatier, P.; Mazmanian, D. The Conditions of Effective Implementation: A Guide to Accomplishing Policy Objectives. Policy Anal. **1979**, 5, 481–504. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+Conditions+of+Effective+Implementation:+A+Guide+to+Accomplishing+Policy+Objectives&author=Sabatier,+P.&author=Mazmanian,+D.&publication_year=1979&journal=Policy+Anal.&volume=5&pages=481%E2%80%93504&pmid=10244415)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/10244415)]\n49. Sabatier, P.; Mazmanian, D. The Implementation of Public Policy: A Framework of Analysis. Policy Stud. J. **1980**, 8, 538–560. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+Implementation+of+Public+Policy:+A+Framework+of+Analysis&author=Sabatier,+P.&author=Mazmanian,+D.&publication_year=1980&journal=Policy+Stud.+J.&volume=8&pages=538%E2%80%93560&doi=10.1111/j.1541-0072.1980.tb01266.x)] [[CrossRef](https://doi.org/10.1111/j.1541-0072.1980.tb01266.x)]\n\n \n© 2019 by the authors. Licensee MDPI, Basel, Switzerland. 
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ().", "url": "https://www.mdpi.com/2504-2289/3/2/26", "title": "AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2019-05-31T22:00:00Z", "authors": ["Brandon Perry", "Risto Uuk"], "summary": [], "id": "1a72a963448dfad39f5013dae43f41da"} {"text": "In this article\n\n\n\n\n1. [3D parallelism: Scaling to trillion-parameter models](#3d-parallelism-scaling-to-trillion-parameter-models)\n2. [ZeRO-Offload: 10x bigger model training using a single GPU](#zero-offload-10x-bigger-model-training-using-a-single-gpu)\n3. [DeepSpeed Sparse Attention: Powering 10x longer sequences with 6x faster execution](#deepspeed-sparse-attention-powering-10x-longer-sequences-with-6x-faster-execution)\n4. [1-bit Adam: 5x less communication and 3.4x faster training](#1-bit-adam-5x-less-communication-and-3-4x-faster-training)\n\n\n\n\n\n\nIn February, [we announced DeepSpeed](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/), an open-source deep learning training optimization library, and ZeRO (Zero Redundancy Optimizer), a novel memory optimization technology in the library, which vastly advances large model training by improving scale, speed, cost, and usability. DeepSpeed has enabled researchers to create Turing Natural Language Generation ([Turing-NLG](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft)), the largest language model with 17 billion parameters and state-of-the-art accuracy at the time of its release. In May, [we released ZeRO-2](https://www.microsoft.com/en-us/research/blog/zero-2-deepspeed-shattering-barriers-of-deep-learning-speed-scale/)—supporting model training of 200 billion parameters up to 10x faster compared to state of the art—along with a list of compute, I/O, and convergence optimizations powering the fastest BERT training. From there, we have been continuing to innovate at a fast rate, pushing the boundaries of speed and scale for deep learning training.\n\n\nToday, we are happy to share our new advancements that not only push deep learning training to the extreme, but also democratize it for more people—from data scientists training on massive supercomputers to those training on low-end clusters or even on a single GPU. More specifically, DeepSpeed adds four new system technologies that further the [AI at Scale](https://www.microsoft.com/en-us/research/project/ai-at-scale/) initiative to innovate across Microsoft’s AI products and platforms. These offer extreme compute, memory, and communication efficiency, and they power model training with billions to trillions of parameters. The technologies also allow for extremely long input sequences and power on hardware systems with a single GPU, high-end clusters with thousands of GPUs, or low-end clusters with very slow ethernet networks.\n\n\n* **Trillion parameter model training with 3D parallelism:** DeepSpeed enables a flexible combination of three parallelism approaches—ZeRO-powered data parallelism, pipeline parallelism, and tensor-slicing model parallelism. 
3D parallelism adapts to the varying needs of workload requirements to power **extremely large models** with over a **trillion** parameters while achieving near-perfect memory-scaling and throughput-scaling efficiency. In addition, its improved communication efficiency allows users to train multi-billion-parameter models 2–7x faster on regular clusters with limited network bandwidth.\n* **10x bigger model training on a single GPU with ZeRO-Offload:** We extend ZeRO-2 to leverage both CPU and GPU memory for training large models. Using a machine with **a single NVIDIA V100 GPU**, our users can run **models of up to 13 billion parameters** without running out of memory, 10x bigger than the existing approaches, while obtaining competitive throughput. This feature democratizes multi-billion-parameter model training and opens the window for many deep learning practitioners to explore bigger and better models. Read the paper: \n* **Powering 10x longer sequences and 6x faster execution through DeepSpeed Sparse Attention:** DeepSpeed offers sparse attention kernels—an instrumental technology to support long sequences of model inputs, whether for text, image, or sound. Compared with the classic dense Transformers, it powers **an order-of-magnitude longer input sequence** and obtains up to 6x faster execution with comparable accuracy. It also outperforms state-of-the-art sparse implementations with 1.5–3x faster execution. Furthermore, our sparse kernels support efficient execution of flexible sparse format and empower users to innovate on their custom sparse structures.\n* **1-bit Adam with up to 5x communication volume reduction:** Adam is an effective and (probably the most well-utilized) optimizer for training many large-scale deep learning models. However, Adam is generally not compatible with communication-efficient optimization algorithms. Therefore, the communication cost could become a bottleneck while scaling across distributed devices. We introduce a new algorithm, 1-bit Adam with efficient implementation, which **reduces communication volume by up to 5x** while achieving similar convergence efficiency to Adam. We observe up to 3.5x faster distributed training in communication-constrained scenarios, allowing for scaling to different types of GPU clusters and networks. Read the paper: [https://www.microsoft.com/en-us/research/publication/1-bit-adam-communication-efficient-large-scale-training-with-adams-convergence-speed/](https://www.microsoft.com/en-us/research/publication/1-bit-adam-communication-efficient-large-scale-training-with-adams-convergence-speed/ )\n\n\n[![3d Parallelism, ZeRO-Offload, Sparse Attention, 1-bit Adam](https://www.microsoft.com/en-us/research/uploads/prod/2020/09/Blog_DeepSpeed3_MainHero_HighRes.jpg)](https://www.microsoft.com/en-us/research/uploads/prod/2020/09/Blog_DeepSpeed3_MainHero_HighRes-1024x636.jpg)\nThis blog post explores these four lines of technology in greater depth. 
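As a concrete taste of the last item above, the sketch below illustrates the error-compensated 1-bit compression idea that 1-bit Adam builds on: a tensor is reduced to its signs plus a single scale, and whatever the compression loses is fed back into the next step. This is a minimal single-process illustration, not DeepSpeed's implementation; the full algorithm also includes a full-precision warmup phase and freezes Adam's variance term before switching to compressed communication, and the names `one_bit_compress` and `error_feedback` are purely illustrative.

```python
import torch

def one_bit_compress(tensor, error_feedback):
    """Sign-compress a tensor with error feedback (conceptual sketch)."""
    # Add the residual lost in the previous step before compressing.
    corrected = tensor + error_feedback
    # One scale per tensor preserves magnitude; signs cost ~1 bit per element.
    scale = corrected.abs().mean()
    compressed = scale * torch.sign(corrected)
    # Remember what was lost so it can be compensated in the next step.
    error_feedback.copy_(corrected - compressed)
    return compressed

# Toy usage: in distributed training, `sent` is what would be all-reduced.
momentum = torch.randn(1_000)
error = torch.zeros_like(momentum)
for step in range(3):
    sent = one_bit_compress(momentum, error)
```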
We have been making all of these exciting new optimizations available in [open-source library, DeepSpeed](https://github.com/microsoft/DeepSpeed).\n\n\n3D parallelism: Scaling to trillion-parameter models\n----------------------------------------------------\n\n\nWith the rapid growth of compute available on modern GPU clusters, training a powerful trillion-parameter model with incredible capabilities is no longer a far-fetched dream but rather a near-future reality. DeepSpeed has combined three powerful technologies to enable training trillion-scale models and to scale to thousands of GPUs: data parallel training, model parallel training, and pipeline parallel training. This symbiosis scales deep learning training far beyond what each of the strategies can offer in isolation. 3D parallelism simultaneously addresses the two fundamental challenges toward training trillion-parameter models: *memory efficiency* and *compute efficiency*. As a result, DeepSpeed can scale to fit the most massive models in memory without sacrificing speed.\n\n\n* Learn the challenges of obtaining memory and compute efficiency for gigantic models \n\n\n\n**Memory Efficiency:** The memory requirements to train a trillion-parameter model are far beyond what is available in a single GPU device. Training using the Adam optimizer in mixed precision requires approximately 16 terabytes (TB) of memory just to store the model states (parameters, gradients, and optimizer states). For comparison, the state-of-the-art NVIDIA A100 GPUs have just 40 gigabytes (GB) of memory. It would require the collective memory of 400 such GPUs just to store the model states.\n\n\nActivations consume additional memory that increases with the batch size. A trillion-parameter model trained with only unit batch size produces over 1 TB of activation memory. Activation *checkpointing* reduces this memory to approximately 20 GB by trading for additional compute, but the memory requirements remain prohibitively large for training.\n\n\nThe model states and activations must be efficiently partitioned across the available multiple GPU devices to enable such a model to even begin training without running out of memory.\n\n\n**Compute Efficiency:** Training a trillion-parameter model end-to-end requires approximately 5,000 zettaflops (that’s 5 with *24 zeros* after it; based on the [laws of scaling](https://arxiv.org/abs/2001.08361) work from OpenAI). 
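These back-of-the-envelope numbers are easy to reproduce. The short calculation below checks both the memory estimate above and the training-time estimate that follows, assuming 16 bytes of model state per parameter for mixed-precision Adam, 40 GB per A100, and a peak A100 throughput of roughly 312 teraflops in FP16 (the peak figure is an assumption, not stated in the text). It is only a sanity check, not a DeepSpeed utility.

```python
# Back-of-the-envelope check of the memory and compute estimates above.
params = 1.0e12                      # one trillion parameters

# Mixed-precision Adam keeps fp16 params + fp16 grads plus fp32 copies of
# params, momentum, and variance: roughly 16 bytes of state per parameter.
bytes_per_param = 16
model_state_bytes = params * bytes_per_param
print(f"model states: {model_state_bytes / 1e12:.0f} TB")                          # ~16 TB

a100_memory_bytes = 40e9
print(f"A100s just to hold the states: {model_state_bytes / a100_memory_bytes:.0f}")  # ~400

# Compute estimate: ~5,000 zettaflops end to end.
total_flops = 5_000 * 1e21
a100_peak_flops = 312e12             # assumed FP16 tensor-core peak per A100
efficiency = 0.5
gpus = 4_000
seconds = total_flops / (gpus * a100_peak_flops * efficiency)
print(f"training time: {seconds / 86_400:.0f} days")                               # ~93 days, i.e. about 100
```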
It would take 4,000 NVIDIA A100 GPUs running at 50% compute efficiency about 100 days to train such a model.\n\n\nWhile large super-computing GPU clusters can have well over 4,000 GPUs, achieving high compute efficiency at this scale is challenging due to the batch size constraints. Compute efficiency increases as the computation time increases over the communication time. This ratio is proportional to the batch size. However, the batch size that a model can be trained with has an upper bound—beyond that the convergence efficiency deteriorates rapidly.\n\n\nOne of the largest models in the world, [GPT-3](https://arxiv.org/abs/2005.14165), was trained using a batch size of about 1,500. With 4,000 GPUs, even a liberal batch size of 4,000 would only allow for a batch size of 1 per GPU and limit scalability.\n* Understand the tradeoffs of data, model, and pipeline parallelism \n\n\n\n**Data parallelism** is a ubiquitous technique in deep learning in which each input batch of training data is split among the data parallel workers. Gradients must be communicated and aggregated after backward propagation to ensure that consistent steps are taken by the optimizer. Data parallelism has several distinct advantages, including compute efficiency and minimal implementation effort. However, data parallelism relies on scaling the batch size with the number of data parallel workers, which cannot be done indefinitely without affecting convergence.\n\n\n\n\t+ **Memory efficiency:** Data parallelism replicates the model and optimizer across all workers, and therefore is not memory efficient. DeepSpeed developed [ZeRO](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/), a collection of optimizations that improve the memory efficiency of data parallelism. This work relies on ZeRO stage 1, which partitions the optimizer states among data parallel workers to reduce redundancy.\n\t+ **Compute efficiency:** The amount of computation performed by each worker is constant as we increase the degree of parallelism. Data parallelism can achieve near-perfect scaling at small scales. However, the communication cost of aggregating gradients among data parallel workers scales with the model size and limits compute efficiency on large models or systems with low communication bandwidth. *Gradient accumulation* is a common strategy for amortizing this communication cost by further increasing the batch size and performing multiple forward and backward propagations on *micro-batches* while locally accumulating gradients before aggregating and taking an optimizer step.\n**Model parallelism** is a broad class of techniques that partitions the individual layers of the model across workers. By its nature, the computations and communications of model parallelism are specific to a model architecture and therefore can have a large initial implementation effort. DeepSpeed leverages NVIDIA’s [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) in this work for massive model-parallel Transformer-based language models. Model parallelism reduces the memory proportional to the number of workers. Model parallelism is the most memory efficient among the three types of parallelism at the cost of the lowest compute efficiency.\n\n\n\n\t+ **Memory efficiency:** Model parallelism reduces the memory footprint proportional to the number of workers. Crucially, it is the only approach that reduces the activation memory for individual network layers. 
DeepSpeed further improves memory efficiency by partitioning the activation memory among model-parallel workers.\n\t+ **Compute efficiency:** Model parallelism has poor computational efficiency due to additional communication of activations in each forward and backward propagation. Model parallelism requires high communication bandwidth to be efficient and does not scale well beyond a single node where the communication bandwidth is limited. Furthermore, each model-parallel worker decreases the amount of computation performed between each communication stage, impacting compute efficiency. Model parallelism is often used in conjunction with data parallelism to trade between memory and compute efficiency.\n**Pipeline parallelism** training engine is included in this release of DeepSpeed! Pipeline parallelism divides the layers of the model into *stages* that can be processed in parallel. As one stage completes the forward pass for a micro-batch, the activation memory is communicated to the next stage in the pipeline. Similarly, as the next stage completes its backward propagation, gradients are communicated backwards through the pipeline. Multiple micro-batches must be kept in flight to ensure pipeline stages compute in parallel. Several approaches, such as [PipeDream](https://www.microsoft.com/en-us/research/blog/pipedream-a-more-effective-way-to-train-deep-neural-networks-using-pipeline-parallelism/), have been developed to trade off memory and compute efficiency as well as convergence behavior. The DeepSpeed approach extracts parallelism through gradient accumulation to maintain the same convergence behavior as traditional data- and model-parallel training with the same total batch size.\n\n\n\n\t+ **Memory efficiency:** Pipeline parallelism reduces memory proportional to the number of pipeline stages, allowing model size to scale linearly with the number of workers. However, pipeline parallelism does not reduce the memory footprint for the activations of each layer. Additionally, each worker must store the activations for all micro-batches in flight. In effect, the activation memory on the first stage of the pipeline is approximately the same as the total activation memory for a single micro-batch. A trillion-parameter model would need approximately 19 GB of memory for the activations of a micro-batch, consuming almost half the available memory of the new NVIDIA A100 GPU.\n\t+ **Compute efficiency:** Pipeline parallelism has the lowest communication volume since it only communicates data proportional to the activation size of the layers between stage boundaries. However, it cannot scale indefinitely. Like model parallelism, increasing the pipeline size decreases the computation per pipeline stage, which also decreases the compute-to-communication ratio. Pipeline parallelism also requires each of its stages to be perfectly load balanced to achieve good efficiency. \n\t \n\tFurthermore, pipeline parallelism incurs a bubble overhead from filling and emptying the pipeline at the beginning and end of each training batch. Training with gradient accumulation steps (and thus batch size) that is 4x or 8x the number of pipeline stages achieves 81% and 90% scaling efficiency from one pipeline stage, respectively.\n\n\n### Achieving both memory and compute efficiency with 3D parallelism\n\n\nData, model, and pipeline parallelism each perform a specific role in improving memory and compute efficiency. 
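Before turning to the figures, a small, purely illustrative sketch may help make the decomposition concrete. It enumerates one way to arrange 32 workers into a grid of 4 pipeline stages, 4-way model parallelism, and 2 data-parallel replicas, keeping each model-parallel group on a single 4-GPU node; the rank formula and names here are assumptions for illustration, not DeepSpeed's actual rank-assignment code.

```python
from itertools import product

# Illustrative 3D decomposition of 32 workers: 4 pipeline stages x
# 4-way model parallelism x 2 data-parallel replicas, 4 GPUs per node.
PIPE, MODEL, DATA = 4, 4, 2
GPUS_PER_NODE = 4

def rank_of(pipe, data, model):
    # Model-parallel ranks are adjacent so each model-parallel group fits
    # inside one node; pipeline stages span nodes (they communicate least).
    return (pipe * DATA + data) * MODEL + model

for pipe, data in product(range(PIPE), range(DATA)):
    group = [rank_of(pipe, data, m) for m in range(MODEL)]
    nodes = sorted({r // GPUS_PER_NODE for r in group})
    print(f"pipeline stage {pipe}, data replica {data}: "
          f"model-parallel ranks {group} on node(s) {nodes}")
```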
Figure 1 illustrates our 3D strategy.\n\n\n*Memory Efficiency:* The layers of the model are divided into pipeline stages, and the layers of each stage are further divided via model parallelism. This 2D combination simultaneously reduces the memory consumed by the model, optimizer, and activations. However, we cannot partition the model indefinitely without succumbing to communication overheads which limits compute efficiency.\n\n\n*Compute Efficiency:* To allow the number of workers to scale beyond model and pipeline parallelism without sacrificing compute efficiency, we use ZeRO-powered data parallelism (ZeRO-DP). ZeRO-DP not only improves memory efficiency further via optimizer state partition, but also allows scaling to arbitrarily large number of GPUs with minimal communication overhead by exploiting topology aware mapping.\n\n\n*Topology aware 3D mapping* (Figure 2)*:* Each dimension in 3D parallelism is carefully mapped onto the workers to achieve maximum compute efficiency by exploiting two key architectural properties.\n\n\n1. **Optimizing for intra- and inter-node communication bandwidth**: Model parallelism has the largest communication overhead of the three strategies, and so we prioritize placing model parallel groups within a node to utilize the larger intra-node bandwidth. Here we apply NVIDIA Megatron-LM for tensor-slicing style of model parallelism. Data parallel groups are placed within a node when model parallelism does not span all the workers in a node.  Otherwise, they are placed across nodes.  Pipeline parallelism has the lowest communication volume, and so we can schedule pipeline stages across nodes without being limited by the communication bandwidth.\n2. **Bandwidth amplification via parallelism in communication:** The size of the gradients communicated by each data parallel group decreases linearly via both pipeline and model parallelism, and thus the total communication volume is decreased from pure data parallelism.  Furthermore, each data parallel group performs its communication independently and in parallel among a subset of localized workers. As a result, the effective bandwidth for data parallel communication is amplified by a combination of reduced communication volume and increased locality and parallelism.\n\n\n![Diagram showing Example 3D parallelism with 32 workers. Layers of the neural network are divided among four pipeline stages. Layers within each pipeline stage are further partitioned among four model parallel workers. Lastly, each pipeline is replicated across two data parallel instances, and ZeRO partitions the optimizer states across the data parallel replicas.](https://www.microsoft.com/en-us/research/uploads/prod/2020/09/Blog_DeepSpeed3_Figure-1_highres-1024x615.png)Figure 1: Example 3D parallelism with 32 workers. Layers of the neural network are divided among four pipeline stages. Layers within each pipeline stage are further partitioned among four model parallel workers. Lastly, each pipeline is replicated across two data parallel instances, and ZeRO partitions the optimizer states across the data parallel replicas.\n![Colorful blocks showing Mapping of workers in Figure 1 to GPUs on a system with eight nodes, each with four GPUs. Coloring denotes GPUs on the same node.](https://www.microsoft.com/en-us/research/uploads/prod/2020/09/Blog_DeepSpeed3_Figure2_highres.png)Figure 2: Mapping of workers in Figure 1 to GPUs on a system with eight nodes, each with four GPUs. 
Coloring denotes GPUs on the same node.\n* Learn more about how 3D parallelism enlists each type of parallelism to train trillion-parameter models \n\n\n\nA trillion-parameter model could be scaled across 4,096 NVIDIA A100 GPUs using 8-way model parallelism, 64-way pipeline parallelism, and 8-way data parallelism.\n\n\nBy combining model parallelism and pipeline parallelism, 3D parallelism achieves excellent memory efficiency and compute efficiency across multiple nodes. Model parallelism brings memory efficiency for the activations and model states within a node, while pipeline parallelism allows for memory efficiency of model states across nodes without sacrificing compute efficiency compared to using model parallelism alone. In our trillion-parameter example with a micro-batch size of 1, our model would consume 30 GB of memory for model states and 2.5 GB for partitioned activations after activation checkpointing with the aforementioned 3D parallelism. This results in a total memory footprint of 32.5 GB. With such a configuration, NVIDIA A100 GPUs with 40 GB of memory have more than enough space to fit and train such a model.\n\n\nCombining model parallelism with pipeline parallelism also allows pipeline parallelism to achieve high compute efficiency with minimal bubble overhead even at very small batch sizes. With 8-way model parallelism, using a micro-batch of 1 per model would result in an effective micro-batch of 1/8 per GPU. Therefore, pipeline parallelism can achieve a 90% compute efficiency using a gradient accumulation step of 8x the pipeline parallelism degree and with an aggregate per-GPU batch size of only 1. When combined with data parallelism, this results in an effective batch size of 4,096 on 4,096 GPUs, which can still achieve 90% pipeline efficiency.\n\n\n**But what compute efficiency results from data parallelism? Doesn’t data parallelism require large batch per GPU to remain efficient?**\n\n\nModel parallelism can reduce the effective batch size to be less than 1 per GPU. This allows pipeline parallelism to hide the pipeline bubble overhead even with small batch sizes. Note that by using pipeline parallelism across nodes, we are effectively allowing communication between data parallel nodes at each stage of the pipeline to happen independently and in parallel with the other pipeline stages. In fact, in a fully connected network topology common in high-end GPU clusters, this has a significant implication on the effective communication bandwidth available for data parallel training. Since each node at a pipeline stage can communicate in parallel with its corresponding data parallel nodes, the effective communication bandwidth is directly proportional to the number of pipeline stages. With 64 pipeline-parallel stages, the effective bandwidth is 64x the bandwidth to and from a single node. With such large effective bandwidth pipeline parallelism enables data parallelism to scale effectively, even at small batch sizes where the compute-to-communication ratio is very low.\n\n\n### Powering trillion-parameter model training with linear efficiency scaling\n\n\nDeepSpeed can train a language model with one ***trillion*** parameters using as few as 800 NVIDIA V100 GPUs (Figure 3). We demonstrate simultaneous memory and compute efficiency by scaling the size of the model and observing linear growth, both in terms of the size of the model and the throughput of the training. 
In every configuration, we can train approximately 1.4 billion parameters per GPU, which is the largest model size that a single GPU can support without running out of memory, indicating perfect memory scaling. We also obtain close to perfect-linear compute efficiency scaling and a throughput of 47 teraflops per V100 GPU. This is impressive scaling and throughput for the given hardware.\n\n\n![Figure 3: Model size (in billions of parameters) and training throughput (in petaflops) as a function of GPUs. DeepSpeed can train a model with 1 trillion parameters using 800 NVIDIA V100 Tensor Core GPUs with 32 GB of memory. Each configuration uses 16-way model parallelism provided by NVIDIA Megatron-LM, and the remaining GPUs are arranged using pipeline parallelism. The trillion-parameter model has 298 layers of Transformers with a hidden dimension of 17,408 and is trained with sequence length 2,048 and batch size 2,048. For smaller models, we decrease the number of Transformer layers and the batch size proportionally to the number of GPUs.](https://www.microsoft.com/en-us/research/uploads/prod/2020/09/DeepSpeed-Figure-3_Section-1.jpg)Figure 3: Model size (in billions of parameters) and training throughput (in petaflops) as a function of GPUs. DeepSpeed can train a model with 1 trillion parameters using 800 NVIDIA V100 Tensor Core GPUs with 32 GB of memory. Each configuration uses 16-way model parallelism provided by [NVIDIA Megatron-LM](https://github.com/NVIDIA/Megatron-LM), and the remaining GPUs are arranged using pipeline parallelism. The trillion-parameter model has 298 layers of Transformers with a hidden dimension of 17,408 and is trained with sequence length 2,048 and batch size 2,048. For smaller models, we decrease the number of Transformer layers and the batch size proportionally to the number of GPUs.\n* Dive deeper into how 3D parallelism accelerates training at the scale of GPT-3 \n\n\n\n[![Figure 4: System performance using 800 GPUs to train a GPT-3 scale model with 180 billion parameters using 2D and 3D parallelism. The model has 100 Transformer layers with hidden dimension 12,288 and 96 attention heads. The model is trained with batch size 2,048 and sequence length 2,048. ZeRO-1 is enabled alongside data parallelism. P, M, and D denote the pipeline, model, and data parallel dimensions, respectively.](https://www.microsoft.com/en-us/research/uploads/prod/2020/09/DeepSpeed-3_Figure-2-_section-2.jpg)](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)Figure 4: System performance using 800 GPUs to train a GPT-3 scale model with 180 billion parameters using 2D and 3D parallelism. The model has 100 Transformer layers with hidden dimension 12,288 and 96 attention heads. The model is trained with batch size 2,048 and sequence length 2,048. ZeRO-1 is enabled alongside data parallelism. P, M, and D denote the pipeline, model, and data parallel dimensions, respectively.\nIn Figure 4, we use the recent [GPT-3](https://arxiv.org/abs/2005.14165) model architecture, with over 175 billion parameters, as a benchmark for 3D parallelism: ­\n\n\n\n\t+ We first evaluate the **2D configurations** (C1-C3). Configurations C1 and C2 use only pipeline and model parallelism—they can train the model but achieve low throughput due to over-decomposing the problem and having low GPU utilization. 
C3 attempts to use only pipeline and data parallelism but is unable to fit the problem in memory without reducing the size of activations via Megatron’s model parallelism.\n\t+ The **3D configurations** (C4-C10) are arranged by increasing degree of pipeline parallelism; the best performance is achieved by the middle configurations that balance the parallelism in order to be memory-, computation-, and communication-efficient.\n\t+ The best 3D approaches achieve 49 teraflops per GPU, over 40% of the theoretical hardware peak.\n* See how hybrid parallelism accelerates training GPT-2 on low-bandwidth clusters up to 7x \n\n\n\nWe demonstrate the communication benefits of hybrid parallelism in Figure 5 while training a 1.5-billion-parameter GPT-2 model. We train on four nodes of a cluster with low inter-node bandwidth in order to emphasize the communication stages of training:\n\n\n\n\t+ **Model parallelism** is not advantageous in this case due to the low intra-node bandwidth and smaller model size.\n\t+ **Pipeline parallelism** communicates over an order of magnitude less volume than the data and model parallel configurations and is 7x faster at small batch sizes.\n\t+ **Data parallelism** uses gradient accumulation to amortize communication overhead as the batch size increases, but pipeline parallel configurations still achieve over twice the performance of data parallelism at larger batch sizes.\n\t+ The **hybrid pipeline and data parallel configuration** avoids the gradient communication bottleneck by restricting data parallel groups to GPUs within a node, so gradient communications benefit from the faster intra-node bandwidth.\n![Figure 5: Throughput as a function of batch size while training GPT-2 (1.5B parameters) with sequence length 1,024. Training uses four nodes, each with four NVIDIA V100 GPUs with 16 GB of memory. The GPUs are connected with 50 Gigabits-per-second (Gbps) intra-node bandwidth and 4 Gbps inter-node bandwidth. DP denotes data parallelism with ZeRO-1 enabled. All methods scale batch size via increasing steps of gradient accumulation.](https://www.microsoft.com/en-us/research/uploads/prod/2020/09/DeepSpeed-3_Figure-4_Section-1.jpg)Figure 5: Throughput as a function of batch size while training GPT-2 (1.5B parameters) with sequence length 1,024. Training uses four nodes, each with four NVIDIA V100 GPUs with 16 GB of memory. The GPUs are connected with 50 Gigabits-per-second (Gbps) intra-node bandwidth and 4 Gbps inter-node bandwidth. DP denotes data parallelism with ZeRO-1 enabled. All methods scale batch size via increasing steps of gradient accumulation.\n\n\nZeRO-Offload: 10x bigger model training using a single GPU\n----------------------------------------------------------\n\n\nZeRO-Offload pushes the boundary of the maximum model size that can be trained efficiently using minimal GPU resources, by exploiting computational and memory resources on both GPUs and their host CPUs. 
It allows training up to 13-billion-parameter models on a single NVIDIA V100 GPU, 10x larger than the state-of-the-art while retaining high training throughput of over 30 teraflops per GPU.\n\n\n\n* Publication \n[ZeRO-Offload: Democratizing Billion-Scale Model Training](https://www.microsoft.com/en-us/research/publication/zero-offload-democratizing-billion-scale-model-training/)\n\n\n\nBy enabling multi-billion-parameter model training on a single GPU, ZeRO-Offload democratizes large model training, making it accessible to deep learning practitioners with limited resources.\n\n\n![Bar graph showing largest models can be trained using default PyTorch and ZeRO-Offload on a single GPU.](https://www.microsoft.com/en-us/research/uploads/prod/2020/09/Blog_deepspeed3_figure6_highres-1024x552.jpg)Figure 6: The largest models can be trained using default PyTorch and ZeRO-Offload on a single GPU.\nThe key technology behind ZeRO-Offload is our new capability to offload optimizer states and gradients onto CPU memory, building on top of [ZeRO-2](https://www.microsoft.com/en-us/research/blog/zero-2-deepspeed-shattering-barriers-of-deep-learning-speed-scale/). This approach allows ZeRO-Offload to minimize the compute efficiency loss from CPU offloading while also achieving the same, and sometimes even better, efficiency of the original ZeRO-2. The figure below shows the architecture of ZeRO-Offload.\n\n\n![Figure 7: ZeRO-Offload overview. ](https://www.microsoft.com/en-us/research/uploads/prod/2020/09/DeepSpeed-3_Figure-2_section-1-1024x546.png)Figure 7: ZeRO-Offload overview. \n* Learn how ZeRO-Offload enables multi-billion parameter training on a single GPU \n\n\n\nTraining multi-billion-parameter models like GPT and T5 require many GPUs to fit the model and its states in GPU memory. Large model training has been mostly carried out with model parallelism across multiple GPU devices to solve the memory limitation problem. Recently, we released ZeRO, a memory efficient optimizer that partitions model states (optimizer states, gradients, and model weights) across data parallel GPUs, allowing multi-billion-parameter models to be trained without requiring model parallelism. However, ZeRO still requires a large number of data parallel GPUs to hold the partitioned model states, limiting the access of large model training to a few with access to such resources.\n\n\nZeRO-Offload democratizes large model training by making it possible even on a single GPU. To allow training multi-billion-parameter models without using multiple GPUs, ZeRO-Offload inherits the optimizer state and gradient partitioning from ZeRO-2. Unlike ZeRO-2, instead of having each GPU keep a partition of the optimizer state and gradients, ZeRO-Offload offloads both to host CPU memory. Optimizer states are kept in CPU memory for the entire training. Gradients, on the other hand, are computed and averaged using reduce-scatter on the GPUs during the backward pass, and each data-parallel process then offloads the averaged gradients belonging to its partition to the CPU memory (*g offload* in Figure 7) while discarding the rest.\n\n\nOnce the gradients are available on the CPU, optimizer state partitions are updated in parallel by each data parallel process directly on the CPU (*p update* in Figure 7). After the update, parameter partitions are moved back to GPU followed by an all-gather operation on the GPU to gather all the updated parameters (*g swap* in Figure 7). 
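The following is a minimal single-process sketch of this offload pattern in plain PyTorch, under the assumption that one GPU (or, as a fallback, the CPU) is available: fp16 parameters stay on the device, while the fp32 master copy and the Adam state live in CPU memory. It is not DeepSpeed's implementation; it omits the ZeRO-2 partitioning, the reduce-scatter/all-gather collectives, and the stream-based overlap described next.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32  # fp16 only on GPU

model = torch.nn.Linear(1024, 1024).to(device=device, dtype=dtype)
master = [p.detach().float().cpu() for p in model.parameters()]  # fp32 copy on CPU
opt = torch.optim.Adam(master, lr=1e-3)  # optimizer states stay in CPU memory

for step in range(2):
    x = torch.randn(32, 1024, device=device, dtype=dtype)
    loss = model(x).float().pow(2).mean()
    loss.backward()

    # "g offload": move gradients to CPU and attach them to the master copy.
    for p, m in zip(model.parameters(), master):
        m.grad = p.grad.detach().float().cpu()
        p.grad = None

    # "p update": Adam runs entirely on the CPU copy of the parameters.
    opt.step()
    opt.zero_grad()

    # "g swap": copy the updated fp32 weights back into the device parameters.
    with torch.no_grad():
        for p, m in zip(model.parameters(), master):
            p.copy_(m.to(device=device, dtype=dtype))
```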
ZeRO-Offload also exploits overlapping between communication (such as *g offload* and *g swap*) and computation (such as the backward pass and *p update*) using separate CUDA streams to maximize training efficiency.\n* See the benefits of ZeRO-Offload on model scale, training speed, and scalability \n\n\n\n**10x model scale:** On a single 32 GB V100 GPU, Figure 6 shows that the biggest model that can be trained by PyTorch has 1.3 billion parameters, while ZeRO-Offload allows for training models of 13 billion parameters, which is 10 times bigger. This is because ZeRO-Offload keeps the optimizer states (which consume a large portion of GPU memory) in host memory during the entire training process while also offloading gradients to CPU as they are computed in the backward pass. As a result, the saved GPU memory can be used in hosting bigger models for training.\n\n\n**Efficient training throughput**: Figure 8 shows that when training a 10-billion-parameter model, ZeRO-Offload provides over 30 teraflops throughput per GPU even when training with only a single GPU, and its throughput increases close to perfect linearly with the increasing number of GPUs. \n\n\nZeRO-Offload complements ZeRO-2 well, supporting efficient training of large models on a small number of GPUs. From 1 to 16 GPUs, ZeRO-Offload turns the model training from infeasible to feasible by leveraging CPU memory, reducing GPU memory required for the model. On 32 GPUs, ZeRO-Offload slightly outperforms ZeRO-2; the improvement comes from additional memory savings on GPU from ZeRO-Offload, which allows training with larger batch sizes and increases the GPU computation efficiency despite the overhead of CPU offloading. With more GPUs (such as 64 and 128), ZeRO-2 outperforms ZeRO-Offload since both can now run similar batch sizes. On one hand, though, ZeRO-2 does not have the overhead of moving data to CPU, while on the other hand, the optimizer step calculation on GPU is much faster than on CPU. In summary, ZeRO-Offload complements ZeRO-2 and extends ZeRO family of optimizations to cover the full spectrum of large model training from a single device to thousands of devices. \n\n\n![Bar graph showing The training throughput is compared for ZeRO-Offload and ZeRO-2 using 128 GPUs to train a 10-billion parameter GPT-2 model. ](https://www.microsoft.com/en-us/research/uploads/prod/2020/09/Blog_DeepSpeed3_Figure8_HighRes-1024x403.jpg)Figure 8: The training throughput is compared for ZeRO-Offload and ZeRO-2 using 128 GPUs to train a 10-billion parameter GPT-2 model.\n\n\nDeepSpeed Sparse Attention: Powering 10x longer sequences with 6x faster execution\n----------------------------------------------------------------------------------\n\n\nAttention-based deep learning models, such as Transformers, are highly effective in capturing relationships between tokens in an input sequence, even across long distances. As a result, they are used with text, image, and sound-based inputs, where the sequence length can be in thousands of tokens. 
However, despite the effectiveness of attention modules at capturing long-term dependencies, in practice their application to long sequence inputs is limited by the compute and memory requirements of the attention computation, which grow quadratically, \\(O(n^2)\\), with the sequence length \\(n\\).\n\n\nTo address this limitation, **DeepSpeed offers a suite of sparse attention kernels**—an instrumental technology that can reduce the compute and memory requirements of attention computation by orders of magnitude via block-sparse computation. The suite not only alleviates the memory bottleneck of attention calculation, but also performs the sparse computation efficiently. Its APIs allow convenient integration with any transformer-based model. Along with providing a wide spectrum of sparsity structures, it has the flexibility of handling any user-defined block-sparse structure. \n\n\nMore specifically, sparse attention (SA) can be designed to compute local attention between nearby tokens, or global attention via summary tokens computed with local attention. Moreover, SA can also allow random attention or any combination of local, global, and random attention, as shown in Figure 10 with blue, orange, and green blocks, respectively. As a result, SA decreases the memory footprint to \\(O(wn)\\), in which 1 < *w* ≤ *n* is a parameter whose value depends on the attention structure.
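As a rough illustration of the memory saving, the sketch below builds a boolean block-level mask that combines local, global, and random attention and reports how small a fraction of the dense n×n score matrix is actually kept. The block size, window width, layout choices, and names are illustrative assumptions; this is not DeepSpeed's sparse attention API, which realizes such layouts as efficient block-sparse GPU kernels.

```python
# Illustrative block-sparse attention layout: local + global + random blocks.
import numpy as np

def block_sparse_layout(n_tokens, block=16, local_blocks=4, global_blocks=(0,), n_random=2, seed=0):
    """Return a (nb x nb) boolean matrix: True where a block of attention scores is kept."""
    rng = np.random.default_rng(seed)
    nb = n_tokens // block                          # number of blocks along each sequence dimension
    layout = np.zeros((nb, nb), dtype=bool)
    for i in range(nb):
        lo = max(0, i - local_blocks + 1)           # local (sliding-window) attention
        layout[i, lo:i + 1] = True
        layout[i, rng.choice(nb, size=n_random, replace=False)] = True  # random attention
    layout[:, list(global_blocks)] = True           # every block attends to the summary blocks
    layout[list(global_blocks), :] = True           # summary blocks attend to everything
    return layout

n, block = 4096, 16
layout = block_sparse_layout(n_tokens=n, block=block)
kept = layout.sum() * block * block                 # attention entries actually computed/stored
print(f"kept {kept / (n * n):.1%} of the dense attention score matrix")
```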
", "url": "https://www.nature.com/articles/d41586-021-01170-0", "title": "Cooperative AI: machines must learn to find common ground", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2021-04-30T22:00:00Z", "authors": ["Allan Dafoe", "Yoram Bachrach", "Gillian Hadfield", "Eric Horvitz", "Kate Larson", "Thore Graepel"], "summary": [], "id": "0e26804cf8dd6bb0d935a4f3335862da"} {"text": "[Download PDF](/articles/s41586-021-03819-2.pdf)\n\n\n\n\n\n\n### Subjects\n\n\n* [Computational biophysics](/subjects/computational-biophysics)\n* [Machine learning](/subjects/machine-learning)\n* [Protein structure predictions](/subjects/protein-structure-predictions)\n* [Structural biology](/subjects/structural-biology)\n\n\n\n\n\nAbstract\n--------\n\nProteins are essential to life, and understanding their structure can facilitate a mechanistic understanding of their function. Through an enormous experimental effort[1](#ref-CR1 \"Thompson, M. C., Yeates, T. O. & Rodriguez, J. A. Advances in methods for atomic resolution macromolecular structure determination. F1000Res. 9, 667 (2020).\"),[2](#ref-CR2 \"Bai, X.-C., McMullan, G. & Scheres, S. H. W. How cryo-EM is revolutionizing structural biology. Trends Biochem. Sci. 40, 49–57 (2015).\"),[3](#ref-CR3 \"Jaskolski, M., Dauter, Z. & Wlodawer, A. A brief history of macromolecular crystallography, illustrated by a family tree and its Nobel fruits. FEBS J. 281, 3985–4009 (2014).\"),[4](/articles/s41586-021-03819-2#ref-CR4 \"Wüthrich, K. The way to NMR structures of proteins. Nat. Struct. Biol. 8, 923–925 (2001).\"), the structures of around 100,000 unique proteins have been determined[5](/articles/s41586-021-03819-2#ref-CR5 \"wwPDB Consortium. Protein Data Bank: the single global archive for 3D macromolecular structure data. Nucleic Acids Res. 47, D520–D528 (2018).\"), but this represents a small fraction of the billions of known protein sequences[6](/articles/s41586-021-03819-2#ref-CR6 \"Mitchell, A. L. et al. MGnify: the microbiome analysis resource in 2020. Nucleic Acids Res. 48, D570–D578 (2020).\"),[7](/articles/s41586-021-03819-2#ref-CR7 \"Steinegger, M., Mirdita, M. & Söding, J. Protein-level assembly increases protein sequence recovery from metagenomic samples manyfold. Nat. Methods 16, 603–606 (2019).\"). Structural coverage is bottlenecked by the months to years of painstaking effort required to determine a single protein structure. Accurate computational approaches are needed to address this gap and to enable large-scale structural bioinformatics. 
Predicting the three-dimensional structure that a protein will adopt based solely on its amino acid sequence—the structure prediction component of the ‘protein folding problem’[8](/articles/s41586-021-03819-2#ref-CR8 \"Dill, K. A., Ozkan, S. B., Shell, M. S. & Weikl, T. R. The protein folding problem. Annu. Rev. Biophys. 37, 289–316 (2008).\")—has been an important open research problem for more than 50 years[9](/articles/s41586-021-03819-2#ref-CR9 \"Anfinsen, C. B. Principles that govern the folding of protein chains. Science 181, 223–230 (1973).\"). Despite recent progress[10](#ref-CR10 \"Senior, A. W. et al. Improved protein structure prediction using potentials from deep learning. Nature 577, 706–710 (2020).\"),[11](#ref-CR11 \"Wang, S., Sun, S., Li, Z., Zhang, R. & Xu, J. Accurate de novo prediction of protein contact map by ultra-deep learning model. PLOS Comput. Biol. 13, e1005324 (2017).\"),[12](#ref-CR12 \"Zheng, W. et al. Deep-learning contact-map guided protein structure prediction in CASP13. Proteins 87, 1149–1164 (2019).\"),[13](#ref-CR13 \"Abriata, L. A., Tamò, G. E. & Dal Peraro, M. A further leap of improvement in tertiary structure prediction in CASP13 prompts new routes for future assessments. Proteins 87, 1100–1112 (2019).\"),[14](/articles/s41586-021-03819-2#ref-CR14 \"Pearce, R. & Zhang, Y. Deep learning techniques have significantly impacted protein structure prediction and protein design. Curr. Opin. Struct. Biol. 68, 194–207 (2021).\"), existing methods fall far short of atomic accuracy, especially when no homologous structure is available. Here we provide the first computational method that can regularly predict protein structures with atomic accuracy even in cases in which no similar structure is known. We validated an entirely redesigned version of our neural network-based model, AlphaFold, in the challenging 14th Critical Assessment of protein Structure Prediction (CASP14)[15](/articles/s41586-021-03819-2#ref-CR15 \"Moult, J., Fidelis, K., Kryshtafovych, A., Schwede, T. & Topf, M. Critical assessment of techniques for protein structure prediction, fourteenth round. CASP 14 Abstract Book \n https://www.predictioncenter.org/casp14/doc/CASP14_Abstracts.pdf\n \n (2020).\"), demonstrating accuracy competitive with experimental structures in a majority of cases and greatly outperforming other methods. Underpinning the latest version of AlphaFold is a novel machine learning approach that incorporates physical and biological knowledge about protein structure, leveraging multi-sequence alignments, into the design of the deep learning algorithm.\n\n\n\n\n\nMain\n----\n\nThe development of computational methods to predict three-dimensional (3D) protein structures from the protein sequence has proceeded along two complementary paths that focus on either the physical interactions or the evolutionary history. The physical interaction programme heavily integrates our understanding of molecular driving forces into either thermodynamic or kinetic simulation of protein physics[16](/articles/s41586-021-03819-2#ref-CR16 \"Brini, E., Simmerling, C. & Dill, K. Protein storytelling through physics. Science 370, eaaz3041 (2020).\") or statistical approximations thereof[17](/articles/s41586-021-03819-2#ref-CR17 \"Sippl, M. J. Calculation of conformational ensembles from potentials of mean force. An approach to the knowledge-based prediction of local structures in globular proteins. J. Mol. Biol. 213, 859–883 (1990).\"). 
Although theoretically very appealing, this approach has proved highly challenging for even moderate-sized proteins due to the computational intractability of molecular simulation, the context dependence of protein stability and the difficulty of producing sufficiently accurate models of protein physics. The evolutionary programme has provided an alternative in recent years, in which the constraints on protein structure are derived from bioinformatics analysis of the evolutionary history of proteins, homology to solved structures[18](/articles/s41586-021-03819-2#ref-CR18 \"Šali, A. & Blundell, T. L. Comparative protein modelling by satisfaction of spatial restraints. J. Mol. Biol. 234, 779–815 (1993).\"),[19](/articles/s41586-021-03819-2#ref-CR19 \"Roy, A., Kucukural, A. & Zhang, Y. I-TASSER: a unified platform for automated protein structure and function prediction. Nat. Protocols 5, 725–738 (2010).\") and pairwise evolutionary correlations[20](#ref-CR20 \"Altschuh, D., Lesk, A. M., Bloomer, A. C. & Klug, A. Correlation of co-ordinated amino acid substitutions with function in viruses related to tobacco mosaic virus. J. Mol. Biol. 193, 693–707 (1987).\"),[21](#ref-CR21 \"Shindyalov, I. N., Kolchanov, N. A. & Sander, C. Can three-dimensional contacts in protein structures be predicted by analysis of correlated mutations? Protein Eng. 7, 349–358 (1994).\"),[22](#ref-CR22 \"Weigt, M., White, R. A., Szurmant, H., Hoch, J. A. & Hwa, T. Identification of direct residue contacts in protein–protein interaction by message passing. Proc. Natl Acad. Sci. USA 106, 67–72 (2009).\"),[23](#ref-CR23 \"Marks, D. S. et al. Protein 3D structure computed from evolutionary sequence variation. PLoS ONE 6, e28766 (2011).\"),[24](/articles/s41586-021-03819-2#ref-CR24 \"Jones, D. T., Buchan, D. W. A., Cozzetto, D. & Pontil, M. PSICOV: precise structural contact prediction using sparse inverse covariance estimation on large multiple sequence alignments. Bioinformatics 28, 184–190 (2012).\"). This bioinformatics approach has benefited greatly from the steady growth of experimental protein structures deposited in the Protein Data Bank (PDB)[5](/articles/s41586-021-03819-2#ref-CR5 \"wwPDB Consortium. Protein Data Bank: the single global archive for 3D macromolecular structure data. Nucleic Acids Res. 47, D520–D528 (2018).\"), the explosion of genomic sequencing and the rapid development of deep learning techniques to interpret these correlations. Despite these advances, contemporary physical and evolutionary-history-based approaches produce predictions that are far short of experimental accuracy in the majority of cases in which a close homologue has not been solved experimentally and this has limited their utility for many biological applications.\n\nIn this study, we develop the first, to our knowledge, computational approach capable of predicting protein structures to near experimental accuracy in a majority of cases. The neural network AlphaFold that we developed was entered into the CASP14 assessment (May–July 2020; entered under the team name ‘AlphaFold2’ and a completely different model from our CASP13 AlphaFold system[10](/articles/s41586-021-03819-2#ref-CR10 \"Senior, A. W. et al. Improved protein structure prediction using potentials from deep learning. Nature 577, 706–710 (2020).\")). 
The CASP assessment is carried out biennially using recently solved structures that have not been deposited in the PDB or publicly disclosed so that it is a blind test for the participating methods, and has long served as the gold-standard assessment for the accuracy of structure prediction[25](/articles/s41586-021-03819-2#ref-CR25 \"Moult, J., Pedersen, J. T., Judson, R. & Fidelis, K. A large-scale experiment to assess protein structure prediction methods. Proteins 23, ii–iv (1995).\"),[26](/articles/s41586-021-03819-2#ref-CR26 \"Kryshtafovych, A., Schwede, T., Topf, M., Fidelis, K. & Moult, J. Critical assessment of methods of protein structure prediction (CASP)-round XIII. Proteins 87, 1011–1020 (2019).\").\n\nIn CASP14, AlphaFold structures were vastly more accurate than competing methods. AlphaFold structures had a median backbone accuracy of 0.96 Å r.m.s.d.95 (Cα root-mean-square deviation at 95% residue coverage) (95% confidence interval = 0.85–1.16 Å) whereas the next best performing method had a median backbone accuracy of 2.8 Å r.m.s.d.95 (95% confidence interval = 2.7–4.0 Å) (measured on CASP domains; see Fig. [1a](/articles/s41586-021-03819-2#Fig1) for backbone accuracy and Supplementary Fig. [14](/articles/s41586-021-03819-2#MOESM1) for all-atom accuracy). As a comparison point for this accuracy, the width of a carbon atom is approximately 1.4 Å. In addition to very accurate domain structures (Fig. [1b](/articles/s41586-021-03819-2#Fig1)), AlphaFold is able to produce highly accurate side chains (Fig. [1c](/articles/s41586-021-03819-2#Fig1)) when the backbone is highly accurate and considerably improves over template-based methods even when strong templates are available. The all-atom accuracy of AlphaFold was 1.5 Å r.m.s.d.95 (95% confidence interval = 1.2–1.6 Å) compared with the 3.5 Å r.m.s.d.95 (95% confidence interval = 3.1–4.2 Å) of the best alternative method. Our methods are scalable to very long proteins with accurate domains and domain-packing (see Fig. [1d](/articles/s41586-021-03819-2#Fig1) for the prediction of a 2,180-residue protein with no structural homologues). Finally, the model is able to provide precise, per-residue estimates of its reliability that should enable the confident use of these predictions.\n\n**Fig. 1: AlphaFold produces highly accurate structures.**[![figure 1](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_Fig1_HTML.png)](/articles/s41586-021-03819-2/figures/1)**a**, The performance of AlphaFold on the CASP14 dataset (*n* = 87 protein domains) relative to the top-15 entries (out of 146 entries), group numbers correspond to the numbers assigned to entrants by CASP. Data are median and the 95% confidence interval of the median, estimated from 10,000 bootstrap samples. **b**, Our prediction of CASP14 target T1049 (PDB 6Y4F, blue) compared with the true (experimental) structure (green). Four residues in the C terminus of the crystal structure are *B*-factor outliers and are not depicted. **c**, CASP14 target T1056 (PDB 6YJ1). An example of a well-predicted zinc-binding site (AlphaFold has accurate side chains even though it does not explicitly predict the zinc ion). **d**, CASP target T1044 (PDB 6VR4)—a 2,180-residue single chain—was predicted with correct domain packing (the prediction was made after CASP using AlphaFold without intervention). **e**, Model architecture. Arrows show the information flow among the various components described in this paper. 
Array shapes are shown in parentheses with *s*, number of sequences (*N*seq in the main text); *r*, number of residues (*N*res in the main text); *c*, number of channels.\n\n[Full size image](/articles/s41586-021-03819-2/figures/1)We demonstrate in Fig. [2a](/articles/s41586-021-03819-2#Fig2) that the high accuracy that AlphaFold demonstrated in CASP14 extends to a large sample of recently released PDB structures; in this dataset, all structures were deposited in the PDB after our training data cut-off and are analysed as full chains (see [Methods](/articles/s41586-021-03819-2#Sec10), Supplementary Fig. [15](/articles/s41586-021-03819-2#MOESM1) and Supplementary Table [6](/articles/s41586-021-03819-2#MOESM1) for more details). Furthermore, we observe high side-chain accuracy when the backbone prediction is accurate (Fig. [2b](/articles/s41586-021-03819-2#Fig2)) and we show that our confidence measure, the predicted local-distance difference test (pLDDT), reliably predicts the Cα local-distance difference test (lDDT-Cα) accuracy of the corresponding prediction (Fig. [2c](/articles/s41586-021-03819-2#Fig2)). We also find that the global superposition metric template modelling score (TM-score)[27](/articles/s41586-021-03819-2#ref-CR27 \"Zhang, Y. & Skolnick, J. Scoring function for automated assessment of protein structure template quality. Proteins 57, 702–710 (2004).\") can be accurately estimated (Fig. [2d](/articles/s41586-021-03819-2#Fig2)). Overall, these analyses validate that the high accuracy and reliability of AlphaFold on CASP14 proteins also transfers to an uncurated collection of recent PDB submissions, as would be expected (see [Supplementary Methods 1.15](/articles/s41586-021-03819-2#MOESM1) and Supplementary Fig. [11](/articles/s41586-021-03819-2#MOESM1) for confirmation that this high accuracy extends to new folds).\n\n**Fig. 2: Accuracy of AlphaFold on recent PDB structures.**[![figure 2](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_Fig2_HTML.png)](/articles/s41586-021-03819-2/figures/2)The analysed structures are newer than any structure in the training set. Further filtering is applied to reduce redundancy (see [Methods](/articles/s41586-021-03819-2#Sec10)). **a**, Histogram of backbone r.m.s.d. for full chains (Cα r.m.s.d. at 95% coverage). Error bars are 95% confidence intervals (Poisson). This dataset excludes proteins with a template (identified by hmmsearch) from the training set with more than 40% sequence identity covering more than 1% of the chain (*n* = 3,144 protein chains). The overall median is 1.46 Å (95% confidence interval = 1.40–1.56 Å). Note that this measure will be highly sensitive to domain packing and domain accuracy; a high r.m.s.d. is expected for some chains with uncertain packing or packing errors. **b**, Correlation between backbone accuracy and side-chain accuracy. Filtered to structures with any observed side chains and resolution better than 2.5 Å (*n* = 5,317 protein chains); side chains were further filtered to *B*-factor <30 Å2. A rotamer is classified as correct if the predicted torsion angle is within 40°. Each point aggregates a range of lDDT-Cα, with a bin size of 2 units above 70 lDDT-Cα and 5 units otherwise. Points correspond to the mean accuracy; error bars are 95% confidence intervals (Student *t*-test) of the mean on a per-residue basis. **c**, Confidence score compared to the true accuracy on chains. 
Least-squares linear fit lDDT-Cα = 0.997 × pLDDT − 1.17 (Pearson’s *r* = 0.76). *n* = 10,795 protein chains. The shaded region of the linear fit represents a 95% confidence interval estimated from 10,000 bootstrap samples. In the companion paper[39](/articles/s41586-021-03819-2#ref-CR39 \"Tunyasuvunakool, K. et al. Highly accurate protein structure prediction for the human proteome. Nature \n https://doi.org/10.1038/s41586-021-03828-1\n \n (2021).\"), additional quantification of the reliability of pLDDT as a confidence measure is provided. **d**, Correlation between pTM and full chain TM-score. Least-squares linear fit TM-score = 0.98 × pTM + 0.07 (Pearson’s *r* = 0.85). *n* = 10,795 protein chains. The shaded region of the linear fit represents a 95% confidence interval estimated from 10,000 bootstrap samples.\n\n[Full size image](/articles/s41586-021-03819-2/figures/2)The AlphaFold network\n---------------------\n\nAlphaFold greatly improves the accuracy of structure prediction by incorporating novel neural network architectures and training procedures based on the evolutionary, physical and geometric constraints of protein structures. In particular, we demonstrate a new architecture to jointly embed multiple sequence alignments (MSAs) and pairwise features, a new output representation and associated loss that enable accurate end-to-end structure prediction, a new equivariant attention architecture, use of intermediate losses to achieve iterative refinement of predictions, masked MSA loss to jointly train with the structure, learning from unlabelled protein sequences using self-distillation and self-estimates of accuracy.\n\nThe AlphaFold network directly predicts the 3D coordinates of all heavy atoms for a given protein using the primary amino acid sequence and aligned sequences of homologues as inputs (Fig. [1e](/articles/s41586-021-03819-2#Fig1); see [Methods](/articles/s41586-021-03819-2#Sec10) for details of inputs including databases, MSA construction and use of templates). A description of the most important ideas and components is provided below. The full network architecture and training procedure are provided in the [Supplementary Methods](/articles/s41586-021-03819-2#MOESM1).\n\nThe network comprises two main stages. First, the trunk of the network processes the inputs through repeated layers of a novel neural network block that we term Evoformer to produce an *N*seq × *N*res array (*N*seq, number of sequences; *N*res, number of residues) that represents a processed MSA and an *N*res × *N*res array that represents residue pairs. The MSA representation is initialized with the raw MSA (although see [Supplementary Methods 1.2.7](/articles/s41586-021-03819-2#MOESM1) for details of handling very deep MSAs). The Evoformer blocks contain a number of attention-based and non-attention-based components. We show evidence in ‘Interpreting the neural network’ that a concrete structural hypothesis arises early within the Evoformer blocks and is continuously refined. The key innovations in the Evoformer block are new mechanisms to exchange information within the MSA and pair representations that enable direct reasoning about the spatial and evolutionary relationships.\n\nThe trunk of the network is followed by the structure module that introduces an explicit 3D structure in the form of a rotation and translation for each residue of the protein (global rigid body frames). 
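As a data-structure illustration only, the per-residue rigid-body representation can be pictured as an array of rotation matrices and translation vectors; the numpy sketch below (with made-up function names, not the AlphaFold implementation) shows such a representation and the trivial initialization described in the next sentence.

```python
# Per-residue backbone frames ("residue gas"): one rotation and one translation per residue.
# Illustrative numpy sketch, not the AlphaFold code.
import numpy as np

def init_backbone_frames(n_res):
    rotations = np.tile(np.eye(3), (n_res, 1, 1))   # (N_res, 3, 3): identity rotations
    translations = np.zeros((n_res, 3))             # (N_res, 3): every residue at the origin
    return rotations, translations

def frames_apply(rotations, translations, points):
    """Map local points (N_res, 3) into global coordinates using each residue's frame."""
    return np.einsum('rij,rj->ri', rotations, points) + translations

rotations, translations = init_backbone_frames(128)
```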
These representations are initialized in a trivial state with all rotations set to the identity and all positions set to the origin, but rapidly develop and refine a highly accurate protein structure with precise atomic details. Key innovations in this section of the network include breaking the chain structure to allow simultaneous local refinement of all parts of the structure, a novel equivariant transformer to allow the network to implicitly reason about the unrepresented side-chain atoms and a loss term that places substantial weight on the orientational correctness of the residues. Both within the structure module and throughout the whole network, we reinforce the notion of iterative refinement by repeatedly applying the final loss to outputs and then feeding the outputs recursively into the same modules. The iterative refinement using the whole network (which we term ‘recycling’ and is related to approaches in computer vision[28](/articles/s41586-021-03819-2#ref-CR28 \"Tu, Z. & Bai, X. Auto-context and its application to high-level vision tasks and 3D brain image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 32, 1744–1757 (2010).\"),[29](/articles/s41586-021-03819-2#ref-CR29 \"Carreira, J., Agrawal, P., Fragkiadaki, K. & Malik, J. Human pose estimation with iterative error feedback. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 4733–4742 (2016).\")) contributes markedly to accuracy with minor extra training time (see [Supplementary Methods 1.8](/articles/s41586-021-03819-2#MOESM1) for details).\n\nEvoformer\n---------\n\nThe key principle of the building block of the network—named Evoformer (Figs. [1](/articles/s41586-021-03819-2#Fig1)e, [3a](/articles/s41586-021-03819-2#Fig3))—is to view the prediction of protein structures as a graph inference problem in 3D space in which the edges of the graph are defined by residues in proximity. The elements of the pair representation encode information about the relation between the residues (Fig. [3b](/articles/s41586-021-03819-2#Fig3)). The columns of the MSA representation encode the individual residues of the input sequence while the rows represent the sequences in which those residues appear. Within this framework, we define a number of update operations that are applied in each block in which the different update operations are applied in series.\n\n**Fig. 3: Architectural details.**[![figure 3](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_Fig3_HTML.png)](/articles/s41586-021-03819-2/figures/3)**a**, Evoformer block. Arrows show the information flow. The shape of the arrays is shown in parentheses. **b**, The pair representation interpreted as directed edges in a graph. **c**, Triangle multiplicative update and triangle self-attention. The circles represent residues. Entries in the pair representation are illustrated as directed edges and in each diagram, the edge being updated is *ij*. **d**, Structure module including Invariant point attention (IPA) module. The single representation is a copy of the first row of the MSA representation. **e**, Residue gas: a representation of each residue as one free-floating rigid body for the backbone (blue triangles) and *χ* angles for the side chains (green circles). The corresponding atomic structure is shown below. **f**, Frame aligned point error (FAPE). 
Green, predicted structure; grey, true structure; (*R**k*, **t***k*), frames; **x**i, atom positions.\n\n[Full size image](/articles/s41586-021-03819-2/figures/3)The MSA representation updates the pair representation through an element-wise outer product that is summed over the MSA sequence dimension. In contrast to previous work[30](/articles/s41586-021-03819-2#ref-CR30 \"Mirabello, C. & Wallner, B. rawMSA: end-to-end deep learning using raw multiple sequence alignments. PLoS ONE 14, e0220182 (2019).\"), this operation is applied within every block rather than once in the network, which enables the continuous communication from the evolving MSA representation to the pair representation.\n\nWithin the pair representation, there are two different update patterns. Both are inspired by the necessity of consistency of the pair representation—for a pairwise description of amino acids to be representable as a single 3D structure, many constraints must be satisfied including the triangle inequality on distances. On the basis of this intuition, we arrange the update operations on the pair representation in terms of triangles of edges involving three different nodes (Fig. [3c](/articles/s41586-021-03819-2#Fig3)). In particular, we add an extra logit bias to axial attention[31](/articles/s41586-021-03819-2#ref-CR31 \"Huang, Z. et al. CCNet: criss-cross attention for semantic segmentation. In Proc. IEEE/CVF International Conference on Computer Vision 603–612 (2019).\") to include the ‘missing edge’ of the triangle and we define a non-attention update operation ‘triangle multiplicative update’ that uses two edges to update the missing third edge (see [Supplementary Methods 1.6.5](/articles/s41586-021-03819-2#MOESM1) for details). The triangle multiplicative update was developed originally as a more symmetric and cheaper replacement for the attention, and networks that use only the attention or multiplicative update are both able to produce high-accuracy structures. However, the combination of the two updates is more accurate.\n\nWe also use a variant of axial attention within the MSA representation. During the per-sequence attention in the MSA, we project additional logits from the pair stack to bias the MSA attention. This closes the loop by providing information flow from the pair representation back into the MSA representation, ensuring that the overall Evoformer block is able to fully mix information between the pair and MSA representations and prepare for structure generation within the structure module.\n\nEnd-to-end structure prediction\n-------------------------------\n\nThe structure module (Fig. [3d](/articles/s41586-021-03819-2#Fig3)) operates on a concrete 3D backbone structure using the pair representation and the original sequence row (single representation) of the MSA representation from the trunk. The 3D backbone structure is represented as *N*res independent rotations and translations, each with respect to the global frame (residue gas) (Fig. [3e](/articles/s41586-021-03819-2#Fig3)). These rotations and translations—representing the geometry of the N-Cα-C atoms—prioritize the orientation of the protein backbone so that the location of the side chain of each residue is highly constrained within that frame. 
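Because these frames encode the N-Cα-C geometry, a standard way to construct one from atom coordinates is a Gram-Schmidt orthonormalization built around the Cα atom, roughly as in the simplified numpy sketch below (an illustration under that assumption, with illustrative names, not AlphaFold's exact construction).

```python
# Build a right-handed orthonormal frame (R, t) for one residue from its N, CA and C atoms.
import numpy as np

def frame_from_three_points(x_n, x_ca, x_c):
    """Return (R, t): rotation with columns as local axes, translation at the CA atom."""
    v1 = x_c - x_ca
    v2 = x_n - x_ca
    e1 = v1 / np.linalg.norm(v1)
    u2 = v2 - np.dot(e1, v2) * e1          # Gram-Schmidt: remove the e1 component
    e2 = u2 / np.linalg.norm(u2)
    e3 = np.cross(e1, e2)                  # completes the right-handed basis
    R = np.stack([e1, e2, e3], axis=-1)    # columns are the local axes
    return R, x_ca

R, t = frame_from_three_points(np.array([1.0, 0.0, 0.0]),   # N
                               np.array([0.0, 0.0, 0.0]),   # CA
                               np.array([0.0, 1.5, 0.0]))   # C
```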
Conversely, the peptide bond geometry is completely unconstrained and the network is observed to frequently violate the chain constraint during the application of the structure module as breaking this constraint enables the local refinement of all parts of the chain without solving complex loop closure problems. Satisfaction of the peptide bond geometry is encouraged during fine-tuning by a violation loss term. Exact enforcement of peptide bond geometry is only achieved in the post-prediction relaxation of the structure by gradient descent in the Amber[32](/articles/s41586-021-03819-2#ref-CR32 \"Hornak, V. et al. Comparison of multiple Amber force fields and development of improved protein backbone parameters. Proteins 65, 712–725 (2006).\") force field. Empirically, this final relaxation does not improve the accuracy of the model as measured by the global distance test (GDT)[33](/articles/s41586-021-03819-2#ref-CR33 \"Zemla, A. LGA: a method for finding 3D similarities in protein structures. Nucleic Acids Res. 31, 3370–3374 (2003).\") or lDDT-Cα[34](/articles/s41586-021-03819-2#ref-CR34 \"Mariani, V., Biasini, M., Barbato, A. & Schwede, T. lDDT: a local superposition-free score for comparing protein structures and models using distance difference tests. Bioinformatics 29, 2722–2728 (2013).\") but does remove distracting stereochemical violations without the loss of accuracy.\n\nThe residue gas representation is updated iteratively in two stages (Fig. [3d](/articles/s41586-021-03819-2#Fig3)). First, a geometry-aware attention operation that we term ‘invariant point attention’ (IPA) is used to update an *N*res set of neural activations (single representation) without changing the 3D positions, then an equivariant update operation is performed on the residue gas using the updated activations. The IPA augments each of the usual attention queries, keys and values with 3D points that are produced in the local frame of each residue such that the final value is invariant to global rotations and translations (see [Methods](/articles/s41586-021-03819-2#Sec10) ‘IPA’ for details). The 3D queries and keys also impose a strong spatial/locality bias on the attention, which is well-suited to the iterative refinement of the protein structure. After each attention operation and element-wise transition block, the module computes an update to the rotation and translation of each backbone frame. The application of these updates within the local frame of each residue makes the overall attention and update block an equivariant operation on the residue gas.\n\nPredictions of side-chain *χ* angles as well as the final, per-residue accuracy of the structure (pLDDT) are computed with small per-residue networks on the final activations at the end of the network. The estimate of the TM-score (pTM) is obtained from a pairwise error prediction that is computed as a linear projection from the final pair representation. The final loss (which we term the frame-aligned point error (FAPE) (Fig. [3f](/articles/s41586-021-03819-2#Fig3))) compares the predicted atom positions to the true positions under many different alignments. For each alignment, defined by aligning the predicted frame (*R**k*, **t***k*) to the corresponding true frame, we compute the distance of all predicted atom positions **x***i* from the true atom positions. The resulting *N*frames × *N*atoms distances are penalized with a clamped *L*1 loss. 
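In simplified numpy form, and under the assumptions of a single clamp value, uniform weighting, and no length-scale normalization, the computation can be sketched as follows (an illustration of the idea with made-up names, not the exact AlphaFold loss).

```python
# Sketch of a frame-aligned point error: score every atom in the local coordinates of every frame.
import numpy as np

def fape(R_pred, t_pred, x_pred, R_true, t_true, x_true, clamp=10.0):
    """R_*: (N_frames, 3, 3) rotations; t_*: (N_frames, 3) translations; x_*: (N_atoms, 3) atoms."""
    # Atom positions in local frame k: x_local = R_k^T (x - t_k), for all frames k and atoms a.
    local_pred = np.einsum('kji,kaj->kai', R_pred, x_pred[None] - t_pred[:, None])
    local_true = np.einsum('kji,kaj->kai', R_true, x_true[None] - t_true[:, None])
    d = np.linalg.norm(local_pred - local_true, axis=-1)    # (N_frames, N_atoms) distances
    return np.minimum(d, clamp).mean()                      # clamped penalty, averaged

# Identical structures give zero error:
R = np.tile(np.eye(3), (5, 1, 1)); t = np.zeros((5, 3)); x = np.random.randn(20, 3)
assert np.isclose(fape(R, t, x, R, t, x), 0.0)
```

Because frames and atoms are transformed together, such a score is unchanged by a global rotation or translation of the prediction but not by a reflection, which is consistent with the role of FAPE as a source of chirality noted below.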
This creates a strong bias for atoms to be correct relative to the local frame of each residue and hence correct with respect to its side-chain interactions, as well as providing the main source of chirality for AlphaFold ([Supplementary Methods 1.9.3](/articles/s41586-021-03819-2#MOESM1) and Supplementary Fig. [9](/articles/s41586-021-03819-2#MOESM1)).\n\nTraining with labelled and unlabelled data\n------------------------------------------\n\nThe AlphaFold architecture is able to train to high accuracy using only supervised learning on PDB data, but we are able to enhance accuracy (Fig. [4a](/articles/s41586-021-03819-2#Fig4)) using an approach similar to noisy student self-distillation[35](/articles/s41586-021-03819-2#ref-CR35 \"Xie, Q., Luong, M.-T., Hovy, E. & Le, Q. V. Self-training with noisy student improves imagenet classification. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 10687–10698 (2020).\"). In this procedure, we use a trained network to predict the structure of around 350,000 diverse sequences from Uniclust30[36](/articles/s41586-021-03819-2#ref-CR36 \"Mirdita, M. et al. Uniclust databases of clustered and deeply annotated protein sequences and alignments. Nucleic Acids Res. 45, D170–D176 (2017).\") and make a new dataset of predicted structures filtered to a high-confidence subset. We then train the same architecture again from scratch using a mixture of PDB data and this new dataset of predicted structures as the training data, in which the various training data augmentations such as cropping and MSA subsampling make it challenging for the network to recapitulate the previously predicted structures. This self-distillation procedure makes effective use of the unlabelled sequence data and considerably improves the accuracy of the resulting network.\n\n**Fig. 4: Interpreting the neural network.**[![figure 4](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_Fig4_HTML.png)](/articles/s41586-021-03819-2/figures/4)**a**, Ablation results on two target sets: the CASP14 set of domains (*n* = 87 protein domains) and the PDB test set of chains with template coverage of ≤30% at 30% identity (*n* = 2,261 protein chains). Domains are scored with GDT and chains are scored with lDDT-Cα. The ablations are reported as a difference compared with the average of the three baseline seeds. Means (points) and 95% bootstrap percentile intervals (error bars) are computed using bootstrap estimates of 10,000 samples. **b**, Domain GDT trajectory over 4 recycling iterations and 48 Evoformer blocks on CASP14 targets LmrP (T1024) and Orf8 (T1064) where D1 and D2 refer to the individual domains as defined by the CASP assessment. Both T1024 domains obtain the correct structure early in the network, whereas the structure of T1064 changes multiple times and requires nearly the full depth of the network to reach the final structure. Note, 48 Evoformer blocks comprise one recycling iteration.\n\n[Full size image](/articles/s41586-021-03819-2/figures/4)Additionally, we randomly mask out or mutate individual residues within the MSA and have a Bidirectional Encoder Representations from Transformers (BERT)-style[37](/articles/s41586-021-03819-2#ref-CR37 \"Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. In Proc. 
2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies 1, 4171–4186 (2019).\") objective to predict the masked elements of the MSA sequences. This objective encourages the network to learn to interpret phylogenetic and covariation relationships without hardcoding a particular correlation statistic into the features. The BERT objective is trained jointly with the normal PDB structure loss on the same training examples and is not pre-trained, in contrast to recent independent work[38](/articles/s41586-021-03819-2#ref-CR38 \"Rao, R. et al. MSA transformer. In Proc. 38th International Conference on Machine Learning PMLR 139, 8844–8856 (2021).\").\n\nInterpreting the neural network\n-------------------------------\n\nTo understand how AlphaFold predicts protein structure, we trained a separate structure module for each of the 48 Evoformer blocks in the network while keeping all parameters of the main network frozen ([Supplementary Methods 1.14](/articles/s41586-021-03819-2#MOESM1)). Including our recycling stages, this provides a trajectory of 192 intermediate structures—one per full Evoformer block—in which each intermediate represents the belief of the network of the most likely structure at that block. The resulting trajectories are surprisingly smooth after the first few blocks, showing that AlphaFold makes constant incremental improvements to the structure until it can no longer improve (see Fig. [4b](/articles/s41586-021-03819-2#Fig4) for a trajectory of accuracy). These trajectories also illustrate the role of network depth. For very challenging proteins such as ORF8 of SARS-CoV-2 (T1064), the network searches and rearranges secondary structure elements for many layers before settling on a good structure. For other proteins such as LmrP (T1024), the network finds the final structure within the first few layers. Structure trajectories of CASP14 targets T1024, T1044, T1064 and T1091 that demonstrate a clear iterative building process for a range of protein sizes and difficulties are shown in Supplementary Videos [1](/articles/s41586-021-03819-2#MOESM3)–[4](/articles/s41586-021-03819-2#MOESM6). In [Supplementary Methods 1.16](/articles/s41586-021-03819-2#MOESM1) and Supplementary Figs. [12](/articles/s41586-021-03819-2#MOESM1), [13](/articles/s41586-021-03819-2#MOESM1), we interpret the attention maps produced by AlphaFold layers.\n\nFigure [4a](/articles/s41586-021-03819-2#Fig4) contains detailed ablations of the components of AlphaFold that demonstrate that a variety of different mechanisms contribute to AlphaFold accuracy. Detailed descriptions of each ablation model, their training details, extended discussion of ablation results and the effect of MSA depth on each ablation are provided in [Supplementary Methods 1.13](/articles/s41586-021-03819-2#MOESM1) and Supplementary Fig. [10](/articles/s41586-021-03819-2#MOESM1).\n\nMSA depth and cross-chain contacts\n----------------------------------\n\nAlthough AlphaFold has a high accuracy across the vast majority of deposited PDB structures, we note that there are still factors that affect accuracy or limit the applicability of the model. The model uses MSAs and the accuracy decreases substantially when the median alignment depth is less than around 30 sequences (see Fig. [5a](/articles/s41586-021-03819-2#Fig5) for details). We observe a threshold effect where improvements in MSA depth over around 100 sequences lead to small gains. 
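For reference, the median alignment depth discussed here can be approximated directly from an MSA; the sketch below (illustrative names, plain counting) takes the number of non-gap characters per column and their median, whereas the depths reported in this paper additionally use Neff-style sequence weighting, which is omitted.

```python
# Approximate median MSA depth: non-gap count per aligned position, then the median.
import numpy as np

def median_msa_depth(msa, gap="-"):
    """msa: list of equal-length aligned sequences (strings)."""
    columns = np.array([list(seq) for seq in msa])      # (N_seq, N_res) characters
    per_position = (columns != gap).sum(axis=0)         # non-gap count per residue position
    return float(np.median(per_position))

msa = ["MKV-LA",
       "MKVQLA",
       "M--QLA"]
print(median_msa_depth(msa))   # 3 sequences with some gapped columns -> median depth 2.5
```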
We hypothesize that the MSA information is needed to coarsely find the correct structure within the early stages of the network, but refinement of that prediction into a high-accuracy model does not depend crucially on the MSA information. The other substantial limitation that we have observed is that AlphaFold is much weaker for proteins that have few intra-chain or homotypic contacts compared to the number of heterotypic contacts (further details are provided in a companion paper[39](/articles/s41586-021-03819-2#ref-CR39 "Tunyasuvunakool, K. et al. Highly accurate protein structure prediction for the human proteome. Nature https://doi.org/10.1038/s41586-021-03828-1 (2021).")). This typically occurs for bridging domains within larger complexes in which the shape of the protein is created almost entirely by interactions with other chains in the complex. Conversely, AlphaFold is often able to give high-accuracy predictions for homomers, even when the chains are substantially intertwined (Fig. [5b](/articles/s41586-021-03819-2#Fig5)). We expect that the ideas of AlphaFold are readily applicable to predicting full hetero-complexes in a future system and that this will remove the difficulty with protein chains that have a large number of hetero-contacts.

**Fig. 5: Effect of MSA depth and cross-chain contacts.**

[![figure 5](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_Fig5_HTML.png)](/articles/s41586-021-03819-2/figures/5)

**a**, Backbone accuracy (lDDT-Cα) for the redundancy-reduced set of the PDB after our training data cut-off, restricting to proteins in which at most 25% of the long-range contacts are between different heteromer chains. We further consider two groups of proteins based on template coverage at 30% sequence identity: covering more than 60% of the chain (*n* = 6,743 protein chains) and covering less than 30% of the chain (*n* = 1,596 protein chains). MSA depth is computed by counting the number of non-gap residues for each position in the MSA (using the *N*eff weighting scheme; see [Methods](/articles/s41586-021-03819-2#Sec10) for details) and taking the median across residues. The curves are obtained through Gaussian kernel average smoothing (window size is 0.2 units in log10(*N*eff)); the shaded area is the 95% confidence interval estimated using bootstrap of 10,000 samples. **b**, An intertwined homotrimer (PDB 6SK0) is correctly predicted without input stoichiometry and only a weak template (blue is predicted and green is experimental).

Related work
------------

The prediction of protein structures has had a long and varied development, which is extensively covered in a number of reviews[14](/articles/s41586-021-03819-2#ref-CR14 "Pearce, R. & Zhang, Y. Deep learning techniques have significantly impacted protein structure prediction and protein design. Curr. Opin. Struct. Biol. 68, 194–207 (2021)."),[40](#ref-CR40 "Kuhlman, B. & Bradley, P. Advances in protein structure prediction and design. Nat. Rev. Mol. Cell Biol. 20, 681–697 (2019)."),[41](#ref-CR41 "Marks, D. S., Hopf, T. A. & Sander, C. Protein structure prediction from sequence variation. Nat. Biotechnol. 30, 1072–1080 (2012)."),[42](#ref-CR42 "Qian, N. & Sejnowski, T. J. Predicting the secondary structure of globular proteins using neural network models. J. Mol. Biol. 
202, 865–884 (1988).\"),[43](/articles/s41586-021-03819-2#ref-CR43 \"Fariselli, P., Olmea, O., Valencia, A. & Casadio, R. Prediction of contact maps with neural networks and correlated mutations. Protein Eng. 14, 835–843 (2001).\"). Despite the long history of applying neural networks to structure prediction[14](/articles/s41586-021-03819-2#ref-CR14 \"Pearce, R. & Zhang, Y. Deep learning techniques have significantly impacted protein structure prediction and protein design. Curr. Opin. Struct. Biol. 68, 194–207 (2021).\"),[42](/articles/s41586-021-03819-2#ref-CR42 \"Qian, N. & Sejnowski, T. J. Predicting the secondary structure of globular proteins using neural network models. J. Mol. Biol. 202, 865–884 (1988).\"),[43](/articles/s41586-021-03819-2#ref-CR43 \"Fariselli, P., Olmea, O., Valencia, A. & Casadio, R. Prediction of contact maps with neural networks and correlated mutations. Protein Eng. 14, 835–843 (2001).\"), they have only recently come to improve structure prediction[10](/articles/s41586-021-03819-2#ref-CR10 \"Senior, A. W. et al. Improved protein structure prediction using potentials from deep learning. Nature 577, 706–710 (2020).\"),[11](/articles/s41586-021-03819-2#ref-CR11 \"Wang, S., Sun, S., Li, Z., Zhang, R. & Xu, J. Accurate de novo prediction of protein contact map by ultra-deep learning model. PLOS Comput. Biol. 13, e1005324 (2017).\"),[44](/articles/s41586-021-03819-2#ref-CR44 \"Yang, J. et al. Improved protein structure prediction using predicted interresidue orientations. Proc. Natl Acad. Sci. USA 117, 1496–1503 (2020).\"),[45](/articles/s41586-021-03819-2#ref-CR45 \"Li, Y. et al. Deducing high-accuracy protein contact-maps from a triplet of coevolutionary matrices through deep residual convolutional networks. PLOS Comput. Biol. 17, e1008865 (2021).\"). These approaches effectively leverage the rapid improvement in computer vision systems[46](/articles/s41586-021-03819-2#ref-CR46 \"He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 770–778 (2016).\") by treating the problem of protein structure prediction as converting an ‘image’ of evolutionary couplings[22](#ref-CR22 \"Weigt, M., White, R. A., Szurmant, H., Hoch, J. A. & Hwa, T. Identification of direct residue contacts in protein–protein interaction by message passing. Proc. Natl Acad. Sci. USA 106, 67–72 (2009).\"),[23](#ref-CR23 \"Marks, D. S. et al. Protein 3D structure computed from evolutionary sequence variation. PLoS ONE 6, e28766 (2011).\"),[24](/articles/s41586-021-03819-2#ref-CR24 \"Jones, D. T., Buchan, D. W. A., Cozzetto, D. & Pontil, M. PSICOV: precise structural contact prediction using sparse inverse covariance estimation on large multiple sequence alignments. Bioinformatics 28, 184–190 (2012).\") to an ‘image’ of the protein distance matrix and then integrating the distance predictions into a heuristic system that produces the final 3D coordinate prediction. A few recent studies have been developed to predict the 3D coordinates directly[47](#ref-CR47 \"AlQuraishi, M. End-to-end differentiable learning of protein structure. Cell Syst. 8, 292–301 (2019).\"),[48](#ref-CR48 \"Senior, A. W. et al. Protein structure prediction using multiple deep neural networks in the 13th Critical Assessment of Protein Structure Prediction (CASP13). Proteins 87, 1141–1148 (2019).\"),[49](#ref-CR49 \"Ingraham, J., Riesselman, A. J., Sander, C. & Marks, D. S. Learning protein structure with a differentiable simulator. 
in Proc. International Conference on Learning Representations (2019)."),[50](/articles/s41586-021-03819-2#ref-CR50 "Li, J. Universal transforming geometric network. Preprint at https://arxiv.org/abs/1908.00723 (2019)."), but the accuracy of these approaches does not match traditional, hand-crafted structure prediction pipelines[51](/articles/s41586-021-03819-2#ref-CR51 "Xu, J., McPartlon, M. & Li, J. Improved protein structure prediction by deep learning irrespective of co-evolution information. Nat. Mach. Intell. 3, 601–609 (2021)."). In parallel, the success of attention-based networks for language processing[52](/articles/s41586-021-03819-2#ref-CR52 "Vaswani, A. et al. Attention is all you need. In Advances in Neural Information Processing Systems 5998–6008 (2017).") and, more recently, computer vision[31](/articles/s41586-021-03819-2#ref-CR31 "Huang, Z. et al. CCNet: criss-cross attention for semantic segmentation. In Proc. IEEE/CVF International Conference on Computer Vision 603–612 (2019)."),[53](/articles/s41586-021-03819-2#ref-CR53 "Wang, H. et al. Axial-deeplab: stand-alone axial-attention for panoptic segmentation. in European Conference on Computer Vision 108–126 (Springer, 2020).") has inspired the exploration of attention-based methods for interpreting protein sequences[54](#ref-CR54 "Alley, E. C., Khimulya, G., Biswas, S., AlQuraishi, M. & Church, G. M. Unified rational protein engineering with sequence-based deep representation learning. Nat. Methods 16, 1315–1322 (2019)."),[55](#ref-CR55 "Heinzinger, M. et al. Modeling aspects of the language of life through transfer-learning protein sequences. BMC Bioinformatics 20, 723 (2019)."),[56](/articles/s41586-021-03819-2#ref-CR56 "Rives, A. et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. Proc. Natl Acad. Sci. USA 118, e2016239118 (2021).").

Discussion
----------

The methodology that we have taken in designing AlphaFold is a combination of the bioinformatics and physical approaches: we use a physical and geometric inductive bias to build components that learn from PDB data with minimal imposition of handcrafted features (for example, AlphaFold builds hydrogen bonds effectively without a hydrogen bond score function). This results in a network that learns far more efficiently from the limited data in the PDB but is able to cope with the complexity and variety of structural data.

In particular, AlphaFold is able to handle missing physical context and produce accurate models in challenging cases such as intertwined homomers or proteins that only fold in the presence of an unknown haem group. The ability to handle underspecified structural conditions is essential to learning from PDB structures as the PDB represents the full range of conditions in which structures have been solved. In general, AlphaFold is trained to produce the protein structure most likely to appear as part of a PDB structure. For example, in cases in which a particular stoichiometry, ligand or ion is predictable from the sequence alone, AlphaFold is likely to produce a structure that respects those constraints implicitly.

AlphaFold has already demonstrated its utility to the experimental community, both for molecular replacement[57](/articles/s41586-021-03819-2#ref-CR57 "Pereira, J. et al. High-accuracy protein structure prediction in CASP14. Proteins https://doi.org/10.1002/prot.26171 (2021).")
and for interpreting cryogenic electron microscopy maps[58](/articles/s41586-021-03819-2#ref-CR58 "Gupta, M. et al. CryoEM and AI reveal a structure of SARS-CoV-2 Nsp2, a multifunctional protein involved in key host processes. Preprint at https://doi.org/10.1101/2021.05.10.443524 (2021)."). Moreover, because AlphaFold outputs protein coordinates directly, AlphaFold produces predictions in graphics processing unit (GPU) minutes to GPU hours depending on the length of the protein sequence (for example, around one GPU minute per model for 384 residues; see [Methods](/articles/s41586-021-03819-2#Sec10) for details). This opens up the exciting possibility of predicting structures at the proteome scale and beyond—in a companion paper[39](/articles/s41586-021-03819-2#ref-CR39 "Tunyasuvunakool, K. et al. Highly accurate protein structure prediction for the human proteome. Nature https://doi.org/10.1038/s41586-021-03828-1 (2021)."), we demonstrate the application of AlphaFold to the entire human proteome[39](/articles/s41586-021-03819-2#ref-CR39 "Tunyasuvunakool, K. et al. Highly accurate protein structure prediction for the human proteome. Nature https://doi.org/10.1038/s41586-021-03828-1 (2021).").

The explosion in available genomic sequencing techniques and data has revolutionized bioinformatics but the intrinsic challenge of experimental structure determination has prevented a similar expansion in our structural knowledge. By developing an accurate protein structure prediction algorithm, coupled with existing large and well-curated structure and sequence databases assembled by the experimental community, we hope to accelerate the advancement of structural bioinformatics that can keep pace with the genomics revolution. We hope that AlphaFold—and computational approaches that apply its techniques for other biophysical problems—will become essential tools of modern biology.

Methods
-------

### Full algorithm details

Extensive explanations of the components and their motivations are available in [Supplementary Methods 1.1–1.10](/articles/s41586-021-03819-2#MOESM1). In addition, pseudocode is available in [Supplementary Information Algorithms 1–32](/articles/s41586-021-03819-2#MOESM1), network diagrams in Supplementary Figs. [1](/articles/s41586-021-03819-2#MOESM1)–[8](/articles/s41586-021-03819-2#MOESM1) and input features in Supplementary Table [1](/articles/s41586-021-03819-2#MOESM1); additional details are provided in Supplementary Tables [2](/articles/s41586-021-03819-2#MOESM1), [3](/articles/s41586-021-03819-2#MOESM1). Training and inference details are provided in [Supplementary Methods 1.11–1.12](/articles/s41586-021-03819-2#MOESM1) and Supplementary Tables [4](/articles/s41586-021-03819-2#MOESM1), [5](/articles/s41586-021-03819-2#MOESM1).

### IPA

The IPA module combines the pair representation, the single representation and the geometric representation to update the single representation (Supplementary Fig. [8](/articles/s41586-021-03819-2#MOESM1)). Each of these representations contributes affinities to the shared attention weights and then uses these weights to map its values to the output. The IPA operates in 3D space. Each residue produces query points, key points and value points in its local frame. These points are projected into the global frame using the backbone frame of the residue, where they interact with each other.
The resulting points are then projected back into the local frame. The affinity computation in the 3D space uses squared distances and the coordinate transformations ensure the invariance of this module with respect to the global frame (see [Supplementary Methods 1.8.2](/articles/s41586-021-03819-2#MOESM1) ‘Invariant point attention (IPA)’ for the algorithm, proof of invariance and a description of the full multi-head version). A related construction that uses classic geometric invariants to construct pairwise features in place of the learned 3D points has been applied to protein design[59](/articles/s41586-021-03819-2#ref-CR59 "Ingraham, J., Garg, V. K., Barzilay, R. & Jaakkola, T. Generative models for graph-based protein design. in Proc. 33rd Conference on Neural Information Processing Systems (2019).").

In addition to the IPA, standard dot product attention is computed on the abstract single representation and a special attention on the pair representation. The pair representation augments both the logits and the values of the attention process, which is the primary way in which the pair representation controls the structure generation.
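The geometric core of this attention can be illustrated with a deliberately simplified, single-head, points-only sketch in NumPy. It omits the scalar queries and keys on the single representation, the pair-representation bias on the logits, the multiple heads and the learned weighting described in Supplementary Methods 1.8.2, and the function names below are illustrative rather than AlphaFold's.

```python
import numpy as np

def to_global(R, t, points):
    # R: (N, 3, 3) backbone rotations, t: (N, 3) translations, points: (N, P, 3) local-frame points.
    return np.einsum('nij,npj->npi', R, points) + t[:, None, :]

def to_local(R, t, points):
    # Inverse rigid transform: x_local = R^T (x_global - t).
    return np.einsum('nji,npj->npi', R, points - t[:, None, :])

def point_attention(R, t, q_pts, k_pts, v_pts, gamma=1.0):
    """Single-head, points-only sketch of invariant-point-style attention.

    q_pts, k_pts, v_pts: (N, P, 3) query/key/value points in each residue's local frame.
    Affinities come from squared distances between global-frame query and key points;
    the attended value points are mapped back into each residue's local frame.
    """
    q = to_global(R, t, q_pts)                                   # (N, P, 3)
    k = to_global(R, t, k_pts)
    v = to_global(R, t, v_pts)
    # Sum of squared distances between corresponding query and key points, per residue pair.
    d2 = ((q[:, None, :, :] - k[None, :, :, :]) ** 2).sum(axis=(-1, -2))   # (N, N)
    logits = -0.5 * gamma * d2
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                           # attention weights, (N, N)
    out_global = np.einsum('nm,mpi->npi', w, v)                  # attended global value points
    return to_local(R, t, out_global)                            # back to each local frame
```

Because the affinities depend only on distances between global-frame points and the attended points are mapped back into each residue's local frame, the output of this sketch is unchanged when all backbone frames are rotated or translated together, which is the invariance property referred to above.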
### Inputs and data sources

Inputs to the network are the primary sequence, sequences from evolutionarily related proteins in the form of an MSA created by standard tools including jackhmmer[60](/articles/s41586-021-03819-2#ref-CR60 "Johnson, L. S., Eddy, S. R. & Portugaly, E. Hidden Markov model speed heuristic and iterative HMM search procedure. BMC Bioinformatics 11, 431 (2010).") and HHBlits[61](/articles/s41586-021-03819-2#ref-CR61 "Remmert, M., Biegert, A., Hauser, A. & Söding, J. HHblits: lightning-fast iterative protein sequence searching by HMM-HMM alignment. Nat. Methods 9, 173–175 (2012)."), and 3D atom coordinates of a small number of homologous structures (templates) where available. For both the MSA and templates, the search processes are tuned for high recall; spurious matches will probably appear in the raw MSA but this matches the training condition of the network.

One of the sequence databases used, Big Fantastic Database (BFD), was custom-made and released publicly (see ‘Data availability’) and was used by several CASP teams. BFD is one of the largest publicly available collections of protein families. It consists of 65,983,866 families represented as MSAs and hidden Markov models (HMMs) covering 2,204,359,010 protein sequences from reference databases, metagenomes and metatranscriptomes.

BFD was built in three steps. First, 2,423,213,294 protein sequences were collected from UniProt (Swiss-Prot&TrEMBL, 2017-11)[62](/articles/s41586-021-03819-2#ref-CR62 "The UniProt Consortium. UniProt: the universal protein knowledgebase in 2021. Nucleic Acids Res. 49, D480–D489 (2020)."), a soil reference protein catalogue and the marine eukaryotic reference catalogue[7](/articles/s41586-021-03819-2#ref-CR7 "Steinegger, M., Mirdita, M. & Söding, J. Protein-level assembly increases protein sequence recovery from metagenomic samples manyfold. Nat. Methods 16, 603–606 (2019)."), and clustered to 30% sequence identity, while enforcing a 90% alignment coverage of the shorter sequences using MMseqs2/Linclust[63](/articles/s41586-021-03819-2#ref-CR63 "Steinegger, M. & Söding, J. Clustering huge protein sequence sets in linear time. Nat. Commun. 9, 2542 (2018)."). This resulted in 345,159,030 clusters. For computational efficiency, we removed all clusters with fewer than three members, resulting in 61,083,719 clusters. Second, we added 166,510,624 representative protein sequences from Metaclust NR (2017-05; discarding all sequences shorter than 150 residues)[63](/articles/s41586-021-03819-2#ref-CR63 "Steinegger, M. & Söding, J. Clustering huge protein sequence sets in linear time. Nat. Commun. 9, 2542 (2018).") by aligning them against the cluster representatives using MMseqs2[64](/articles/s41586-021-03819-2#ref-CR64 "Steinegger, M. & Söding, J. MMseqs2 enables sensitive protein sequence searching for the analysis of massive data sets. Nat. Biotechnol. 35, 1026–1028 (2017)."). Sequences that fulfilled the sequence identity and coverage criteria were assigned to the best scoring cluster. The remaining 25,347,429 sequences that could not be assigned were clustered separately and added as new clusters, resulting in the final clustering. Third, for each of the clusters, we computed an MSA using FAMSA[65](/articles/s41586-021-03819-2#ref-CR65 "Deorowicz, S., Debudaj-Grabysz, A. & Gudyś, A. FAMSA: fast and accurate multiple sequence alignment of huge protein families. Sci. Rep. 6, 33964 (2016).") and computed the HMMs following the Uniclust HH-suite database protocol[36](/articles/s41586-021-03819-2#ref-CR36 "Mirdita, M. et al. Uniclust databases of clustered and deeply annotated protein sequences and alignments. Nucleic Acids Res. 45, D170–D176 (2017).").

The following versions of public datasets were used in this study. Our models were trained on a copy of the PDB[5](/articles/s41586-021-03819-2#ref-CR5 "wwPDB Consortium. Protein Data Bank: the single global archive for 3D macromolecular structure data. Nucleic Acids Res. 47, D520–D528 (2018).") downloaded on 28 August 2019. For finding template structures at prediction time, we used a copy of the PDB downloaded on 14 May 2020, and the PDB70[66](/articles/s41586-021-03819-2#ref-CR66 "Steinegger, M. et al. HH-suite3 for fast remote homology detection and deep protein annotation. BMC Bioinformatics 20, 473 (2019).") clustering database downloaded on 13 May 2020. For MSA lookup at both training and prediction time, we used UniRef90[67](/articles/s41586-021-03819-2#ref-CR67 "Suzek, B. E., Wang, Y., Huang, H., McGarvey, P. B. & Wu, C. H. UniRef clusters: a comprehensive and scalable alternative for improving sequence similarity searches. Bioinformatics 31, 926–932 (2015).") v.2020\_01, BFD, Uniclust30[36](/articles/s41586-021-03819-2#ref-CR36 "Mirdita, M. et al. Uniclust databases of clustered and deeply annotated protein sequences and alignments. Nucleic Acids Res. 45, D170–D176 (2017).") v.2018\_08 and MGnify[6](/articles/s41586-021-03819-2#ref-CR6 "Mitchell, A. L. et al. MGnify: the microbiome analysis resource in 2020. Nucleic Acids Res. 48, D570–D578 (2020).") v.2018\_12. For sequence distillation, we used Uniclust30[36](/articles/s41586-021-03819-2#ref-CR36 "Mirdita, M. et al. Uniclust databases of clustered and deeply annotated protein sequences and alignments. Nucleic Acids Res. 45, D170–D176 (2017).") v.2018\_08 to construct a distillation structure dataset. Full details are provided in [Supplementary Methods 1.2](/articles/s41586-021-03819-2#MOESM1).

For MSA search on BFD + Uniclust30, and template search against PDB70, we used HHBlits[61](/articles/s41586-021-03819-2#ref-CR61 "Remmert, M., Biegert, A., Hauser, A. & Söding, J. HHblits: lightning-fast iterative protein sequence searching by HMM-HMM alignment. Nat. Methods 9, 173–175 (2012).") and HHSearch[66](/articles/s41586-021-03819-2#ref-CR66 "Steinegger, M. et al. HH-suite3 for fast remote homology detection and deep protein annotation. BMC Bioinformatics 20, 473 (2019).")
from hh-suite v.3.0-beta.3 (version 14/07/2017). For MSA search on UniRef90 and clustered MGnify, we used jackhmmer from HMMER3[68](/articles/s41586-021-03819-2#ref-CR68 "Eddy, S. R. Accelerated profile HMM searches. PLOS Comput. Biol. 7, e1002195 (2011)."). For constrained relaxation of structures, we used OpenMM v.7.3.1[69](/articles/s41586-021-03819-2#ref-CR69 "Eastman, P. et al. OpenMM 7: rapid development of high performance algorithms for molecular dynamics. PLOS Comput. Biol. 13, e1005659 (2017).") with the Amber99sb force field[32](/articles/s41586-021-03819-2#ref-CR32 "Hornak, V. et al. Comparison of multiple Amber force fields and development of improved protein backbone parameters. Proteins 65, 712–725 (2006)."). For neural network construction, running and other analyses, we used TensorFlow[70](/articles/s41586-021-03819-2#ref-CR70 "Ashish, A. M. A. et al. TensorFlow: large-scale machine learning on heterogeneous systems. Preprint at https://arxiv.org/abs/1603.04467 (2015)."), Sonnet[71](/articles/s41586-021-03819-2#ref-CR71 "Reynolds, M. et al. Open sourcing Sonnet – a new library for constructing neural networks. DeepMind https://deepmind.com/blog/open-sourcing-sonnet/ (7 April 2017)."), NumPy[72](/articles/s41586-021-03819-2#ref-CR72 "Harris, C. R. et al. Array programming with NumPy. Nature 585, 357–362 (2020)."), Python[73](/articles/s41586-021-03819-2#ref-CR73 "Van Rossum, G. & Drake, F. L. Python 3 Reference Manual (CreateSpace, 2009).") and Colab[74](/articles/s41586-021-03819-2#ref-CR74 "Bisong, E. in Building Machine Learning and Deep Learning Models on Google Cloud Platform: A Comprehensive Guide for Beginners 59–64 (Apress, 2019).").

To quantify the effect of the different sequence data sources, we re-ran the CASP14 proteins using the same models but varying how the MSA was constructed. Removing BFD reduced the mean accuracy by 0.4 GDT, removing MGnify reduced the mean accuracy by 0.7 GDT, and removing both reduced the mean accuracy by 6.1 GDT. In each case, we found that most targets had very small changes in accuracy but a few outliers had very large (20+ GDT) differences. This is consistent with the results in Fig. [5a](/articles/s41586-021-03819-2#Fig5), in which the depth of the MSA is relatively unimportant until it approaches a threshold value of around 30 sequences, at which point the MSA size effects become quite large. We observe mostly overlapping effects between inclusion of BFD and MGnify, but having at least one of these metagenomics databases is very important for target classes that are poorly represented in UniRef, and having both was necessary to achieve full CASP accuracy.

### Training regimen

To train, we use structures from the PDB with a maximum release date of 30 April 2018. Chains are sampled in inverse proportion to cluster size of a 40% sequence identity clustering. We then randomly crop them to 256 residues and assemble into batches of size 128. We train the model on Tensor Processing Unit (TPU) v3 with a batch size of 1 per TPU core, hence the model uses 128 TPU v3 cores. The model is trained until convergence (around 10 million samples) and further fine-tuned using longer crops of 384 residues, a larger MSA stack and a reduced learning rate (see [Supplementary Methods 1.11](/articles/s41586-021-03819-2#MOESM1) for the exact configuration). The initial training stage takes approximately 1 week, and the fine-tuning stage takes approximately 4 additional days.
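As a rough illustration of this sampling and cropping scheme, the sketch below draws chains with probability inversely proportional to the size of their 40% identity cluster and takes a random contiguous 256-residue crop from each. It is a simplified example under stated assumptions (array inputs `chain_lengths` and `cluster_ids`, purely contiguous crops); the exact cropping rules, MSA subsampling and other augmentations are given in Supplementary Methods 1.11.

```python
import numpy as np

# Illustrative constants from the description above.
CROP_SIZE = 256
BATCH_SIZE = 128

def sampling_probabilities(cluster_ids):
    """Per-chain sampling probability, inversely proportional to the size of the
    chain's 40% sequence identity cluster."""
    _, inverse, counts = np.unique(cluster_ids, return_inverse=True, return_counts=True)
    weights = 1.0 / counts[inverse]
    return weights / weights.sum()

def random_crop(length, crop_size=CROP_SIZE, rng=None):
    """Pick a random contiguous crop of at most crop_size residues."""
    rng = rng or np.random.default_rng()
    if length <= crop_size:
        return 0, length
    start = rng.integers(0, length - crop_size + 1)
    return start, start + crop_size

def sample_batch(chain_lengths, cluster_ids, rng=None):
    """Draw one batch of (chain index, crop start, crop end) training examples."""
    rng = rng or np.random.default_rng()
    p = sampling_probabilities(cluster_ids)
    picks = rng.choice(len(chain_lengths), size=BATCH_SIZE, p=p, replace=True)
    return [(i, *random_crop(chain_lengths[i], rng=rng)) for i in picks]
```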
The network is supervised by the FAPE loss and a number of auxiliary losses. First, the final pair representation is linearly projected to a binned distance distribution (distogram) prediction, scored with a cross-entropy loss. Second, we use random masking on the input MSAs and require the network to reconstruct the masked regions from the output MSA representation using a BERT-like loss[37](/articles/s41586-021-03819-2#ref-CR37 "Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. In Proc. 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies 1, 4171–4186 (2019)."). Third, the output single representations of the structure module are used to predict binned per-residue lDDT-Cα values. Finally, we use an auxiliary side-chain loss during training, and an auxiliary structure violation loss during fine-tuning. Detailed descriptions and weighting are provided in the [Supplementary Information](/articles/s41586-021-03819-2#MOESM1).

An initial model trained with the above objectives was used to make structure predictions for a Uniclust dataset of 355,993 sequences with the full MSAs. These predictions were then used to train a final model with identical hyperparameters, except for sampling examples 75% of the time from the Uniclust prediction set, with sub-sampled MSAs, and 25% of the time from the clustered PDB set.

We train five different models using different random seeds, some with templates and some without, to encourage diversity in the predictions (see Supplementary Table [5](/articles/s41586-021-03819-2#MOESM1) and [Supplementary Methods 1.12.1](/articles/s41586-021-03819-2#MOESM1) for details). We also fine-tuned these models after CASP14 to add a pTM prediction objective ([Supplementary Methods 1.9.7](/articles/s41586-021-03819-2#MOESM1)) and use the obtained models for Fig. [2d](/articles/s41586-021-03819-2#Fig2).

### Inference regimen

We run inference with the five trained models and use the predicted confidence score to select the best model per target.

Using our CASP14 configuration for AlphaFold, the trunk of the network is run multiple times with different random choices for the MSA cluster centres (see [Supplementary Methods 1.11.2](/articles/s41586-021-03819-2#MOESM1) for details of the ensembling procedure). The full time to make a structure prediction varies considerably depending on the length of the protein. Representative timings for the neural network using a single model on a V100 GPU are 4.8 min with 256 residues, 9.2 min with 384 residues and 18 h with 2,500 residues. These timings are measured using our open-source code, which is notably faster than the version we ran in CASP14 as we now use the XLA compiler[75](/articles/s41586-021-03819-2#ref-CR75 "TensorFlow. XLA: Optimizing Compiler for TensorFlow. https://www.tensorflow.org/xla (2018).").

Since CASP14, we have found that the accuracy of the network without ensembling is very close to, or equal to, the accuracy with ensembling, so we turn off ensembling for most inference.
Without ensembling, the network is 8× faster and the representative timings for a single model are 0.6 min with 256 residues, 1.1 min with 384 residues and 2.1 h with 2,500 residues.

Running inference on large proteins can easily exceed the memory of a single GPU. For a V100 with 16 GB of memory, we can predict the structure of proteins up to around 1,300 residues without ensembling, and the 256- and 384-residue inference timings reported above fit within the memory of a single GPU. The memory usage is approximately quadratic in the number of residues, so a 2,500-residue protein requires unified memory, which allows us to greatly exceed the memory of a single V100. In our cloud setup, a single V100 is used for computation on a 2,500-residue protein, but we requested four GPUs to have sufficient memory.

Searching genetic sequence databases to prepare inputs and final relaxation of the structures take additional central processing unit (CPU) time but do not require a GPU or TPU.

### Metrics

The predicted structure is compared to the true structure from the PDB in terms of the lDDT metric[34](/articles/s41586-021-03819-2#ref-CR34 "Mariani, V., Biasini, M., Barbato, A. & Schwede, T. lDDT: a local superposition-free score for comparing protein structures and models using distance difference tests. Bioinformatics 29, 2722–2728 (2013)."), as this metric reports the domain accuracy without requiring a domain segmentation of chain structures. The distances are either computed between all heavy atoms (lDDT) or only the Cα atoms to measure the backbone accuracy (lDDT-Cα). As lDDT-Cα only focuses on the Cα atoms, it does not include the penalty for structural violations and clashes. Domain accuracies in CASP are reported as GDT[33](/articles/s41586-021-03819-2#ref-CR33 "Zemla, A. LGA: a method for finding 3D similarities in protein structures. Nucleic Acids Res. 31, 3370–3374 (2003).") and the TM-score[27](/articles/s41586-021-03819-2#ref-CR27 "Zhang, Y. & Skolnick, J. Scoring function for automated assessment of protein structure template quality. Proteins 57, 702–710 (2004).") is used as a full chain global superposition metric.

We also report accuracies using the r.m.s.d.95 (Cα r.m.s.d. at 95% coverage). We perform five iterations of (1) a least-squares alignment of the predicted structure and the PDB structure on the currently chosen Cα atoms (using all Cα atoms in the first iteration); (2) selecting the 95% of Cα atoms with the lowest alignment error. The r.m.s.d. of the atoms chosen for the final iteration is the r.m.s.d.95. This metric is more robust to apparent errors that can originate from crystal structure artefacts, although in some cases the removed 5% of residues will contain genuine modelling errors.
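The r.m.s.d.95 procedure above translates almost directly into code. The following NumPy sketch uses a standard Kabsch least-squares superposition and the iterative 95% selection described in the text; it is an illustrative reimplementation under this reading of the procedure, not the exact evaluation script.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares superposition: returns (R, t) such that P @ R + t best fits Q."""
    Pm, Qm = P.mean(0), Q.mean(0)
    H = (P - Pm).T @ (Q - Qm)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt          # proper rotation (no reflection)
    return R, Qm - Pm @ R

def rmsd95(pred_ca, true_ca, coverage=0.95, n_iter=5):
    """Cα r.m.s.d. at 95% coverage: five iterations of (1) least-squares alignment
    on the currently chosen Cα atoms (all atoms initially) and (2) keeping the 95%
    of atoms with the lowest alignment error; the r.m.s.d. of the final selection
    is returned. pred_ca and true_ca are (N, 3) arrays of Cα coordinates."""
    n = len(pred_ca)
    chosen = np.arange(n)                        # start from all Cα atoms
    n_keep = max(1, int(round(coverage * n)))
    for _ in range(n_iter):
        R, t = kabsch(pred_ca[chosen], true_ca[chosen])
        err = np.linalg.norm(pred_ca @ R + t - true_ca, axis=-1)
        chosen = np.argsort(err)[:n_keep]        # 95% of atoms with the lowest error
    return np.sqrt((err[chosen] ** 2).mean())
```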
### Test set of recent PDB sequences

For evaluation on recent PDB sequences (Figs. [2](/articles/s41586-021-03819-2#Fig2)a–d, [4](/articles/s41586-021-03819-2#Fig4)a, [5a](/articles/s41586-021-03819-2#Fig5)), we used a copy of the PDB downloaded 15 February 2021. Structures were filtered to those with a release date after 30 April 2018 (the date limit for inclusion in the training set for AlphaFold). Chains were further filtered to remove sequences that consisted of a single amino acid as well as sequences with an ambiguous chemical component at any residue position. Exact duplicates were removed, with the chain with the most resolved Cα atoms used as the representative sequence. Subsequently, structures with fewer than 16 resolved residues, with unknown residues or solved by NMR methods were removed. As the PDB contains many near-duplicate sequences, the chain with the highest resolution was selected from each cluster in the PDB 40% sequence clustering of the data. Furthermore, we removed all sequences for which fewer than 80 amino acids had the alpha carbon resolved and removed chains with more than 1,400 residues. The final dataset contained 10,795 protein sequences.

The procedure for filtering the recent PDB dataset based on prior template identity was as follows. We ran hmmsearch with default parameters against a copy of the PDB SEQRES fasta downloaded 15 February 2021. Template hits were accepted if the associated structure had a release date earlier than 30 April 2018. Each residue position in a query sequence was assigned the maximum identity of any template hit covering that position. Filtering then proceeded as described in the individual figure legends, based on a combination of maximum identity and sequence coverage.

The MSA depth analysis was based on computing the normalized number of effective sequences (*N*eff) for each position of a query sequence. Per-residue *N*eff values were obtained by counting the number of non-gap residues in the MSA for this position and weighting the sequences using the *N*eff scheme[76](/articles/s41586-021-03819-2#ref-CR76 "Wu, T., Hou, J., Adhikari, B. & Cheng, J. Analysis of several key factors influencing deep learning-based inter-residue contact prediction. Bioinformatics 36, 1091–1098 (2020).") with a threshold of 80% sequence identity measured on the region that is non-gap in either sequence.
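For concreteness, the sketch below computes these per-residue *N*eff values and the median MSA depth for an integer-encoded MSA. It is a minimal NumPy illustration of the description above; the gap code and the quadratic pairwise-identity loop are simplifying assumptions rather than a production implementation.

```python
import numpy as np

GAP = 21  # assumed integer code for a gap column in the encoded MSA

def neff_weights(msa, identity_threshold=0.8):
    """One weight per sequence: 1 / (number of sequences within 80% identity),
    with identity measured on the region that is non-gap in either sequence."""
    n_seq = msa.shape[0]
    weights = np.zeros(n_seq)
    non_gap = msa != GAP
    for i in range(n_seq):
        either = non_gap[i] | non_gap                                 # non-gap in either sequence
        matches = ((msa[i] == msa) & non_gap[i] & non_gap).sum(axis=1)
        identity = matches / np.maximum(either.sum(axis=1), 1)
        weights[i] = 1.0 / max((identity >= identity_threshold).sum(), 1)
    return weights

def median_neff(msa):
    """Median across residues of the per-position Neff (weighted count of
    non-gap residues), as used for the MSA depth analysis."""
    w = neff_weights(msa)
    per_position = (w[:, None] * (msa != GAP)).sum(axis=0)
    return np.median(per_position)
```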
### Reporting summary

Further information on research design is available in the [Nature Research Reporting Summary](/articles/s41586-021-03819-2#MOESM2) linked to this paper.

Data availability
-----------------

All input data are freely available from public sources.

Structures from the PDB were used for training and as templates; the associated sequence data and the 40% sequence clustering of the PDB were also used. Training used a version of the PDB downloaded 28 August 2019, while the CASP14 template search used a version downloaded 14 May 2020. The template search also used the PDB70 database, downloaded 13 May 2020.

We show experimental structures from the PDB with accession numbers [6Y4F](http://doi.org/10.2210/pdb6Y4F/pdb)[77](/articles/s41586-021-03819-2#ref-CR77 "Jiang, W. et al. MrpH, a new class of metal-binding adhesin, requires zinc to mediate biofilm formation. PLoS Pathog. 16, e1008707 (2020)."), [6YJ1](http://doi.org/10.2210/pdb6YJ1/pdb)[78](/articles/s41586-021-03819-2#ref-CR78 "Dunne, M., Ernst, P., Sobieraj, A., Pluckthun, A. & Loessner, M. J. The M23 peptidase domain of the Staphylococcal phage 2638A endolysin. PDB https://doi.org/10.2210/pdb6YJ1/pdb (2020)."), [6VR4](http://doi.org/10.2210/pdb6VR4/pdb)[79](/articles/s41586-021-03819-2#ref-CR79 "Drobysheva, A. V. et al. Structure and function of virion RNA polymerase of a crAss-like phage. Nature 589, 306–309 (2021)."), [6SK0](http://doi.org/10.2210/pdb6SK0/pdb)[80](/articles/s41586-021-03819-2#ref-CR80 "Flaugnatti, N. et al. Structural basis for loading and inhibition of a bacterial T6SS phospholipase effector by the VgrG spike. EMBO J. 39, e104129 (2020)."), [6FES](http://doi.org/10.2210/pdb6FES/pdb)[81](/articles/s41586-021-03819-2#ref-CR81 "ElGamacy, M. et al. An interface-driven design strategy yields a novel, corrugated protein architecture. ACS Synth. Biol. 7, 2226–2235 (2018)."), [6W6W](http://doi.org/10.2210/pdb6W6W/pdb)[82](/articles/s41586-021-03819-2#ref-CR82 "Lim, C. J. et al. The structure of human CST reveals a decameric assembly bound to telomeric DNA. Science 368, 1081–1085 (2020)."), [6T1Z](http://doi.org/10.2210/pdb6T1Z/pdb)[83](/articles/s41586-021-03819-2#ref-CR83 "Debruycker, V. et al. An embedded lipid in the multidrug transporter LmrP suggests a mechanism for polyspecificity. Nat. Struct. Mol. Biol. 27, 829–835 (2020).") and [7JTL](http://doi.org/10.2210/pdb7JTL/pdb)[84](/articles/s41586-021-03819-2#ref-CR84 "Flower, T. G. et al. Structure of SARS-CoV-2 ORF8, a rapidly evolving immune evasion protein. Proc. Natl Acad. Sci. USA 118, e2021785118 (2021).").

For MSA lookup at both the training and prediction time, we used UniRef90 v.2020\_01 (https://ftp.ebi.ac.uk/pub/databases/uniprot/previous\_releases/release-2020\_01/uniref/), BFD, Uniclust30 v.2018\_08 and MGnify clusters v.2018\_12. Uniclust30 v.2018\_08 was also used as input for constructing a distillation structure dataset.

Code availability
-----------------

Source code for the AlphaFold model, trained weights and an inference script are available under an open-source license.

Neural networks were developed with TensorFlow v.1, Sonnet v.1, JAX v.0.1.69 and Haiku v.0.0.4. The XLA compiler is bundled with JAX and does not have a separate version number.

For MSA search on BFD+Uniclust30, and for template search against PDB70, we used HHBlits and HHSearch from hh-suite v.3.0-beta.3 release 14/07/2017. For MSA search on UniRef90 and clustered MGnify, we used jackhmmer from HMMER v.3.3. For constrained relaxation of structures, we used OpenMM v.7.3.1 with the Amber99sb force field.

Construction of BFD used MMseqs2 v.925AF and FAMSA v.1.2.5.

Data analysis used Python v.3.6, NumPy v.1.16.4, SciPy v.1.2.1, seaborn v.0.11.1, Matplotlib v.3.3.4, bokeh v.1.4.0, pandas v.1.1.5, plotnine v.0.8.0, statsmodels v.0.12.2 and Colab. TM-align v.20190822 was used for computing TM-scores. Structure visualizations were created in Pymol v.2.3.0.

References
----------

1. Thompson, M. C., Yeates, T. O. & Rodriguez, J. A. Advances in methods for atomic resolution macromolecular structure determination. *F1000Res*. **9**, 667 (2020).
2. Bai, X.-C., McMullan, G. & Scheres, S. H. W. How cryo-EM is revolutionizing structural biology. *Trends Biochem. Sci*. **40**, 49–57 (2015).
3. Jaskolski, M., Dauter, Z. & Wlodawer, A. A brief history of macromolecular crystallography, illustrated by a family tree and its Nobel fruits. *FEBS J*. **281**, 3985–4009 (2014).
4. Wüthrich, K. The way to NMR structures of proteins. *Nat. Struct. Biol*. **8**, 923–925 (2001).
5. wwPDB Consortium. Protein Data Bank: the single global archive for 3D macromolecular structure data. *Nucleic Acids Res*. **47**, D520–D528 (2018).
6. Mitchell, A. L. et al. MGnify: the microbiome analysis resource in 2020. *Nucleic Acids Res*. **48**, D570–D578 (2020).
7. Steinegger, M., Mirdita, M. & Söding, J. Protein-level assembly increases protein sequence recovery from metagenomic samples manyfold. *Nat. Methods* **16**, 603–606 (2019).
8. Dill, K. A., Ozkan, S. B., Shell, M. S. & Weikl, T. R. The protein folding problem. *Annu. Rev. Biophys*. **37**, 289–316 (2008).
9. Anfinsen, C. B. Principles that govern the folding of protein chains. *Science* **181**, 223–230 (1973).
10. Senior, A. W. et al. Improved protein structure prediction using potentials from deep learning. *Nature* **577**, 706–710 (2020).
11. Wang, S., Sun, S., Li, Z., Zhang, R. & Xu, J. Accurate de novo prediction of protein contact map by ultra-deep learning model. *PLOS Comput. Biol*. **13**, e1005324 (2017).
12. Zheng, W. et al. Deep-learning contact-map guided protein structure prediction in CASP13. *Proteins* **87**, 1149–1164 (2019).
13. Abriata, L. A., Tamò, G. E. & Dal Peraro, M. A further leap of improvement in tertiary structure prediction in CASP13 prompts new routes for future assessments. *Proteins* **87**, 1100–1112 (2019).
14. Pearce, R. & Zhang, Y. Deep learning techniques have significantly impacted protein structure prediction and protein design. *Curr. Opin. Struct. Biol*. **68**, 194–207 (2021).
15. Moult, J., Fidelis, K., Kryshtafovych, A., Schwede, T. & Topf, M. Critical assessment of techniques for protein structure prediction, fourteenth round. *CASP 14 Abstract Book* (2020).
16. Brini, E., Simmerling, C. & Dill, K. Protein storytelling through physics. *Science* **370**, eaaz3041 (2020).
17. Sippl, M. J. Calculation of conformational ensembles from potentials of mean force. An approach to the knowledge-based prediction of local structures in globular proteins. *J. Mol. Biol*. **213**, 859–883 (1990).
18. Šali, A. & Blundell, T. L. Comparative protein modelling by satisfaction of spatial restraints. *J. Mol. Biol*. **234**, 779–815 (1993).
19. Roy, A., Kucukural, A. & Zhang, Y. I-TASSER: a unified platform for automated protein structure and function prediction. *Nat. Protocols* **5**, 725–738 (2010).
20. Altschuh, D., Lesk, A. M., Bloomer, A. C. & Klug, A. Correlation of co-ordinated amino acid substitutions with function in viruses related to tobacco mosaic virus. *J. Mol. Biol*. **193**, 693–707 (1987).
21. Shindyalov, I. N., Kolchanov, N. A. & Sander, C. Can three-dimensional contacts in protein structures be predicted by analysis of correlated mutations? *Protein Eng*. **7**, 349–358 (1994).
22. Weigt, M., White, R. A., Szurmant, H., Hoch, J. A. & Hwa, T. Identification of direct residue contacts in protein–protein interaction by message passing. *Proc. Natl Acad. Sci. USA* **106**, 67–72 (2009).
23. Marks, D. S. et al. Protein 3D structure computed from evolutionary sequence variation. *PLoS ONE* **6**, e28766 (2011).
24. Jones, D. T., Buchan, D. W. A., Cozzetto, D. & Pontil, M. PSICOV: precise structural contact prediction using sparse inverse covariance estimation on large multiple sequence alignments. *Bioinformatics* **28**, 184–190 (2012).
25. Moult, J., Pedersen, J. T., Judson, R. & Fidelis, K. A large-scale experiment to assess protein structure prediction methods. *Proteins* **23**, ii–iv (1995).
26. Kryshtafovych, A., Schwede, T., Topf, M., Fidelis, K. & Moult, J. Critical assessment of methods of protein structure prediction (CASP)-round XIII. *Proteins* **87**, 1011–1020 (2019).
27. Zhang, Y. & Skolnick, J. Scoring function for automated assessment of protein structure template quality. *Proteins* **57**, 702–710 (2004).
28. Tu, Z. & Bai, X. Auto-context and its application to high-level vision tasks and 3D brain image segmentation. *IEEE Trans. Pattern Anal. Mach. Intell*. **32**, 1744–1757 (2010).
29. Carreira, J., Agrawal, P., Fragkiadaki, K. & Malik, J. Human pose estimation with iterative error feedback. In *Proc. IEEE Conference on Computer Vision and Pattern Recognition* 4733–4742 (2016).
30. Mirabello, C. & Wallner, B. rawMSA: end-to-end deep learning using raw multiple sequence alignments. *PLoS ONE* **14**, e0220182 (2019).
31. Huang, Z. et al. CCNet: criss-cross attention for semantic segmentation. In *Proc. IEEE/CVF International Conference on Computer Vision* 603–612 (2019).
32. Hornak, V. et al. Comparison of multiple Amber force fields and development of improved protein backbone parameters. *Proteins* **65**, 712–725 (2006).
33. Zemla, A. LGA: a method for finding 3D similarities in protein structures. *Nucleic Acids Res*. **31**, 3370–3374 (2003).
34. Mariani, V., Biasini, M., Barbato, A. & Schwede, T. lDDT: a local superposition-free score for comparing protein structures and models using distance difference tests. *Bioinformatics* **29**, 2722–2728 (2013).
*Bioinformatics* **29**, 2722–2728 (2013).\n\n[Article](https://doi.org/10.1093%2Fbioinformatics%2Fbtt473) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC3sXhs1CisrfK) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=23986568) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3799472) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=lDDT%3A%20a%20local%20superposition-free%20score%20for%20comparing%20protein%20structures%20and%20models%20using%20distance%20difference%20tests&journal=Bioinformatics&doi=10.1093%2Fbioinformatics%2Fbtt473&volume=29&pages=2722-2728&publication_year=2013&author=Mariani%2CV&author=Biasini%2CM&author=Barbato%2CA&author=Schwede%2CT)\n35. Xie, Q., Luong, M.-T., Hovy, E. & Le, Q. V. Self-training with noisy student improves imagenet classification. In *Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition* 10687–10698 (2020).\n36. Mirdita, M. et al. Uniclust databases of clustered and deeply annotated protein sequences and alignments. *Nucleic Acids Res*. **45**, D170–D176 (2017).\n\n[Article](https://doi.org/10.1093%2Fnar%2Fgkw1081) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1cXhslWgsb8%3D) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=27899574) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Uniclust%20databases%20of%20clustered%20and%20deeply%20annotated%20protein%20sequences%20and%20alignments&journal=Nucleic%20Acids%20Res.&doi=10.1093%2Fnar%2Fgkw1081&volume=45&pages=D170-D176&publication_year=2017&author=Mirdita%2CM)\n37. Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proc. 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies* 1, 4171–4186 (2019).\n38. Rao, R. et al. MSA transformer. In *Proc. 38th International Conference on Machine Learning* PMLR 139, 8844–8856 (2021).\n39. Tunyasuvunakool, K. et al. Highly accurate protein structure prediction for the human proteome. *Nature* (2021).\n40. Kuhlman, B. & Bradley, P. Advances in protein structure prediction and design. *Nat. Rev. Mol. Cell Biol*. **20**, 681–697 (2019).\n\n[Article](https://doi.org/10.1038%2Fs41580-019-0163-x) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1MXhsFyksL7J) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=31417196) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7032036) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Advances%20in%20protein%20structure%20prediction%20and%20design&journal=Nat.%20Rev.%20Mol.%20Cell%20Biol.&doi=10.1038%2Fs41580-019-0163-x&volume=20&pages=681-697&publication_year=2019&author=Kuhlman%2CB&author=Bradley%2CP)\n41. Marks, D. S., Hopf, T. A. & Sander, C. Protein structure prediction from sequence variation. *Nat. Biotechnol*. 
**30**, 1072–1080 (2012).\n\n[Article](https://doi.org/10.1038%2Fnbt.2419) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC38Xhs1elt7bM) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=23138306) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4319528) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Protein%20structure%20prediction%20from%20sequence%20variation&journal=Nat.%20Biotechnol.&doi=10.1038%2Fnbt.2419&volume=30&pages=1072-1080&publication_year=2012&author=Marks%2CDS&author=Hopf%2CTA&author=Sander%2CC)\n42. Qian, N. & Sejnowski, T. J. Predicting the secondary structure of globular proteins using neural network models. *J. Mol. Biol*. **202**, 865–884 (1988).\n\n[Article](https://doi.org/10.1016%2F0022-2836%2888%2990564-5) \n [CAS](/articles/cas-redirect/1:CAS:528:DyaL1MXhtlWksb0%3D) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=3172241) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Predicting%20the%20secondary%20structure%20of%20globular%20proteins%20using%20neural%20network%20models&journal=J.%20Mol.%20Biol.&doi=10.1016%2F0022-2836%2888%2990564-5&volume=202&pages=865-884&publication_year=1988&author=Qian%2CN&author=Sejnowski%2CTJ)\n43. Fariselli, P., Olmea, O., Valencia, A. & Casadio, R. Prediction of contact maps with neural networks and correlated mutations. *Protein Eng*. **14**, 835–843 (2001).\n\n[Article](https://doi.org/10.1093%2Fprotein%2F14.11.835) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BD38XjtVentA%3D%3D) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=11742102) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Prediction%20of%20contact%20maps%20with%20neural%20networks%20and%20correlated%20mutations&journal=Protein%20Eng.&doi=10.1093%2Fprotein%2F14.11.835&volume=14&pages=835-843&publication_year=2001&author=Fariselli%2CP&author=Olmea%2CO&author=Valencia%2CA&author=Casadio%2CR)\n44. Yang, J. et al. Improved protein structure prediction using predicted interresidue orientations. *Proc. Natl Acad. Sci. USA* **117**, 1496–1503 (2020).\n\n[Article](https://doi.org/10.1073%2Fpnas.1914677117) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3cXhsFKrsLg%3D) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=31896580) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6983395) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Improved%20protein%20structure%20prediction%20using%20predicted%20interresidue%20orientations&journal=Proc.%20Natl%20Acad.%20Sci.%20USA&doi=10.1073%2Fpnas.1914677117&volume=117&pages=1496-1503&publication_year=2020&author=Yang%2CJ)\n45. Li, Y. et al. Deducing high-accuracy protein contact-maps from a triplet of coevolutionary matrices through deep residual convolutional networks. *PLOS Comput. Biol*. 
**17**, e1008865 (2021).\n\n[Article](https://doi.org/10.1371%2Fjournal.pcbi.1008865) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3MXosFSms78%3D) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=33770072) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8026059) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Deducing%20high-accuracy%20protein%20contact-maps%20from%20a%20triplet%20of%20coevolutionary%20matrices%20through%20deep%20residual%20convolutional%20networks&journal=PLOS%20Comput.%20Biol.&doi=10.1371%2Fjournal.pcbi.1008865&volume=17&publication_year=2021&author=Li%2CY)\n46. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In *Proc. IEEE Conference on Computer Vision and Pattern Recognition* 770–778 (2016).\n47. AlQuraishi, M. End-to-end differentiable learning of protein structure. *Cell Syst*. **8**, 292–301 (2019).\n\n[Article](https://doi.org/10.1016%2Fj.cels.2019.03.006) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1MXosVyhtb0%3D) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=31005579) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6513320) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=End-to-end%20differentiable%20learning%20of%20protein%20structure&journal=Cell%20Syst.&doi=10.1016%2Fj.cels.2019.03.006&volume=8&pages=292-301&publication_year=2019&author=AlQuraishi%2CM)\n48. Senior, A. W. et al. Protein structure prediction using multiple deep neural networks in the 13th Critical Assessment of Protein Structure Prediction (CASP13). *Proteins* **87**, 1141–1148 (2019).\n\n[Article](https://doi.org/10.1002%2Fprot.25834) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1MXitFartb%2FK) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=31602685) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7079254) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Protein%20structure%20prediction%20using%20multiple%20deep%20neural%20networks%20in%20the%2013th%20Critical%20Assessment%20of%20Protein%20Structure%20Prediction%20%28CASP13%29&journal=Proteins&doi=10.1002%2Fprot.25834&volume=87&pages=1141-1148&publication_year=2019&author=Senior%2CAW)\n49. Ingraham, J., Riesselman, A. J., Sander, C. & Marks, D. S. Learning protein structure with a differentiable simulator. in *Proc. International Conference on Learning Representations* (2019).\n50. Li, J. Universal transforming geometric network. Preprint at (2019).\n51. Xu, J., McPartlon, M. & Li, J. Improved protein structure prediction by deep learning irrespective of co-evolution information. *Nat. Mach. Intell*. **3**, 601–609 (2021).\n\n[Article](https://doi.org/10.1038%2Fs42256-021-00348-5) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=34368623) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8340610) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Improved%20protein%20structure%20prediction%20by%20deep%20learning%20irrespective%20of%20co-evolution%20information&journal=Nat.%20Mach.%20Intell.&doi=10.1038%2Fs42256-021-00348-5&volume=3&pages=601-609&publication_year=2021&author=Xu%2CJ&author=McPartlon%2CM&author=Li%2CJ)\n52. Vaswani, A. et al. Attention is all you need. 
In *Advances in Neural Information Processing Systems* 5998–6008 (2017).\n53. Wang, H. et al. Axial-deeplab: stand-alone axial-attention for panoptic segmentation. in *European Conference on Computer Vision* 108–126 (Springer, 2020).\n54. Alley, E. C., Khimulya, G., Biswas, S., AlQuraishi, M. & Church, G. M. Unified rational protein engineering with sequence-based deep representation learning. *Nat. Methods* **16**, 1315–1322 (2019).\n\n[Article](https://doi.org/10.1038%2Fs41592-019-0598-1) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1MXitVSlsbnJ) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=31636460) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7067682) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Unified%20rational%20protein%20engineering%20with%20sequence-based%20deep%20representation%20learning&journal=Nat.%20Methods&doi=10.1038%2Fs41592-019-0598-1&volume=16&pages=1315-1322&publication_year=2019&author=Alley%2CEC&author=Khimulya%2CG&author=Biswas%2CS&author=AlQuraishi%2CM&author=Church%2CGM)\n55. Heinzinger, M. et al. Modeling aspects of the language of life through transfer-learning protein sequences. *BMC Bioinformatics* **20**, 723 (2019).\n\n[Article](https://doi.org/10.1186%2Fs12859-019-3220-8) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1MXisVGjsLbK) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=31847804) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6918593) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Modeling%20aspects%20of%20the%20language%20of%20life%20through%20transfer-learning%20protein%20sequences&journal=BMC%20Bioinformatics&doi=10.1186%2Fs12859-019-3220-8&volume=20&publication_year=2019&author=Heinzinger%2CM)\n56. Rives, A. et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. *Proc. Natl Acad. Sci. USA* **118**, e2016239118 (2021).\n\n[Article](https://doi.org/10.1073%2Fpnas.2016239118) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3MXovVantro%3D) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=33876751) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8053943) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Biological%20structure%20and%20function%20emerge%20from%20scaling%20unsupervised%20learning%20to%20250%20million%20protein%20sequences&journal=Proc.%20Natl%20Acad.%20Sci.%20USA&doi=10.1073%2Fpnas.2016239118&volume=118&publication_year=2021&author=Rives%2CA)\n57. Pereira, J. et al. High-accuracy protein structure prediction in CASP14. *Proteins* (2021).\n\n[Article](https://doi.org/10.1002%2Fprot.26171) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=34387010) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8881082) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=High-accuracy%20protein%20structure%20prediction%20in%20CASP14&journal=Proteins&doi=10.1002%2Fprot.26171&publication_year=2021&author=Pereira%2CJ)\n58. Gupta, M. et al. CryoEM and AI reveal a structure of SARS-CoV-2 Nsp2, a multifunctional protein involved in key host processes. Preprint at (2021).\n59. Ingraham, J., Garg, V. K., Barzilay, R. & Jaakkola, T. Generative models for graph-based protein design. in *Proc. 
33rd Conference on Neural Information Processing Systems* (2019).\n60. Johnson, L. S., Eddy, S. R. & Portugaly, E. Hidden Markov model speed heuristic and iterative HMM search procedure. *BMC Bioinformatics* **11**, 431 (2010).\n\n[Article](https://doi.org/10.1186%2F1471-2105-11-431) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=20718988) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2931519) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC3cXhtVKqsbrF) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Hidden%20Markov%20model%20speed%20heuristic%20and%20iterative%20HMM%20search%20procedure&journal=BMC%20Bioinformatics&doi=10.1186%2F1471-2105-11-431&volume=11&publication_year=2010&author=Johnson%2CLS&author=Eddy%2CSR&author=Portugaly%2CE)\n61. Remmert, M., Biegert, A., Hauser, A. & Söding, J. HHblits: lightning-fast iterative protein sequence searching by HMM-HMM alignment. *Nat. Methods* **9**, 173–175 (2012).\n\n[Article](https://doi.org/10.1038%2Fnmeth.1818) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC3MXhs1OltbnO) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=HHblits%3A%20lightning-fast%20iterative%20protein%20sequence%20searching%20by%20HMM-HMM%20alignment&journal=Nat.%20Methods&doi=10.1038%2Fnmeth.1818&volume=9&pages=173-175&publication_year=2012&author=Remmert%2CM&author=Biegert%2CA&author=Hauser%2CA&author=S%C3%B6ding%2CJ)\n62. The UniProt Consortium. UniProt: the universal protein knowledgebase in 2021. *Nucleic Acids Res*. **49**, D480–D489 (2020).\n\n[Article](https://doi.org/10.1093%2Fnar%2Fgkaa1100) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3MXntFCit7s%3D) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=UniProt%3A%20the%20universal%20protein%20knowledgebase%20in%202021&journal=Nucleic%20Acids%20Res.&doi=10.1093%2Fnar%2Fgkaa1100&volume=49&pages=D480-D489&publication_year=2020)\n63. Steinegger, M. & Söding, J. Clustering huge protein sequence sets in linear time. *Nat. Commun*. **9**, 2542 (2018).\n\n[Article](https://doi.org/10.1038%2Fs41467-018-04964-5) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=29959318) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6026198) \n [ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=2018NatCo...9.2542S) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1cXht1Cns7rO) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Clustering%20huge%20protein%20sequence%20sets%20in%20linear%20time&journal=Nat.%20Commun.&doi=10.1038%2Fs41467-018-04964-5&volume=9&publication_year=2018&author=Steinegger%2CM&author=S%C3%B6ding%2CJ)\n64. Steinegger, M. & Söding, J. MMseqs2 enables sensitive protein sequence searching for the analysis of massive data sets. *Nat. Biotechnol*. **35**, 1026–1028 (2017).\n\n[Article](https://doi.org/10.1038%2Fnbt.3988) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC2sXhs1GqsLzE) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=29035372) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=MMseqs2%20enables%20sensitive%20protein%20sequence%20searching%20for%20the%20analysis%20of%20massive%20data%20sets&journal=Nat.%20Biotechnol.&doi=10.1038%2Fnbt.3988&volume=35&pages=1026-1028&publication_year=2017&author=Steinegger%2CM&author=S%C3%B6ding%2CJ)\n65. Deorowicz, S., Debudaj-Grabysz, A. & Gudyś, A. 
FAMSA: fast and accurate multiple sequence alignment of huge protein families. *Sci. Rep*. **6**, 33964 (2016).\n\n[Article](https://doi.org/10.1038%2Fsrep33964) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC28XhsF2qs7fN) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=27670777) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5037421) \n [ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=2016NatSR...633964D) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=FAMSA%3A%20fast%20and%20accurate%20multiple%20sequence%20alignment%20of%20huge%20protein%20families&journal=Sci.%20Rep.&doi=10.1038%2Fsrep33964&volume=6&publication_year=2016&author=Deorowicz%2CS&author=Debudaj-Grabysz%2CA&author=Gudy%C5%9B%2CA)\n66. Steinegger, M. et al. HH-suite3 for fast remote homology detection and deep protein annotation. *BMC Bioinformatics* **20**, 473 (2019).\n\n[Article](https://doi.org/10.1186%2Fs12859-019-3019-7) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=31521110) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6744700) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1MXhsl2hurbK) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=HH-suite3%20for%20fast%20remote%20homology%20detection%20and%20deep%20protein%20annotation&journal=BMC%20Bioinformatics&doi=10.1186%2Fs12859-019-3019-7&volume=20&publication_year=2019&author=Steinegger%2CM)\n67. Suzek, B. E., Wang, Y., Huang, H., McGarvey, P. B. & Wu, C. H. UniRef clusters: a comprehensive and scalable alternative for improving sequence similarity searches. *Bioinformatics* **31**, 926–932 (2015).\n\n[Article](https://doi.org/10.1093%2Fbioinformatics%2Fbtu739) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC28Xht1Gntb7F) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=25398609) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=UniRef%20clusters%3A%20a%20comprehensive%20and%20scalable%20alternative%20for%20improving%20sequence%20similarity%20searches&journal=Bioinformatics&doi=10.1093%2Fbioinformatics%2Fbtu739&volume=31&pages=926-932&publication_year=2015&author=Suzek%2CBE&author=Wang%2CY&author=Huang%2CH&author=McGarvey%2CPB&author=Wu%2CCH)\n68. Eddy, S. R. Accelerated profile HMM searches. *PLOS Comput. Biol*. **7**, e1002195 (2011).\n\n[Article](https://doi.org/10.1371%2Fjournal.pcbi.1002195) \n [MathSciNet](http://www.ams.org/mathscinet-getitem?mr=2859646) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC3MXhsVCku7rL) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=22039361) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3197634) \n [ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=2011PLSCB...7E2195E) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Accelerated%20profile%20HMM%20searches&journal=PLOS%20Comput.%20Biol.&doi=10.1371%2Fjournal.pcbi.1002195&volume=7&publication_year=2011&author=Eddy%2CSR)\n69. Eastman, P. et al. OpenMM 7: rapid development of high performance algorithms for molecular dynamics. *PLOS Comput. Biol*. 
**13**, e1005659 (2017).\n\n[Article](https://doi.org/10.1371%2Fjournal.pcbi.1005659) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=28746339) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5549999) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1cXivVWhur0%3D) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=OpenMM%207%3A%20rapid%20development%20of%20high%20performance%20algorithms%20for%20molecular%20dynamics&journal=PLOS%20Comput.%20Biol.&doi=10.1371%2Fjournal.pcbi.1005659&volume=13&publication_year=2017&author=Eastman%2CP)\n70. Ashish, A. M. A. et al. TensorFlow: large-scale machine learning on heterogeneous systems. Preprint at (2015).\n71. Reynolds, M. et al. Open sourcing Sonnet – a new library for constructing neural networks. *DeepMind* (7 April 2017).\n72. Harris, C. R. et al. Array programming with NumPy. *Nature* **585**, 357–362 (2020).\n\n[Article](https://doi.org/10.1038%2Fs41586-020-2649-2) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3cXitlWmsbbN) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=32939066) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7759461) \n [ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=2020Natur.585..357H) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Array%20programming%20with%20NumPy&journal=Nature&doi=10.1038%2Fs41586-020-2649-2&volume=585&pages=357-362&publication_year=2020&author=Harris%2CCR)\n73. Van Rossum, G. & Drake, F. L. *Python 3 Reference Manual* (CreateSpace, 2009).\n74. Bisong, E. in *Building Machine Learning and Deep Learning Models on Google Cloud Platform: A Comprehensive Guide for Beginners* 59–64 (Apress, 2019).\n75. TensorFlow. XLA: Optimizing Compiler for TensorFlow. (2018).\n76. Wu, T., Hou, J., Adhikari, B. & Cheng, J. Analysis of several key factors influencing deep learning-based inter-residue contact prediction. *Bioinformatics* **36**, 1091–1098 (2020).\n\n[Article](https://doi.org/10.1093%2Fbioinformatics%2Fbtz679) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3cXisVOrtbvJ) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=31504181) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Analysis%20of%20several%20key%20factors%20influencing%20deep%20learning-based%20inter-residue%20contact%20prediction&journal=Bioinformatics&doi=10.1093%2Fbioinformatics%2Fbtz679&volume=36&pages=1091-1098&publication_year=2020&author=Wu%2CT&author=Hou%2CJ&author=Adhikari%2CB&author=Cheng%2CJ)\n77. Jiang, W. et al. MrpH, a new class of metal-binding adhesin, requires zinc to mediate biofilm formation. *PLoS Pathog*. **16**, e1008707 (2020).\n\n[Article](https://doi.org/10.1371%2Fjournal.ppat.1008707) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3cXhs1Glt7fI) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=32780778) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7444556) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=MrpH%2C%20a%20new%20class%20of%20metal-binding%20adhesin%2C%20requires%20zinc%20to%20mediate%20biofilm%20formation&journal=PLoS%20Pathog.&doi=10.1371%2Fjournal.ppat.1008707&volume=16&publication_year=2020&author=Jiang%2CW)\n78. Dunne, M., Ernst, P., Sobieraj, A., Pluckthun, A. & Loessner, M. J. 
The M23 peptidase domain of the Staphylococcal phage 2638A endolysin. *PDB* (2020).\n79. Drobysheva, A. V. et al. Structure and function of virion RNA polymerase of a crAss-like phage. *Nature* **589**, 306–309 (2021).\n\n[Article](https://doi.org/10.1038%2Fs41586-020-2921-5) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3cXitlOgs7jI) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=33208949) \n [ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=2021Natur.589..306D) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Structure%20and%20function%20of%20virion%20RNA%20polymerase%20of%20a%20crAss-like%20phage&journal=Nature&doi=10.1038%2Fs41586-020-2921-5&volume=589&pages=306-309&publication_year=2021&author=Drobysheva%2CAV)\n80. Flaugnatti, N. et al. Structural basis for loading and inhibition of a bacterial T6SS phospholipase effector by the VgrG spike. *EMBO J*. **39**, e104129 (2020).\n\n[Article](https://doi.org/10.15252%2Fembj.2019104129) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3cXotFCnu7s%3D) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=32350888) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7265238) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Structural%20basis%20for%20loading%20and%20inhibition%20of%20a%20bacterial%20T6SS%20phospholipase%20effector%20by%20the%20VgrG%20spike&journal=EMBO%20J.&doi=10.15252%2Fembj.2019104129&volume=39&publication_year=2020&author=Flaugnatti%2CN)\n81. ElGamacy, M. et al. An interface-driven design strategy yields a novel, corrugated protein architecture. *ACS Synth. Biol*. **7**, 2226–2235 (2018).\n\n[Article](https://doi.org/10.1021%2Facssynbio.8b00224) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1cXhsFyqtbnP) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=30148951) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=An%20interface-driven%20design%20strategy%20yields%20a%20novel%2C%20corrugated%20protein%20architecture&journal=ACS%20Synth.%20Biol.&doi=10.1021%2Facssynbio.8b00224&volume=7&pages=2226-2235&publication_year=2018&author=ElGamacy%2CM)\n82. Lim, C. J. et al. The structure of human CST reveals a decameric assembly bound to telomeric DNA. *Science* **368**, 1081–1085 (2020).\n\n[Article](https://doi.org/10.1126%2Fscience.aaz9649) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3cXhtFSqt7fI) \n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=32499435) \n [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7559292) \n [ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=2020Sci...368.1081L) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=The%20structure%20of%20human%20CST%20reveals%20a%20decameric%20assembly%20bound%20to%20telomeric%20DNA&journal=Science&doi=10.1126%2Fscience.aaz9649&volume=368&pages=1081-1085&publication_year=2020&author=Lim%2CCJ)\n83. Debruycker, V. et al. An embedded lipid in the multidrug transporter LmrP suggests a mechanism for polyspecificity. *Nat. Struct. Mol. Biol*. 
84. Flower, T. G. et al. Structure of SARS-CoV-2 ORF8, a rapidly evolving immune evasion protein. *Proc. Natl Acad. Sci. USA* **118**, e2021785118 (2021).

Acknowledgements
----------------

We thank A. Rrustemi, A. Gu, A. Guseynov, B. Hechtman, C. Beattie, C. Jones, C. Donner, E. Parisotto, E. Elsen, F. Popovici, G. Necula, H. Maclean, J. Menick, J. Kirkpatrick, J. Molloy, J. Yim, J. Stanway, K. Simonyan, L. Sifre, L. Martens, M. Johnson, M. O’Neill, N. Antropova, R. Hadsell, S. Blackwell, S. Das, S. Hou, S. Gouws, S. Wheelwright, T. Hennigan, T. Ward, Z. Wu, Ž. Avsec and the Research Platform Team for their contributions; M. Mirdita for his help with the datasets; M. Piovesan-Forster, A. Nelson and R. Kemp for their help managing the project; the JAX, TensorFlow and XLA teams for detailed support and enabling machine learning models of the complexity of AlphaFold; our colleagues at DeepMind, Google and Alphabet for their encouragement and support; and J. Moult and the CASP14 organizers, and the experimentalists whose structures enabled the assessment. M.S. acknowledges support from the National Research Foundation of Korea grant (2019R1A6A1A10073437, 2020M3A9G7103933) and the Creative-Pioneering Researchers Program through Seoul National University.

Author information
------------------

Author notes

1. These authors contributed equally: John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Demis Hassabis

### Authors and Affiliations

1. DeepMind, London, UK

John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Ellen Clancy, Michal Zielinski, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray Kavukcuoglu, Pushmeet Kohli & Demis Hassabis

2. School of Biological Sciences, Seoul National University, Seoul, South Korea

Martin Steinegger

3. Artificial Intelligence Institute, Seoul National University, Seoul, South Korea

Martin Steinegger
### Contributions

J.J. and D.H. led the research. J.J., R.E., A. Pritzel, M.F., O.R., R.B., A. Potapenko, S.A.A.K., B.R.-P., J.A., M.P., T. Berghammer and O.V. developed the neural network architecture and training. T.G., A.Ž., K.T., R.B., A.B., R.E., A.J.B., A.C., S.N., R.J., D.R., M.Z. and S.B. developed the data, analytics and inference systems. D.H., K.K., P.K., C.M. and E.C. managed the research. T.G. led the technical platform. P.K., A.W.S., K.K., O.V., D.S., S.P. and T. Back contributed technical advice and ideas. M.S. created the BFD genomics database and provided technical assistance on HHBlits. D.H., R.E., A.W.S. and K.K. conceived the AlphaFold project. J.J., R.E. and A.W.S. conceived the end-to-end approach. J.J., A. Pritzel, O.R., A. Potapenko, R.E., M.F., T.G., K.T., C.M. and D.H. wrote the paper.

### Corresponding authors

Correspondence to [John Jumper](mailto:jumper@deepmind.com) or [Demis Hassabis](mailto:dhcontact@deepmind.com).

Ethics declarations
-------------------

### Competing interests

J.J., R.E., A. Pritzel, T.G., M.F., O.R., R.B., A.B., S.A.A.K., D.R. and A.W.S. have filed non-provisional patent applications 16/701,070 and PCT/EP2020/084238, and provisional patent applications 63/107,362, 63/118,917, 63/118,918, 63/118,921 and 63/118,919, each in the name of DeepMind Technologies Limited, each pending, relating to machine learning for predicting protein structures. The other authors declare no competing interests.

Additional information
----------------------

**Peer review information** *Nature* thanks Mohammed AlQuraishi, Charlotte Deane and Yang Zhang for their contribution to the peer review of this work.

**Publisher’s note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information
-------------------------

### [Supplementary Information](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_MOESM1_ESM.pdf)

Description of the method details of the AlphaFold system, model, and analysis, including data pipeline, datasets, model blocks, loss functions, training and inference details, and ablations.
Includes Supplementary Methods, Supplementary Figures, Supplementary Tables and Supplementary Algorithms.

### [Reporting Summary](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_MOESM2_ESM.pdf)

### [Supplementary Video 1](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_MOESM3_ESM.mp4)

Video of the intermediate structure trajectory of the CASP14 target T1024 (LmrP). A two-domain target (408 residues). Both domains are folded early, while their packing is adjusted for a longer time.

### [Supplementary Video 2](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_MOESM4_ESM.mp4)

Video of the intermediate structure trajectory of the CASP14 target T1044 (RNA polymerase of crAss-like phage). A large protein (2180 residues) with multiple domains. Some domains are folded quickly, while others take a considerable amount of time to fold.

### [Supplementary Video 3](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_MOESM5_ESM.mp4)

Video of the intermediate structure trajectory of the CASP14 target T1064 (Orf8). A very difficult single-domain target (106 residues) that takes the entire depth of the network to fold.

### [Supplementary Video 4](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_MOESM6_ESM.mp4)

Video of the intermediate structure trajectory of the CASP14 target T1091. A multi-domain target (863 residues). Individual domains’ structure is determined early, while the domain packing evolves throughout the network. The network explores unphysical configurations throughout the process, resulting in long ‘strings’ in the visualization.

Rights and permissions
----------------------

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this license, visit .

[Download PDF](/articles/s41598-019-47540-7.pdf)

### Subjects

* [Natural hazards](/subjects/natural-hazards)
* [Planetary science](/subjects/planetary-science)

An [Author Correction](https://doi.org/10.1038/s41598-019-55816-1) to this article was published on 11 December 2019. This article has been [updated](#change-history).

Abstract
--------

We evaluate the total probability of human extinction from naturally occurring processes.
Such processes include risks that are well characterized such as asteroid impacts and supervolcanic eruptions, as well as risks that remain unknown. Using only the information that *Homo sapiens* has existed at least 200,000 years, we conclude that the probability that humanity goes extinct from natural causes in any given year is almost guaranteed to be less than one in 14,000, and likely to be less than one in 87,000. Using the longer track record of survival for our entire genus *Homo* produces even tighter bounds, with an annual probability of natural extinction likely below one in 870,000. These bounds are unlikely to be affected by possible survivorship bias in the data, and are consistent with mammalian extinction rates, typical hominin species lifespans, the frequency of well-characterized risks, and the frequency of mass extinctions. No similar guarantee can be made for risks that our ancestors did not face, such as anthropogenic climate change or nuclear/biological warfare.\n\n\n\n\n\nIntroduction\n------------\n\nOut of all species that have existed, over 99% are now extinct[1](/articles/s41598-019-47540-7#ref-CR1 \"Raup, D. M. Extinction: bad genes or bad luck? (WW Norton & Company, 1992).\"). Although human activity is dramatically increasing extinction rates for many species[2](/articles/s41598-019-47540-7#ref-CR2 \"Barnosky, A. D. et al. Has the Earth’s sixth mass extinction already arrived? Nat. 471, 51 (2011).\"), species extinctions were regular occurrences long before humanity emerged. Many of these extinctions were caused by gradual environmental shifts, evolutionary arms races, or local interspecific competition[3](/articles/s41598-019-47540-7#ref-CR3 \"Smith, J. M. The causes of extinction. Phil. Trans. R. Soc. Lond. B 325, 241–252 (1989).\"),[4](/articles/s41598-019-47540-7#ref-CR4 \"Benton, M. J. The Red Queen and the Court Jester: species diversity and the role of biotic and abiotic factors through time. Sci. 323, 728–732 (2009).\"), while others were abrupt, being part of global mass extinctions caused by asteroid impacts, volcanism, or causes as of yet to be identified[5](/articles/s41598-019-47540-7#ref-CR5 \"Schulte, P. et al. The Chicxulub asteroid impact and mass extinction at the Cretaceous-Paleogene boundary. Sci. 327, 1214–1218 (2010).\"),[6](/articles/s41598-019-47540-7#ref-CR6 \"Wignall, P. B. Large igneous provinces and mass extinctions. Earth-science reviews 53, 1–33 (2001).\"). Could such a catastrophe befall our own species? If so, are the risks greater from natural or anthropogenic sources?\n\nHere, we evaluate the natural ‘background’ extinction rate for *Homo sapiens*. This means considerations of anthropogenic risks such as climate change and nuclear weapons are excluded from our estimates, although these clearly pose existential threats to our own species as well as others. Indeed, it has been hypothesized that the great majority of human extinction risk comes from anthropogenic sources[7](/articles/s41598-019-47540-7#ref-CR7 \"Bostrom, N. Existential risk prevention as global priority. Glob. Policy 4, 15–31 (2013).\"),[8](/articles/s41598-019-47540-7#ref-CR8 \"Sandberg, A. Human extinction from natural hazard events. In Oxford Research Encyclopedia of Natural Hazard Science (2018).\"). But by limiting our analysis to natural risks that our predecessors also faced, we can draw on data spanning many thousands (or millions) of years. 
Obtaining bounds on natural extinction rates also enables an indirect and partial test of the hypothesis that anthropogenic risks are greater than natural ones, as sufficiently low natural extinction risk will imply higher relative risks from anthropogenic sources.

Estimating such an extinction rate directly is impossible. We have no examples of *Homo sapiens* extinction, so the most directly relevant data are non-existent. An alternative approach would be to enumerate the different types of naturally occurring hazards (e.g. asteroids, supervolcanoes), estimate their independent probability of causing extinction, and then use these probabilities to derive an aggregate extinction rate. However, this method has its own shortcomings. Beyond the great uncertainties around the probabilities of each risk, there could also be unknown risks that fail to be included. It would be hard to say with confidence that any list of risks had captured all natural hazards to humanity.

We can bypass these problems by instead considering the length of time that humanity has survived so far[9](/articles/s41598-019-47540-7#ref-CR9 "Gott, J. R. Implications of the Copernican principle for our future prospects. Nat. 363, 315–319 (1993)."),[10](/articles/s41598-019-47540-7#ref-CR10 "Pisaturo, R. Past longevity as evidence for the future. Philos. Sci. 76, 73–100 (2009)."). This survival time can be used to estimate an upper bound on the extinction rate from all natural sources combined, including sources of which we remain unaware. However, this approach could be subject to a particular form of sample bias known as an observation selection bias. These observer selection biases occur when a sample is not representative of all outcomes, but rather a subset of outcomes that are compatible with the existence of the observers[11](/articles/s41598-019-47540-7#ref-CR11 "Bostrom, N. Anthropic bias: Observation selection effects in science and philosophy (Routledge, 2013)."). For example, if human existence required a 10 million year (Myr) period of evolution free from asteroid impacts, any human observers will necessarily find in their evolutionary history a period of 10 Myr that is free of asteroid impacts, regardless of the true impact rate. Inferring a rate based on those 10 Myr could therefore be misleading, and methods must be used to correct for this bias[12](/articles/s41598-019-47540-7#ref-CR12 "Ćirković, M. M., Sandberg, A. & Bostrom, N. Anthropic shadow: observation selection effects and human extinction risks. Risk Analysis: An Int. J. 30, 1495–1506 (2010).").

Using data from archeological and fossil records, we place an upper bound on the natural rate of human extinction. We then test this model against possible forms of observer selection bias, and demonstrate that the data are unlikely to be severely biased due to these effects. We finally cross-check our conclusions against alternative forms of data, including mammalian extinction rates, the temporal ranges of other hominin species, and the frequency of potential catastrophes and mass extinctions.

Bounding the Extinction Rate Based on Age of Humanity
-----------------------------------------------------

Anatomically modern human fossils in Ethiopia have been dated to 195 ± 5 thousand years ago (kya)[13](/articles/s41598-019-47540-7#ref-CR13 "McDougall, I., Brown, F. H. & Fleagle, J. G. Stratigraphic placement and age of modern humans from Kibish, Ethiopia. Nat. 433, 733 (2005).").
A more recent fossil discovery in Morocco of an anatomically modern human has been dated to 315 ± 34 kya[14](/articles/s41598-019-47540-7#ref-CR14 \"Richter, D. et al. The age of the hominin fossils from Jebel Irhoud, Morocco, and the origins of the Middle Stone Age. Nat. 546, 293–296 (2017).\"),[15](/articles/s41598-019-47540-7#ref-CR15 \"Hublin, J.-J. et al. New fossils from Jebel Irhoud, Morocco and the pan-African origin of Homo sapiens. Nat. 546, 289 (2017).\") (though the fossil may exhibit more primitive neurocranial and endocranial morphology). Given that *Homo sapiens* has existed for hundreds of thousands of years, what can we infer about our background rate of extinction?\n\nAssuming that we share a common extinction rate with our predecessors, we can rule out rates that are too high to be compatible with this track record of survival. As our aim is to construct an upper bound, we can set aside the possibility that modern human technology, habitat range, and population size have reduced a number of natural extinction risks. The upper bound is only violated if we have reason to believe current extinction rates are higher than those our predecessors faced. Since we exclude anthropogenic risks from our analysis, we also set aside the majority of the ways in which this could be the case, although we acknowledge there exist boundary cases between purely natural and anthropogenic risks (e.g. a naturally emerging disease could be spread further by modern technology). Ultimately, the scope of the upper bound is limited to all risks that have remained constant (or have been reduced) over the past few hundred thousand years.\n\n### Likelihood of extinction rates\n\nAnalysis of taxonomic survivorship curves and temporal ranges for a wide variety of taxa suggests that extinction probabilities can be approximated well by assuming a constant risk of extinction over time[16](#ref-CR16 \"Van Valen, L. A new evolutionary law. Evol Theory 1, 1–30 (1973).\"),[17](#ref-CR17 \"Alroy, J. Constant extinction, constrained diversification, and uncoordinated stasis in North American mammals. Palaeogeogr. Palaeoclimatol. Palaeoecol. 127, 285–311 (1996).\"),[18](/articles/s41598-019-47540-7#ref-CR18 \"Foote, M. & Raup, D. M. Fossil preservation and the stratigraphic ranges of taxa. Paleobiology 22, 121–140 (1996).\"). Under this model, extinction can be represented by the exponential distribution with constant extinction rate *μ*. The probability that humanity goes extinct before time *t* is given by the cumulative distribution function *P*(*T* ≤ *t*) = 1 − *e*−*μt*, where *T* is the random variable denoting the longevity of our species. Conversely, the probability that humanity makes it beyond time *t* is *P*(*T* ≥ *t*) = *e*−*μt*.\n\nWe want to evaluate the likelihood of an extinction rate *μ*, given the observation that humanity has lasted up to time *t* (so we know that the total longevity of humanity *T* ≥ *t*). This can be evaluated as the likelihood function \( {\mathcal L} (\mu |T\ge t)={e}^{-\mu t}\). We compute the likelihood of extinction rates between 10−8 and 10−4 given a number of different plausible starting dates for *Homo sapiens* outlined in Fig.
[1](/articles/s41598-019-47540-7#Fig1) and Table [1](/articles/s41598-019-47540-7#Tab1).\n\n**Figure 1.** Likelihood of extinction rates given our track record of survival so far, with estimated ranges of hominin extinction rates, mammalian extinction rates, and mass extinction frequency included for reference. Blue horizontal lines indicate likelihoods of 10% and 1%. Rates exceeding 6.9 × 10−5 are ruled out even with the most conservative data. Extending humanity’s track record of survival to match older fossils, the divergence with *Homo neanderthalensis*, or the origin of *Homo* creates even stricter bounds. [Full size image](/articles/s41598-019-47540-7/figures/1)\n\n**Table 1.** Survival times and resulting upper bounds. [Full size table](/articles/s41598-019-47540-7/tables/1)\n\nAssuming a 200 thousand year (kyr) survival time, we can be exceptionally confident that rates do not exceed 6.9 × 10−5. This corresponds to an annual extinction probability below roughly 1 in 14,000. The relative likelihood for such high extinction rates is below 10−6 (one in a million) when compared to a rate of 10−8. If we assume that our track record extends further, this upper bound becomes stronger. Using the fossil dated to 315 kya as a starting point for humanity gives an upper bound of *μ* < 4.4 × 10−5, corresponding to an annual extinction probability below 1 in 22,800. Using the emergence of *Homo* as our starting point tightens the initial bound by a full order of magnitude, resulting in an annual extinction probability below 1 in 140,000.\n\nWe can also relax the one in a million relative likelihood constraint and derive less conservative upper bounds. An alternative bound would be rates with relative likelihood below 10−1 (1 in 10) when compared to the baseline rate of 10−8. If we assume humanity has lasted 200 kyr, we obtain a bound of *μ* < 1.2 × 10−5, corresponding to an annual extinction probability below 1 in 87,000. Using the 2 Myr origin of *Homo* strengthens the bound by an order of magnitude in a similar way and produces annual extinction probabilities below 1 in 870,000.\n\nIt is worth noting that this model can be generalised to allow for a varying extinction rate over time *μ*(*t*), so that the probability of surviving past time *t* is given by *P*(*T* ≥ *t*) = *e*−Θ(*t*)*t*, where \({\rm{\Theta }}(t)=(1/t){\int }\_{0}^{t}\,\mu (s)ds\). The upper bound on Θ(*t*), the average extinction rate over the interval, can then be calculated in the same way as for the constant rate model.\n\nObservation Selection Effects\n-----------------------------\n\nThe data on humanity’s survival time could be subject to survivorship bias. If early *Homo sapiens* requires a long period of time to develop the intellectual machinery needed to make scientific observations, then such observations could not include short evolutionary histories, regardless of the extinction rate. The amount of information we could derive from a long track record of survival would therefore be limited due to this observation selection effect. Such a track record could indicate a low extinction rate, or be the byproduct of lucky ancestors surviving high extinction rates long enough to beget progeny capable of making scientific observations.
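As a reference point for the bias analysis that follows, the unadjusted bounds of the previous section amount to nothing more than solving *e*−*μt* equal to the chosen relative-likelihood threshold for *μ*. The short sketch below (Python; not part of the original analysis) reproduces them, neglecting the 10−8 baseline rate since it would shift each bound by only about 10−8:

```python
import math

def upper_bound(t_years, rel_likelihood):
    """Rate mu at which the relative likelihood exp(-mu * t) falls to rel_likelihood.

    Rates above this value are treated as ruled out. The 1e-8 baseline rate used
    in the text is neglected here; including it would raise each bound by ~1e-8.
    """
    return -math.log(rel_likelihood) / t_years

for label, t in [("Homo sapiens, 200 kyr", 200_000),
                 ("Jebel Irhoud fossils, 315 kyr", 315_000),
                 ("genus Homo, 2 Myr", 2_000_000)]:
    for rel in (1e-6, 1e-1):
        mu = upper_bound(t, rel)
        print(f"{label}, relative likelihood {rel:g}: "
              f"mu < {mu:.1e}/yr (annual probability below ~1 in {1 / mu:,.0f})")
```

Run as-is, this prints roughly 6.9 × 10−5 and 1.2 × 10−5 per year for the 200 kyr record, matching the 1 in 14,000 and 1 in 87,000 figures above. These are the values that the observation selection models below adjust.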
One might therefore object that the bounds on the extinction rate we have estimated are too low[12](/articles/s41598-019-47540-7#ref-CR12 \"Ćirković, M. M., Sandberg, A. & Bostrom, N. Anthropic shadow: observation selection effects and human extinction risks. Risk Analysis: An Int. J. 30, 1495–1506 (2010).\"),[23](/articles/s41598-019-47540-7#ref-CR23 \"Manheim, D. Questioning estimates of natural pandemic risk. Heal. security (2018).\"). Here, we examine and respond to this concern.\n\n### Models to quantify potential sample bias\n\nTo model observation selection bias, let us assume that after *Homo sapiens* first arises another step must be reached. This could represent the origin of language, writing, science, or any relevant factor that would transition early humans into the reference class of those capable of making observations (we call this step ‘observerhood’). Let this step be a random variable denoted *S*, with cumulative distribution function *F**S*(*t*). As we are examining natural risks, we assume that *S* and *T* are independent. The probability that humanity survives long enough to reach observerhood status (via intelligence, language, writing, science, etc) can be found with the following integral:\n\n$$P(T > S)={\\int }\\_{0}^{\\infty }\\,{f}\\_{T}(t){F}\\_{S}(t)dt$$\n (1)\n where *f**T*(*t*) = *μe*−*μt*, the probability of extinction at time *t*. We evaluate an adjusted likelihood function \\({ {\\mathcal L} }^{\\ast }(\\mu |T > t)\\), denoting that we are taking the likelihood of an extinction rate *μ* given that humanity has survived to time *t*, and the fact that we are conditioning on the existence of observers such that *T* > *S*. This results in the adjusted likelihood function:\n\n$${ {\\mathcal L} }^{\\ast }(\\mu |T > t)=P(T > t|T > S,\\mu )$$\n (2)\n $$=\\,\\frac{1}{c}{\\int }\\_{t}^{\\infty }\\,{f}\\_{T}(s){F}\\_{S}(s)ds$$\n (3)\n where *c* = *P*(*T* > *S*) is a normalising constant. We evaluate a model with four variations for the observerhood step: a model in which observerhood occurs as a single event that has a constant rate over time, a model with an increasing rate over time, a model with multiple steps, and a model where observerhood simply requires a fixed amount of time.\n\nIf desired, we could more crisply define this observerhood property as the ability for a species to collect reliable data on its own track record of survival (e.g. via fossil dating) and analyse it. When correcting for observation selection effects, we are simply conditioning on the fact that our species has developed the ability to conduct this analysis. The observerhood property need not invoke consciousness or be the property of a biological species—a machine estimating a parameter would need to account for observer selection bias if its ability to make such estimates were correlated with the parameter in question.\n\n### Model 1: Single step, constant rate\n\nOur first model assumes that observerhood has a constant rate of occurrence *θ*, so that *S* is exponentially distributed with cumulative distribution function: *F**S*(*t*) = 1 − *e*−*θt*. This model describes a process in which the transition from early humans into observers occurs by chance as a single step. This could represent the hypothesis that hierarchical language emerged in humans as the byproduct of a chance mutation[24](/articles/s41598-019-47540-7#ref-CR24 \"Bolhuis, J. J., Tattersall, I., Chomsky, N. & Berwick, R. C. How could language have evolved? PLoS biology 12, e1001934 (2014).\"). 
With this model, the probability that observers arrive before extinction is *P*(*T* > *S*) = *θ*(*θ* + *μ*)−1. Our likelihood function can be analytically derived:\n\n$${ {\\mathcal L} }^{\\ast }(\\mu |T > t)=(\\frac{\\theta +\\mu }{\\theta }){\\int }\\_{t}^{\\infty }\\,\\mu {e}^{-\\mu s}(1-{e}^{-\\theta s})ds$$\n (4)\n $$=\\,(\\frac{\\theta +\\mu }{\\theta }){e}^{-\\mu t}-(\\frac{\\mu }{\\theta }){e}^{-(\\mu +\\theta )t}$$\n (5)\n ### Model 2: single step, increasing rate\n\nOur second model similarly assumes that a single step is needed but that the rate of observerhood increases over time. This model could represent increasing population size or population density, which could in turn drive cultural evolution and increase the probability of such a step[25](/articles/s41598-019-47540-7#ref-CR25 \"Powell, A., Shennan, S. & Thomas, M. G. Late Pleistocene demography and the appearance of modern human behavior. Sci. 324, 1298–1301 (2009).\"). We represent this with a Weibull distribution with cumulative distribution function \\({F}\\_{S}(t)=1-{e}^{-{(\\theta t)}^{k}}\\) where *k* > 1 indicates increasing rate over time (when *k* = 1, this is the same as the exponential in Model 1). We use numerical integration to evaluate the likelihood function.\n\n### Model 3: multiple steps, constant rate\n\nOur third model assumes that there are multiple steps that need to occur in a sequence in order to get observers. This could represent more incremental development of tools, culture, or language. We assume that each step is exponentially distributed with rate *θ*, so that the timing of the final *k*th step follows an Erlang distribution with cumulative distribution function:\n\n$${F}\\_{S}(t)=1-\\sum \\_{n=0}^{k-1}\\,\\frac{1}{n!}{e}^{-\\theta t}{(\\theta t)}^{n}.$$\n (6)\n Note that when *k* = 1, the distribution is the same as the exponential in Model 1. We use numerical integration to evaluate the likelihood function.\n\n### Model 4: fixed time requirement\n\nOur final model assumes that it takes a fixed amount of time *τ* to reach observerhood. This is an extreme model that allows for no chance, but could represent a gradual and deterministic accumulation of traits. The probability that observerhood has been reached before time *t* is therefore *F**S*(*t*) = 1[*t*>*τ*], the characteristic function that takes the value 1 when *t* > *τ* and 0 otherwise. The probability that humanity survives past time *τ* is 1 − *F**T*(*τ*) = *e*−*μτ*. Our likelihood function of *μ* is:\n\n$${ {\\mathcal L} }^{\\ast }(\\mu |T > t)=\\frac{1}{{e}^{-\\mu \\tau }}{\\int }\\_{t}^{\\infty }\\,\\mu {e}^{-\\mu s}{1}\\_{[s > \\tau ]}ds$$\n (7)\n $$=\\,{e}^{-\\mu (t-\\tau )}.$$\n (8)\n This likelihood expression can also be derived using the memoryless property of the exponential. It is worth noting that the fixed time model is a limiting case for both the increasing rate model and the multiple steps model. Taking the limit of Model 2 as *k* → ∞ results in a fixed time model with *τ* = *θ*−1. Similarly, Model 3 converges to a fixed time model as the number of steps increases and the expected time of each step decreases (having infinitely many steps in the limit, each of which is infinitely short).\n\n### Results of sample bias models\n\nWe evaluate the likelihood of extinction rates between 10−8 and 10−2, given a human survival time of 200 kyr and a wide range of different rates at which observers could originate (Fig. [2](/articles/s41598-019-47540-7#Fig2)). 
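The adjusted likelihood in Eqs (2) and (3) is straightforward to evaluate numerically for any choice of *F**S*. A minimal sketch is given below (Python with SciPy assumed; the parameter values are illustrative and chosen to match the Model 1 example discussed next, with Eq. (5) providing an analytic cross-check):

```python
import numpy as np
from scipy.integrate import quad

def adjusted_likelihood(mu, F_S):
    """L*(mu | T > t) from Eqs (2)-(3): (1/c) * integral_t^inf f_T(s) F_S(s) ds.

    f_T(s) = mu * exp(-mu * s) is the extinction density, F_S the CDF of the
    observerhood step, and c = P(T > S) the normalising constant.
    """
    integrand = lambda s: mu * np.exp(-mu * s) * F_S(s)
    upper = 60.0 / mu  # exp(-mu*s) is ~1e-26 beyond this, so truncating the improper integral is safe
    c, _ = quad(integrand, 0.0, upper, epsabs=0.0)
    return lambda t: quad(integrand, t, upper, epsabs=0.0)[0] / c

# Model 1 (single step, constant rate theta): F_S(t) = 1 - exp(-theta * t)
theta, t = 5e-5, 200_000
model1_cdf = lambda s: 1.0 - np.exp(-theta * s)

for mu in (1e-8, 6.9e-5):
    L_numeric = adjusted_likelihood(mu, model1_cdf)(t)
    # analytic check from Eq. (5)
    L_analytic = ((theta + mu) / theta) * np.exp(-mu * t) - (mu / theta) * np.exp(-(mu + theta) * t)
    print(f"mu = {mu:.1e}: adjusted L* = {L_numeric:.3e} (analytic {L_analytic:.3e}), "
          f"unbiased exp(-mu*t) = {np.exp(-mu * t):.3e}")
```

With these values the adjusted likelihood of *μ* = 6.9 × 10−5 comes out a little over twice the unbiased value, in line with the factor-of-roughly-two effect discussed below.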
The first thing to note about the first three models is that when the observerhood rates are sufficiently rapid, the likelihood function converges to the unbiased version in the previous section. This can be verified by taking limits: for all of the models as *θ* → ∞ (or *τ* → 0 in the case of the fixed time model), \({ {\mathcal L} }^{\ast }(\mu |T > t)\to {e}^{-\mu t}\). If observerhood is expected to occur quickly, then we can take a 200 kyr track record of survival at face value and estimate the extinction rate without observation selection bias.\n\n**Figure 2.** Models of observer selection bias. Surface plots show likelihood for combinations of *μ* and *θ* (where *k* = 3 for Models 2 and 3) or *τ* in Model 4. Upper right-hand plots show how the likelihood shifts when *θ* → 0 in Model 1, and for a variety of *k* values in Models 2 and 3. For the first three models, the unbiased model is recovered for large *θ*, and results start to become biased as the expected observerhood time approaches humanity’s track record of survival. However, even as *θ* → 0, the bias is limited, and the likelihood of rates exceeding 10−4 remains at zero. This is only violated in the final fixed time model, or in Models 2 and 3 when *k* is sufficiently large. [Full size image](/articles/s41598-019-47540-7/figures/2)\n\nHowever, as the observerhood rates decrease to the point where the expected observerhood time approaches the same order of magnitude as 200 kyr, observer selection bias emerges. Rates that were previously ruled out by our track record of survival are assigned higher likelihoods, since a portion of the track record is a necessity for observers (Fig. [2](/articles/s41598-019-47540-7#Fig2)). For example, in Model 1, when *θ* = 5 × 10−5 (corresponding to an expected observerhood time of 20 kyr), the relative likelihood of *μ* = 6.9 × 10−5 is increased by a factor of 2.3 (from 10−6 to 2.3 × 10−6). To get a likelihood of 10−6 (corresponding to the most conservative upper bound), the rate must be set at 7.3 × 10−5 (see all adjusted bounds in Table [2](/articles/s41598-019-47540-7#Tab2)). Interestingly though, this effect is limited. Even as observerhood rates slow to the point where the expected observerhood time greatly exceeds 200 kyr (for example, exceeding 20 billion years), the revised upper bounds remain within a factor of 2 of the original bounds. The stricter the bound, the weaker the potential bias: for example, the 10−6 likelihood bound is only changed by a factor of about 1.2 in the limit as *θ* → 0. Although there would be some sample bias, there is a hard ceiling on how much our track record of survival can be distorted by observation selection effects.\n\n**Table 2.** Upper bounds of *μ* with Model 1 bias. [Full size table](/articles/s41598-019-47540-7/tables/2)\n\nThe reason slow rates of observerhood have a limited impact on our estimates is that, if the extinction rate were exceptionally high, the lucky humans that do successfully survive to observerhood would have achieved such status unusually quickly, and would therefore still observe a very short track record of survival. A long track record of survival is therefore still sufficient to rule out high extinction rates paired with low observerhood rates.
We can demonstrate this by examining the typical time it takes for lucky survivors to reach observerhood, assuming a high extinction rate and a low observerhood rate. For example, in the single step constant rate model when *θ* = 10−6 (corresponding to an expected observerhood time of 1 Myr) and *μ* = 10−3 (corresponding to a typical extinction time of 1000 years), the expected observerhood time, conditional on observerhood being reached before extinction, is about 1000 years. A typical observer will thus still have a very short track record of survival. Models with increasing rates or multiple steps exhibit the same property, although the bias grows with the parameter *k*. For both Models 2 and 3 with *θ* = 10−6, *μ* = 10−3, and *k* = 2 (parameters that would normally correspond to an expected observerhood time of 830 kyr for Model 2 and 2 Myr for Model 3), the high extinction rate will still result in a typical observer emerging unusually early and having only about a 2000 year track record of survival. This can also be seen in Fig. [2](/articles/s41598-019-47540-7#Fig2), where for Models 1, 2, and 3, extinction rates exceeding 10−4 are still assigned low likelihood regardless of *θ*.\n\nHowever, severe observer selection bias can occur in Models 2 and 3 as *k* becomes larger, shaping the observerhood distribution such that early observerhood is vanishingly unlikely and late observerhood almost guaranteed. In the most extreme case this is represented by the fixed time model, where the probability of observerhood jumps from 0 to 1 when *t* = *τ* (the fixed time model is also the limiting case when *k* → ∞). If that fixed amount of time is long enough (say, exceeding 190 or 195 kyr), a 200 kyr track record of survival is no longer sufficient to rule out extinction rates greater than 10−4. This result occurs as the fixed time model prohibits any possibility of observerhood occurring unusually quickly. Any lineage of *Homo sapiens* lucky enough to survive long enough to obtain observer status must necessarily have a survival time greater than *τ*, which means that being an observer with a survival time of *τ* conveys zero information about the extinction rate.\n\nFor numerous reasons, we find the fixed time model to be implausible. Virtually all biological and cultural processes involve some degree of contingency, and there is no fundamental reason to think that gaining the ability to make scientific observations would be any different. To illustrate, let us consider a world in which the extinction rate is 10−4 (averaging one extinction every 10,000 years), but observerhood status takes a fixed 200 kyr. Under this model, humanity successfully surviving long enough to reach observer status is an event with a 1 in 200 million chance. Given observation selection bias, we cannot rule out the possibility of rare events that are required for our observations. But we could ask why a 1 in 200 million chance event could not also include the possibility that modern human observers would emerge unusually rapidly. Language, writing, and modern science are perhaps highly unlikely to develop within ten thousand years of the first modern humans, but it seems exceptionally overconfident to put the odds at less than 1 in 200 million.\n\nA similar line of reasoning can be applied to determine whether the increasing rate and multiple step models with high *k* are reasonable.
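Before doing so, the conditional expectations just quoted can be checked directly: conditioning on observerhood arriving before extinction simply re-weights each observerhood density by the survival probability *e*−*μs*. The sketch below (Python with SciPy assumed; parameter values as in this section) computes the resulting expected observerhood times:

```python
import numpy as np
from scipy.integrate import quad

mu, theta = 1e-3, 1e-6  # high extinction rate, slow observerhood rate (values used above)

def conditional_mean_observerhood(f_S):
    """E[S | S < T]: observerhood density f_S re-weighted by the survival probability exp(-mu*s)."""
    weight = lambda s: f_S(s) * np.exp(-mu * s)
    upper = 30.0 / mu  # the weight is ~1e-13 beyond this point, so truncating the integrals is harmless
    num, _ = quad(lambda s: s * weight(s), 0.0, upper, epsabs=0.0)
    den, _ = quad(weight, 0.0, upper, epsabs=0.0)
    return num / den

observerhood_densities = {
    "Model 1 (exponential, rate theta)": lambda s: theta * np.exp(-theta * s),
    "Model 2 (Weibull, k = 2)":          lambda s: 2 * theta**2 * s * np.exp(-(theta * s) ** 2),
    "Model 3 (Erlang, k = 2)":           lambda s: theta**2 * s * np.exp(-theta * s),
}
for name, f_S in observerhood_densities.items():
    print(f"{name}: conditional expected observerhood time ~{conditional_mean_observerhood(f_S):,.0f} years")
```

All three come out at roughly 1,000–2,000 years, matching the figures quoted above.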
We test this by asking what parameters would be needed to expect a 200 kyr track record of survival with an extinction rate at our conservative upper bound of *μ* = 6.9 × 10−5. For the increasing rate model, observerhood is expected after 203 kyr with *θ* = 10−7 and *k* = 14 and for the multiple step model, observerhood is expected after 190 kyr with *θ* = 10−7 and *k* = 16. Although these models do not assign strictly zero probability to early observerhood times, the probabilities are still vanishingly small. With an increasing rate and these parameters, observerhood has less than a one in a trillion chance of occurring within 10,000 years (3.4 × 10−14), and about 1% chance of occurring within 100,000 years. With multiple steps and these parameters, observerhood has less than one in a trillion chance of occurring within 10,000 years (5.6 × 10−17), and less than a 0.02% chance of occurring within 100,000 years. In a similar fashion to the fixed time model, we feel that these models exhibit unrealistic levels of confidence in late observerhood times.\n\nAlthough the plausibility of the fixed time (or nearly fixed time) models is hard to test directly, the wide variance in the emergence of modern human behavior across geography offers one source of data that can test their plausibility. The Upper Palaeolithic transition occurred about 45 kya in Europe and Western Asia, marked by the widespread emergence of modern human behaviour[25](/articles/s41598-019-47540-7#ref-CR25 \"Powell, A., Shennan, S. & Thomas, M. G. Late Pleistocene demography and the appearance of modern human behavior. Sci. 324, 1298–1301 (2009).\") (e.g. symbolic artwork, geometric blades, ornamentation). But strong evidence exists for the sporadic appearance of this modern human behaviour much earlier in parts of Africa[26](/articles/s41598-019-47540-7#ref-CR26 \"Henshilwood, C. S. et al. Emergence of modern human behavior: Middle Stone Age engravings from South Africa. Sci. 295, 1278–1280 (2002).\"),[27](/articles/s41598-019-47540-7#ref-CR27 \"McBrearty, S. & Brooks, A. S. The revolution that wasn’t: a new interpretation of the origin of modern human behavior. J. human evolution 39, 453–563 (2000).\"), including evidence of artwork and advanced tools as early as 164 kya[28](/articles/s41598-019-47540-7#ref-CR28 \"Marean, C. W. et al. Early human use of marine resources and pigment in South Africa during the Middle Pleistocene. Nat. 449, 905 (2007).\"). Although numerous factors could have prevented the Upper Palaeolithic transition from occurring quickly, the fact that some human communities made this transition more than 100 kyr earlier than the rest of humanity indicates that a much earlier development trajectory is not entirely out of the question.\n\nIn summary, observer selection effects are unlikely to introduce major bias to our track record of survival as long as we allow for the possibility of early observers. Deceptively long track records of survival can occur if the probability of early observers is exceptionally low, but we find these models implausible. The wide variance in modern human behavior is one source of data that suggests our track record is unlikely to be severely biased. 
We can also turn to other sources of indirect data to test for observer selection bias.\n\nTesting the Bound with Indirect Data\n------------------------------------\n\nWe cross check our upper bound against four other sources of data: mammalian extinction rates, survival times of other human species, rates of potential catastrophes, and mass extinction rates. Although these alternative data do not directly predict the background extinction rate of *Homo sapiens* per se, the rates of extinction are likely generated by similar processes and thus enable an indirect test of the upper bound. If our upper bound is sound (not biased by observer selection effects or otherwise flawed), we can make testable predictions that it will be (A) broadly consistent with the extinction rates for similar species, and (B) not exceeded by the rate of potential catastrophes or mass extinctions. As the extinction rate of other species and catastrophes many millions of years ago have little bearing on our ability to make scientific observations, these data are also less subject to potential observer selection bias.\n\n### Mammalian extinction rates\n\nWe first evaluate whether the upper bound is consistent with extinction rates for a typical mammalian species. Using fossil record data, median extinction rates for mammals have been estimated as high as 1.8 extinctions per million species years (E/MSY)[2](/articles/s41598-019-47540-7#ref-CR2 \"Barnosky, A. D. et al. Has the Earth’s sixth mass extinction already arrived? Nat. 471, 51 (2011).\"), or equivalently *μ* = 1.8 × 10−6. Other estimates using fossil record data range from 0.165 extinctions per million genus years[17](/articles/s41598-019-47540-7#ref-CR17 \"Alroy, J. Constant extinction, constrained diversification, and uncoordinated stasis in North American mammals. Palaeogeogr. Palaeoclimatol. Palaeoecol. 127, 285–311 (1996).\") to 0.4 E/MSY for Cenozoic mammals[18](/articles/s41598-019-47540-7#ref-CR18 \"Foote, M. & Raup, D. M. Fossil preservation and the stratigraphic ranges of taxa. Paleobiology 22, 121–140 (1996).\"). Alternative methods using molecular phylogeny suggest a much lower rate of 0.023 E/MSY for mammals[29](/articles/s41598-019-47540-7#ref-CR29 \"De Vos, J. M., Joppa, L. N., Gittleman, J. L., Stephens, P. R. & Pimm, S. L. Estimating the normal background rate of species extinction. Conserv. Biol. 29, 452–462 (2015).\") and rates of 0.219–0.359 E/MSY for primates[30](/articles/s41598-019-47540-7#ref-CR30 \"Arbour, J. H. & Santana, S. E. A major shift in diversification rate helps explain macroevolutionary patterns in primate species diversity. Evol. 71, 1600–1613 (2017).\"), although these methods have been criticized[31](/articles/s41598-019-47540-7#ref-CR31 \"Rabosky, D. L. Extinction rates should not be estimated from molecular phylogenies. Evol. 64, 1816–1824 (2010).\"). All of these estimated background rates are consistent with our upper bound. It is worth noting that *Homo sapiens* may be at lower extinction risk than a typical mammalian species due to a large habitat range, large population size, and having a generalist diet, which are all traits that militate against extinction risk (whereas long generation times and large body mass are sometimes correlated with increased extinction risk)[32](/articles/s41598-019-47540-7#ref-CR32 \"Purvis, A., Gittleman, J. L., Cowlishaw, G. & Mace, G. M. Predicting extinction risk in declining species. Proc. Royal Soc. Lond. B: Biol. Sci. 
267, 1947–1952 (2000).\"),[33](/articles/s41598-019-47540-7#ref-CR33 \"Fritz, S. A., Bininda-Emonds, O. R. & Purvis, A. Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecol. letters 12, 538–549 (2009).\").\n\n### Hominin survival times\n\nNext, we evaluate whether the upper bound is consistent with the broader hominin fossil record. There is strong evidence that *Homo erectus* lasted over 1.7 Myr and *Homo habilis* lasted 700 kyr[21](/articles/s41598-019-47540-7#ref-CR21 \"Wood, B. & K. Boyle, E. Hominin taxic diversity: Fact or fantasy? Am. journal physical anthropology 159, 37–78 (2016).\"), indicating that our own species’ track record of survival exceeding 200 kyr is not unique within our genus. Fossil record data indicate that the median hominin temporal range is about 620 kyr, and after accounting for sample bias in the fossil record this estimate rises to 970 kyr[22](/articles/s41598-019-47540-7#ref-CR22 \"Robinson, C., Campbell, T. L., Cote, S. & de Ruiter, D. J. Temporal ranges and ancestry in the hominin fossil record: The case of Australopithecus sediba. South Afr. J. Sci. 114, 1–7 (2018).\"). Although it is notable that the hominin lineage seems to have a higher extinction rate than rates typical of mammals, these values are still consistent with our upper bound. It is perhaps also notable that some hominin species were likely driven to extinction by our own lineage[34](/articles/s41598-019-47540-7#ref-CR34 \"Banks, W. E. et al. Neanderthal extinction by competitive exclusion. PLoS One 3, e3972 (2008).\"), suggesting an early form of anthropogenic extinction risk.\n\n### Individual sources of extinction risk\n\nThe upper bound can also be evaluated against the frequency of events that could pose extinction risks (examples provided in Table [3](/articles/s41598-019-47540-7#Tab3)). If any particular risk (such as the risk from asteroid impacts) is known to have a higher rate than our bound of 6.9 × 10−5, this could undermine and potentially falsify our hypothesis. We evaluate the frequencies of four types of potential disasters for which credible quantitative estimates exist: asteroid impacts, supervolcanic eruptions, stellar explosions, and vacuum collapse. All of these risks have been estimated to occur with a frequency well below our bound (Table [3](/articles/s41598-019-47540-7#Tab3)), with the exception of smaller supervolcanic eruptions. Recent work has suggested the frequency of eruptions ejecting >1,000 km³ of material exceeds our upper bound of 6.9 × 10−5, with a recurrence time of 17 kyr[35](/articles/s41598-019-47540-7#ref-CR35 \"Rougier, J., Sparks, R. S. J., Cashman, K. V. & Brown, S. K. The global magnitude–frequency relationship for large explosive volcanic eruptions. Earth Planet. Sci. Lett. 482, 621–629 (2018).\").\n\n**Table 3.** Catastrophe frequency estimates. [Full size table](/articles/s41598-019-47540-7/tables/3)\n\nHowever, it is important to note that the smaller eruptions within this category do not necessarily have a high probability of causing human extinction. The most severe eruption of the past 2 million years occurred just 74 kya, and it is unclear whether the human population at the time was at risk of extinction. Some argue that the human population suffered a major bottleneck at the same time as the eruption[43](/articles/s41598-019-47540-7#ref-CR43 \"Ambrose, S. H. Late pleistocene human population bottlenecks, volcanic winter, and differentiation of modern humans. J.
human evolution 34, 623–651 (1998).\"), although this theory remains controversial[44](/articles/s41598-019-47540-7#ref-CR44 \"Baum, S. D., Denkenberger, D. C., Pearce, J. M., Robock, A. & Winkler, R. Resilience to global food supply catastrophes. Environ. Syst. Decis. 35, 301–313 (2015).\"). Some climate records averaged over decades fail to observe a severe volcanic winter in Africa at the time[45](/articles/s41598-019-47540-7#ref-CR45 \"Lane, C. S., Chorn, B. T. & Johnson, T. C. Ash from the toba supereruption in lake malawi shows no volcanic winter in east africa at 75 ka. Proc. Natl. Acad. Sci. 110, 8025–8029 (2013).\") and archaeological evidence shows that human communities in South Africa thrived both before and after the eruption[46](/articles/s41598-019-47540-7#ref-CR46 \"Smith, E. I. et al. Humans thrived in South Africa through the Toba eruption about 74,000 years ago. Nat. 555, 511 (2018).\") (although these data are not sufficient to rule out a severe short-lived catastrophe followed by a fast recovery in population). More conclusively, most members of the Hominidae family did not suffer population bottlenecks around that time, with the possible exception of Eastern chimpanzees and Sumatran orangutans[47](/articles/s41598-019-47540-7#ref-CR47 \"Prado-Martinez, J. et al. Great ape genetic diversity and population history. Nat. 499, 471 (2013).\"). The lack of clear evidence for extinctions or bottlenecks in other species undercuts the possibility that humanity’s survival was highly improbable and is observed only due to observation selection effects. However, a handful of substantially larger flood basalt events have taken place over the past 250 Myr that have been linked to mass extinctions[39](/articles/s41598-019-47540-7#ref-CR39 \"Rampino, M. R. & Stothers, R. B. Flood basalt volcanism during the past 250 million years. Sci. 241, 663–668 (1988).\"),[48](/articles/s41598-019-47540-7#ref-CR48 \"Bond, D. P. & Wignall, P. B. Large igneous provinces and mass extinctions: an update. Volcanism, Impacts, Mass Extinctions: Causes Eff. 505, 29–55 (2014).\"). These events occur with a frequency of roughly once every 20–30 Myr, far less frequently than smaller eruptions. If we assume that human extinction is threatened only by larger volcanic eruptions well exceeding 1,000 km³, then none of the risk frequencies we have catalogued come within an order of magnitude of the conservative upper bound.\n\nSimilarly, impacts from smaller asteroids around 1 km in diameter may not have a high probability of causing human extinction. Although it is hard to estimate the consequences of such impacts, some researchers have argued that they would fall below the threshold for a global catastrophe[49](/articles/s41598-019-47540-7#ref-CR49 \"Toon, O. B., Zahnle, K., Morrison, D., Turco, R. P. & Covey, C. Environmental perturbations caused by the impacts of asteroids and comets. Rev. Geophys. 35, 41–78 (1997).\"). Impacts that disperse enough dust and sulphates to significantly disrupt photosynthesis occur much more rarely, with an estimated recurrence time of about 15 Myr[49](/articles/s41598-019-47540-7#ref-CR49 \"Toon, O. B., Zahnle, K., Morrison, D., Turco, R. P. & Covey, C. Environmental perturbations caused by the impacts of asteroids and comets. Rev. Geophys. 35, 41–78 (1997).\").
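Converting the recurrence intervals quoted in this section into annual rates makes the comparison with the bounds explicit. A small sketch (Python; the intervals are those given in the surrounding text, and only the larger, plausibly extinction-level events are included):

```python
# Annual rates implied by the recurrence intervals discussed here, compared
# against the two headline bounds on the natural extinction rate.
bound_conservative = 1 / 14_000    # most conservative bound (200 kyr record, 1e-6 relative likelihood)
bound_optimistic = 1 / 870_000     # strongest bound quoted (2 Myr record of the genus Homo)

recurrence_years = {
    "flood basalt event":                25e6,    # roughly once every 20-30 Myr
    "photosynthesis-disrupting impact":  15e6,
    "mass extinction (big five only)":   100e6,
    "mass extinction (all 18 recorded)": 30e6,
}
for event, interval in recurrence_years.items():
    rate = 1 / interval
    print(f"{event}: ~{rate:.1e} per year, i.e. "
          f"{bound_conservative / rate:,.0f}x below the conservative bound and "
          f"{bound_optimistic / rate:,.0f}x below the most optimistic bound")
```

Even the most frequent of these remains more than an order of magnitude below the strictest bound, consistent with the comparison made in the next paragraph.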
If we assume human extinction is only threatened by these more severe impacts from objects exceeding 5 km in diameter, each of these catastrophe frequencies falls well below even our most optimistic bound of a 1 in 870,000 chance of extinction per year.\n\n### Mass extinction frequency\n\nA mass extinction is marked by substantially increased extinction of multiple geographically widespread taxa over a relatively short period of time[50](/articles/s41598-019-47540-7#ref-CR50 \"Sepkoski, J. Phanerozoic overview of mass extinction. In Patterns and Processes in the History of Life, 277–295 (Springer, 1986).\"). There have been five major mass extinctions in the past 541 Myr[51](/articles/s41598-019-47540-7#ref-CR51 \"Raup, D. M. & Sepkoski, J. J. Mass extinctions in the marine fossil record. Sci. 215, 1501–1503 (1982).\"),[52](/articles/s41598-019-47540-7#ref-CR52 \"Jablonski, D. Extinctions in the fossil record. Phil. Trans. R. Soc. Lond. B 344, 11–17 (1994).\"), with many arguing that human activity is currently causing a sixth[2](/articles/s41598-019-47540-7#ref-CR2 \"Barnosky, A. D. et al. Has the Earth’s sixth mass extinction already arrived? Nat. 471, 51 (2011).\"). In a similar way to our previous analysis of catastrophe rates, we should expect our upper bound to be consistent with the frequency of non-anthropogenic mass extinctions. Using only the big five extinctions produces a frequency of less than one per 100 Myr, far below our upper bound. In addition to the big five, there have been 13 other mass extinctions in the fossil record[53](/articles/s41598-019-47540-7#ref-CR53 \"Bambach, R. K. Phanerozoic biodiversity mass extinctions. Annu. Rev. Earth Planet. Sci. 34, 127–155 (2006).\"). Using these numbers for 18 mass extinctions over 541 Myr still results in a frequency of about one per 30 Myr, orders of magnitude below our upper bound.\n\nConclusions\n-----------\n\nUsing the fact that humans have survived at least 200 kyr, we can infer that the annual probability of human extinction from natural causes is less than 1 in 87,000 with modest confidence (0.1 relative likelihood) and less than 1 in 14,000 with near certainty (10−6 relative likelihood). These are the most conservative bounds. Estimates based on older fossils such as the ones found in Morocco dated to 315 kya result in annual extinction probabilities of less than 1 in 137,000 or 1 in 23,000 (for relative likelihoods of 0.1 and 10−6, respectively). Using the track record of survival for the entire lineage of *Homo*, the annual probability of extinction from natural causes falls below 1 in 870,000 (relative likelihood of 0.1). We also conclude that these data are unlikely to be biased by observer selection effects, especially given that the bounds are consistent with mammalian extinction rates, the temporal range of other hominin species, and the frequency of potential catastrophes and mass extinctions.\n\nThe bounds are subject to important limitations. Most importantly, they only apply to extinction risks that have either remained constant or declined over human history. Our 200 kyr track record of survival cannot rule out much higher extinction probabilities from modern sources such as nuclear weapons or anthropogenic climate change. Some naturally occurring risks could also be worsened by anthropogenic factors: a minor asteroid impact could be interpreted as a nuclear attack and lead to retaliation[54](/articles/s41598-019-47540-7#ref-CR54 \"Baum, S. D.
Uncertain human consequences in asteroid risk analysis and the global catastrophe threshold. Nat. Hazards 94, 759–775 (2018).\"), or a naturally occurring disease which previously may have only been a local extinction risk could spread much further due to modern travel[23](/articles/s41598-019-47540-7#ref-CR23 \"Manheim, D. Questioning estimates of natural pandemic risk. Heal. security (2018).\"). In the cases where a natural risk is amplified by modern conditions, we can still derive some partial information from the upper bound by evaluating how much the risk would need to change from the purely natural baseline. For example, the claim that a natural disease poses a greater than 1 in 1,000 chance of extinction per year would require that anthropogenic conditions have increased the risk of natural disease by a factor of more than 14 to 870 (under our most conservative and optimistic upper bounds, respectively). In general, for a naturally occurring risk to violate our upper bounds via human activity by more than a factor of two, the majority of the risk would still need to come from anthropogenic circumstances.\n\nIn general, we conclude that anthropogenic extinction risks are likely greater than natural ones. We do not have a long track record of data for anthropogenic risks, so evaluating this relies far more on speculation. But despite the paucity of data, the little evidence we do have seems to be indicative of rates greatly exceeding our upper bounds. During the Cuban Missile Crisis of 1962, John F Kennedy put the odds of nuclear war at ‘somewhere between one out of three and even’[55](/articles/s41598-019-47540-7#ref-CR55 \"Kennedy, R. F. Thirteen days: A memoir of the Cuban missile crisis (WW Norton & Company, 2011).\"). If 0.1% of nuclear wars result in human extinction via nuclear winter, taking Kennedy’s odds that year would surpass our most conservative bound by more than a factor of four (and surpass our most optimistic bound by a factor of more than 250). Anthropogenic climate change could pose existential risks as well if warming is much worse than expected. A ballpark suggestion for the probability of 20 degrees of anthropogenic climate change was placed at 1%[56](/articles/s41598-019-47540-7#ref-CR56 \"Weitzman, M. L. On modeling and interpreting the economics of catastrophic climate change. The Rev. Econ. Stat. 91, 1–19 (2009).\"), which would make the planet largely uninhabitable for humans due to heat stress[57](/articles/s41598-019-47540-7#ref-CR57 \"Sherwood, S. C. & Huber, M. An adaptability limit to climate change due to heat stress. Proc. Natl. Acad. Sci. 107, 9552–9555 (2010).\"). And these are not the only risks we may face. One century ago, the existential risks posed by nuclear weapons or climate change may have seemed extremely implausible. We should therefore be cautious before dismissing the potential risks that future centuries of technological development could bring, such as those stemming from biotechnology[58](/articles/s41598-019-47540-7#ref-CR58 \"Millett, P. & Snyder-Beattie, A. Existential risk and cost-effective biosecurity. Heal. security 15, 373–383 (2017).\") or artificial general intelligence[59](/articles/s41598-019-47540-7#ref-CR59 \"Bostrom, N. Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014).\").\n\nDespite the low probability of human extinction from natural causes, it may still be prudent to reduce these risks. 
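For reference, the factor comparisons made above reduce to simple arithmetic; a minimal sketch (Python; the 1-in-1,000 disease figure, Kennedy’s 1-in-3 odds, and the 0.1% extinction-per-war assumption are all taken from the text):

```python
# The factor comparisons made in this section, written out explicitly.
bound_conservative = 1 / 14_000    # most conservative annual bound (200 kyr track record)
bound_optimistic = 1 / 870_000     # bound from the 2 Myr track record of the genus Homo

# A natural disease claimed to pose a 1-in-1,000 annual extinction risk
disease_risk = 1 / 1_000
print(f"disease: {disease_risk / bound_conservative:.0f}x to "
      f"{disease_risk / bound_optimistic:.0f}x above the natural bounds")

# Kennedy's lower estimate of the odds of nuclear war in 1962, combined with an
# assumed 0.1% chance that such a war causes extinction via nuclear winter
nuclear_extinction_risk = (1 / 3) * 0.001
print(f"nuclear (1962): {nuclear_extinction_risk / bound_conservative:.1f}x to "
      f"{nuclear_extinction_risk / bound_optimistic:.0f}x above the natural bounds")
```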
Existential risks jeopardize not only the lives of those currently present, but also the existence of all future generations. Depending on how much value we place on such generations, it may still be cost-effective to reduce existential risks from natural sources[60](/articles/s41598-019-47540-7#ref-CR60 \"Matheny, J. G. Reducing the risk of human extinction. Risk Analysis: An Int. J. 27, 1335–1344 (2007).\"). However, given limited resources to spend on reducing existential risks, one may be better off focusing on greater risks from our own design.\n\nChange history\n--------------\n\n* ### 11 December 2019\n\nAn amendment to this paper has been published.\n\nReferences\n----------\n\n1. Raup, D. M. *Extinction: bad genes or bad luck?* (WW Norton & Company, 1992).\n2. Barnosky, A. D. *et al*. Has the Earth’s sixth mass extinction already arrived? *Nat.* **471**, 51 (2011).\n3. Smith, J. M. The causes of extinction. *Phil. Trans. R. Soc. Lond. B* **325**, 241–252 (1989).\n4. Benton, M. J. The Red Queen and the Court Jester: species diversity and the role of biotic and abiotic factors through time. *Sci.* **323**, 728–732 (2009).\n5. Schulte, P. *et al*. The Chicxulub asteroid impact and mass extinction at the Cretaceous-Paleogene boundary. *Sci.* **327**, 1214–1218 (2010).\n6. Wignall, P. B. Large igneous provinces and mass extinctions. *Earth-science reviews* **53**, 1–33 (2001).\n7. Bostrom, N. Existential risk prevention as global priority. *Glob. Policy* **4**, 15–31 (2013).\n8. Sandberg, A. Human extinction from natural hazard events. In *Oxford Research Encyclopedia of Natural Hazard Science* (2018).\n9. Gott, J. R. Implications of the Copernican principle for our future prospects. *Nat.* **363**, 315–319 (1993).\n10. Pisaturo, R. Past longevity as evidence for the future. *Philos. Sci.* **76**, 73–100 (2009).\n11. Bostrom, N. *Anthropic bias: Observation selection effects in science and philosophy* (Routledge, 2013).\n12. Ćirković, M. M., Sandberg, A. & Bostrom, N. Anthropic shadow: observation selection effects and human extinction risks. *Risk Analysis: An Int. J.* **30**, 1495–1506 (2010).\n13. McDougall, I., Brown, F. H. & Fleagle, J. G. Stratigraphic placement and age of modern humans from Kibish, Ethiopia. *Nat.* **433**, 733 (2005).\n14. Richter, D. *et al*. The age of the hominin fossils from Jebel Irhoud, Morocco, and the origins of the Middle Stone Age. *Nat.* **546**, 293–296 (2017).\n15. Hublin, J.-J. *et al*. New fossils from Jebel Irhoud, Morocco and the pan-African origin of Homo sapiens. *Nat.* **546**, 289 (2017).\n16. Van Valen, L. A new evolutionary law. *Evol Theory* **1**, 1–30 (1973).\n17. Alroy, J. Constant extinction, constrained diversification, and uncoordinated stasis in North American mammals. *Palaeogeogr. Palaeoclimatol. Palaeoecol.* **127**, 285–311 (1996).\n18. Foote, M. & Raup, D. M. Fossil preservation and the stratigraphic ranges of taxa. *Paleobiology* **22**, 121–140 (1996).\n19. Meyer, M. *et al*. Nuclear DNA sequences from the Middle Pleistocene Sima de los Huesos hominins. *Nat.* **531**, 504 (2016).\n20. Mendez, F. L., Poznik, G. D., Castellano, S. & Bustamante, C. D. The divergence of neandertal and modern human y chromosomes. *The Am. J. Hum. Genet.* **98**, 728–734 (2016).\n21. Wood, B. & K. Boyle, E. Hominin taxic diversity: Fact or fantasy? *Am. journal physical anthropology* **159**, 37–78 (2016).\n22. Robinson, C., Campbell, T. L., Cote, S. & de Ruiter, D. J. Temporal ranges and ancestry in the hominin fossil record: The case of Australopithecus sediba. *South Afr. J. Sci.* **114**, 1–7 (2018).\n23. Manheim, D. Questioning estimates of natural pandemic risk. *Heal. security* (2018).\n24. Bolhuis, J. J., Tattersall, I., Chomsky, N. & Berwick, R. C. How could language have evolved? *PLoS biology* **12**, e1001934 (2014).\n25. Powell, A., Shennan, S. & Thomas, M. G. Late Pleistocene demography and the appearance of modern human behavior. *Sci.* **324**, 1298–1301 (2009).\n26. Henshilwood, C. S. *et al*. Emergence of modern human behavior: Middle Stone Age engravings from South Africa. *Sci.* **295**, 1278–1280 (2002).\n27. McBrearty, S. & Brooks, A. S. The revolution that wasn’t: a new interpretation of the origin of modern human behavior. *J. human evolution* **39**, 453–563 (2000).\n28. Marean, C. W. *et al*. Early human use of marine resources and pigment in South Africa during the Middle Pleistocene. *Nat.* **449**, 905 (2007).\n29. De Vos, J. M., Joppa, L. N., Gittleman, J. L., Stephens, P. R. & Pimm, S. L. Estimating the normal background rate of species extinction. *Conserv. Biol.* **29**, 452–462 (2015).\n30. Arbour, J. H. & Santana, S. E. A major shift in diversification rate helps explain macroevolutionary patterns in primate species diversity. *Evol.* **71**, 1600–1613 (2017).\n31. Rabosky, D. L. Extinction rates should not be estimated from molecular phylogenies. *Evol.* **64**, 1816–1824 (2010).\n32. Purvis, A., Gittleman, J. L., Cowlishaw, G. & Mace, G. M. Predicting extinction risk in declining species. *Proc. Royal Soc. Lond. B: Biol. Sci.* **267**, 1947–1952 (2000).\n33. Fritz, S. A., Bininda-Emonds, O. R. & Purvis, A. Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. *Ecol. letters* **12**, 538–549 (2009).\n34. Banks, W. E. *et al*. Neanderthal extinction by competitive exclusion. *PLoS One* **3**, e3972 (2008).\n35. Rougier, J., Sparks, R. S. J., Cashman, K. V. & Brown, S. K. The global magnitude–frequency relationship for large explosive volcanic eruptions. *Earth Planet. Sci. Lett.* **482**, 621–629 (2018).\n36. Napier, W. *Hazards from comets and asteroids* (Oxford University Press: Oxford, UK, 2008).\n37. Chapman, C. R. & Morrison, D. Impacts on the Earth by asteroids and comets: assessing the hazard. *Nat.* **367**, 33 (1994).\n38. Mason, B. G., Pyle, D. M. & Oppenheimer, C. The size and frequency of the largest explosive eruptions on Earth. *Bull. Volcanol.* **66**, 735–748 (2004).\n39. Rampino, M. R. & Stothers, R. B. Flood basalt volcanism during the past 250 million years. *Sci.* **241**, 663–668 (1988).\n40. Melott, A. L. *et al*. Did a gamma-ray burst initiate the late Ordovician mass extinction? *Int. J. Astrobiol.* **3**, 55–61 (2004).\n41. Beech, M. The past, present and future supernova threat to Earth’s biosphere. *Astrophys. Space Sci.* **336**, 287–302 (2011).\n42. Tegmark, M. & Bostrom, N. Astrophysics: Is a doomsday catastrophe likely? *Nat.* **438**, 754 (2005).\n43. Ambrose, S. H. Late pleistocene human population bottlenecks, volcanic winter, and differentiation of modern humans. *J. human evolution* **34**, 623–651 (1998).\n44. Baum, S. D., Denkenberger, D. C., Pearce, J. M., Robock, A. & Winkler, R. Resilience to global food supply catastrophes. *Environ. Syst. Decis.* **35**, 301–313 (2015).\n45. Lane, C. S., Chorn, B. T. & Johnson, T. C. Ash from the toba supereruption in lake malawi shows no volcanic winter in east africa at 75 ka. *Proc. Natl. Acad. Sci.* **110**, 8025–8029 (2013).\n46. Smith, E. I. *et al*. Humans thrived in South Africa through the Toba eruption about 74,000 years ago. *Nat.* **555**, 511 (2018).\n47. Prado-Martinez, J. *et al*. Great ape genetic diversity and population history. *Nat.* **499**, 471 (2013).\n48. Bond, D. P. & Wignall, P. B. Large igneous provinces and mass extinctions: an update.
*Volcanism, Impacts, Mass Extinctions: Causes Eff.* **505**, 29–55 (2014).\n\n[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Large%20igneous%20provinces%20and%20mass%20extinctions%3A%20an%20update&journal=Volcanism%2C%20Impacts%2C%20Mass%20Extinctions%3A%20Causes%20Eff.&volume=505&pages=29-55&publication_year=2014&author=Bond%2CDP&author=Wignall%2CPB)\n49. Toon, O. B., Zahnle, K., Morrison, D., Turco, R. P. & Covey, C. Environmental perturbations caused by the impacts of asteroids and comets. *Rev. Geophys.* **35**, 41–78 (1997).\n\n[ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=1997RvGeo..35...41T) \n [CAS](/articles/cas-redirect/1:CAS:528:DyaK2sXmsVGjtLY%3D) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Environmental%20perturbations%20caused%20by%20the%20impacts%20of%20asteroids%20and%20comets&journal=Rev.%20Geophys.&volume=35&pages=41-78&publication_year=1997&author=Toon%2COB&author=Zahnle%2CK&author=Morrison%2CD&author=Turco%2CRP&author=Covey%2CC)\n50. Sepkoski, J. Phanerozoic overview of mass extinction. In *Patterns and Processes in the History of Life*, 277–295 (Springer, 1986).\n51. Raup, D. M. & Sepkoski, J. J. Mass extinctions in the marine fossil record. *Sci.* **215**, 1501–1503 (1982).\n\n[ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=1982Sci...215.1501R) \n [CAS](/articles/cas-redirect/1:STN:280:DC%2BC3cvjt1Crtw%3D%3D) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Mass%20extinctions%20in%20the%20marine%20fossil%20record&journal=Sci.&volume=215&pages=1501-1503&publication_year=1982&author=Raup%2CDM&author=Sepkoski%2CJJ)\n52. Jablonski, D. Extinctions in the fossil record. *Phil. Trans. R. Soc. Lond. B* **344**, 11–17 (1994).\n\n[ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=1994RSPTB.344...11J) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Extinctions%20in%20the%20fossil%20record&journal=Phil.%20Trans.%20R.%20Soc.%20Lond.%20B&volume=344&pages=11-17&publication_year=1994&author=Jablonski%2CD)\n53. Bambach, R. K. Phanerozoic biodiversity mass extinctions. *Annu. Rev. Earth Planet. Sci.* **34**, 127–155 (2006).\n\n[ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=2006AREPS..34..127B) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BD28XlvFCntbs%3D) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Phanerozoic%20biodiversity%20mass%20extinctions&journal=Annu.%20Rev.%20Earth%20Planet.%20Sci.&volume=34&pages=127-155&publication_year=2006&author=Bambach%2CRK)\n54. Baum, S. D. Uncertain human consequences in asteroid risk analysis and the global catastrophe threshold. *Nat. Hazards* **94**, 759–775 (2018).\n\n[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Uncertain%20human%20consequences%20in%20asteroid%20risk%20analysis%20and%20the%20global%20catastrophe%20threshold&journal=Nat.%20Hazards&volume=94&pages=759-775&publication_year=2018&author=Baum%2CSD)\n55. Kennedy, R. F. *Thirteen days: A memoir of the Cuban missile crisis* (WW Norton & Company, 2011).\n56. Weitzman, M. L. On modeling and interpreting the economics of catastrophic climate change. *The Rev. Econ. 
Acknowledgements\n----------------\n\nWe thank Carl Shulman, Dave Waltham, and Nick Bostrom for feedback and comments. This work was funded by Jaan Tallinn and the Open Philanthropy Project.\n\nAuthor information\n------------------\n\n### Authors and Affiliations\n\n1. University of Oxford, Mathematical Ecology Research Group, Department of Zoology, Oxford, OX1 3SZ, UK\n\nAndrew E. Snyder-Beattie & Michael B. Bonsall\n2. University of Oxford, Future of Humanity Institute, Faculty of Philosophy, Oxford, OX1 1PT, UK\n\nToby Ord
### Contributions\n\nA.S.B. conducted the analysis and wrote the manuscript. T.O. and M.B.B. assisted in developing the methods and writing the manuscript. All authors reviewed the manuscript.\n\n### Corresponding author\n\nCorrespondence to [Andrew E. Snyder-Beattie](mailto:andrew.snyder-beattie@zoo.ox.ac.uk).\n\nEthics declarations\n-------------------\n\n### Competing Interests\n\nThe authors declare no competing interests.\n\nAdditional information\n----------------------\n\n**Publisher’s note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\nRights and permissions\n----------------------\n\n**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.", "url": "https://www.nature.com/articles/s41598-019-47540-7", "title": "An upper bound for the background rate of human extinction", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2019-07-29T22:00:00Z", "authors": ["Andrew E. Snyder-Beattie", "Toby Ord", "Michael B.
Bonsall"], "summary": [], "id": "48b78dfa7f033ee3c1fc59dfe43a5f65"} {"text": "[Download PDF](/articles/s41598-019-50145-9.pdf)\n\n\n\n\n\n\n### Subjects\n\n\n* [Human behaviour](/subjects/human-behaviour)\n* [Psychology and behaviour](/subjects/psychology-and-behaviour)\n\n\n\n\n\nAbstract\n--------\n\nThe 21st century will likely see growing risks of human extinction, but currently, relatively small resources are invested in reducing such existential risks. Using three samples (UK general public, US general public, and UK students; total N = 2,507), we study how laypeople reason about human extinction. We find that people think that human extinction needs to be prevented. Strikingly, however, they do not think that an extinction catastrophe would be uniquely bad relative to near-extinction catastrophes, which allow for recovery. More people find extinction uniquely bad when (a) asked to consider the extinction of an animal species rather than humans, (b) asked to consider a case where human extinction is associated with less direct harm, and (c) they are explicitly prompted to consider long-term consequences of the catastrophes. We conclude that an important reason why people do not find extinction uniquely bad is that they focus on the immediate death and suffering that the catastrophes cause for fellow humans, rather than on the long-term consequences. Finally, we find that (d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.\n\n\n\n\n\nIntroduction\n------------\n\nThe ever-increasing powers of technology can be used for good and ill. In the 21st century, technological advances will likely yield great benefits to humanity, but experts warn that they will also lead to growing risks of human extinction[1](#ref-CR1 \"Bostrom, N. & Cirkovic, M. M. Global Catastrophic Risks (OUP Oxford, 2011).\"),[2](#ref-CR2 \"Bostrom, N. Superintelligence (Oxford University Press, 2014).\"),[3](#ref-CR3 \"Rees, M. Our Final Hour: A Scientist’s Warning (Hachette UK, 2009).\"),[4](/articles/s41598-019-50145-9#ref-CR4 \"Rees, M. Denial of catastrophic risks. Science. 339, 1123 (2013).\"). The risks stem both from existing technologies such as nuclear weapons, as well as emerging technologies such as synthetic biology and artificial intelligence[5](/articles/s41598-019-50145-9#ref-CR5 \"Cotton-Barratt, O., Farquhar, S., Halstead, J., Schubert, S. & Snyder-Beattie, A. Global catastrophic risks 2016. Global Challenges Foundation (2016).\"). A small but growing number of research institutes, such as the University of Oxford’s *Future of Humanity Institute* and the University of Cambridge’s *Centre for the Study of Existential Risk*, are studying these risks and how to mitigate them. Yet besides them, relatively small resources are explicitly devoted to reducing these risks.\n\nHere, we study the general public’s views of the badness of human extinction. We hypothesize that most people judge human extinction to be bad. But *how bad* do they find it? And *why* do they find it bad? Besides being highly policy-relevant, these questions are central for humanity’s understanding of itself and its place in nature. Human extinction is a pervasive theme in myths and religious writings[6](#ref-CR6 \"Soage, A. B. The End of Days: Essays on the Apocalypse from Antiquity to Modernity. Totalitarian Movements and Political Religions. 10, 375–377 (2009).\"),[7](#ref-CR7 \"Banks, A. C. 
The End of the World As We Know It: Faith, Fatalism, and Apocalypse. Nova Religio. 3, 420–421 (2000).\"),[8](#ref-CR8 \"Hall, J. R. Apocalypse: From Antiquity to the Empire of Modernity (John Wiley & Sons, 2013).\"),[9](#ref-CR9 \"O’Leary, S. D. Arguing the Apocalypse: A Theory of Millennial Rhetoric (Oxford University Press, 1998).\"),[10](/articles/s41598-019-50145-9#ref-CR10 \"Baumgartner, F. J., Graziano, F. & Weber, E. Longing for the End: A History of Millennialism in Western Civilization. Utop. Stud. 11, 214–218 (2000).\").\n\nOne view is that human extinction is bad primarily because it would harm many concrete individuals: it would mean the death of all currently living people. On this view, human extinction is a very bad event, but it is not much worse than catastrophes that kill *nearly* all currently living people—since the difference in terms of numbers of deaths would be relatively small. Another view is that the human extinction is bad primarily because it would mean that the human species would go extinct and that humanity’s future would be lost forever. On this view, human extinction is *uniquely* bad: much worse even than catastrophes killing nearly everyone, since we could recover from them and re-build civilization. Whether extinction is uniquely bad or not depends on which of these considerations is the stronger: the immediate harm, or the long-term consequences.\n\nHere is one way to pit these considerations against each other. Consider three outcomes: no catastrophe, a catastrophe killing 80% (near-extinction), and a catastrophe killing 100% (extinction). According to both considerations, no catastrophe is the best outcome, and extinction the worst outcome. But they come apart regarding the *relative differences* between the three outcomes. If the immediate harm is the more important consideration, then *the first difference*, between no catastrophe and near-extinction, is greater than *the second difference*, between near-extinction and extinction. That is because the first difference is greater in terms of numbers of harmed individuals. On the other hand, if the long-term consequences are more important, then the second difference is greater. The first difference compares two non-extinction outcomes, whereas the second difference compares a non-extinction outcome with an extinction outcome—and only the extinction outcome means that the future would be forever lost.\n\nThis thought-experiment was conceived by the well-known philosopher Derek Parfit[11](/articles/s41598-019-50145-9#ref-CR11 \"Parfit, D. Reasons and Persons (OUP Oxford, 1984).\") (we have adapted the three outcomes slightly; see the Methods section). Parfit argued that most people would find the first difference to be greater, but he himself thought that the second difference is greater. Many other philosophers and other academics working to reduce the risk of human extinction agree with Parfit[12](#ref-CR12 \"Bostrom, N. Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology 9, 1–30 (2002).\"),[13](#ref-CR13 \"Bostrom, N. When machines outsmart humans. Futures. 35, 759–764 (2003).\"),[14](#ref-CR14 \"Bostrom, N. The doomsday argument. Think. 6, 23–28 (2008).\"),[15](/articles/s41598-019-50145-9#ref-CR15 \"Beckstead, N. On the overwhelming importance of shaping the far future. PhD Thesis, Rutgers University-Graduate School-New Brunswick (2013).\"). 
On their view, the badness of human extinction is greatly dependent on how long the future would otherwise be, and what the quality of future people’s lives would be. As the philosopher Nick Bostrom notes, predictions about the long-term future have often been left to theology and fiction, whilst being neglected by science[16](/articles/s41598-019-50145-9#ref-CR16 \"Bostrom, N. The Future of Humanity. In New Waves in Philosophy, eds. Jan-Kyrre Berg Olsen, Evan Selinger, & Soren Riis (New York: Palgrave McMillan, 2009).\"). However, in recent years, researchers have tried to assess what the long-term future may be like. They argue that if humanity does not go extinct, then the future could be both extraordinarily long and extraordinarily good, involving much greater quality of life than the current world. For instance, Nick Bostrom argues that a conservative estimate of humanity’s future potential is “at least 10^16 human lives of normal duration”, which could “be considerably better than the average contemporary human life, which is so often marred by disease, poverty [and] injustice”[17](/articles/s41598-019-50145-9#ref-CR17 \"Bostrom, N. Existential Risk Prevention as Global Priority. Glob Policy. 4, 15–31 (2013).\"). He goes on to argue that less conservative estimates would yield even greater numbers, and a drastically improved quality of life. The argument is that if humanity develops to a sufficiently high technological level, then it will either cause its own extinction via misuse of powerful technologies, or use those technological powers to greatly improve the level of well-being. Furthermore, they argue, based on the view that new happy people coming into existence is morally valuable[11](/articles/s41598-019-50145-9#ref-CR11 \"Parfit, D. Reasons and Persons (OUP Oxford, 1984).\"), that it is of paramount moral importance to make sure that we realize our future potential, and prevent human extinction.\n\nWhile philosophers have discussed the ethics of human extinction for some time, the general public’s views on this matter have not received much study. There are some studies on perceptions of risk of extinction, however. Two studies found that a slight majority do not think that humanity will go extinct, and that most of those who thought that it would go extinct thought that would happen at least 500 years into the future[18](/articles/s41598-019-50145-9#ref-CR18 \"Tonn, B. Beliefs about human extinction. Futures. 41, 766–773 (2009).\"),[19](/articles/s41598-019-50145-9#ref-CR19 \"Tonn, B., Hemrick, A. & Conrad, F. Cognitive representations of the future: Survey results. Futures. 38, 810–829 (2006).\"). There is also a related literature on catastrophic risk in general, focusing primarily on non-extinction catastrophes. For instance, it has been argued that the fact that people use the availability heuristic—they focus on risks which have salient historical examples—leads to a neglect of new types of risks and risks of major catastrophes (which are rare, and therefore less psychologically available)[20](/articles/s41598-019-50145-9#ref-CR20 \"Wiener, J. B. The Tragedy of the Uncommons: On the Politics of Apocalypse. Glob Policy. 7, 67–80 (2016).\"). Similarly, it has been argued that the fact that risk mitigation is a public good leads to under-investment, since it means that it is not possible to exclude free riders from benefiting from it[21](/articles/s41598-019-50145-9#ref-CR21 \"Hauser, O. P., Rand, D. G., Peysakhovich, A. & Nowak, M. A. Cooperating with the future. Nature. 
511, 220–223 (2014).\"). On specific risks, there is a literature on the psychology of climate change showing that people fail to act to mitigate climate change because they engage in temporal discounting[22](/articles/s41598-019-50145-9#ref-CR22 \"Pahl, S., Sheppard, S., Boomsma, C. & Groves, C. Perceptions of time in relation to climate change. WIREs Clim Change. 5, 375–388 (2014).\"),[23](/articles/s41598-019-50145-9#ref-CR23 \"Jacquet, J. et al. Intra- and intergenerational discounting in the climate game. Nat. Clim. Chang. 3, 1025 (2013).\") and motivated reasoning about its severity[24](/articles/s41598-019-50145-9#ref-CR24 \"Kahan, D. M. Ideology, Motivated Reasoning, and Cognitive Reflection: An Experimental Study. Judgm Decis Mak. 8, 407–424 (2013).\"), and because of psychological distance[25](/articles/s41598-019-50145-9#ref-CR25 \"Spence, A., Poortinga, W. & Pidgeon, N. The psychological distance of climate change. Risk Anal. 32, 957–972 (2012).\") (e.g., temporal and social distance). However, to date there have been no studies on how laypeople reason about the moral aspect of human extinction: how bad it would be. Is the extinction of our own species something people care about? Do they recognize it as being fundamentally different in quality from other catastrophes? And if so, why?\n\nResults\n-------\n\n### Study 1\n\nIn Study 1 (US sample, *n* = 183, mean age 38.2, 50.81% female), we studied the general public’s judgments of the badness of human extinction. A large majority of the participants (78.14%, 143/183 participants) found human extinction to be bad on a binary question (bad vs. not bad), and we got similar results on a seven-point scale (1 = *definitely not bad*, 4 = *midpoint*, 7 = *definitely bad*; *M* = 5.61; *SD* = 2.11). Participants also felt strongly that human extinction needs to be prevented (1 = *not at all*, 4 = *midpoint*, 7 = *very strongly*; *M* = 6.01, *SD* = 1.65), that they have a moral obligation to prevent it (1 = *definitely no*, 4 = *midpoint*, 7 = *definitely yes*; *M* = 5.69, *SD* = 1.86), and that funding work to reduce the risk of human extinction is more important than funding other areas of government, such as education, health care and social security (1 = *much less important to fund work to reduce the risk of human extinction*, 4 = *midpoint*, 7 = *much more important*; *M* = 5.43, *SD* = 1.72). Participants believed that provided that humanity will not go extinct, the future is going to be roughly as good as the present (1 = *much worse than the present world*, 4 = *about as good as the present world*, 7 = *much better than the present world*; *M* = 4.48, *SD* = 1.57), and the better they thought the future would be, the worse they considered extinction to be (*r* = 0.51, *P* < 0.001), as measured by the seven-point scale. Similarly, more optimistic[26](/articles/s41598-019-50145-9#ref-CR26 \"Kemper, C. J., Kovaleva, A., Beierlein, C. & Rammstedt, B. Measuring the construct of Optimism-Pessimism with single item indicators. Paper presented at the 4th Conference of the European Survey Research Association (ESRA), Lausanne, Switzerland (2011).\") participants judged extinction to be worse (*r* = 0.32, *P* < 0.001). 
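The correlations reported above (e.g., *r* = 0.51 between expected future quality and judged badness of extinction) are standard bivariate correlations between seven-point ratings. The snippet below is a minimal sketch of how such a figure can be computed, assuming a Pearson correlation in Python with NumPy and SciPy; the ratings are invented for illustration and are not the study data.

```python
# Hypothetical illustration only: correlation between two 7-point ratings,
# e.g. expected quality of the future vs. judged badness of extinction (Study 1).
import numpy as np
from scipy.stats import pearsonr

future_quality = np.array([4, 6, 3, 7, 5, 2, 6, 4, 5, 7])      # 1-7 scale, made-up values
extinction_badness = np.array([5, 7, 3, 7, 6, 2, 7, 4, 5, 6])  # 1-7 scale, made-up values

r, p = pearsonr(future_quality, extinction_badness)  # Pearson r and two-sided p-value
print(f"r = {r:.2f}, P = {p:.3f}")
```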
Participants’ responses to the question whether the world gets better if a happy person comes into existence were close to the midpoint (1 = *definitely not better*, 4 = *midpoint*, 7 = *definitely better*; *M* = 4.45, *SD* = 1.73), and people who thought that that would make the world better were more likely (*r* = 0.22, *P* = 0.003) to find extinction bad. For further details about the results, see Supplementary Materials.\n\n### Study 2a\n\nHaving thus observed that people do find human extinction bad, we turned to studying whether they find it *uniquely* bad relative to non-extinction catastrophes in Study 2a (pre-registered at ; British sample). Participants (*n* = 1,251, mean age 36.6, 35.33% female) were randomly divided into a control condition and four experimental conditions: “the animals condition”, “the “sterilization condition”, “the salience condition” and “the utopia condition” (see below for explanations of the manipulations). Participants in the control condition (257 participants) were presented with the three outcomes described above—no catastrophe, a catastrophe killing 80%, and a catastrophe killing 100%—and were asked how they would rank them from best to worst. As Parfit expected, a large majority (82.88%, 213/257 participants, cf. Fig. [1](/articles/s41598-019-50145-9#Fig1)) ranked no catastrophe as the best outcome and 100% dying as the worst outcome. However, this was just a preliminary question: as per the discussion above, what we were primarily interested in was which difference participants that gave the expected ranking found greater: the first difference (meaning that extinction is not uniquely bad) or the second difference (meaning that extinction is uniquely bad). (Recall that the first difference was the difference between no catastrophe and a catastrophe killing 80%, and the second difference the difference between a catastrophe killing 80% and a catastrophe killing 100%.) We therefore asked participants who gave the expected ranking (but not the other participants) which difference they judged to be greater. We found that most people did not find extinction uniquely bad: only a relatively small minority (23.47%, 50/213 participants) judged the second difference to be greater than the first difference.\n\n**Figure 1**[![figure 1](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-019-50145-9/MediaObjects/41598_2019_50145_Fig1_HTML.png)](/articles/s41598-019-50145-9/figures/1)Proportions of participants who found extinction uniquely bad. (This means that they found the difference, in terms of badness, between a catastrophe killing 80% and a catastrophe killing 100% to be greater than the difference between no catastrophe and a catastrophe killing 80%.) Laypeople consistently did not find extinction uniquely bad in the control condition (*Control*), but did so in a scenario where the future would be very long and good conditional on survival (*Utopia*). The animals condition (*Animals*), sterilization condition (*Sterilization*) and salience condition (*Salience*) yielded in-between results. People explicitly devoted to existential risk reduction (*Existential risk mitigators*) consistently found extinction uniquely bad.\n\n[Full size image](/articles/s41598-019-50145-9/figures/1)As stated, we included four experimental conditions aiming to explain these results. We thought that one reason why participants do not find extinction uniquely bad in the control condition is that they feel strongly for the victims of the catastrophes. 
Therefore, they focus on the immediate suffering and death that the catastrophes cause, which leads them to judge the difference between no one dying and 80% dying to be greater than the difference between 80% dying and 100% dying. To test this hypothesis, we included two conditions designed to trigger a weaker focus on the immediate harm. First, we included a condition where the catastrophes affected an animal species (zebras) rather than humans (“the animals condition”; otherwise identical to the control condition; 246 participants). (We chose zebras because zebra extinction would likely have small effects on humans; in contrast to extinction of, for example, pigs or dogs.) We hypothesized that people focus less on the immediate harm that the catastrophes cause if the catastrophes affect animals rather than humans[27](/articles/s41598-019-50145-9#ref-CR27 \"Caviola, L., Everett, J. A. C., Faber, N. S. The moral standing of animals: Towards a psychology of speciesism. J. Pers. Soc. Psychol., 116, 1011–1029 (2019).\"). Second, we included a condition where the catastrophes led to 80%/100% of the world’s population being unable to have children, rather than getting killed (“the sterilization condition”; otherwise identical to the control condition; 252 participants). We hypothesized that people would focus less strongly on the immediate harm that the catastrophes cause if they lead to sterilization rather than death.\n\nThus, we hypothesized that a greater share of the participants who gave the expected ranking would find extinction uniquely bad in the animals condition and the sterilization condition than in the control condition. We found, first, that a large majority ranked no catastrophe as the best outcome and 100% dying as the worst outcome (the expected ranking) both in the animals condition (89.84%, 221/246 participants) and the sterilization condition (82.54%, 208/252 participants). Subsequently, we found that our hypotheses were confirmed. The proportion of the participants who gave the expected ranking that found extinction uniquely bad was significantly larger (*χ*2(1) = 8.82, *P* = 0.003) in the animals condition (44.34%, 98/221 participants) than in the control condition (23.47%, 50/213 participants). Similarly, the proportion of the participants who gave the expected ranking that found extinction uniquely bad was significantly larger (*χ*2(1) = 23.83, *P* < 0.001) in the sterilization condition (46.63%, 97/208 participants) than in the control condition (23.47%, 50/213 participants).\n\nWe had another hypothesis for why control condition participants do not find extinction uniquely bad, namely that they neglect the long-term consequences of the catastrophes. To test this hypothesis, we included a condition where we made the long-term consequences salient (“the salience condition”; 248 participants). This condition was identical to the control condition, with the exception that we added a brief text explicitly asking the participants to consider the long-term consequences of the three outcomes. It said that if humanity does not go extinct (including if it suffers a non-extinction catastrophe, from which it can recover) it could go on to a long future, whereas that would not happen if humanity went extinct (see the Methods section for the full vignette). We also wanted to know whether participants see empirical information about the quality of the future as relevant for their judgments of the badness of extinction. Does it make a difference how good the future will be? 
We therefore included a maximally positive scenario, the “utopia condition” (248 participants), where it was said that provided that humanity does not go extinct, it “goes on to live for a very long time in a future which is better than today in every conceivable way”. It was also said that “there are no longer any wars, any crimes, or any people experiencing depression or sadness” and that “human suffering is massively reduced, and people are much happier than they are today” (in the scenario where 80% die in a catastrophe, it was said that this occurred after a recovery period; see the Methods section for the full text). Conversely, participants were told that if 100% are killed, then “no humans will ever live anymore, and all of human knowledge and culture will be lost forever.” We hypothesized that both of these manipulations would make more participants judge extinction to be uniquely bad compared with the control condition.\n\nWe found again that a large majority ranked no catastrophe as the best outcome and 100% dying as the worst outcome (the expected ranking) both in the salience condition (77.82%, 193/248 participants) and the utopia condition (86.69%, 215/248 participants). Subsequently, we found that our hypotheses were confirmed. The proportion of the participants who chose the expected ranking that found extinction uniquely bad was significantly larger (*χ*2(1) = 29.90, *P* < 0.001) in the salience condition (50.25%, 97/193 participants) than in the control condition (23.47%, 50/213 participants). Similarly, the proportion of the participants who chose the expected ranking that found extinction uniquely bad was significantly larger (*χ*2(1) = 30.30, *P* < 0.001) in the utopia condition (76.74%, 165/215 participants) than in the control condition (23.47%, 50/213 participants). We also found that there was a significant difference between the utopia condition and the salience condition (*χ*2(1) = 29.90, *P* < 0.001).\n\nOur interpretation of these results is as follows. The utopia manipulation effectively does two things: it highlights the long-term consequences of the outcomes, and it says that unless humanity goes extinct, those consequences are going to be extraordinarily good. The salience manipulation only highlights the long-term consequences. Thus, we can infer that merely highlighting the long-term consequences make people more likely to find extinction uniquely bad, and that adding that the long-term future will be extraordinarily good make them still more likely to find extinction uniquely bad.\n\nLastly, we found that across all conditions, the more cognitively reflective the participants were (as measured by the Cognitive Reflection Test[28](/articles/s41598-019-50145-9#ref-CR28 \"Frederick, S. Cognitive Reflection and Decision Making. J. Econ. Perspect. 19, 25–42 (2005).\")), the more likely they were to judge extinction to be uniquely bad (*Exp*(*B*) = 0.15, *P* = 0.01, Odds ratio = 1.6).\n\nIn conclusion, we find that people do not find extinction uniquely bad when asked without further prompts, and have identified several reasons why that is. As evidenced by the animals and the sterilization conditions, they focus on the immediate harm that the catastrophes cause, because they feel strongly for the victims of the catastrophes—and on that criterion, near-extinction is almost as bad as extinction. As evidenced by the salience condition, they neglect the long-term consequences of the outcomes. 
We also find that participants’ empirical beliefs about the quality of the future make a difference: telling participants that the future will be extraordinarily good makes them significantly more likely to find extinction uniquely bad.\n\n### Study 2b\n\nTo find out whether these results would hold up with different demographics, we aimed to replicate them using a sample of the US general public (pre-registered at ; *N* = 855, mean age 36.85, 48.65% female) in Study 2b. We found again that large majorities ranked no catastrophe as the best outcome and 100% dying as the worst outcome (the expected ranking) in the control condition (87.80%, 144/164 participants), the animals condition (92.44%, 159/172 participants), the sterilization condition (91.62%, 153/167 participants), the salience condition (83.05%, 147/177 participants) and the utopia condition (89.71%, 157/175 participants). And again we found that only a small minority of the participants who chose the expected ranking judged extinction to be uniquely bad in the control condition (18.75%, 27/144 participants). The proportion of the participants who chose the expected ranking who found extinction uniquely bad was significantly larger in the animals condition (34.59%, 55/159 participants; *χ*2(1) = 8.82, *P* = 0.003), the salience condition (39.45%, 58/147 participants; *χ*2(1) = 14.10, *P* < 0.001) and the utopia condition (66.88%, 105/157 participants; *χ*2(1) = 68.72, *P* < 0.001) than in the control condition. We also again found a significant difference between the utopia condition and the salience condition (*χ*2(1) = 21.868, *P* < 0.001). However, in the sterilization condition, only 28.75% (44/153 participants) of the participants who chose the expected ranking found extinction uniquely bad, which meant that the difference with the control condition was not significant on the 0.05-level (*χ*2(1) = 3.55, *P* = 0.059). Lastly, we found again that (across all conditions) the more cognitively reflective the participants were (as measured by the Cognitive Reflection Test), the more likely they were to judge extinction to be uniquely bad (*Exp*(*B*) = 0.21, *P* = 0.005, Odds ratio = 1.2).\n\n### Study 2c\n\nTo further test the robustness of our findings across different demographics, we conducted Study 2c as another replication, this time using a sample of University of Oxford students (*N* = 196, mean age 24.27, 61% female). We only included the control and the utopia conditions. We found again that most participants ranked no catastrophe as the best outcome and 100% dying as the worst outcome (the expected ranking) both in the control condition (65.7%, 65/99 participants) and the utopia condition (84.5%, 82/97 participants). We then found again that a minority of the participants who chose the expected ranking found extinction to be uniquely bad in the control condition (36.92%, 24/65 participants), though this minority was slightly larger than in the two samples of the general public (cf. Fig. [1](/articles/s41598-019-50145-9#Fig1)). We also found again that the proportion of the utopia condition participants who chose the expected ranking that found extinction uniquely bad (76.83%, 63/82 participants) was significantly larger (*χ*2(1) = 22.28, *P* < 0.001) than in the control condition. (These findings were further supported by five supplementary studies; see Supplementary Materials).\n\n### Study 3\n\nIn Studies 2a to 2c, we thus found that when asked without further prompts, laypeople do not find extinction uniquely bad. 
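The condition-versus-control comparisons in Studies 2a and 2b are comparisons of two proportions, which can be run as a chi-square test on a 2×2 table of counts. Below is a minimal sketch, assuming Python with SciPy and using the reported Study 2a control and animals counts; it is not the authors' analysis code, and the statistic it prints need not match the published values exactly (the paper does not say whether a continuity correction was applied).

```python
# Illustrative 2x2 chi-square test of independence on reported counts
# (Study 2a: control 50/213 vs. animals 98/221 judged extinction uniquely bad).
from scipy.stats import chi2_contingency

#                   control   animals
uniquely_bad     = [50,       98]        # judged the 80%-vs-100% difference greater
not_uniquely_bad = [213 - 50, 221 - 98]  # judged the 0%-vs-80% difference greater
table = [uniquely_bad, not_uniquely_bad]

chi2, p, dof, _ = chi2_contingency(table)  # Yates correction is applied by default for 2x2 tables
print(f"chi2({dof}) = {chi2:.2f}, P = {p:.4g}")
```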
In Study 3 (*N* = 71, mean age 30.52, 14.00% female) we aimed to test whether people devoted to preventing human extinction (existential risk mitigators) judge human extinction to be uniquely bad already when asked without further prompts. (Existential risks also include risks that threaten to drastically curtail humanity’s potential[12](#ref-CR12 \"Bostrom, N. Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology 9, 1–30 (2002).\"),[13](#ref-CR13 \"Bostrom, N. When machines outsmart humans. Futures. 35, 759–764 (2003).\"),[14](#ref-CR14 \"Bostrom, N. The doomsday argument. Think. 6, 23–28 (2008).\"),[15](/articles/s41598-019-50145-9#ref-CR15 \"Beckstead, N. On the overwhelming importance of shaping the far future. PhD Thesis, Rutgers University-Graduate School-New Brunswick (2013).\"), without causing it to go extinct, but we focus on risks of human extinction.) This would support the validity of our task by demonstrating a link between participants’ responses and behavior in the real world.\n\nWe recruited participants via the Effective Altruism Newsletter and social media groups dedicated to existential risk reduction, and only included respondents who put down reducing existential risk as their “most important cause”. Again we had two conditions, the control condition and the utopia condition. We hypothesized that a majority of participants would find extinction uniquely bad in both conditions. We found again that most participants ranked no catastrophe as the best outcome and 100% dying as the worst outcome (the expected ranking) both in the control condition (90.32%, 28/31 participants) and the utopia condition (92.50%, 37/40 participants). But unlike the samples in Studies 2a to 2c, and in line with our hypotheses, substantial majorities of the participants who chose the expected ranking found extinction uniquely bad both in the control condition (85.71%, 24/28 participants) and the utopia condition (94.59%, 35/37). The difference between the conditions was not significant (*χ*2(1) = 0.63, *P* = 0.43). In contrast to laypeople, existential risk mitigators thus found human extinction to be uniquely bad even when the description of the outcomes did not include information about the quality of the future. This suggests that judging human extinction to be uniquely bad, as measured by our task, may be a key motivator for devoting oneself to preventing it.\n\nDiscussion\n----------\n\nOur studies show that people find that human extinction is bad, and that it is important to prevent it. However, when presented with a scenario involving no catastrophe, a near-extinction catastrophe and an extinction catastrophe as possible outcomes, they do not see human extinction as *uniquely* bad compared with non-extinction. We find that this is partly because people feel strongly for the victims of the catastrophes, and therefore focus on the immediate consequences of the catastrophes. The immediate consequences of near-extinction are not that different from those of extinction, so this naturally leads them to find near-extinction almost as bad as extinction. Another reason is that they neglect the long-term consequences of the outcomes. 
Lastly, their empirical beliefs about the quality of the future make a difference: telling them that the future will be extraordinarily good makes more people find extinction uniquely bad.\n\nThus, when asked in the most straightforward and unqualified way, participants do not find human extinction uniquely bad. This could partly explain why we currently invest relatively small resources in reducing existential risk. However, these responses should not necessarily be seen as reflecting people’s well-considered views on the badness of human extinction. Rather, it seems that they partly reflect the fact that people often fail to consider the long-term consequences of extinction. Our studies suggest that it could be that if people reflected more carefully, they might to a greater extent agree that extinction is uniquely bad. A suggestive finding with regards to this is that higher scores on the cognitive reflection test predicted a greater tendency to find extinction uniquely bad. This could mean that deliberative thought-processes lead to finding extinction uniquely bad, whereas intuitive thought-processes lead to the opposite conclusion. More research is needed on the role of deliberation and intuition, as well as many other questions, such as the role of cognitive ability, and the ultimate evolutionary causes of why humans struggle to think clearly about their own extinction.\n\nFinally, let us consider possible policy implications. If it is right that human extinction is uniquely bad, then we should arguably invest much more in making sure it does not happen. We should also change policy in many other ways; e.g., shift technology policy in a more cautious direction[29](/articles/s41598-019-50145-9#ref-CR29 \"Farquhar, S., Cotton-Barratt, O., & Snyder-Beattie, A. Pricing Externalities to Balance Public Risks and Benefits of Research. Health Secur 15, 401–408 (2017).\"). On this view, we should, if necessary, be prepared to make substantial sacrifices in order to make sure that humanity realizes its future potential. Hence much hinges on the complex question whether we deem our own extinction to be uniquely bad.\n\nMethods\n-------\n\nAll studies were approved by the University of Oxford’s Central University Research Ethics Committee (approval number: R56657/RE002) and participants in each study gave their informed consent beforehand. All studies were performed in accordance with relevant guidelines and regulations.\n\n### Study 1\n\n#### Participants\n\nWe recruited 210 participants and excluded 27 for not completing the study or failing the attention check.\n\n#### Procedure\n\nLevel of optimism was measured by asking participants “how optimistic are you in general?” (where optimistic people were defined as “people who look to the future with confidence and who mostly expect good things to happen”)[26](/articles/s41598-019-50145-9#ref-CR26 \"Kemper, C. J., Kovaleva, A., Beierlein, C. & Rammstedt, B. Measuring the construct of Optimism-Pessimism with single item indicators. Paper presented at the 4th Conference of the European Survey Research Association (ESRA), Lausanne, Switzerland (2011).\"). In addition to the measures reported above, we gave the Oxford Utilitarianism Scale, the Cognitive Reflection Test and demographic questions to the participants. The Oxford Utilitarianism Scale[30](/articles/s41598-019-50145-9#ref-CR30 \"Kahane, G. et al. Beyond sacrificial harm: A two-dimensional model of utilitarian psychology. 
Psychol Rev, 125, 131–164.\") consists of two subscales: the impartial beneficence (IB) subscale and the instrumental harm (IH) subscale. The OUS-IB measures the degree to which someone values maximizing overall welfare, independent of its recipient. The OUS-IH measures the degree to which someone is willing to accept that harm is done in order to maximize overall welfare. The Cognitive Reflection Test[28](/articles/s41598-019-50145-9#ref-CR28 \"Frederick, S. Cognitive Reflection and Decision Making. J. Econ. Perspect. 19, 25–42 (2005).\") measures the tendency to answer questions reflectively and resist reporting the first response that comes to mind.\n\n### Study 2a\n\n#### Participants\n\nWe recruited 1301 participants via Prolific and excluded 50 for not completing the study or failing an attention check. The study was pre-registered at .\n\n#### Procedure\n\nParticipants were first asked to consider three outcomes, A, B and C, and rank them from best to worst. The outcomes in the control condition were described as follows:\n\n\n**The control condition:**\n\n\n1. (A)\nThere is no catastrophe.\n2. (B)\nThere is a catastrophe that immediately kills 80% of the world’s population.\n3. (C)\nThere is a catastrophe that immediately kills 100% of the world’s population.\n\n\nThis meant that our text differed from Parfit’s[13](/articles/s41598-019-50145-9#ref-CR13 \"Bostrom, N. When machines outsmart humans. Futures. 35, 759–764 (2003).\") as follows. We used “no catastrophe” rather than Parfit’s “peace” because we thought that “peace” had positive associations that could be a potential confounder. We used “a catastrophe” rather than Parfit’s “a nuclear war” because we thought that there was no reason to specify the nature of the catastrophe. And we said that 80%, rather than 99%, die in the non-extinction catastrophe, to make it more plausible that humanity could recover.\n\nThe outcomes in the other conditions were described as follows:\n\n\n**The animals condition:**\n\n\n1. (A)\nThere is no catastrophe.\n2. (B)\nThere is a catastrophe that immediately kills 80% of the world’s zebra population.\n3. (C)\nThere is a catastrophe that immediately kills 100% of the world’s zebra population.\n\n\n\n**The sterilization condition:**\n\n\n1. (A)\nThere is no catastrophe.\n2. (B)\nThere is a catastrophe that immediately causes 80% of the world’s population to go sterile, meaning they cannot have children.\n3. (C)\nThere is a catastrophe that immediately causes 100% of the world’s population to go sterile, meaning they cannot have children.\n\n\n\n**The salience condition:**\n\n\n1. (A)\nThere is no catastrophe.\n2. (B)\nThere is a catastrophe that immediately kills 80% of the world’s population.\n3. (C)\nThere is a catastrophe that immediately kills 100% of the world’s population.\n\n\nPlease rank these three outcomes from best to worst.\n\nWhen you do so, please remember to consider **the long-term consequences** each scenario will have for humanity. If humanity does not go extinct, it could go on to a long future. This is true even if many (but not all) humans die in a catastrophe, since that leaves open the possibility of recovery. However, if humanity goes extinct (if 100% are killed), there will be no future for humanity.\n\n\n**The utopia condition:**\n\n\n1. (A)\nThere is no catastrophe and humanity goes on to live for a very long time in a future which is better than today in every conceivable way. There are no longer any wars, any crimes, or any people experiencing depression or sadness. 
Human suffering is massively reduced, and people are much happier than they are today.\n2. (B)\nThere is a catastrophe that immediately kills 80% of the world’s population. However, humanity eventually recovers to its original size, and then goes on to live for a very long time in a future which is better than today in every conceivable way. There are no longer any wars, any crimes, or any people experiencing depression or sadness. Human suffering is massively reduced, and people are much happier than they are today.\n3. (C)\nThere is a catastrophe that immediately kills 100% of the world’s population. This means that humanity will go extinct, that no humans will ever live anymore, and all of human knowledge and culture will be lost forever.\n\n\nOn the next page, participants who ranked A as the best outcome and C as the worst were again presented with the three outcomes (other participants were excluded), and told:\n\n“We are now interested in your views of how much better A is than B, and how much better B is than C. In terms of badness, which difference is greater: the difference between A and B, or the difference between B and C?”\n\nIn addition to the measures reported above, we gave the Oxford Utilitarianism Scale and demographic questions to participants (see Supplementary Materials for results).\n\n### Study 2b\n\n#### Participants\n\nWe recruited 994 participants and excluded 139 for not completing the study or failing an attention check. In addition to the measures reported above, we gave the Oxford Utilitarianism Scale and demographic questions to the participants (see Supplementary Materials for results). The study was pre-registered at .\n\n#### Procedure\n\nThe procedure was the same as in study 2a.\n\n### Study 2c\n\n#### Participants\n\nWe recruited 204 participants and excluded 8 for not giving an answer. The procedure was the same as for study 2a, except only the control condition and the utopia condition were included. In addition to the measures reported above, we gave the Oxford Utilitarianism Scale and demographic questions to the participants (see Supplementary Materials for results).\n\n### Study 3\n\n#### Participants\n\nWe recruited 196 participants. However, since we were only interested in those effective altruists who consider existential risk mitigation to be the top cause area, only 83 were included into the analysis. 12 participants were excluded for failing an attention check. The final sample was 71 participants.\n\n#### Procedure\n\nThe procedure was the same as for study 2a, except only the control condition and the utopia condition were included. In addition, participants were asked three questions assessing how uniquely bad they found extinction (extinction prevention questions). The first question concerned which scenario (A = a catastrophe kills 50% of the world’s population, but humanity recovers to its original size and goes on to live for a very long time, B = painless extinction) they would want to prevent (1 = *definitely A*, 4 = *midpoint*, 7 = *definitely B*). The second question asked if participants would rather support political party A, which works to reduce the risk of scenario A, or political party B, which works to reduce the risk of scenario B (1 = *definitely A*, 4 = *midpoint*, 7 = *definitely B*). 
The third question asked what they thought the morally right choice for government leaders would be if they had to choose between reducing the risk for either scenario A or B (1 = *definitely reduce the risk of A*, 4 = *midpoint*, 7 = *definitely reduce the risk of B*).\n\nWe also gave the Oxford Utilitarianism Scale, The Cognitive Reflection Test and demographic questions to participants. (See Supplementary Materials for additional results).\n\n\n\n\nData Availability\n-----------------\n\n\nReports of all measures, manipulations, and exclusions, and all data, analysis code, and experimental materials for all studies are available for download at: .\n\n\nReferences\n----------\n\n1. Bostrom, N. & Cirkovic, M. M. *Global Catastrophic Risks* (OUP Oxford, 2011).\n2. Bostrom, N. *Superintelligence* (Oxford University Press, 2014).\n3. Rees, M. *Our Final Hour: A Scientist’s Warning* (Hachette UK, 2009).\n4. Rees, M. Denial of catastrophic risks. *Science.* **339**, 1123 (2013).\n\n[Article](https://doi.org/10.1126%2Fscience.1236756) \n [ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=2013Sci...339.1123R) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC3sXktFKgs78%3D) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Denial%20of%20catastrophic%20risks&journal=Science.&doi=10.1126%2Fscience.1236756&volume=339&publication_year=2013&author=Rees%2CM)\n5. Cotton-Barratt, O., Farquhar, S., Halstead, J., Schubert, S. & Snyder-Beattie, A. Global catastrophic risks 2016. *Global Challenges Foundation* (2016).\n6. Soage, A. B. The End of Days: Essays on the Apocalypse from Antiquity to Modernity. *Totalitarian Movements and Political Religions.* **10**, 375–377 (2009).\n\n[Article](https://doi.org/10.1080%2F14690760903396385) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=The%20End%20of%20Days%3A%20Essays%20on%20the%20Apocalypse%20from%20Antiquity%20to%20Modernity&journal=Totalitarian%20Movements%20and%20Political%20Religions.&doi=10.1080%2F14690760903396385&volume=10&pages=375-377&publication_year=2009&author=Soage%2CAB)\n7. Banks, A. C. The End of the World As We Know It: Faith, Fatalism, and Apocalypse. *Nova Religio.* **3**, 420–421 (2000).\n\n[Article](https://doi.org/10.1525%2Fnr.2000.3.2.420) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=The%20End%20of%20the%20World%20As%20We%20Know%20It%3A%20Faith%2C%20Fatalism%2C%20and%20Apocalypse&journal=Nova%20Religio.&doi=10.1525%2Fnr.2000.3.2.420&volume=3&pages=420-421&publication_year=2000&author=Banks%2CAC)\n8. Hall, J. R. *Apocalypse: From Antiquity to the Empire of Modernity* (John Wiley & Sons, 2013).\n9. O’Leary, S. D. *Arguing the Apocalypse: A Theory of Millennial Rhetoric* (Oxford University Press, 1998).\n10. Baumgartner, F. J., Graziano, F. & Weber, E. Longing for the End: A History of Millennialism in Western Civilization. *Utop. Stud.* **11**, 214–218 (2000).\n\n[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Longing%20for%20the%20End%3A%20A%20History%20of%20Millennialism%20in%20Western%20Civilization&journal=Utop.%20Stud.&volume=11&pages=214-218&publication_year=2000&author=Baumgartner%2CFJ&author=Graziano%2CF&author=Weber%2CE)\n11. Parfit, D. *Reasons and Persons* (OUP Oxford, 1984).\n12. Bostrom, N. Existential risks: Analyzing human extinction scenarios and related hazards. 
*Journal of Evolution and Technology* **9**, 1–30 (2002).\n\n[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Existential%20risks%3A%20Analyzing%20human%20extinction%20scenarios%20and%20related%20hazards&journal=Journal%20of%20Evolution%20and%20Technology&volume=9&pages=1-30&publication_year=2002&author=Bostrom%2CN)\n13. Bostrom, N. When machines outsmart humans. *Futures.* **35**, 759–764 (2003).\n\n[Article](https://doi.org/10.1016%2FS0016-3287%2803%2900026-0) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=When%20machines%20outsmart%20humans&journal=Futures.&doi=10.1016%2FS0016-3287%2803%2900026-0&volume=35&pages=759-764&publication_year=2003&author=Bostrom%2CN)\n14. Bostrom, N. The doomsday argument. *Think.* **6**, 23–28 (2008).\n\n[Article](https://doi.org/10.1017%2FS1477175600002943) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=The%20doomsday%20argument&journal=Think.&doi=10.1017%2FS1477175600002943&volume=6&pages=23-28&publication_year=2008&author=Bostrom%2CN)\n15. Beckstead, N. On the overwhelming importance of shaping the far future. PhD Thesis, Rutgers University-Graduate School-New Brunswick (2013).\n16. Bostrom, N. The Future of Humanity. In *New Waves in* Philosophy, *eds*. Jan-Kyrre Berg Olsen, Evan Selinger, & Soren Riis (New York: Palgrave McMillan, 2009).\n17. Bostrom, N. Existential Risk Prevention as Global Priority. *Glob Policy.* **4**, 15–31 (2013).\n\n[Article](https://doi.org/10.1111%2F1758-5899.12002) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Existential%20Risk%20Prevention%20as%20Global%20Priority&journal=Glob%20Policy.&doi=10.1111%2F1758-5899.12002&volume=4&pages=15-31&publication_year=2013&author=Bostrom%2CN)\n18. Tonn, B. Beliefs about human extinction. *Futures.* **41**, 766–773 (2009).\n\n[Article](https://doi.org/10.1016%2Fj.futures.2009.07.001) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Beliefs%20about%20human%20extinction&journal=Futures.&doi=10.1016%2Fj.futures.2009.07.001&volume=41&pages=766-773&publication_year=2009&author=Tonn%2CB)\n19. Tonn, B., Hemrick, A. & Conrad, F. Cognitive representations of the future: Survey results. *Futures.* **38**, 810–829 (2006).\n\n[Article](https://doi.org/10.1016%2Fj.futures.2005.12.005) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Cognitive%20representations%20of%20the%20future%3A%20Survey%20results&journal=Futures.&doi=10.1016%2Fj.futures.2005.12.005&volume=38&pages=810-829&publication_year=2006&author=Tonn%2CB&author=Hemrick%2CA&author=Conrad%2CF)\n20. Wiener, J. B. The Tragedy of the Uncommons: On the Politics of Apocalypse. *Glob Policy.* **7**, 67–80 (2016).\n\n[Article](https://doi.org/10.1111%2F1758-5899.12319) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=The%20Tragedy%20of%20the%20Uncommons%3A%20On%20the%20Politics%20of%20Apocalypse&journal=Glob%20Policy.&doi=10.1111%2F1758-5899.12319&volume=7&pages=67-80&publication_year=2016&author=Wiener%2CJB)\n21. Hauser, O. P., Rand, D. G., Peysakhovich, A. & Nowak, M. A. Cooperating with the future. 
*Nature.* **511**, 220–223 (2014).\n\n[Article](https://doi.org/10.1038%2Fnature13530) \n [ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=2014Natur.511..220H) \n [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC2cXhtFehsL3E) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Cooperating%20with%20the%20future&journal=Nature.&doi=10.1038%2Fnature13530&volume=511&pages=220-223&publication_year=2014&author=Hauser%2COP&author=Rand%2CDG&author=Peysakhovich%2CA&author=Nowak%2CMA)\n22. Pahl, S., Sheppard, S., Boomsma, C. & Groves, C. Perceptions of time in relation to climate change. *WIREs Clim Change.* **5**, 375–388 (2014).\n\n[Article](https://doi.org/10.1002%2Fwcc.272) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Perceptions%20of%20time%20in%20relation%20to%20climate%20change&journal=WIREs%20Clim%20Change.&doi=10.1002%2Fwcc.272&volume=5&pages=375-388&publication_year=2014&author=Pahl%2CS&author=Sheppard%2CS&author=Boomsma%2CC&author=Groves%2CC)\n23. Jacquet, J. *et al*. Intra- and intergenerational discounting in the climate game. *Nat. Clim. Chang.* **3**, 1025 (2013).\n\n[Article](https://doi.org/10.1038%2Fnclimate2024) \n [ADS](http://adsabs.harvard.edu/cgi-bin/nph-data_query?link_type=ABSTRACT&bibcode=2013NatCC...3.1025J) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Intra-%20and%20intergenerational%20discounting%20in%20the%20climate%20game&journal=Nat.%20Clim.%20Chang.&doi=10.1038%2Fnclimate2024&volume=3&publication_year=2013&author=Jacquet%2CJ)\n24. Kahan, D. M. Ideology, Motivated Reasoning, and Cognitive Reflection: An Experimental Study. *Judgm Decis Mak.* **8**, 407–424 (2013).\n\n[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Ideology%2C%20Motivated%20Reasoning%2C%20and%20Cognitive%20Reflection%3A%20An%20Experimental%20Study&journal=Judgm%20Decis%20Mak.&volume=8&pages=407-424&publication_year=2013&author=Kahan%2CDM)\n25. Spence, A., Poortinga, W. & Pidgeon, N. The psychological distance of climate change. *Risk Anal.* **32**, 957–972 (2012).\n\n[Article](https://doi.org/10.1111%2Fj.1539-6924.2011.01695.x) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=The%20psychological%20distance%20of%20climate%20change&journal=Risk%20Anal.&doi=10.1111%2Fj.1539-6924.2011.01695.x&volume=32&pages=957-972&publication_year=2012&author=Spence%2CA&author=Poortinga%2CW&author=Pidgeon%2CN)\n26. Kemper, C. J., Kovaleva, A., Beierlein, C. & Rammstedt, B. Measuring the construct of Optimism-Pessimism with single item indicators. Paper presented at the 4th Conference of the European Survey Research Association (ESRA), Lausanne, Switzerland (2011).\n27. Caviola, L., Everett, J. A. C., Faber, N. S. The moral standing of animals: Towards a psychology of speciesism. *J*. *Pers*. *Soc*. *Psychol*., **116**, 1011–1029 (2019).\n28. Frederick, S. Cognitive Reflection and Decision Making. *J. Econ. Perspect.* **19**, 25–42 (2005).\n\n[Article](https://doi.org/10.1257%2F089533005775196732) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Cognitive%20Reflection%20and%20Decision%20Making&journal=J.%20Econ.%20Perspect.&doi=10.1257%2F089533005775196732&volume=19&pages=25-42&publication_year=2005&author=Frederick%2CS)\n29. Farquhar, S., Cotton-Barratt, O., & Snyder-Beattie, A. Pricing Externalities to Balance Public Risks and Benefits of Research. 
*Health Secur* **15**, 401–408 (2017).\n\n[Article](https://doi.org/10.1089%2Fhs.2016.0118) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=Pricing%20Externalities%20to%20Balance%20Public%20Risks%20and%20Benefits%20of%20Research&journal=Health%20Secur&doi=10.1089%2Fhs.2016.0118&volume=15&pages=401-408&publication_year=2017&author=Farquhar%2CS&author=Cotton-Barratt%2CO&author=Snyder-Beattie%2CA)\n30. Kahane, G. *et al*. Beyond sacrificial harm: A two-dimensional model of utilitarian psychology. *Psychol Rev*, **125**, 131–164.\n\n[Download references](https://citation-needed.springer.com/v2/references/10.1038/s41598-019-50145-9?format=refman&flavour=references)\n\nAcknowledgements\n----------------\n\nThe Berkeley Existential Risk Initiative, Centre for Effective Altruism, Janggen-Poehn Stiftung, Swiss Study Foundation, and the Oxford Martin School (Oxford Martin Programme on Collective Responsibility for Infectious Disease) supported this research. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors thank Fabienne Sandkühler for her extensive help and comments, Dillon Plunkett for the idea of one of the conditions, and Gregory Lewis, Pablo Stafforini and Andreas Mogensen for their helpful suggestions.\n\nAuthor information\n------------------\n\nAuthor notes1. Stefan Schubert and Lucius Caviola contributed equally.\n\n### Authors and Affiliations\n\n1. Department of Experimental Psychology, University of Oxford, New Radcliffe House, Radcliffe Observatory Quarter, Woodstock Road, OX2 6GG, Oxford, United Kingdom\n\nStefan Schubert, Lucius Caviola & Nadira S. Faber\n2. Oxford Uehiro Centre for Practical Ethics, University of Oxford, 16-17 St Ebbes St, Oxford, OX1 1PT, United Kingdom\n\nNadira S. Faber\n3. College of Life and Environmental Sciences, University of Exeter, Washington Singer Building, Exeter, EX4 4QG, United Kingdom\n\nNadira S. Faber\n\nAuthors1. Stefan Schubert[View author publications](/search?author=Stefan%20Schubert)You can also search for this author in\n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=search&term=Stefan%20Schubert) [Google Scholar](http://scholar.google.co.uk/scholar?as_q=&num=10&btnG=Search+Scholar&as_epq=&as_oq=&as_eq=&as_occt=any&as_sauthors=%22Stefan%20Schubert%22&as_publication=&as_ylo=&as_yhi=&as_allsubj=all&hl=en)\n2. Lucius Caviola[View author publications](/search?author=Lucius%20Caviola)You can also search for this author in\n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=search&term=Lucius%20Caviola) [Google Scholar](http://scholar.google.co.uk/scholar?as_q=&num=10&btnG=Search+Scholar&as_epq=&as_oq=&as_eq=&as_occt=any&as_sauthors=%22Lucius%20Caviola%22&as_publication=&as_ylo=&as_yhi=&as_allsubj=all&hl=en)\n3. Nadira S. Faber[View author publications](/search?author=Nadira%20S.%20Faber)You can also search for this author in\n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=search&term=Nadira%20S.%20Faber) [Google Scholar](http://scholar.google.co.uk/scholar?as_q=&num=10&btnG=Search+Scholar&as_epq=&as_oq=&as_eq=&as_occt=any&as_sauthors=%22Nadira%20S.%20Faber%22&as_publication=&as_ylo=&as_yhi=&as_allsubj=all&hl=en)\n### Contributions\n\nS.S., L.C. and N.S.F. planned the studies. S.S. and L.C. collected and analyzed the data. S.S., L.C. and N.S.F. 
interpreted the data and wrote the paper.\n\n### Corresponding author\n\nCorrespondence to\n [Stefan Schubert](mailto:stefan.schubert@psy.ox.ac.uk).\n\nEthics declarations\n-------------------\n\n\n### Competing Interests\n\n\nThe authors declare no competing interests.\n\n\nAdditional information\n----------------------\n\n**Publisher’s note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\nSupplementary information\n-------------------------\n\n### [Supplementary Materials](https://static-content.springer.com/esm/art%3A10.1038%2Fs41598-019-50145-9/MediaObjects/41598_2019_50145_MOESM1_ESM.pdf)\n\nRights and permissions\n----------------------\n\n\n**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit .", "url": "https://www.nature.com/articles/s41598-019-50145-9", "title": "The Psychology of Existential Risk: Moral Judgments about Human Extinction", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2019-10-20T22:00:00Z", "authors": ["Stefan Schubert", "Lucius Caviola", "Nadira S. Faber"], "summary": [], "id": "85e0a7744deca72ea5410f4a555dfce2"} {"text": "### Subjects\n\n\n* [Electrical and electronic engineering](/subjects/electrical-and-electronic-engineering)\n* [Ethics](/subjects/ethics)\n* [Policy](/subjects/policy)\n* [Science, technology and society](/subjects/science-technology-and-society)\n\n\n\n\n\nDebate about the impacts of AI is often split into two camps, one associated with the near term and the other with the long term. This divide is a mistake — the connections between the two perspectives deserve more attention, say Stephen Cave and Seán S. ÓhÉigeartaigh.\n\n\n\n\n\nThese two sets of issues are often seen as entirely disconnected[1](/articles/s42256-018-0003-2#ref-CR1 \"Baum, S. D. AI Soc. \n https://doi.org/10.1007/s00146-017-0734-3\n \n (2017).\").
Researchers working on near-term issues see longer-term issues as a distraction from real and pressing challenges[8](/articles/s42256-018-0003-2#ref-CR8 \"Calo, R. Artificial Intelligence Policy: A Roadmap (UC Davis, Davis, 2017).\"), or as too distant, uncertain or speculative to allow for productive work now[9](/articles/s42256-018-0003-2#ref-CR9 \"Williams, C. The Register \n https://www.theregister.co.uk/2015/03/19/andrew_ng_baidu_ai/\n \n (2015).\"). On the other hand, those focused on longer-term challenges argue that their potential impact dwarfs that of present-day systems[7](/articles/s42256-018-0003-2#ref-CR7 \"Tegmark, M. Life 3.0. Being Human in the Age of Artificial Intelligence (Allen Lane, New York, 2017).\"), and that these issues therefore deserve a proportionate share of research attention.\n\n
References\n----------\n\n1. Baum, S. D. *AI Soc.* (2017).\n\n[Article](https://doi.org/10.1007%2Fs00146-017-0734-3) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=&journal=AI%20Soc.&doi=10.1007%2Fs00146-017-0734-3&publication_year=2017&author=Baum%2CSD)\n2. Crawford, K. et al. *The AI Now Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term* (AI Now, 2016); \n3. Brundage, M. et al. Preprint at (2018).\n4. Dietterich, T. G. & Horvitz, E. J. *Commun. ACM* **58**, 38–40 (2015).\n\n[Article](https://doi.org/10.1145%2F2770869) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=&journal=Commun.%20ACM&doi=10.1145%2F2770869&volume=58&pages=38-40&publication_year=2015&author=Dietterich%2CTG&author=Horvitz%2CEJ)\n5. Frey, C. B. & Osborne, M. A. *Technol. Forecast. Soc.
Change* **114**, 254–280 (2017).\n\n[Article](https://doi.org/10.1016%2Fj.techfore.2016.08.019) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=&journal=Technol.%20Forecast.%20Soc.%20Change&doi=10.1016%2Fj.techfore.2016.08.019&volume=114&pages=254-280&publication_year=2017&author=Frey%2CCB&author=Osborne%2CMA)\n6. Bostrom, N. *Superintelligence: Paths, Dangers, Strategies* (Oxford Univ. Press, Oxford, 2014).\n\n[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Superintelligence%3A%20Paths%2C%20Dangers%2C%20Strategies&publication_year=2014&author=Bostrom%2CN)\n7. Tegmark, M. *Life 3.0. Being Human in the Age of Artificial Intelligence* (Allen Lane, New York, 2017).\n8. Calo, R. *Artificial Intelligence Policy: A Roadmap* (UC Davis, Davis, 2017).\n\n[Google Scholar](http://scholar.google.com/scholar_lookup?&title=Artificial%20Intelligence%20Policy%3A%20A%20Roadmap&publication_year=2017&author=Calo%2CR)\n9. Williams, C. *The Register* (2015).\n10. *The Dawn of Artificial Intelligence* (US Government Publishing Office, 2016); \n11. Russell, S., Dewey, D. & Tegmark, M. *AI Magazine* **36**, 105–114 (Winter, 2015).\n12. Amodei, D. et al. Preprint at (2016).\n13. Owen, R. et al. in *Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society* (eds Owen, R., Bessant, J. & Heintz, M.) 27–50 (Wiley, Chichester, 2013).\n\n[Download references](https://citation-needed.springer.com/v2/references/10.1038/s42256-018-0003-2?format=refman&flavour=references)\n\nAuthor information\n------------------\n\n### Authors and Affiliations\n\n1. Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK\n\nStephen Cave & Seán S. ÓhÉigeartaigh\n\nAuthors1. Stephen Cave[View author publications](/search?author=Stephen%20Cave)You can also search for this author in\n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=search&term=Stephen%20Cave) [Google Scholar](http://scholar.google.co.uk/scholar?as_q=&num=10&btnG=Search+Scholar&as_epq=&as_oq=&as_eq=&as_occt=any&as_sauthors=%22Stephen%20Cave%22&as_publication=&as_ylo=&as_yhi=&as_allsubj=all&hl=en)\n2. Seán S. ÓhÉigeartaigh[View author publications](/search?author=Se%C3%A1n%20S.%20%C3%93h%C3%89igeartaigh)You can also search for this author in\n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=search&term=Se%C3%A1n%20S.%20%C3%93h%C3%89igeartaigh) [Google Scholar](http://scholar.google.co.uk/scholar?as_q=&num=10&btnG=Search+Scholar&as_epq=&as_oq=&as_eq=&as_occt=any&as_sauthors=%22Se%C3%A1n%20S.%20%C3%93h%C3%89igeartaigh%22&as_publication=&as_ylo=&as_yhi=&as_allsubj=all&hl=en)\n### Corresponding authors\n\nCorrespondence to\n [Stephen Cave](mailto:sjc53@cam.ac.uk) or [Seán S. 
ÓhÉigeartaigh](mailto:so348@cam.ac.uk).\n\nEthics declarations\n-------------------\n\n\n### Competing interests\n\n\nThe authors declare no competing interests.\n\n\nRights and permissions\n----------------------\n\n[Reprints and Permissions](https://s100.copyright.com/AppDispatchServlet?title=Bridging%20near-%20and%20long-term%20concerns%20about%20AI&author=Stephen%20Cave%20et%20al&contentID=10.1038%2Fs42256-018-0003-2©right=Springer%20Nature%20Limited&publication=2522-5839&publicationDate=2019-01-07&publisherName=SpringerNature&orderBeanReset=true)\n\n\n\n\n\nThis article is cited by\n------------------------\n\n\n\n* ### \n[Embedding responsibility in intelligent systems: from AI ethics to responsible AI ecosystems](https://doi.org/10.1038/s41598-023-34622-w)\n\n\n\t+ Bernd Carsten Stahl*Scientific Reports* (2023)\n* ### \n[From computer ethics and the ethics of AI towards an ethics of digital ecosystems](https://doi.org/10.1007/s43681-021-00080-1)\n\n\n\t+ Bernd Carsten Stahl*AI and Ethics* (2022)\n* ### \n[General intelligence disentangled via a generality metric for natural and artificial intelligence](https://doi.org/10.1038/s41598-021-01997-7)\n\n\n\t+ José Hernández-Orallo\n\t+ Bao Sheng Loe\n\t+ Seán Ó hÉigeartaigh*Scientific Reports* (2021)\n* ### \n[Facilitators and Barriers of Artificial Intelligence Adoption in Business – Insights from Opinions Using Big Data Analytics](https://doi.org/10.1007/s10796-021-10219-4)\n\n\n\t+ Arpan Kumar Kar\n\t+ Amit Kumar Kushwaha*Information Systems Frontiers* (2021)\n* ### \n[Our future in the Anthropocene biosphere](https://doi.org/10.1007/s13280-021-01544-8)\n\n\n\t+ Carl Folke\n\t+ Stephen Polasky\n\t+ Brian H. Walker*Ambio* (2021)", "url": "https://www.nature.com/articles/s42256-018-0003-2", "title": "Bridging near- and long-term concerns about AI", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2018-12-31T23:00:00Z", "authors": ["Stephen Cave", "Seán S. Ó hÉigeartaigh"], "summary": [], "id": "95d39eeed2c9d74c09f8c804005c6183"} {"text": "[Download PDF](/articles/s42256-020-0195-0.pdf)\n\n\n\n\n\n\n### Subjects\n\n\n* [Ethics](/subjects/ethics)\n* [Infectious diseases](/subjects/infectious-diseases)\n* [SARS-CoV-2](/subjects/sars-cov-2)\n* [Science, technology and society](/subjects/science-technology-and-society)\n\n\n\n\n\nArtificial intelligence tools can help save lives in a pandemic. However, the need to implement technological solutions rapidly raises challenging ethical issues. We need new approaches for ethics with urgency, to ensure AI can be safely and beneficially used in the COVID-19 response and beyond.\n\n\n\n\n\nThe novel coronavirus pandemic (COVID-19) is the largest global crisis in a generation, hitting the world at a time when artificial intelligence (AI) is showing potential for widespread real-world application. We are currently seeing a rapid increase in proposals for how AI can be used in many stages of pandemic prevention and response. AI can aid in detecting, understanding and predicting the spread of disease, which can provide early warning signs and inform effective interventions[1](/articles/s42256-020-0195-0#ref-CR1 \"van der Schaar, M. et al. Preprint at \nhttp://www.vanderschaar-lab.com/NewWebsite/covid-19/post1/paper.pdf\n\n (2020).\"). AI may improve the medical response to the pandemic in several ways: supporting physicians by automating aspects of diagnosis[2](/articles/s42256-020-0195-0#ref-CR2 \"Wang, S. et al. 
Preprint at \nhttps://doi.org/10.1101/2020.02.14.20023028\n\n (2020).\"), prioritizing healthcare resources[3](/articles/s42256-020-0195-0#ref-CR3 \"Butt, C., Gill, J., Chun, D. & Babu, B. A. Appl. Intell. \nhttps://doi.org/10.1007/s10489-020-01714-3\n\n (2020).\"), and improving vaccine and drug development[4](/articles/s42256-020-0195-0#ref-CR4 \"Zhang, H. et al. Preprint at \nhttps://doi.org/10.20944/preprints202002.0061.v1\n\n (2020).\"). AI also has potential applications beyond immediate response, such as in combating online misinformation about COVID-19[5](/articles/s42256-020-0195-0#ref-CR5 \"Infodemic management - infodemiology. World Health Organization \nhttps://www.who.int/teams/risk-communication/infodemic-management\n\n (2020).\").\n\nThe current crisis presents an unprecedented opportunity to leverage AI for societal benefit. However, the urgency with which new technologies must be deployed raises particularly challenging ethical issues and risks. There is growing concern that the use of AI and data in response to COVID-19 may compromise privacy and civil liberties by incentivizing the collection and processing of large amounts of data, which may often be private or personal[6](/articles/s42256-020-0195-0#ref-CR6 \"Ienca, M. & Vayena, E. Nat. Med. 26, 463–464 (2020).\"). More broadly, although AI clearly has a great deal to offer, we must be careful not to overestimate its potential. Its efficacy will heavily depend on the reliability and relevance of the data available. With the worldwide spread of COVID-19 occurring so quickly, obtaining sufficient data for accurate AI forecasting and diagnosis is challenging. Even where AI models are strictly speaking accurate, they may have differential impacts across subpopulations, with harmful consequences that are difficult to predict in advance[7](/articles/s42256-020-0195-0#ref-CR7 \"Wynants, L. et al. BMJ 369, m1328 (2020).\"). A further concern is that the lack of transparency in AI systems used to aid decision-making around COVID-19 may make it near impossible for the decisions of governments and public officials to be subject to public scrutiny and legitimation[8](/articles/s42256-020-0195-0#ref-CR8 \"Nyrup, R., Whittlestone, J. & Cave, S. Why Value Judgements Should Not Be Automated (Leverhulme Centre for the Future of Intelligence, 2019); \nhttps://doi.org/10.17863/CAM.41552\n\n\n\"). Finally, the current crisis may have longer-term impacts on public trust and norms around the use of AI in society. How these develop will depend on perceptions of how successful and responsible use of AI to address COVID-19 is.\n\nThe challenge of ethics in a crisis\n-----------------------------------\n\nRobust ethics and risk assessment processes are needed to ensure AI is used responsibly in response to COVID-19. However, implementing these at a time of crisis is far from straightforward, especially where new technologies need to be deployed at unprecedented speed and scale. For example, forecasting models have to be available at the early stages of disease spread and make use of all possible data to productively inform policy interventions. 
Current processes for ethics and risk assessment around uses of AI are still relatively immature, and the urgency of a crisis highlights their limitations.\n\nMuch work in AI ethics in recent years has focused on developing high-level principles, but these principles say nothing about what to do when principles come into conflict with one another[9](/articles/s42256-020-0195-0#ref-CR9 \"Whittlestone, J., Nyrup, R., Alexandrova, A. & Cave, S. In Proc. 2019 AAAI/ACM Conf. AI, Ethics, and Society 195–200 (ACM, 2019).\"). For example, principles do not tell us how to balance the potential of AI to save lives (the principle of ‘beneficence’) against other important values such as privacy or fairness. One common suggestion for navigating such tensions is through engagement with diverse stakeholder groups, but this may be difficult to enact with sufficient speed at times of crisis.\n\nWhen new technologies may pose unknown risks, we would ordinarily try to introduce them in gradual, iterative ways, allowing time for issues to be identified and addressed. In the context of a crisis, however, there is a stark trade-off between a cautious approach and the need to deploy technological solutions at scale. For example, there may be pressure to rely on systems with less human oversight and potential for override due to staff shortages and time pressures, but this must be carefully balanced against the risk of failing to notice or override crucial failures.\n\nThis does not mean that ethics should be neglected at times of crisis. It only emphasizes that we must find ways to conduct ethical review and risk assessment with the same urgency that motivates the development of AI-based solutions.\n\nDoing ethics with urgency\n-------------------------\n\nWe suggest that ethics with urgency must at a minimum incorporate the following components: (1) the ability to think ahead rather than dealing with problems reactively, (2) more robust procedures for assuring the behaviour and safety of AI systems, and (3) building public trust through independent oversight.\n\nFirst, ethics with urgency must involve thinking through possible issues and risks as thoroughly as possible before systems are developed and deployed in the world. This need to think ahead is reflected in the notion of ‘ethics by design’: making ethical considerations part of the process of developing new applications of AI, not an afterthought[10](/articles/s42256-020-0195-0#ref-CR10 \"d’Aquin, M. et al. In Proc. 2018 AAAI/ACM Conf. AI, Ethics, and Society 54–59 (ACM, 2018).\"). For example, questions such as ‘what data do we need and what issues might this raise?’ and ‘how do we build this model so that it is possible to interrogate key assumptions?’ need to be considered throughout the development process. This means that experts in ethics and risk assessment need to be involved in teams developing AI-based solutions from the beginning, and much clearer guidelines are needed for engineers and developers to think through these issues. An ethics by design approach should also be supplemented with more extensive foresight work, looking beyond the more obvious and immediate ethical issues, and considering a wider range of longer-term and more systemic impacts. 
By synthesizing diverse sources of expertise, established foresight methodologies can be used to identify new risks and key uncertainties likely to shape the future, and use this to make better informed decisions today[11](/articles/s42256-020-0195-0#ref-CR11 \"The Futures Toolkit: Tools for Futures Thinking and Foresight Across UK Government (Government Office for Science, 2017).\").\n\nSecond, where applications of AI are used at scale in safety-critical domains such as healthcare, ensuring the safety and reliability of those systems across a range of scenarios is of crucial importance. Finding ways to rapidly conduct robust testing and verification of systems will therefore be central to doing ethics with urgency. We suggest that the application of AI in crisis scenarios should in particular be heavily informed by research on best practices for the verification and validation of autonomous systems[12](/articles/s42256-020-0195-0#ref-CR12 \"Lyons, J. B., Clark, M. A., Wagner, A. R. & Schuelke, M. J. AI Mag. 38(3), 37–49 (2017).\"). It may also be worthwhile for governments to fund further work on methods for establishing the reliability of machine learning systems across a range of circumstances, particularly where those systems may be deployed in high-stakes crisis scenarios.\n\nThird, an important aspect of ethics with urgency is building public trust in how AI is being used. If governments use AI systems in ways perceived to be either mistaken or problematically value-laden, this could result in a loss of public trust severe enough to drastically reduce support for beneficial uses of AI not just in this crisis, but also in the future. Building public trust around new uses of technology may be particularly challenging in crisis times, where the need to move fast makes it easier for governments to fall back on opaque and centralized forms of decision-making. Several analyses of past pandemics have argued that transparency and public scrutiny are essential for maintaining public trust[13](/articles/s42256-020-0195-0#ref-CR13 \"O’Malley, P., Rainford, J. & Thompson, A. Bull. World Health Organ. 87, 614–618 (2009).\"). An independent oversight body, responsible for reviewing any potential risks and ethical issues associated with new technologies and producing publicly available reports, could help ensure public transparency. This oversight body could, among other approaches, make use of techniques such as ‘red teaming’ to rigorously challenge systems and their assumptions, unearthing any limitations and biases in the applications being proposed[14](/articles/s42256-020-0195-0#ref-CR14 \"Brundage, M. et al. Preprint at \nhttps://arxiv.org/abs/2004.07213\n\n (2020).\"). Red teaming is widely used in security settings, but can be applied broadly: at its core, red teaming is a way of challenging the blind spots of a team by explicitly looking for flaws from an outsider or adversarial perspective. As well as allowing developers to identify and fix issues before deployment, such processes could help assure public stakeholders that the interests and values of different groups are being thoroughly considered, and that all eventualities are prepared for.\n\nConclusion\n----------\n\nAs the COVID-19 pandemic illustrates, times of crisis can necessitate rapid deployment of new technologies in order to save lives. However, this urgency both makes it more likely that ethical issues and risks will arise, and makes them more challenging to address. 
Rather than neglecting ethics, we must find ways to do ethics with urgency too. We strongly encourage technologists, ethicists, policymakers and healthcare professionals to consider how ethics can be implemented at speed in the ongoing response to the COVID-19 crisis. If ethical practices can be implemented with urgency, the current crisis could provide an opportunity to drive greater application of AI for societal benefit, and to build public trust in such applications.\n\n\n\n\nReferences\n----------\n\n1. van der Schaar, M. et al. Preprint at (2020).\n2. Wang, S. et al. Preprint at (2020).\n3. Butt, C., Gill, J., Chun, D. & Babu, B. A. *Appl. Intell.* (2020).\n4. Zhang, H. et al. Preprint at (2020).\n5. Infodemic management - infodemiology. *World Health Organization* (2020).\n6. Ienca, M. & Vayena, E. *Nat. Med.* **26**, 463–464 (2020).\n\n[Article](https://doi.org/10.1038%2Fs41591-020-0832-5) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=&journal=Nat.%20Med.&doi=10.1038%2Fs41591-020-0832-5&volume=26&pages=463-464&publication_year=2020&author=Ienca%2CM&author=Vayena%2CE)\n7. Wynants, L. et al. *BMJ* **369**, m1328 (2020).\n\n[Article](https://doi.org/10.1136%2Fbmj.m1328) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=&journal=BMJ&doi=10.1136%2Fbmj.m1328&volume=369&publication_year=2020&author=Wynants%2CL)\n8. Nyrup, R., Whittlestone, J. & Cave, S. *Why Value Judgements Should Not Be Automated* (Leverhulme Centre for the Future of Intelligence, 2019); \n9. Whittlestone, J., Nyrup, R., Alexandrova, A. & Cave, S. In *Proc. 2019 AAAI/ACM Conf. AI, Ethics, and Society* 195–200 (ACM, 2019).\n10. d’Aquin, M. et al. In *Proc*. *2018 AAAI/ACM Conf. AI, Ethics, and Society* 54–59 (ACM, 2018).\n11. *The Futures Toolkit: Tools for Futures Thinking and Foresight Across UK Government* (Government Office for Science, 2017).\n12. Lyons, J. B., Clark, M. A., Wagner, A. R. & Schuelke, M. J. *AI Mag.* **38**(3), 37–49 (2017).\n\n[Article](https://doi.org/10.1609%2Faimag.v38i3.2717) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=&journal=AI%20Mag.&doi=10.1609%2Faimag.v38i3.2717&volume=38&issue=3&pages=37-49&publication_year=2017&author=Lyons%2CJB&author=Clark%2CMA&author=Wagner%2CAR&author=Schuelke%2CMJ)\n13. O’Malley, P., Rainford, J. & Thompson, A. *Bull. World Health Organ.* **87**, 614–618 (2009).\n\n[Article](https://doi.org/10.2471%2FBLT.08.056689) \n [Google Scholar](http://scholar.google.com/scholar_lookup?&title=&journal=Bull.%20World%20Health%20Organ.&doi=10.2471%2FBLT.08.056689&volume=87&pages=614-618&publication_year=2009&author=O%E2%80%99Malley%2CP&author=Rainford%2CJ&author=Thompson%2CA)\n14. Brundage, M. et al. Preprint at (2020).\n\n[Download references](https://citation-needed.springer.com/v2/references/10.1038/s42256-020-0195-0?format=refman&flavour=references)\n\nAuthor information\n------------------\n\n### Authors and Affiliations\n\n1. Centre for the Study of Existential Risk, University of Cambridge, Cambridge, UK\n\nAsaf Tzachor & Lalitha Sundaram\n2. Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK\n\nJess Whittlestone & Seán Ó hÉigeartaigh\n\nAuthors1. 
Asaf Tzachor[View author publications](/search?author=Asaf%20Tzachor)You can also search for this author in\n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=search&term=Asaf%20Tzachor) [Google Scholar](http://scholar.google.co.uk/scholar?as_q=&num=10&btnG=Search+Scholar&as_epq=&as_oq=&as_eq=&as_occt=any&as_sauthors=%22Asaf%20Tzachor%22&as_publication=&as_ylo=&as_yhi=&as_allsubj=all&hl=en)\n2. Jess Whittlestone[View author publications](/search?author=Jess%20Whittlestone)You can also search for this author in\n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=search&term=Jess%20Whittlestone) [Google Scholar](http://scholar.google.co.uk/scholar?as_q=&num=10&btnG=Search+Scholar&as_epq=&as_oq=&as_eq=&as_occt=any&as_sauthors=%22Jess%20Whittlestone%22&as_publication=&as_ylo=&as_yhi=&as_allsubj=all&hl=en)\n3. Lalitha Sundaram[View author publications](/search?author=Lalitha%20Sundaram)You can also search for this author in\n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=search&term=Lalitha%20Sundaram) [Google Scholar](http://scholar.google.co.uk/scholar?as_q=&num=10&btnG=Search+Scholar&as_epq=&as_oq=&as_eq=&as_occt=any&as_sauthors=%22Lalitha%20Sundaram%22&as_publication=&as_ylo=&as_yhi=&as_allsubj=all&hl=en)\n4. Seán Ó hÉigeartaigh[View author publications](/search?author=Se%C3%A1n%20%C3%93%20h%C3%89igeartaigh)You can also search for this author in\n [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=search&term=Se%C3%A1n%20%C3%93%20h%C3%89igeartaigh) [Google Scholar](http://scholar.google.co.uk/scholar?as_q=&num=10&btnG=Search+Scholar&as_epq=&as_oq=&as_eq=&as_occt=any&as_sauthors=%22Se%C3%A1n%20%C3%93%20h%C3%89igeartaigh%22&as_publication=&as_ylo=&as_yhi=&as_allsubj=all&hl=en)\n### Corresponding author\n\nCorrespondence to\n [Asaf Tzachor](mailto:at875@cam.ac.uk).\n\nEthics declarations\n-------------------\n\n\n### Competing interests\n\n\nThe authors declare no competing interests.\n\n\nRights and permissions\n----------------------\n\n[Reprints and Permissions](https://s100.copyright.com/AppDispatchServlet?title=Artificial%20intelligence%20in%20a%20crisis%20needs%20ethics%20with%20urgency&author=Asaf%20Tzachor%20et%20al&contentID=10.1038%2Fs42256-020-0195-0©right=Springer%20Nature%20Limited&publication=2522-5839&publicationDate=2020-06-22&publisherName=SpringerNature&orderBeanReset=true)\n\n\n\n\n\nThis article is cited by\n------------------------\n\n\n\n* ### \n[Dismantling AI capitalism: the commons as an alternative to the power concentration of Big Tech](https://doi.org/10.1007/s00146-022-01437-8)\n\n\n\t+ Pieter Verdegem*AI & SOCIETY* (2022)\n* ### \n[Foundations for the future: institution building for the purpose of artificial intelligence governance](https://doi.org/10.1007/s43681-021-00093-w)\n\n\n\t+ Charlotte Stix*AI and Ethics* (2022)\n* ### \n[The ethical use of high-performance computing and artificial intelligence: fighting COVID-19 at Barcelona Supercomputing Center](https://doi.org/10.1007/s43681-021-00056-1)\n\n\n\t+ Ulises Cortés\n\t+ Atia Cortés\n\t+ Enric Àlvarez*AI and Ethics* (2022)\n* ### \n[Synthetic data in machine learning for medicine and healthcare](https://doi.org/10.1038/s41551-021-00751-8)\n\n\n\t+ Richard J. Chen\n\t+ Ming Y. 
Lu\n\t+ Faisal Mahmood*Nature Biomedical Engineering* (2021)", "url": "https://www.nature.com/articles/s42256-020-0195-0", "title": "Artificial intelligence in a crisis needs ethics with urgency", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2020-06-30T22:00:00Z", "authors": ["Asaf Tzachor", "Jess Whittlestone", "Lalitha Sundaram", "Seán Ó hÉigeartaigh"], "summary": [], "id": "9640e42d6bfae810e033d6ac89e9d141"} {"text": "### Subjects\n\n\n* [Conferences and meetings](/subjects/conferences-and-meetings)\n* [Policy](/subjects/policy)\n* [Publishing](/subjects/publishing)\n\n\n\n\n\nAbstract\n--------\n\nTurning principles into practice is one of the most pressing challenges of artificial intelligence (AI) governance. In this Perspective, we reflect on a governance initiative by one of the world’s largest AI conferences. In 2020, the Conference on Neural Information Processing Systems (NeurIPS) introduced a requirement for submitting authors to include a statement on the broader societal impacts of their research. Drawing insights from similar governance initiatives, including institutional review boards (IRBs) and impact requirements for funding applications, we investigate the risks, challenges and potential benefits of such an initiative. Among the challenges, we list a lack of recognized best practice and procedural transparency, researcher opportunity costs, institutional and social pressures, cognitive biases and the inherently difficult nature of the task. The potential benefits, on the other hand, include improved anticipation and identification of impacts, better communication with policy and governance experts, and a general strengthening of the norms around responsible research. To maximize the chance of success, we recommend measures to increase transparency, improve guidance, create incentives to engage earnestly with the process, and facilitate public deliberation on the requirement’s merits and future. 
Perhaps the most important contributions from this analysis are the insights we can gain regarding effective community-based governance and the role and responsibility of the AI research community more broadly.\n
Acknowledgements\n----------------\n\nWe thank J. Tenenbaum, Y. Gal, T. Shevlane and colleagues at the Centre for the Governance of AI for helpful feedback and comments.\n\nAuthor information\n------------------\n\n### Authors and Affiliations\n\n1. Institute for Ethics in AI, University of Oxford, Oxford, UK\n\nCarina E. A. Prunkl\n2. Future of Humanity Institute, University of Oxford, Oxford, UK\n\nCarina E. A. Prunkl, Carolyn Ashurst, Markus Anderljung, Jan Leike & Allan Dafoe\n3. Department of Computer Science, University of Oxford, Oxford, UK\n\nHelena Webb\n\nAuthors\n1. Carina E. A. Prunkl\n2. Carolyn Ashurst\n3. Markus Anderljung\n4. Helena Webb\n5. Jan Leike\n6. 
Allan Dafoe\n### Corresponding author\n\nCorrespondence to [Carina E. A. Prunkl](mailto:carina.prunkl@philosophy.ox.ac.uk).\n\nEthics declarations\n-------------------\n\n### Competing interests\n\nThe authors declare no competing interests.\n\nAdditional information\n----------------------\n\n**Peer review information** *Nature Machine Intelligence* thanks Gillian Hadfield, Sean Legassick and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.\n\n**Publisher’s note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\nThis article is cited by\n------------------------\n\n* [Operationalising AI governance through ethics-based auditing: an industry case study](https://doi.org/10.1007/s43681-022-00171-7)\n\t+ Jakob Mökander\n\t+ Luciano Floridi, *AI and Ethics* (2023)\n* [Advancing ethics review practices in AI research](https://doi.org/10.1038/s42256-022-00585-2)\n\t+ Madhulika Srikumar\n\t+ Rebecca Finlay\n\t+ Joelle Pineau, *Nature Machine Intelligence* (2022)\n* [Ethics methods are required as part of reporting guidelines for artificial intelligence in healthcare](https://doi.org/10.1038/s42256-022-00479-3)\n\t+ Viknesh Sounderajah\n\t+ Melissa D. McCradden\n\t+ Ara Darzi, *Nature Machine Intelligence* (2022)\n* [Much to discuss in AI ethics](https://doi.org/10.1038/s42256-022-00598-x), *Nature Machine Intelligence* (2022)\n* [Dual use of artificial-intelligence-powered drug discovery](https://doi.org/10.1038/s42256-022-00465-9)\n\t+ Fabio Urbina\n\t+ Filippa Lentzos\n\t+ Sean Ekins, *Nature Machine Intelligence* (2022)", "url": "https://www.nature.com/articles/s42256-021-00298-y", "title": "Institutionalizing ethics in AI through broader impact requirements", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2021-01-31T23:00:00Z", "authors": ["Carina E. A. Prunkl", "Carolyn Ashurst", "Markus Anderljung", "Helena Webb", "Jan Leike", "Allan Dafoe"], "summary": [], "id": "951d27830e22c353f5d62ca10d3d8670"} {"text": "Front Syst Neurosci. 2014; 8: 107. Published online 2014 Jun 11. 
doi: [10.3389/fnsys.2014.00107](//doi.org/10.3389%2Ffnsys.2014.00107) PMCID: PMC4052735 PMID: [24999320](https://pubmed.ncbi.nlm.nih.gov/24999320)\n\nPharmacological cognitive enhancement—how neuroscientific research could advance ethical debate\n===============================================================================================\n\n[Hannah Maslen](https://pubmed.ncbi.nlm.nih.gov/?term=Maslen%20H%5BAuthor%5D),1,\* [Nadira Faulmüller](https://pubmed.ncbi.nlm.nih.gov/?term=Faulmüller%20N%5BAuthor%5D),2,3 and [Julian Savulescu](https://pubmed.ncbi.nlm.nih.gov/?term=Savulescu%20J%5BAuthor%5D)4\n\n1Oxford Martin School, University of Oxford, Oxford, UK\n2Department of Experimental Psychology, University of Oxford, Oxford, UK\n3Department Values, Technology and Innovation, Delft University of Technology, Delft, Netherlands\n4Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK\n\nEdited by: Mikhail Lebedev, Duke University, USA\nReviewed by: Elisabeth Hildt, University of Mainz, Germany; Brendon Boot, Harvard University Medical School, USA; Patricia Anne O'Malley, Miami Valley Hospital Center of Nursing Excellence, USA\n\*Correspondence: Hannah Maslen, Oxford Martin School, University of Oxford, 34 Broad Street, Oxford, OX1 3BD, UK. e-mail: hannah.maslen@philosophy.ox.ac.uk\nThis article was submitted to the journal Frontiers in Systems Neuroscience. Received 2014 Jan 31; Accepted 2014 May 20.\nCopyright © 2014 Maslen, Faulmüller and Savulescu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.\n\nAbstract\n--------\n\nThere are numerous ways people can improve their cognitive capacities: good nutrition and regular exercise can produce long-term improvements across many cognitive domains, whilst commonplace stimulants such as coffee temporarily boost levels of alertness and concentration. Effects like these have been well-documented in the medical literature and they raise few (if any) ethical issues. More recently, however, clinical research has shown that the off-label use of some pharmaceuticals can, under certain conditions, have modest cognition-improving effects. Substances such as methylphenidate and modafinil can improve capacities such as working memory and concentration in some healthy individuals. 
Unlike their more mundane predecessors, these methods of “cognitive enhancement” are thought to raise a multitude of ethical issues. This paper presents the six principal ethical issues raised in relation to pharmacological cognitive enhancers (PCEs)—issues such as whether: (1) the medical safety-profile of PCEs justifies restricting or permitting their elective or required use; (2) the enhanced mind can be an “authentic” mind; (3) individuals might be coerced into using PCEs; (4) there is a meaningful distinction to be made between the treatment vs. enhancement effect of the same PCE; (5) unequal access to PCEs would have implications for distributive justice; and (6) PCE use constitutes cheating in competitive contexts. In reviewing the six principal issues, the paper discusses how neuroscientific research might help advance the ethical debate. In particular, the paper presents new arguments about the contribution neuroscience could make to debates about justice, fairness, and cheating, ultimately concluding that neuroscientific research into “personalized enhancement” will be essential if policy is to be truly informed and ethical. We propose an “ethical agenda” for neuroscientific research into PCEs.\n\n**Keywords:** cognitive enhancement, brain function augmentation, ethics, modafinil, ritalin, justice, cheating, personalized enhancement\n\nIntroduction\n------------\n\nRecent research in neuroscience and pharmacology has demonstrated that various pharmaceuticals can have modest cognition-enhancing effects in healthy individuals (for reviews, see Repantis et al., [2010](#B47); Husain and Mehta, [2011](#B28)). For example, some studies have shown that modafinil—originally developed for the treatment of narcolepsy—can improve various dimensions of cognitive function in sleep-deprived (Wesensten et al., [2005](#B63); Thomas and Kwong, [2006](#B59)) and non-sleep-deprived healthy adults (Turner et al., [2003](#B61); Müller et al., [2004](#B41)). Similarly, methylphenidate—originally developed for the treatment of Attention Deficit Hyperactivity Disorder (ADHD)—has been shown to improve spatial working memory and planning in healthy adults (Elliott et al., [1997](#B19); Mehta et al., [2000](#B35)).\n\nUnlike the more mundane methods for improving cognitive function—such as exercise and good nutrition (Dresler et al., [2012](#B17))—these pharmaceutical cognitive enhancers (PCEs) are thought to raise a host of ethical issues for individuals and society (Greely et al., [2008](#B25); Bostrom and Sandberg, [2009](#B8)). At the individual level, concerns are raised about medical safety and side effects, the authenticity of the enhanced mind and the value of achievements facilitated by pharmaceutical intervention. At the societal level, ethical questions can be asked about whether the availability of PCEs would increase or undermine equality, and about whether individuals will be directly or indirectly coerced into using PCEs. Further normative questions emerge particularly in the healthcare setting: should we be drawing a sharp line between treatment and enhancement and should individuals be given access to PCEs through medical professionals?\n\nIn this paper, we outline the key issues at stake in the normative debate about pharmacological cognitive enhancement (PCE) and, for each issue, suggest the contribution that neuroscientific research could make. The greatest contribution will be made to the discussions surrounding the safety and efficacy of PCEs. 
Although the question of what harms are worth risking in the pursuit of certain benefits is to a large extent normative, the dearth of evidence about the effectiveness and safety of PCEs in real-world contexts renders the discussion mostly hypothetical at this point. More research on the risks of dependency is also urgently needed. Data of this kind will be crucial for discussions about regulation, and for debates about the permissibility of requiring or encouraging people to use PCEs.\n\nIn addition to the contribution neuroscience will make to understanding the risk-benefit profiles of PCEs, we suggest that a more nuanced understanding of the neural systems affected by different substances will enrich the debate about whether PCE use constitutes cheating. Also related to cheating, we further suggest that the neuroscientific evidence on the functional trade-offs precipitated by some PCEs adds an important dimension to the debate about whether achievements facilitated by PCEs should be seen to be effortless and involve little sacrifice. Drawing together our conclusions, we propose an “ethical agenda” for future neuroscientific research on PCE. This agenda sets out what sort of research would help move the ethical debates forward, and why. Resolving these debates will be crucial for ensuring that society responds to the increasing use of PCE in the most responsible, fair and rational way. For a summary of our “ethical agenda” for neuroscientific research, see [Table 1](/pmc/articles/PMC4052735/table/T1/).\n\n### Table 1\n\n**Summary of ethical agenda for neuroscientific research**.\n\n| **Suggested type of study** | **Advancement in ethical debate** |\n| --- | --- |\n| Longitudinal studies investigating the long-term safety profile of PCEs | This is perhaps the most pressing task for neuroscientists. The long-term, real-world safety profile of PCEs is of considerable import to potential users and to all debates about PCE ethics and policy. In relation to the latter concerns, longitudinal studies will advance ethical debates about: (1) whether PCEs should be placed on the open market for enhancement purposes (and with what restrictions), and (2) whether employees doing particular types of jobs can legitimately be required to take PCEs |\n| Identification of pathology associated with mental or psychiatric disorders or limitations to enable classificatory separation of conditions which are diseases from those which constitute normal human variation | Will advance the ethical debate about whether the administration and effects of particular PCEs constitute treatment or enhancement, and how resources should be deployed accordingly |\n| Identification of the effects of PCEs in targeted and specified populations of ethical significance, such as those who are worst off. In particular, further research into the baseline effect should be conducted | Will advance the debate about distributive justice and access to PCEs. If PCEs have differential effects on those who are already worst off, this will be highly relevant to their permissibility and just distribution |\n| More precise distinction between the different cognitive effects of different PCEs | Will (1) be of central relevance to whether certain putative PCEs will be used for enhancement and, if so, in which contexts and (2) advance the debate on cheating in competitive contexts: some effects (e.g., creativity) might be considered more unfair than others (e.g., wakefulness) and enhancing motivation vs. 
enhancing effectiveness might be considered relevant to the value of any resulting achievements |\n| Investigation of the functional trade-offs associated with different PCEs | Will (1) be of central relevance to whether certain putative PCEs will be used for enhancement and, if so, in which contexts and will (2) advance the debate about the nature of the sacrifice possibly required for achievements to have value. It will also (3) advance the debate about the practicality and legitimacy of requiring certain people to take PCEs |\n| Pursuit of a “personalized enhancement” approach to bring us closer to understanding what effect any particular PCE will have in any particular person | Will be relevant to many (if not all) ethical debates and policy considerations including: (1) whether particular people could legitimately be required to take PCEs in certain contexts, (2) who should be given priority access to which PCEs, (3) whether unequal effects have ramifications for cheating. Only when we can predict the *personal* benefits and costs of enhancement can policy be truly informed and ethical |\n\nOverview of pharmacological cognitive enhancement\n-------------------------------------------------\n\nWhat it means to “enhance” is notoriously difficult to pin down. To enhance is essentially to improve or increase, but what this improvement must be relative to is not obvious. On the broadest definitions of enhancement, some capacity is enhanced if it is improved relative to its prior level of functioning such that it increases the individual's chances of leading a good life—enhancement thus occurs regardless of how well- or poorly-functioning the capacity originally was (Savulescu et al., [2011](#B53)). On more restrictive definitions of enhancement, a capacity is enhanced if it is improved beyond a particular point—perhaps a species mean or agreed “normal” level of functioning (c.f. Sabin and Daniels, [1994](#B49)). Others define enhancement as any improvement which goes beyond correcting pathology. For example: “A cognitively enhanced person [… ] is not necessarily somebody with particularly high (let alone super-human) cognitive capacities. A cognitively enhanced person, rather, is somebody who has benefited from an intervention that improves the performance of some cognitive subsystem without correcting some specific, identifiable pathology or dysfunction of that subsystem” (Bostrom and Sandberg, [2009](#B8)). In this paper, we adopt the broader understanding of cognitive enhancement. We do this in part because the substances currently available and likely to be available in the near future effect only modest improvements (Husain and Mehta, [2011](#B28)), but also because we believe that any line intended to mark the point at which an improvement counts as enhancement necessarily involves a value judgement involving normative (ethical) considerations.\n\nMost of the substances cited as putative PCEs were originally developed for clinical use, to treat conditions that are at least partly characterized by some observable cognitive defect. Here, again, it is sometimes difficult to decide what should count as a cognitive *defect*. However, in the case of defective or deficient capacities, decisions must be made about where to place the line to determine who should receive medical attention and resources. 
For example, two of the substances receiving the most attention from those interested in enhancement—methylphenidate and modafinil—were originally developed to treat the symptoms of ADHD and narcolepsy, respectively. More recently, however, these substances have been used off-label by healthy individuals to improve their memories, level of alertness, or powers of concentration (e.g., Maher, [2008](#B34)). Other substances with some modest enhancing effects on cognition include donepezil, dopamine agonists (such as d-amphetamine, bromocriptine, and pergolide), guanfacine, atomoxetine, reboxetine, galantamine, rivastigmine, and memantine. Working pharmacologically in different ways, these substances have been shown to improve cognitive functions such as response inhibition, working memory, episodic memory, attention, vigilance, and incidental learning (see de Jongh et al., [2008](#B15); Lanni et al., [2008](#B33); Husain and Mehta, [2011](#B28)). However, this limited evidence of effectiveness should be cautiously considered alongside studies producing null results and some evidence of task-specific impairments (see Hall and Lucke, [2010](#B27) and Advokat, [2010](#B2) for less optimistic reviews of the scientific literature on PCE).\n\nThe prospect of being able to enhance any of these cognitive functions probably would be attractive to many individuals. Whether the goal of such enhancement would be to perform better at work, to learn a skill or language quicker, to decrease the need for rest in leisure time, or even just to experience one's mind as “sharper,” improving cognition would presumably come with many benefits. Data from various prevalence studies indicate that there are groups of individuals who use some of the substances listed above for purposes of studying, to combat jet-lag or even to facilitate completion of household chores (for a review of student uses, see Smith and Farah, [2011](#B57); see also Maher, [2008](#B34)).\n\nWhilst the neuroscientific literature is reporting some modest enhancement effects of these substances on the cognition of healthy individuals (c.f. Husain and Mehta, [2011](#B28)), the ethical literature has been raising and responding to a variety of issues pertaining to their use (for overview see Greely et al., [2008](#B25); Bostrom and Sandberg, [2009](#B8)). Some of these issues are practical, some socio-political and others relate to the individual user. The overarching goal is to ascertain how permissible and how moral PCE use is and how society and regulatory bodies should respond to it. Although the ethical debate is principally a normative enterprise, it cannot reach firm conclusions about how to proceed based purely on hypothetical reasoning and untutored speculation: it must be informed by neuroscientific research providing the empirical facts about PCEs. In what follows, we outline the key issues in the enhancement debate, emphasizing where we think neuroscientific research might have particular importance for the normative debate.\n\nEthical debate and the relevance of neuroscientific research\n------------------------------------------------------------\n\n### Medical safety and effectiveness\n\nIn many ethical discussions of cognitive enhancers the first issue to be raised (often to be set aside so that there can be any further discussion at all) is whether cognitive enhancers are *medically safe* to use. 
Since there are no longitudinal studies yet examining the long-term use of pharmaceuticals such as modafinil and methylphenidate, some authors argue that we currently do not know enough about the potential dangers and that the availability and use of PCEs should be avoided on this basis (e.g., Drabiak-Syed, [2011](#B16); Boot et al., [2012](#B7)).\n\nDespite the huge interest in PCE from philosophers and scientists, the evidence of their *effectiveness* is still inconclusive. Moreover, where there is evidence of enhancement effects, they often tend to be limited to improvements on specific tasks, are only seen at certain dosages and are not observed in all people (Ragan et al., [2013](#B45); Farah et al., [2014](#B21)). Crucially, it must be remembered that the degree and nature of any cognitive improvements will be different for each PCE, and so no sweeping claims should be made about the effectiveness of PCEs in general. In terms of both effectiveness and safety, it should also be noted that short-term studies carried out in laboratory settings are not representative of long-term use in real-world contexts.\n\nIn their meta-analysis of randomized controlled trials of methylphenidate, Repantis et al. ([2010](#B47)) found a significant improvement in the long-term memory of healthy participants, particularly when there was a longer interval between the learning phase and recall. However, the meta-analysis revealed no significant improvements in attention, mood or executive functions. Similar findings emerged from Farah et al.'s ([2014](#B21)) review of more than fifty experiments on the effects of amphetamine and methylphenidate: they found convincing evidence of an enhancing effect of stimulants on learning under some circumstances, specifically when the retention interval between study and test was longer than an hour, but not at shorter intervals. They also concluded that the evidence for improvement of executive functions was much less clear. There is some evidence to suggest that the effects of methylphenidate on cognitive control are only significantly positive in participants whose performance on placebo was lowest (Smith and Farah, [2011](#B57)).\n\nIn relation to the effectiveness of modafinil, Farah et al.'s ([2014](#B21)) recent review of single-dose studies of modafinil concluded that there is clear evidence of enhanced executive function and memory for sleep-deprived individuals but, for rested adults, whilst there were some positive findings for specific tasks such as those requiring inhibitory control, there were also a large number of null results and the occasional finding of impairment. They refer to this pattern—of limited improvements on some specific tasks and impairment on others—as being “familiar” for PCEs.\n\nThere are also some reviews of the effectiveness of anti-dementia medications for cognitive enhancement. These include acetylcholinesterase inhibitors such as donepezil, rivastigmine, and galantamine. A review conducted by Repantis ([2013](#B46)) concluded that the few existing studies of effects in healthy participants provide no consistent evidence for a neuroenhancement effect. In the case of donepezil, there was some evidence to suggest improvements in retention of training on complex aviation tasks (Yesavage et al., [2002](#B65)) and improvements in verbal memory and episodic memory (Gron et al., [2005](#B26)).
However, other studies showed no or limited effects on memory and attention, and two others showed transient impairment of episodic memory (Beglinger et al., [2004](#B4), [2005](#B5)). Donepezil thus shows the same pattern of results: enhancement in some cases but no effect or even impairment in others. Further, a review of the efficacy of these putative cognitive enhancers for patients with mild cognitive impairment concluded that they did not improve cognition or function among patients with low-level impairment (Tricco et al., [2013](#B60)).\n\nThe *medical safety* of PCEs varies from substance to substance, and side effects relate not only to the direct pharmacological effects but also to broader psychological and physiological changes. The review conducted by Repantis ([2013](#B46)) concluded that in the majority of trials, the drugs were well tolerated. However, side effects were noted. In relation to methylphenidate, side effects included increased heart rate and some instances of increases in blood pressure. Headaches, anxiety, nervousness, dizziness, drowsiness, and insomnia were also typical complaints.\n\nRepantis ([2013](#B46)) summarizes similar side effects for modafinil, where adverse reactions included headache, dizziness, gastrointestinal complaints (e.g., nausea, abdominal pain, dry mouth), increased diuresis, palpitations, nervousness, restlessness, and sleep disturbances and insomnia (especially in studies with non-sleep-deprived individuals). In their recent review, Ragan et al. ([2013](#B45)) highlight the fact that modafinil was reviewed by the European Medicines Agency ([2010](#B20)), which concluded that it should not be prescribed for obstructive sleep apnea, shift-work sleep disorder, and idiopathic hypersomnia because of the risks of serious skin reaction, suicidality, depression, psychosis, and adverse cardiovascular events.\n\nIn relation to anti-dementia drugs, Repantis ([2013](#B46)) concluded that, in the majority of the trials in healthy adults, donepezil was well tolerated. However, some side effects were reported in some participants, including gastrointestinal complaints (e.g., nausea), headaches, dizziness, nightmares, and insomnia. The meta-analysis of anti-dementia drugs for people with mild cognitive impairment (Tricco et al., [2013](#B60)) revealed that patients taking these medications experienced significantly more nausea, diarrhea, vomiting, and headaches than patients taking placebo. The authors also suggest that patients taking these medications might be at greater cardiac risk, with one study finding a higher incidence of bradycardia among patients who received galantamine.\n\nAs Farah et al. ([2014](#B21)) emphasize, there is another type of risk that should not be ignored in a consideration of the safety of PCEs. Many pharmaceuticals, especially stimulants, present a risk of dependence. The authors cite a nationwide survey analyzed by Kroutil et al. ([2006](#B32a)) which estimates that almost one in twenty nonmedical users of prescription stimulants meet the criteria for dependence or abuse (for further discussion of the potential for addiction in student populations, see Outram, [2010](#B42) and White et al., [2006](#B64)).\n\nFinally, as Ragan et al. ([2013](#B45)) point out, there is no such thing as a completely safe drug, only a drug whose benefits outweigh its drawbacks.
However, it is also worth emphasizing that, even if there are long-term risks associated with these substances, this does not (by itself) mean that they should automatically be prohibited. There are serious risks associated with many activities that the state permits because it is believed that individuals should decide for themselves whether these risks are worth taking. Dangerous sports and cosmetic surgery both come with risks, but the value some individuals attach to the respective sporting experiences and cosmetic effects justifies giving these individuals the choice to take risks in their pursuit.\n\nThis caveat notwithstanding, and taking into account potential costs to the healthcare system, greater knowledge about safety and efficacy will allow regulators to decide whether the decision about which risks are worth taking should be put in the hands of consumers (for a detailed discussion of the way risks and benefits should be assessed for cognitive enhancement devices, such as brain stimulators, see Maslen et al., [2014](#B39)). The ethical debate about the level of risk consumers should be allowed to take is of great practical importance when it comes to making policy recommendations. In addition, the question of whether the harms of a certain PCE outweigh its benefits will be important to discussions about the permissibility of requiring individuals to use PCEs and about the possible need to protect individuals from pressure to take any of the substances under discussion.\n\nFinally, the empirical project of identifying the different effects PCEs have across different individuals (c.f. Husain and Mehta, [2011](#B28)) is likely to feed into the normative debate about which effects (for which individuals) constitute a form of treatment and which effects (for which individuals) constitute enhancement. We discuss these and other ethical issues in what follows.\n\n### Authenticity and naturalness\n\nThere is a bundle of related ethical issues that are sometimes raised under the broad heading of *authenticity* (see Bublitz and Merkel, [2009](#B9); Juth, [2011](#B30)). Some of these pertain to numerical personal identity—do individuals become categorically different persons when they transform themselves via enhancement? (DeGrazia, [2005](#B14))—others consider, less drastically, what it is for an individual to be more or less his or her “real” self (The President's Council on Bioethics, [2003](#B44)), and other ethical concerns pertain to what it is to be, and function as, a human being (Kass, [2003](#B32)).\n\nThe principal tenet underlying authenticity objections against the use of PCEs is that individuals are most themselves when they are in their “natural,” unaltered state. If capacities and characteristics fundamental to one's identity are changed, then the individual is recast as an altered or inauthentic person (e.g., Elliott, [1999](#B18)). This argument is premised on the idea that there is a “real,” true self, and that this real self is to be preserved as much as possible. However, this assumption can be challenged: individuals often (and understandably) try to improve themselves in ways that allow them to more successfully achieve their goals. To be autonomous is to form goals for how one's life is to go, including what kind of person to be. On this model of authenticity as autonomy, whether PCE is authentic depends on whether it helps a person to achieve her autonomous goals.
For example, an individual might teach himself or herself motivational strategies to overcome his or her naturally lazy disposition; another individual might use techniques from cognitive behavior therapy to overcome his or her propensity for generalized anxiety (e.g., Butler et al., [2006](#B11)) or shyness, or gregariousness, or bad temper, or gullibility. Such strategies may not render the individuals inauthentic, but rather assist them in removing barriers that otherwise prevent them from maximizing self-actualization. Correspondingly, if PCEs can, for example, help an individual to concentrate better so that he or she can achieve the goals he or she values, this acts in service of authenticity rather than undermining it. There is great human variation, and variation within individuals, subject to many intrinsic and extrinsic factors (see Kahane and Savulescu, [2013](#B31)). Even if the authentic self were defined, it seems likely that many factors interfere and PCEs may reduce the effect of such influences.\n\nHowever, some deny that authenticity is reducible to autonomy. Such writers (e.g., Taylor, [1991](#B58)) appeal to a “real self.” But even on such an account, the real self may be complex and multifaceted. Often people have a range of qualities and they may use PCEs to bring out some of their qualities, while suppressing others. Thus, whether an enhanced self compromises the real self depends on what constitutes a person's real self and what the effect of the PCE is—both questions for cognitive science. If PCEs merely amplify existing qualities, rather than add entirely new ones, then they enable the self to evolve, rather than replacing one individual, with one set of attributes, with another individual with different attributes.\n\nThere is a related but different concern about *naturalness*. The idea that enhancements will take us too far from what it is to be human altogether is often accompanied by the idea that too much technological intervention will lead to an over-mechanization of the mind. The activities in which we engage—and, more importantly, the ways in which we engage in them—are said to have a certain quality to them that makes them “human” activities (President's Council on Bioethics, [2003](#B44)). In this vein, Kass ([2003](#B32)) argues that since individuals play no role in bringing about the effects of biomedical interventions, they cannot understand these effects “in human terms.” His suggestion is that whereas the effects of studying or training are “intelligible” to us, the effects of direct interventions are not comprehensible and thus our use of them departs “from ‘genuine,’ unmediated, and (in principle) self-transparent human activity” (p. 23).\n\nHowever, we argue that we make use of many directly-acting substances, in medicine and in leisure, that do not result in departure from “genuine” human activity. Just because their pharmacological mechanisms are not understood by the average person does not mean that they cannot be made sense of as part of a human narrative. Kass cites alcohol, caffeine and nicotine as not having the same unintelligible quality as direct biomedical interventions. He says this is because “we use these agents not as pure chemicals but in forms and social contexts that, arguably, give them a meaning different from what they would have were we to take them as pills” (p. 22). An obvious response to Kass' resistance to PCEs would be to add PCEs to beverages, as is currently done with caffeine.
It would then be “intelligible” in the same way that caffeine is said to be “intelligible.” Moreover, if intelligibility can be conferred by social context, then the social context of, for example, studying or conducting research should equally make PCEs part of a comprehensible human enterprise. Perhaps his distinction between the forms alcohol, caffeine, and nicotine tend to take, and the form of a simple pill, is supposed to indicate that the former are enjoyed for themselves, rather than being instrumental to achieving some goal. However, studies have reported that some individuals take PCEs for recreational purposes (see Smith and Farah, [2011](#B57)) and it is common knowledge that caffeine is regularly used exclusively for alertness and for performance enhancement. Even if this might be the case in lay people's current perceptions (cf. Faulmüller et al., [2013](#B22); Schelle et al., [2014](#B54)), from a normative stance it cannot be that form and context make all the difference in human intelligibility between an espresso, a caffeine pill, and a PCE.\n\nThe core of such an “intelligibility” objection may be that PCEs and other new technologies work in ways entirely alien to the way the human mind normally works, adding a completely new way of being. For example, chips inserted into the human brain that allowed us to perceive other people's thoughts directly would be entirely new. Neuroscience can assist by unravelling the way the mind does, and does not, work, and by enabling categorization of enhancers into those which harness natural processes, and those that introduce entirely new capacities. Most enhancers at present appear to harness existing neurobiological physiology, though exactly how many of them enhance performance remains to be determined.\n\nThe ethical debate about authenticity and naturalness is unlikely to be advanced solely by the findings of neuroscientific research. The disagreement is partly a normative one about what constitutes the “real” self and whether our “real” selves are the selves we are most prone to being or the selves that we aspire to develop into—or whether it makes sense to speak of “real” selves at all. Qualitative research, such as that conducted by Singh ([2005](#B56a)) or Bolt and Schermer ([2009](#B6)), will helpfully provide a clearer picture of the sorts of experiences individuals have when taking PCEs.\n\nIn summary, it is important to recognize that most PCEs, if not all, harness innate biological systems, for example by affecting the release of, reuptake of, or sensitivity to the neurotransmitters that cause cognitive activity. They do not at present introduce radical “new ways of being” divorced from the ordinary human way—they really just provide “more of the same.” Indeed, humans vary in the ways in which their cognitive systems function and, in some cases, PCEs may bring those at the lower end of normal up to the level of function of those in the mid to upper range.\n\nMore importantly, we suggest that what matters more than whether the experiences are in some sense authentic is whether the individual wants and values the effects of the PCE and whether the individual is autonomous in his or her decision to use PCEs. This, we suggest, is a legitimate concern and is addressed in the following section.\n\n### Coercion\n\nIf PCEs were to become more commonplace, then employers might start to require their employees to use PCEs. The Academy of Medical Sciences et al.
([2012](#B1)) suggested in a recent report that “[O]ccupations that require particular patterns of focus could benefit from enhancements that facilitate achieving such patterns. For example, surgeons may need to be able to concentrate for extended periods, whereas other jobs such as air traffic control can require very rapid reactions during periods of relative uniformity. As an extrapolation to this, it is possible that in these high-responsibility occupations enhancement could be seen as a moral obligation, or even demanded by the public.” (p. 38; for a discussion see also Maslen et al., [in press](#B40)). The US Air Force has already approved the use of modafinil by its pilots (Caldwell and Caldwell, [2005](#B12)) and some medical practitioners are beginning to wonder whether enhancement might be required of them in the future (Rose and Curry, [2010](#B48)). Writing in the Journal of Surgical Research, surgeons have suggested that the use of PCEs may come to be required practice. They say, “The prospect of fatigued surgeons taking a prescription drug, such as modafinil, to allow them to operate for longer, and possibly to a higher standard, is perhaps not as far-fetched as some may suggest. This drug has already been trialed in emergency physicians, when performing non-medical-related tasks at the end of a nightshift.” (Warren et al., [2009](#B62), p. 168).\n\nFurther, the authors note that there are “useful and warranted forms of coercion” (p. 170), such as forcing surgeons to undertake hygiene practices like handwashing prior to and during surgery. Given that this *coercion* is acceptable, they go on to ask, “What will our employers feel about a drug that makes us less prone to error, able to work longer hours, or to operate more efficiently? Employers are able to request certain behavioral standards from their employees, dictate rest periods, and insist on abstinence from certain drugs to ensure that their doctors perform well—will a day arise where they can recommend or even insist on surgeons being artificially enhanced? This may seem fanciful, but recent work has suggested that a mixture of napping and caffeine attenuates fatigue in interns and thus should be adopted by hospital administration. Why not other types of stimulant?” (p. 171).\n\nThe ethical objection often raised in this context is that, although it is thought to be reasonable to require certain things of employees, such as compulsory training and codes of conduct, requiring them to ingest psychoactive substances into their bodies is too demanding a requirement. It would require a compelling justification (perhaps pointing to the severity of harm that would be prevented through requiring enhancement) to trump the value we place on preserving the right individuals have to determine what happens to their bodies and minds (for discussion of the right to mental self-determination in relation to enhancement and other mental manipulation, see Bublitz and Merkel, [2014](#B10)). As far as possible, this right should be preserved, and this is especially the case where there is not enough evidence about the harms to which an employer would be subjecting his or her employee. Neuroscientific evidence will have a large role to play in understanding the seriousness of any proposed requirement. In addition to the risks posed by individual instances of PCE use, more data on the potential for dependency will be essential for this discussion.
Whilst we *might* think it permissible to require some employees to take small, isolated personal risks, requiring them to do something that results in substance dependency would more comprehensively infringe an individual's autonomy. In this connection, although PCEs may become more common in the workplace, one of us has argued elsewhere that for these and other reasons, it is unlikely that there will ever be a legal obligation for a professional like a surgeon to take a PCE (Goold and Maslen, [2014](#B24)). At present, no employer requires employees to take caffeine. Caffeine is a PCE.\n\nEven if people were not directly coerced to take enhancers it could still be objected that permitting PCE use could result in indirect pressure to use them. The perception that others are taking substances that make them more productive could lead to the belief that taking them is necessary to keep up (Academy of Medical Sciences et al., 2012) and not taking PCEs might render one *de facto* ineligible for certain jobs (Chatterjee, [2004](#B13)). However, whether indirect pressure to take PCEs would in fact result in their more prevalent use is a question for social science. (For empirical data relating to this question, see Franke et al., [2011](#B22a) and Maier et al., [2013](#B38)). Neuroscientific research will have little to contribute to the debate about the limits of acceptable social pressure and restriction on employees' autonomy. However, as noted above, opposition to enforced PCE use is partly motivated by the current lack of evidence on long-term safety and efficacy. What we can legitimately require of people is closely related to what risks we can require them to take. Assessment of the legitimacy of requiring certain individuals to take PCEs will depend in large part on their medical safety and efficacy. If PCEs are very safe and efficacious, their use in life-saving/threatening professions (e.g., surgeons, politicians, truck drivers, airline pilots, etc.) may legitimately be required.\n\n### Treatment vs. enhancement\n\nAs noted in the introductory section, there is much disagreement about what should count as enhancement (c.f., Parens, [1998](#B43)). Sometimes this disagreement is framed as a debate about where *treatment* ends and *enhancement* begins. The distinction often made is that treatments serve to cure illness and preserve health whereas enhancements make people “better than well.” For example, Juengst ([1998](#B29)) defines enhancement as the term “usually used in bioethics to characterize interventions designed to improve human form or functioning beyond what is necessary to sustain or restore good health” (p. 29).\n\nHowever, a common objection to this distinction is that, in many cases, what we define as “healthy” and “normal” is arbitrary. This objection does not deny that there can be clear failures of function or physiology as a result of pathology which most would agree are inimical to good health, such as the effects of a brain hemorrhage or stroke. Rather, it emphasizes that the boundary between healthy and unhealthy cognition in many cases is a matter of where we choose to draw the line, not based on either statistically significant subfunctioning or pathology. For example, delimiting normal from defective powers of concentration when diagnosing ADHD is necessarily to engage in marking a categorical point on what is otherwise a continuum (c.f. Schermer and Bolt, [2011](#B56)). The point could be selected further to the left or right on that continuum of functioning. 
Would selecting a point which increased ADHD diagnosis increase the instances of individuals being treated or would some be receiving enhancement through the back door? Since the point is to some extent arbitrary, the corresponding labels of treatment and enhancement appear less meaningful in this context.\n\nSimilarly, it is difficult to know whether to classify substances used to combat age-related cognitive decline as instances of treatment or enhancement. Drawing sharp lines could have the result that a young person with cognitive abilities just above the cut off for being classified as having a mental disability would be “enhanced” by a drug but the elderly person whose abilities slipped to a level still above the young person would be receiving “treatment” if given the same substance (for a similar example, see Sandberg, [2011](#B50)). Given the slipperiness of the distinction, one of us has argued (Savulescu et al., [2011](#B53)) that instead of trying to determine whether certain drugs or certain of their effects constitute treatment or enhancement, it is more coherent and useful to think of a continuum of well-being which can be increased or diminished by various interventions.\n\nIt might be thought that evidence from neuroscience could adjudicate between instances of treatment and enhancement. If substances have discernable, discrete effects on different groups of people, it could be argued that these discrete effects mark the difference between a treatment and an enhancement. For example, although the way modafinil works is still unknown in detail (Minzenberg and Carter, [2008](#B37)), neurologists do know that the brain of the narcoleptic is not neurophysiologically equivalent to the brain of the sleep-deprived individual and, correspondingly, it might be hypothesized that the effects of modafinil on the two groups will differ. Most forms of narcolepsy are associated with a deficiency in the hypothalamic neurotransmitter orexin (Mignot, [2010](#B36)). The average sleep-deprived person, in contrast, does not exhibit such a deficiency. Accordingly, it might be thought that the more differences neuroscience can reveal between the narcoleptic and the non-narcoleptic, the better equipped we will be to distinguish between the treatment and enhancement effects of at least this PCE.\n\nHowever, such knowledge would still not provide a definitive solution to which effects we should refer to as treatment and which we should call enhancement. Modafinil is also prescribed for shift work sleep disorder (SWSD), which is a product of unusual working patterns affecting circadian rhythms, not of underlying neurophysiology (Åkerstedt and Wright, [2009](#B8)). This being said, it should be noted that not everyone who does shift work suffers from SWSD. This suggests that there must be some physiological or psychological difference between sufferers and non-sufferers and our lack of knowledge as to the cause of this difference does not make the disorder less of a treatable disorder.\n\nIn labeling the prescription of PCEs for SWSD an instance of treatment, a normative or ethical decision is still being made about which conditions and patterns of functionality should attract medical attention and resources. We are also implicitly making an assessment that medical treatment is the just and appropriate course of action for sufferers of the disorder, rather than prioritizing a change away from shift work. 
Neither the individual's underlying neurophysiology nor the particular mechanism of action of the substance tells us anything about whether this decision is the correct one.\n\nOne avenue through which neuroscience might illuminate the treatment vs. enhancement debate is by identifying pathology associated with mental or psychiatric disorders or limitations. So far, accurate tissue- or cellular-level pathological classification of psychiatric disease or disorder has eluded researchers. However, if psychiatric disorders could be characterized in the same way as neurological disorders, the presence of pathology would separate conditions which are diseases from those which constitute normal human variation.\n\nGiven that PCEs are not universally available through the healthcare system, individuals without conditions for which PCEs are approved would currently have to obtain them through other, unauthorized routes. This means that some people will have access to them but others will not. Even if PCEs were available on an open market, there could still be financial or other barriers to their accessibility. We discuss this issue and its potential implications next.\n\n### Distributive justice\n\nSociety-level debates about PCE-related inequality consider *distributive justice*, and are related to the question of whether PCEs will exacerbate existing socio-economic inequality. A common argument is that, as with many technologies, the rich and informed will have access to them whilst the poor and uninformed will not (e.g., Fukuyama, [2002](#B22b)). Assuming that cognitive enhancement confers some benefits, this will make those already at an advantage even better off. Whether this would in fact happen would depend on factors such as the affordability and accessibility of PCEs, as well as on the realities of their cognition-improving effects: the affordability and accessibility of PCEs will determine whether people are able to use them; the effects of the substances will determine whether they really put people who do so at an advantage. However, although there is the potential for PCEs to exacerbate unfairness if their distribution is unregulated, as one of us points out elsewhere, this is not a necessary consequence (Sandberg and Savulescu, [2011](#B51)): if PCEs were distributed according to a principle of justice such as “prioritarianism”—the principle that says that we should give priority to those who are worst off, but also aim to maximize the well-being of everyone in society—then PCEs would be most accessible to the worst off, becoming less accessible (but not inaccessible) as need decreases.\n\nFurther, as we go on to discuss below in relation to competitive fairness, neuroscientific evidence supports the hypothesis that there is a baseline effect of many PCEs: their effects seem to depend on the subject's baseline working memory capacity. Individuals with low working-memory capacity improve, while high-span individuals are either not affected or are even impaired (de Jongh et al., [2008](#B15)). This means that those most in need of PCE would benefit most from it, with those less in need not benefiting at all or even experiencing impairment from the same substance. Given this evidence, it has been suggested that enhancement might actually serve to *reduce* inequality (Bostrom and Sandberg, [2009](#B8)).
However, whilst this could be true in terms of the equality of cognitive capacity, it must be remembered that cognitive capacity and socio-economic status are not always correlated: there would still be people with more opportunities and resources who could improve their prospects further. Whilst policy decisions about access to PCEs will be principally socio-political matters, those making the decisions will need to know how enhancers affect members of the population in order to best serve the interests of justice and equality. If PCEs have differential effects on those who are already worst off, this will be highly relevant to their permissibility and just distribution. Neuroscience research can thus contribute to ethical debate if effects in targeted and specified populations of ethical significance are studied. This would require ethically relevant population stratification.\n\n### Competitive fairness and cheating\n\nThe ethical discussion of whether using cognitive enhancers constitutes *cheating*—perhaps in exams or at work—is more nuanced than the simple question of whether taking enhancers is “against the rules.” It can extend beyond considerations of *fairness in competitive contexts* to ask whether personal achievements facilitated by PCEs are devalued for this reason (c.f., Schermer, [2008](#B55); Goodman, [2010](#B23); Santoni de Sio et al., [in press](#B52)). We suggest that evidence from neuroscience will help to develop the cheating debate in important ways. Below, we argue that three types of empirical inquiry are relevant to the ethical discussion. The first, the phenomenon of the “inverted U”—according to which the enhancing effects of PCEs are often baseline dependent and exhibit non-linear dose response curves —(de Jongh et al., [2008](#B15); Husain and Mehta, [2011](#B28)), is relevant to efficacy questions involved in debates about cheating. The second type of study relevant to the debate is that which seeks to identify the particular neural systems affected by different substances, leading to disparate effects (e.g., Lanni et al., [2008](#B33)): whether a substance improves creativity or rote learning may matter for some possible conceptions of what constitutes cheating. Similarly, whether a substance improves motivation and task enjoyment vs. memory capacity might matter for those who place a lot of value on success requiring effort. Third, we argue that the neuroscientific evidence pointing to the likelihood of cognitive trade-offs (de Jongh et al., [2008](#B15); Husain and Mehta, [2011](#B28)) adds an underdeveloped dimension to the cheating debate: if the complaint is that achievements facilitated by PCE are devalued because they do not involve enough personal sacrifice, then evidence suggesting that enhancement in some domains comes at the cost of impairments in others offers a challenge to this view.\n\n#### The inverted U curve and baseline dependency\n\nNeuroscientific research so far shows that the effects of many purported PCEs are base-line dependent and have an inverted U-shaped dose-response curve (de Jongh et al., [2008](#B15); Husain and Mehta, [2011](#B28)). This is important to the cheating debate as it means that some individuals will benefit from taking PCEs whereas others will gain no benefit and might even be impaired: low performing individuals will tend to be on the upward slope of the inverted-U and so benefit from a substance that moves them further up this slope. 
High performing individuals, on the other hand, will tend to be at the peak of the inverted U and will therefore become impaired by a substance that increases neurotransmitter levels further. If neuroscience were to more precisely identify the neurological profiles of those who are able to benefit from PCEs and those who are not, then ethicists would be able to consider in greater detail whether the prospect of some being able to enhance whilst others cannot counts more decisively against PCE in competitive contexts than if all could enhance in these contexts. They would need to consider whether it is the case that enhancement is only fair if everyone could (in principle) avail themselves of it or whether it is permissible given that some are physiologically denied the possibility of improving.\n\n#### Disparate effects of different PCEs\n\nAlthough the exact mechanisms of substances like methylphenidate and modafinil are not yet fully understood, researchers have begun to investigate which PCEs affect which underlying systems, and with which effects (Lanni et al., [2008](#B33); Smith and Farah, [2011](#B57)). Although cognitive functions necessarily interact, attempts have been made to ascertain the primary cognitive functions improved by particular PCEs based on their effects on neurotransmitters. Husain and Mehta ([2011](#B28)) explain that “a simple mapping between a specific neurotransmitter and a particular cognitive function—such as [working memory]—[… ] seems untenable. However, subtle but important differences in the precise processes modulated might provide some discriminating value: for instance, dopamine has an established role in reinforcement learning in response to rewards, whereas serotonin seems to modulate reinforcement learning for aversive stimuli.” (p. 29). Pursuing such discrimination, Lanni et al. ([2008](#B33)) review the neuroscience literature investigating the neuronal circuits, neurotransmitters and molecular events underlying the cognitive domains of memory, attention, and creativity to distinguish the effects of different enhancement substances. Elsewhere, Smith and Farah ([2011](#B57)) review the cognitive neuroscience literature to examine whether (and which) prescription stimulants improve learning, working memory, cognitive control, and other executive functions.\n\nIf neuroscientific research were able to distinguish between the effects of different PCEs, this could have some implications for discussions about cheating. This is, again, effect stratification. Combined with population stratification, neuroscience research could bring us closer to understanding what effect this particular PCE will have in this person. This reflects the move to “personalized medicine” and might be dubbed “personalized enhancement.” Only when we can predict the *personal* benefits and costs of enhancement can policy be truly informed and ethical.\n\nIt might be thought that the enhancement of some cognitive functions is more unfair than the enhancement of others. For example, the enhancement of creative thinking might be thought to constitute more significant cheating than improving wakefulness or even memory capacity. Imagine someone who says “when I take enhancers my work is no better, I can just do more of the same for longer” vs.
someone who says “when I take enhancers my work is much better than I can do without them.” This has links with the debate about authenticity: it is as if the former individual is enabled to make better use of his or her own cognitive resources, whereas the latter is given new cognitive resources upon which he or she can draw. Those who think PCE use is unfair because the achievement is not a reflection of the person's natural abilities to solve and create might be less concerned by a PCE that simply allowed more efficient work of the standard the person could naturally achieve. A PCE that promoted wakefulness might allow an individual to work for longer but it will not come up with ideas on his or her behalf. Of course, it is important to remember that a PCE that improved creativity still has its effects on and through the individual's own brain. What will be interesting for ethicists to discuss is whether “assistance” with time management and efficiency is relevantly different to “assistance” with the content of ideas (if, indeed, we want to characterize the respective effects in this way).\n\nPractical consequences might be to consider certain substances unfair for certain types of tests or for entry into certain types of employment: employers might only be troubled by the use of PCEs, the effects of which are *necessary* to carry out the job. This would be a practical consideration: could the employee continue to work without the PCE? For example, an architect who could only perform satisfactorily when taking a substance like modafinil that seems to improve spatial planning and visual pattern recognition memory (Turner et al., [2003](#B61)) might be thought to be a higher-risk employee than one who uses a memory enhancer which enables him or her to remember the names of building materials that he or she could look up without problem in the absence of the substance.\n\nFurther, neuroscientific research that could distinguish substances that enhance the *effectiveness* of cognitive capacities, such as working memory, from those that instead (or additionally) increase *motivation* could also have implications for the competitive fairness debate. In the ethical literature, the point is sometimes made that it is effort and striving that make achievements intelligible and valuable. For example, Fox ([2005](#B22c)) argues that “[b]ecause they act directly on the human body and mind, biotechnological enhancements tempt us to shirk individual striving and struggle” (p. 1150).\n\nA common rebuttal to this type of argument is that, whilst PCEs can make efforts more effective, they do not replace the need for dedicated, sustained study—striving and struggle are still required in order to achieve. For example, Greely ([2010](#B25a)) notes that “the more plausible cognitive enhancements would not eliminate the need to study; they would just make studying more effective” (p. 6).\n\nIf, however, there were a significant enough effect of a PCE on motivation and/or task enjoyment, then it would be open to ethicists to argue that this *does* in some sense reduce the amount of effort that the person puts in. The drive to work or achieve no longer emanates from the individual and no struggle is encountered.\n\nOn the motivating effects of prescription stimulants, Smith and Farah ([2011](#B57)) write: “Another empirical question concerns the effects of stimulants on motivation, which can affect academic and occupational performance independent of cognitive ability. Volkow et al.
([2004](#B61a)) showed that [methylphenidate] increased participants' self-rated interest in a relatively dull mathematical task. This is consistent with student reports that prescription stimulants make schoolwork seem more interesting (e.g., DeSantis et al., [2008](#B15a)). To what extent are the motivational effects of prescription stimulants distinct from their cognitive effects, and to what extent might they be more robust to differences in individual traits, dosage and task? Are the motivational effects of stimulants responsible for their usefulness when taken by normal healthy individuals for cognitive enhancement?” (p. 735).\n\nIf particular PCEs were shown to significantly improve motivation and/or task enjoyment whilst others only improve effectiveness, ethicists would need to consider whether there is any relevant difference between enhancing motivation and enhancing effectiveness and, if so, what the implications would be for the value of resulting achievements.\n\n#### Enhancement is likely to involve trade-offs\n\nResearch suggests that enhancing one domain of cognition might come at the cost of impairing another. de Jongh et al. ([2008](#B15)) review evidence suggesting trade-offs between long-term memory and working memory; between stability and flexibility of long-term memory; between stability and flexibility of working memory; and perhaps, they conjecture, between cognition and mood. If a PCE comes at a cost—and, especially, a mental cost—this could also add a new dimension to the debate about cheating and the value of achievements.\n\nIn terms of gaining an unfair advantage over others in exams and other competitive tasks, the trade-offs would be relevant if the test required exercise of *both* the enhanced and the impaired capacity. Whilst the individual would gain some advantage in some parts of the test, he or she would be disadvantaged in other parts. More generally, neuroscientific evidence of trade-offs is of interest to the debate about fairness and the value of achievements because some of the objections rest heavily on the idea that using PCEs means that no sacrifice—usually conceived as sacrifice of time, energy or other opportunities—is made by the individual.\n\nFor example, Kass ([2003](#B32)) says: “Yet in those areas of human life in which excellence has until now been achieved only by discipline and effort, the attainment of those achievements by means of drugs, genetic engineering, or implanted devices looks to be ‘cheating’ or ‘cheap.’ We believe—or until only yesterday believed—that people should work hard for their achievements. ‘Nothing good comes easily.’” (p. 21).\n\nIf enhancement of one domain of cognition comes at the cost of another, then it does seem that some sort of sacrifice has been made. We might conceive of an individual who chooses to enhance his or her working memory such that he or she can solve complicated puzzles quickly. This same individual might accept that this enhancement comes at the cost of him or her finding it harder to recall facts and experiences from longer ago. Accordingly, whilst the physical act of ingesting a substance might be easy, there is a sense in which the enhanced capacity did not come easily—it did not come without personal cost.
Whilst the conceptually most interesting trade-offs will involve impairments to cognitive capacities—like for like—it should also be noted that the more general side effects of PCEs (discussed in relation to medical safety above) also constitute an additional sort of “cost” to enhancement. The evidence on medical safety reviewed in the section Medical Safety and Effectiveness suggests that PCE use will always come at a cost and may involve multiple costs of different kinds. The number and nature of these unavoidable costs constitute a further challenge to the view that achievements facilitated by enhancement involve no sacrifice.\n\nIt is important to note that these costs of a trade-off are not like financial costs, which can be trivial and will constitute diminishment only insofar as they prevent the individual from making other purchases important to him or her. Rather, the costs of an enhancement trade-off are often mental costs—like for like—and are of a kind much more likely to constitute diminishment. Thus, neuroscientific research poses questions for those engaged in the cheating debate about whether there are relevant differences between the various costs of achievement—effort, opportunity, physiological side effects, cognitive trade-offs—and which (if any) are required for achievements to involve a sufficient level of sacrifice.\n\nConclusion\n----------\n\nWe have reviewed six of the main issues debated by ethicists working on PCE. Often, their purpose in debating these issues is to clarify concepts and normative positions, which then serve as a basis for recommending how society—and especially those tasked with its regulation—should respond to the emergence of PCEs. We have argued that whilst some of these issues are mostly political (coercion) or metaphysical (what constitutes authenticity), others have much to gain from emerging neuroscientific research. As well as providing data on safety and effectiveness, neuroscience will also allow a more fine-grained debate about whether the effects of some PCEs are more unfair than others in competitive contexts and whether employers should be more wary of employee reliance on some PCEs than on others. Further, due to emerging evidence on trade-offs, those who object to PCE on the ground that it facilitates individual gain without any attendant pain will have to explain why accepting an associated impairment in exchange for an enhancement is not a relevant sacrifice. Although we anticipate that ethicists will be far from stumped by this challenge, we hope to have demonstrated that it will, in large part, be through responding to emerging scientific evidence that normative accounts become more refined, complete and practically relevant.\n\nIn general, neuroscience can contribute to the formation of ethical policy on PCEs by adopting a “personalized” approach: personalized enhancement. Fine-grained and stratified research should seek to identify specific risks, benefits, and trade-offs in small ethically relevant populations, or ideally in individuals.
In doing this, according to the ethical values principles and criteria we choose, we can form policy on who should access which PCEs in which ways.\n\n### Conflict of interest statement\n\nThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.\n\nAcknowledgments\n---------------\n\nThis work was supported by the Wellcome Trust [086041/Z/08/Z]; the Oxford Martin School; and the Uehiro Foundation on Ethics and Education.\n\nReferences\n----------\n\n* Academy of Medical Sciences, Royal Society, British Academy, Royal Academy of Engineering. (2012). Human Enhancement and the Future of Work (Report from Joint Workshop). Available online at: (Accessed 22 May 2013).\n* Advokat C. (2010). What are the cognitive effects of stimulant medication? Emphasis on adults with attentiondeficit/hyperactivity disorder. Neurosci. Biobehav. Rev. 34, 1256–1266\n 10.1016/j.neubiorev.2010.03.006 [[PubMed](https://pubmed.ncbi.nlm.nih.gov/20381522)] [[CrossRef](//doi.org/10.1016%2Fj.neubiorev.2010.03.006)] [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Neurosci.+Biobehav.+Rev&title=What+are+the+cognitive+effects+of+stimulant+medication?+Emphasis+on+adults+with+attentiondeficit/hyperactivity+disorder&author=C.+Advokat&volume=34&publication_year=2010&pages=1256-1266&pmid=20381522&doi=10.1016/j.neubiorev.2010.03.006&)]\n* Beglinger L. J., Gaydos B. L., Kareken D. A., Tangphao-Daniels O., Siemers E. R., Mohs R. C. (2004). Neuropsychological test performance in healthy volunteers before and after donepezil administration. J. Psychopharmacol. 18, 102–108\n 10.1177/0269881104040248 [[PubMed](https://pubmed.ncbi.nlm.nih.gov/15107192)] [[CrossRef](//doi.org/10.1177%2F0269881104040248)] [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=J.+Psychopharmacol&title=Neuropsychological+test+performance+in+healthy+volunteers+before+and+after+donepezil+administration&author=L.+J.+Beglinger&author=B.+L.+Gaydos&author=D.+A.+Kareken&author=O.+Tangphao-Daniels&author=E.+R.+Siemers&volume=18&publication_year=2004&pages=102-108&pmid=15107192&doi=10.1177/0269881104040248&)]\n* Beglinger L. J., Tangphao-Daniels O., Kareken D. A., Zhang L., Mohs R., Siemers E. R. (2005). Neuropsychological test performance in healthy elderly volunteers before and after donepezil administration: a randomized, controlled study. J. Clin. Psychopharmacol. 25, 159–165\n 10.1097/01.jcp.0000155822.51962.b4 [[PubMed](https://pubmed.ncbi.nlm.nih.gov/15738747)] [[CrossRef](//doi.org/10.1097%2F01.jcp.0000155822.51962.b4)] [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=J.+Clin.+Psychopharmacol&title=Neuropsychological+test+performance+in+healthy+elderly+volunteers+before+and+after+donepezil+administration:+a+randomized,+controlled+study&author=L.+J.+Beglinger&author=O.+Tangphao-Daniels&author=D.+A.+Kareken&author=L.+Zhang&author=R.+Mohs&volume=25&publication_year=2005&pages=159-165&pmid=15738747&doi=10.1097/01.jcp.0000155822.51962.b4&)]\n* Bolt I., Schermer M. (2009). Psychopharmacological enhancers: enhancing identity?\nNeuroethics\n2, 103–111\n 10.1007/s12152-008-9031-7 [[CrossRef](//doi.org/10.1007%2Fs12152-008-9031-7)] [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Neuroethics&title=Psychopharmacological+enhancers:+enhancing+identity?&author=I.+Bolt&author=M.+Schermer&volume=2&publication_year=2009&pages=103-111&doi=10.1007/s12152-008-9031-7&)]\n* Boot B. 
Soc Epistemol. 2016 Jul 3; 30(4): 350–371. Published online 2016 Jan 26.
doi: [10.1080/02691728.2015.1108373](//doi.org/10.1080%2F02691728.2015.1108373). PMCID: PMC4959137. PMID: [27499570](https://pubmed.ncbi.nlm.nih.gov/27499570).\n\nThe Unilateralist’s Curse and the Case for a Principle of Conformity\n====================================================================\n\n[Nick Bostrom](https://pubmed.ncbi.nlm.nih.gov/?term=Bostrom%20N%5BAuthor%5D), [Thomas Douglas](https://pubmed.ncbi.nlm.nih.gov/?term=Douglas%20T%5BAuthor%5D),\* and [Anders Sandberg](https://pubmed.ncbi.nlm.nih.gov/?term=Sandberg%20A%5BAuthor%5D)\n\nCorrespondence to: Thomas Douglas, Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, Suite 8, Littlegate House, St Ebbe’s Street, Oxford OX1 1PT, UK. Email: [thomas.douglas@philosophy.ox.ac.uk](mailto:thomas.douglas@philosophy.ox.ac.uk)\n\n© 2016 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\nAbstract\n--------\n\nIn some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will be undertaken more often than is optimal. We suggest that this phenomenon, which we call *the unilateralist’s curse*, arises in many contexts, including some that are important for public policy. To lift the curse, we propose a *principle of conformity*, which would discourage unilateralist action. We consider three different models for how this principle could be implemented, and respond to an objection that could be raised against it.\n\n**Keywords:** The Winner’s Curse, Disagreement, Rationality, Aumann\n\n1.  Introduction\n----------------\n\nConsider the following hypothetical scenarios:\n\n* (1) A group of scientists working on the development of an HIV vaccine has accidentally created an air-transmissible variant of HIV. The scientists must decide whether to publish their discovery, knowing that it might be used to create a devastating biological weapon, but also that it could help those who hope to develop defenses against such weapons. Most members of the group think publication is too risky, but one disagrees. He mentions the discovery at a conference, and soon the details are widely known.\n* (2) A sports team is planning a surprise birthday party for its coach. One of the players decides that it would be more fun to tell the coach in advance about the planned event. Although the other players think it would be better to keep it a surprise, the unilateralist lets word slip about the preparations underway.\n* (3) Geoengineering techniques have developed to the point that it is possible for any of the world’s twenty most technologically advanced nations to substantially reduce the earth’s average temperature by emitting sulfate aerosols. Each of these nations separately considers whether to release such aerosols. Nineteen decide against, but one nation estimates that the benefits of lowering temperature would exceed the costs.
It presses ahead with its sulfate aerosol program and the global average temperature drops by almost 1°.\n\nIn each of these cases, each of a number of agents is in a position to undertake an initiative, *X*. Suppose that each agent decides whether or not to undertake *X* on the basis of her own independent judgment of the value of *X*, where the value of *X* is assumed to be independent of *who* undertakes *X*, and is supposed to be determined by the contribution of *X* to the common good.[1](#EN0001) Each agent’s judgment is subject to error—some agents might overestimate the value of *X*, others might underestimate it. If the true value of *X* is negative, then the larger the number of agents, the greater the chances that at least one agent will overestimate *X* sufficiently to make the value of *X* seem positive. Thus, if agents act unilaterally, the initiative is too likely to be undertaken, and if such scenarios repeat, an excessively large number of initiatives are likely to be undertaken. We shall call this phenomenon the *unilateralist’s curse*.\n\nThough we have chosen to introduce the unilateralist’s curse with hypothetical examples, it is not merely a hypothetical problem. There are numerous historical examples, ranging from the mundane to the high-tech. Here is one:\n\nUntil the late 1970s, the mechanism of the hydrogen bomb was one of the world’s best kept scientific secrets: it is thought that only four governments were in possession of it, each having decided not to divulge it. But staff at the Progressive magazine believed that nuclear secrecy was fuelling the Cold War by enabling nuclear policy to be determined by a security elite without proper public scrutiny. They pieced together the mechanism of the bomb and published it in their magazine, arguing that the cost, in the form of aiding countries such as India, Pakistan and South Africa in acquiring hydrogen bombs, was outweighed by the benefits of undermining nuclear secrecy.[2](#EN0002)\n\nAnother possible example from atomic physics had occurred several decades earlier:\n\nIn 1939 the Polish nuclear physicist Joseph Rotblat noticed that the fission of uranium released more neutrons than were used to trigger it, realizing that it could produce a chain reaction leading to an explosion of unprecedented power. He assumed that other scientists elsewhere were doing similar experiments, and were thus in a position to release similar information, an assumption that turned out to be correct. Initially, Rotblat vowed to tell no-one of his discovery, believing it to be a threat to mankind, and it is plausible that others did likewise, for similar reasons. However, when the war broke out, Rotblat decided that releasing the information was now in the public interest, given the likelihood that the Germans were working on an atomic bomb. He confided in colleagues and thus unilaterally triggered the United Kingdom’s atomic bomb project.[3](#EN0003)\n\nRotblat was later to leave the Manhattan Project, coming to the view that he had overestimated the German nuclear threat, and underestimated the likelihood that the US would use an atomic bomb offensively.\n\nIt is perhaps too soon to say whether these unilateral actions were suboptimal. But in other cases, it is clearer that unilateral action led to a suboptimal outcome:\n\nIn the mid-nineteenth century there were virtually no wild rabbits in Australia, though many were in a position to introduce them. In 1859, Thomas Austin, a wealthy grazier, took it upon himself to do so.
He had a dozen or two European rabbits imported from England and is reported to have said that “The introduction of a few rabbits could do little harm and might provide a touch of home, in addition to a spot of hunting.”[4](#EN0004) However, the rabbit population grew dramatically, and rabbits quickly became Australia’s most reviled pests, destroying large swathes of agricultural land.[5](#EN0005)\n\nThe abovementioned examples were isolated incidents, but similar situations occur regularly in some spheres of activity, for instance, in the media:\n\nMedia outlets sometimes find themselves in situations where journalists have access to information that is of public interest but could also harm specific individuals or institutions: the name of a not-yet-charged murder suspect (publication may bias legal proceedings), the news that a celebrity committed suicide (publication may risk copycat suicides), or sensitive government documents such as those leaked by Wikileaks and Edward Snowden (publication may endanger national security). It is enough that one outlet decides that the public interest outweighs the risk for the information to be released. Thus, the more journalists have access to the information, the more likely it is to be published.\n\nUnilateralist situations also regularly crop up with regard to new biotechnologies:\n\nGene drives, techniques for inducing altered genes to be inherited by nearly all offspring (rather than just 50%) of a genetically modified organism, have potential for spreading altered genes across a population, enabling ecological control (e.g. making mosquitos incapable of spreading malaria or reducing herbicide resistance) but also potentially creating worrisome risks (e.g. to genetic diversity or of sabotage). Here unilateral action could be taken both in releasing a particular altered organism into the environment and in releasing the information about how to produce it in the first place. There is scientific disagreement on the utility and risk of both.[6](#EN0006)\n\n2.  The Unilateralist’s Curse: A Model\n--------------------------------------\n\nThe unilateralist’s curse is closely related to a problem in auction theory known as the winner’s curse. The winner’s curse is the phenomenon that the winning bid in an auction has a high likelihood of being higher than the actual value of the good sold.[7](#EN0007) Each bidder makes an independent estimate and the bidder with the highest estimate outbids the others. But if the average estimate is likely to be an accurate estimate of the value, then the winner overpays. The larger the number of bidders, the more likely it is that at least one of them has overestimated the value.\n\nThe unilateralist’s curse and the winner’s curse have the same basic structure. The difference between them lies in the goals of the agents and the nature of the decision. In the winner’s curse, each agent aims to make a purchase if and only if doing so will be valuable *for her*. In the unilateralist’s curse, the decision-maker chooses whether to undertake an initiative with an eye to the common good, that is, seeking to undertake the initiative if and only if the initiative contributes positively to the common good.\n\nThe unilateralist’s curse can be illustrated using a simple mathematical model. Assume *N* agents, each considering whether to undertake an initiative.
Each agent wishes to proceed if and only if the value of the initiative is positive, but the agents do not know the true value *V*\* of the initiative (which may be negative or positive). Instead each agent forms an estimate that is the sum of *V*\* and a random independent error *d* drawn from a distribution with cumulative distribution function *F*(*d*). This means that the probability *p* that any given agent will estimate the value of the initiative to be positive when it is in fact negative (*V*\* < 0) is *p* = 1 − *F*(−*V*\*).[8](#EN0008) The probability *P* that at least one of the agents will incorrectly estimate the value to be positive is *P* = 1 − (1 − *p*)^*N* = 1 − *F*(−*V*\*)^*N*.\n\nFor the case with 5 agents and *d* as a random error drawn from a normal distribution with standard deviation 1 and mean zero, the probability that any initiative will be undertaken (regardless of whether it is a good idea or not) is high even when the true value is quite negative, and the probability rises steeply as the true value of the initiative approaches zero from below (Figure [1](/pmc/articles/PMC4959137/figure/F0001/)).\n\n[Figure 1](/pmc/articles/PMC4959137/figure/F0001/). The probability of an initiative being undertaken as a function of the actual value, *V*\*, for five agents and assuming normally distributed errors with variance 1 (these assumptions will be used in all subsequent figures except when otherwise noted). Note that 50% probability of action occurs near a value of −1: a strong unilateralist bias exists.\n\nFor mildly negative values of the initiative there is nearly always someone who misjudges the value of the initiative and undertakes it. There is no problem for positive initiatives since even if one or two agents are overly cautious, it is very likely that somebody will undertake the initiative, which is the optimal result (Figure [2](/pmc/articles/PMC4959137/figure/F0002/)).\n\n[Figure 2](/pmc/articles/PMC4959137/figure/F0002/). The expected payoff for naive agents (who act if and only if their evaluation of the initiative is positive) and ideal omniscient estimators who are assumed to know the true value.\n\nIncreasing the number of agents capable of undertaking the initiative also exacerbates the problem: as *N* grows, the likelihood of someone proceeding incorrectly increases monotonically towards 1.[9](#EN0009) The magnitude of this effect can be quite large even for a relatively small number of agents. For example, with the same error assumptions as above, if the true value of the initiative *V*\* = −1 (the initiative is undesirable), then the probability of erroneously undertaking the initiative grows rapidly with *N*, passing 50% for just four agents (Figure [3](/pmc/articles/PMC4959137/figure/F0003/)).\n\n[Figure 3](/pmc/articles/PMC4959137/figure/F0003/). Probability of an erroneous action in the case of *V*\* = −1 for different numbers of agents.
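As a quick check on the formula above, the following Python sketch evaluates *P* = 1 − *F*(−*V*\*)^*N* for standard normal errors. It is an illustration added here, not material from the paper; the function name and the particular values printed are arbitrary choices.

```python
# Illustrative sketch (not from the paper): probability that at least one of N
# agents estimates a negative-value initiative to be positive, when each agent's
# estimate is the true value V* plus independent standard normal noise.
from scipy.stats import norm

def prob_initiative_undertaken(true_value: float, n_agents: int) -> float:
    """Return 1 - F(-V*)^N, the chance that at least one estimate exceeds zero."""
    p_single = 1.0 - norm.cdf(-true_value)      # one agent's estimate comes out positive
    return 1.0 - (1.0 - p_single) ** n_agents   # at least one of the N agents acts

# Setting of Figure 1: five agents, varying the true value V*.
for v in [-3.0, -2.0, -1.0, -0.5, 0.0]:
    print(f"V* = {v:+.1f}, N = 5: P(action) = {prob_initiative_undertaken(v, 5):.2f}")

# Setting of Figure 3: V* fixed at -1, varying the number of agents.
for n in [1, 2, 4, 8, 16]:
    print(f"V* = -1.0, N = {n:2d}: P(erroneous action) = {prob_initiative_undertaken(-1.0, n):.2f}")
```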
There are five features of the unilateralist’s curse that need to be emphasized.\n\nFirst, in cases where the curse arises, the risk of erroneously undertaking an initiative is not caused by self-interest. In the model, all agents act for the common good; they simply disagree about the contribution of the initiative to the common good.[10](#EN0010)\n\nSecond, though the curse could be described as a group-level bias in favor of undertaking initiatives, it does not arise from biases in the individual estimates of the value that would result from undertaking the initiative. The model above assumes symmetric random errors in the estimates of the true value.[11](#EN0011)\n\nThird, there is a sense in which the unilateralist’s curse is the obverse of Condorcet’s jury theorem.[12](#EN0012) The jury theorem states that the *average* estimate of a group of people with above 50% likelihood of guessing correctly and with uncorrelated errors will tend to be close to the correct value, and will tend to move closer to the true value as the size of the group increases. But what is also true, and relevant to the argument in this paper, is that the *highest* estimate will tend to be above the true value, and the expected overestimation of this highest estimate *increases* with the size of the group. In the cases we are interested in here, it is the highest estimate that will determine whether an initiative is undertaken, not the average estimate.\n\nFourth, though we have chosen to illustrate the curse using initiatives that are (probably) irreversible, the problem can arise in other cases too. The problem becomes sharper if the initiative is irreversible, but even for actions that can be undone the problem remains in a milder form. Resources will be wasted on undoing erroneous initiatives, and if the bad consequences are not obvious they might occur before the problem is noticed. There might even be a costly tug-of-war between disagreeing agents.\n\nFinally, fifth, though we have thus far focused on cases where a number of agents can undertake an initiative and it matters only whether at least one of them does so, a similar problem arises when any one of a group of agents can *spoil* an initiative—for instance, where universal action is required to bring about an intended outcome. Consider the following example:\n\n> In Norse mythology, the goddess Hel of the underworld promised to release the universally beloved god Baldr if all objects, alive and dead, would shed a tear for him. All did, except the giantess Þökk.
> The god was forced to remain in the underworld.[13](#EN0013)\n\nSimilar situations can arise when all the actors in a play must come together in order for a rehearsal to take place, when all members of a committee must attend a meeting in order for it to be quorate, or when all signatories to an international treaty must ratify it in order for it to come into effect. The United Nations Security Council frequently provides examples of unilateral spoiling. The five permanent members of the Council—currently China, France, Russia, the United Kingdom and the United States—each possesses the power to veto the adoption of any non-procedural resolution. In the early years of the Council, this veto power was frequently employed by the Soviet Union to block applications for new membership of the United Nations. More recently, it has been used by the United States to block resolutions criticizing Israel, and by Russia and China to block resolutions on the Syria conflict.[14](#EN0014) While some of these vetoes presumably reflect differences in the national interests of the council members, others may reflect different estimations of the contribution that a resolution would make to the common good. Certainly, considerations relating to the common good are often invoked in their defence. For instance, the United States’ 2011 veto of a draft resolution condemning Israeli settlements in Palestinian territory was defended on the grounds that the resolution would be an impediment to peace talks.[15](#EN0015)\n\nThese cases of unilateral spoiling or abstinence are formally equivalent to the original unilateralist curse, with merely the sign reversed.\n\nSince the problem in these cases is the result of *unilateral* abstinence, it seems appropriate to include them within the scope of the unilateralist’s curse. Thus, in what follows, we assume that the unilateralist’s curse can arise when each member of a group can unilaterally undertake *or spoil* an initiative (though for ease of exposition we sometimes mention only the former case).\n\n3.  Lifting the Curse\n---------------------\n\nLet a unilateralist situation be one in which each member of a group of agents can undertake or spoil an initiative regardless of the cooperation or opposition of other members of the group. We will say that a policy would lift the unilateralist’s curse if universal adherence to it by all agents in unilateralist situations should be expected (*ex ante*) to eliminate any surfeit or deficit of initiatives that the unilateralist’s curse might otherwise produce.\n\n> *The Principle of Conformity*\n> \n> When acting out of concern for the common good in a unilateralist situation, reduce your likelihood of unilaterally undertaking or spoiling the initiative to a level that *ex ante* would be expected to lift the curse.\n\nIn the following subsections we will explore various ways in which one might bring oneself into compliance with this principle.[16](#EN0016) These can be organized around three models: collective deliberation, epistemic deference, and moral deference. The three models are applicable in somewhat different circumstances, and their suitability might depend on the type of agents involved.\n\nIt should be noted that, though some of the methods discussed below do not require agents to be aware of the nature of the situation, most hinge on agents recognizing that they are in a unilateralist situation.
However, this is not to say that agents must be able to identify the other parties to the unilateralist situation: this is necessary for some but not all of our proposed solutions.\n\n### 3.1. The Collective Deliberation Model\n\nA first line of defense against the unilateralist’s curse could be to share data and reasoning between agents in the hope that this will resolve their disagreement about the desirability of proceeding with the contested initiative.\n\nIn some cases, however, extensive information sharing among all potential decision-making agents is impractical. Communication is often costly and time-consuming, and participants in a unilateralist situation may not be able to identify one another. Furthermore, in certain cases information disclosure might itself be the initiative whose desirability is in dispute, such as when information hazards are associated with disseminating relevant data.[17](#EN0017)\n\nMoreover, even when information is fully shared, a consensus can remain elusive. Disagreements about the net value of undertaking some project often persist after decision-makers have been thoroughly briefed on all obviously relevant and easily communicable facts and after having had opportunities to engage in joint deliberation.\n\nBecause complete information sharing may not be practical or desirable, and because it may not produce consensus when it does occur, the principle of conformity requires us to explore additional models for lifting the unilateralist’s curse.\n\n### 3.2. The Meta-rationality Model\n\nOne approach would be to appeal to each agent’s reflective rationality. A party to an epistemic disagreement should ideally reflect on the fallibility of their own judgment and adjust their posterior probability to take into account the fact that other agents have different opinions.\n\nRobert Aumann has shown that rational Bayesian agents with identical priors and common knowledge of each other’s posteriors (and of each other’s rationality) must have identical posterior probabilities.[18](#EN0018) Disagreement between such agents is impossible. This sounds like good news: if all agents make the same estimate of the benefits of action, the unilateralist curse is lifted.\n\nThere is, however, some skepticism about the relevance of Aumann’s result for practical cases of disagreement.[19](#EN0019) The assumption of identical priors, in particular, is problematic.[20](#EN0020) Furthermore, the same challenges that can make data sharing difficult can also make it difficult to make each agent’s honest posterior probability estimates of the value of the initiative common knowledge among all agents.\n\nIt turns out, however, that sufficiently rational agents can manage the curse even without communication. In the literature on the winner’s curse it has been argued that rational expected utility-maximizing bidders will not be affected by it.[21](#EN0021) Rational agents will take the winner’s curse into account and adjust their bids accordingly. This is known as *bid shading*.
Rational agents place bids that are lower than their *ex ante* expectation of the value of the good, but equal to their expectation of the value of the good conditional upon them winning the auction.\n\nThe counterpart of this response would be for agents in a unilateralist situation to estimate the value of the initiative conditional on the agent’s first-order estimate of the initiative’s value being the highest (or, in spoiler cases, the lowest).\n\nIn other words, on finding themselves in a unilateralist situation, each rational agent will initially estimate the value of the initiative based on his prior probability distribution. He will then take into account the case where his decision is decisive. In the case where agents can unilaterally undertake an initiative, the agent will condition on the situation in which he is the most sanguine and everybody else thinks the action should not be done. (In spoiler cases, the agent conditions on the situation in which he is the most pessimistic and everybody else thinks the initiative should be undertaken.) He then creates a posterior distribution of value that is used to make an adjusted decision.\n\n![equation image](/pmc/articles/PMC4959137/bin/tsep_a_1108373_m0001.jpg \"equation image\")\n\nwhere “win” represents being the deciding agent. Note that this typically requires knowing or estimating the number of other agents.\n\n*Example*\n\nIn the simple case where the agent assumes all other agents have the same priors and are acting independently, only differing in the noisy data about *V*\* they have received:\n\n![equation image](/pmc/articles/PMC4959137/bin/tsep_a_1108373_m0002.jpg \"equation image\")\n\nwhere *F*(*V*) is the cumulative distribution function of the errors. The posterior distribution of *V*\* becomes:\n\n![equation image](/pmc/articles/PMC4959137/bin/tsep_a_1108373_m0003.jpg \"equation image\")\n\nwhere *K* is a normalization constant. The posterior action should then be based on the expectation *E*(*V*\* | win).\n\nIf the agents choose to act when the received data is above a fixed threshold *T*, *V*\* is normally distributed with zero mean and variance 1, and they get estimates of *V*\* with normal noise (again with mean zero and variance 1), then the optimal threshold is the one that maximizes the expected value (Figure [4](/pmc/articles/PMC4959137/figure/F0004/)):\n\n![equation image](/pmc/articles/PMC4959137/bin/tsep_a_1108373_m0004.jpg \"equation image\")\n\n[Figure 4](/pmc/articles/PMC4959137/figure/F0004/). The optimal threshold *T*opt(*N*) for action as a function of the number of agents. Agents who only act if the perceived value of the initiative is higher than *T*opt(*N*) will maximize their expected (joint) result.\n\n*T*opt(*N*) increases rapidly with *N*, reaching 0.54 for two agents and 1 for four agents: even for a small group it is rational to be far more cautious than in the single-agent case.
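A minimal numerical reconstruction of this threshold computation is sketched below. It is an added illustration under the stated assumptions (standard normal prior on *V*\* and standard normal estimate noise), not code from the paper: it approximates the expected payoff of a common threshold *T* by integrating the true value against the probability that at least one estimate exceeds *T*, and then searches a grid of candidate thresholds.

```python
# Sketch (a reconstruction for illustration, not the paper's code): approximate the
# optimal common action threshold T for N agents, assuming V* ~ N(0, 1), estimates
# equal to V* plus N(0, 1) noise, and the rule "act iff your estimate exceeds T".
# The initiative is undertaken if at least one agent's estimate exceeds T.
import numpy as np
from scipy.stats import norm

def expected_payoff(threshold: float, n_agents: int) -> float:
    """Integrate v * pdf(v) * P(max estimate > threshold | V* = v) over the prior."""
    v = np.linspace(-6.0, 6.0, 2001)                       # integration grid for V*
    dv = v[1] - v[0]
    p_action = 1.0 - norm.cdf(threshold - v) ** n_agents   # at least one estimate above T
    return float(np.sum(v * norm.pdf(v) * p_action) * dv)

def optimal_threshold(n_agents: int) -> float:
    candidates = np.linspace(-1.0, 3.0, 401)               # grid search over thresholds
    payoffs = [expected_payoff(t, n_agents) for t in candidates]
    return float(candidates[int(np.argmax(payoffs))])

for n in [1, 2, 4, 8]:
    print(f"N = {n}: approximately optimal threshold T = {optimal_threshold(n):.2f}")
```

Under these assumptions the grid search should land close to the thresholds reported above (around 0.5 for two agents and around 1 for four).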
Note that in this case all agents are aware of the prior distribution, noise distribution, independence, and that the other agents are using this strategy (Figure [5](/pmc/articles/PMC4959137/figure/F0005/)).[22](#EN0022)\n\n![Figure 5](/pmc/articles/PMC4959137/bin/tsep_a_1108373_f0005_oc.jpg)\n\n[Figure 5](/pmc/articles/PMC4959137/figure/F0005/): The expected payoff for different actual values of the initiative for alternative ways of handling the unilateralist’s curse. Using the optimal individual threshold *T*opt(5) reduces the losses significantly.\n\nOne might raise questions about the practical applicability of this sophisticated Bayesian approach, however. Even if rational Bayesian agents would agree, humans are at best approximations of rational Bayesian agents and they have far more limited mental computation power—even when leaving out biasing factors.[23](#EN0023) Value in practical cases is also seldom in the form of easily manipulable and comparable scalar quantities. Hence implementing the sophisticated Bayesian approach to lifting the unilateralist’s curse might typically be infeasible.[24](#EN0024)\n\n### 3.3. The Moral Deference Model\n\nSuppose a unilateralist situation exists and that it is not feasible for all agents to lift the curse through communication and adjustment of beliefs. It might nevertheless be possible for the group to lift the curse if each agent complies with a moral norm which reduces the likelihood that he acts unilaterally, for example, by assigning decision-making authority to the group as a whole or to one individual within it. We call this the moral deference model.\n\nIn contrast to the two models presented above, the moral deference model does not require agents to defer to the group in forming their beliefs regarding the value of the initiative. However, it does require them to defer to the group in deciding whether to act on those beliefs. A slogan for this approach could be “comply in action, defy in thought.”\n\nThere are many norms such that universal compliance with the norm by a group of agents would lift the unilateralist’s curse. For example, a norm that assigned decision-making authority to an arbitrary member of the group would lift it. Consider the norm: when in a unilateralist situation, if you are the tallest person able to undertake the initiative, then undertake it if and only if you believe its value exceeds zero; if you are not the tallest person able to undertake the initiative, do not undertake it.\n\nUniversal compliance with this norm would prevent the unilateralist’s curse from arising in the sense that, in the absence of any bias towards or against action in the individual members of the group (and thus in the group’s tallest member), this norm will produce no group-level bias towards or against the initiative.[25](#EN0025) The payoffs associated with this tallest-decides norm in a five-agent situation are depicted in Figure [6](/pmc/articles/PMC4959137/figure/F0006/) below. The tallest-decides norm, however, has several epistemically and pragmatically unattractive features. 
For example, it does not protect against biases or errors that might impair the judgment of the group’s tallest member. Furthermore, it is very unlikely that such a norm would gain wide acceptance.\n\n![Figure 6](/pmc/articles/PMC4959137/bin/tsep_a_1108373_f0006_oc.jpg)\n\n[Figure 6](/pmc/articles/PMC4959137/figure/F0006/): Expected payoff for different actual values of the initiative for alternative ways of handling the unilateralist curse. The tallest-decides case achieves a significant reduction of loss, nearly reaching the payoff of the more complex Bayesian threshold method.\n\nFortunately, there are other norms that could lift the curse and may lack these unattractive features. One norm would recommend that agents conform to the rules of existing institutions that militate against unilateral action:\n\n(1) When in a unilateralist’s situation, defer to existing institutions, such as laws or customs, if universal deference to those institutions would lift the unilateralist’s curse.\n\nNational and international laws often militate against the unilateralist’s curse, for example by specifying that decisions must be made democratically or by individuals or institutions that have been given special authority over a particular realm of decision-making. In other cases, there are informal conventions that may do the job. For example, following the publication early last decade of two studies thought by some to aid bioweapons development,[26](#EN0026) a group of scientific journals agreed to introduce screening procedures to identify papers containing information that is especially prone to misuse and to seek external advice on the publication of such papers.[27](#EN0027) Though these procedures lacked legal status, compliance with them by journals may have helped lift the curse.\n\nOne virtue of (1) is that, since it simply reinforces existing institutional norms which may already command significant support, it may be relatively easy for it to achieve wide acceptance. However, (1) will not lift the curse in all cases. In many areas with an international dimension, for example, there are no relevant international laws and deference to national laws would merely create a new unilateralist situation between nations: the nation that evaluates the initiative most positively is most likely to allow it. Moreover, (1) may sometimes recommend deferring to biased procedures or agents.\n\nIt might be possible for a group of agents to lift the curse even in cases where (1) fails by complying with a different norm, one that promotes the development of and compliance with a new procedure for group decision-making. This approach was adopted by a large group of American microbiologists in mid-1974 when they agreed to a moratorium on recombinant DNA research until such time as the safety concerns that it raised could be jointly discussed and resolved. 
The moratorium held until the now-famous Asilomar Conference, which took place in February 1975 and resulted in a broad-based agreement on guidelines regarding the conditions under which recombinant DNA research ought to proceed.[28](#EN0028)\n\nThe recombinant DNA moratorium and guidelines were developed via consensus, but another approach would be to employ a voting procedure. For example, suppose all agents faced with a unilateralist situation complied with the norm:\n\n(2) When in a unilateralist’s situation, promote the holding of a majority vote among those capable of undertaking the initiative. If the vote takes place, then (a) defer to its verdict, and (b) encourage others to do likewise.\n\nUniversal compliance with this norm is likely to lift the curse. Since it is effectively using the median estimate it is robust to outliers. It will also tend to reduce systematic bias at the group level provided that individual biases are at least partially independent of one another.[29](#EN0029) And since majority voting is a common and widely accepted method for group decision-making, this norm would have relatively good prospects of gaining wide acceptance.\n\nCompliance with norms (1) and (2) will, however, lift the unilateralist’s curse only when a high degree of communication and coordination is possible. There are other norms whose universal adoption could lift the curse even in the absence of communication and coordination. Consider the norm:\n\n(3) When in a unilateralist situation, bring about the outcome if and only if you judge that a majority vote among those capable of undertaking the initiative would yield a majority in favor of doing so.\n\nInsofar as each individual capable of undertaking the initiative makes an accurate prediction of the views of all others, universal adoption of this norm will eliminate any group-level bias due to the unilateralist’s curse. Even if predictions of the views of others are inaccurate (e.g. because each agent overestimates the extent to which others share her views), universal adoption of this principle can still be expected to somewhat mitigate the unilateralist’s curse. It will tend to reduce the likelihood that those who value the initiative most favorably will undertake it, provided that these agents realize they are at the optimistic end of the spectrum.[30](#EN0030)\n\nFigure [7](/pmc/articles/PMC4959137/figure/F0007/) depicts, for a five-agent case, the expected payoffs associated with two of the norms discussed in this section—tallest decides, and the actual majority vote (norm (2))—and it compares these with other strategies described in Section [3.2](#S0005) above. Under our assumptions, the majority vote does rather well—it is close to the maximum available payoff represented by the omniscient case.\n\n![Figure 7](/pmc/articles/PMC4959137/bin/tsep_a_1108373_f0007_oc.jpg)\n\n[Figure 7](/pmc/articles/PMC4959137/figure/F0007/): The expected payoff associated with universal compliance with six different strategies at different actual values of the initiative. 
The fully shared information strategy consists in pooling the information between the agents and acting on the group’s best joint estimate of *V*\*;[33](#EN0033) this requires maximal communication. Despite the lack of communication in tallest decides and threshold setting, the agents achieve an average outcome close to the cases where communication is possible.\n\nHowever, in the real world, different strategies will work well in different cases. It is thus likely that the best norm to adopt, under the moral deference model, would be some composite of simple norms such as (1)–(3). For example, a group might adopt a norm that specifies that the group should act as specified by (1), (2) or (3) depending on what laws and conventions already exist, what forms of communication and coordination among group members are possible, and how costly such communication and coordination is likely to be, among other factors.\n\nWe do not wish to commit ourselves to norms (1)–(3) as the best building blocks from which to construct such a composite norm. We believe that each of (1)–(3) is at least a plausible candidate for inclusion in a composite norm. However, there may be other norms that would more fully lift the curse or which have other advantages over (1)–(3). For example, there are well-known problems with majority voting which should perhaps lead us to prefer a different voting procedure under norms (2) and (3).\n\nOne other set of concerns regarding norms (2) and (3) warrants mentioning. Both of these norms involve holding a vote (real or hypothetical) *among agents capable of undertaking the initiative in question*. But it might be argued, on either epistemic or moral grounds, that any actual or hypothetical vote should include more individuals than merely those capable of undertaking the initiative. For example, perhaps the vote should include all whose capacity to evaluate the initiative passes some threshold of epistemic competence. Or perhaps, on moral grounds, the electorate should be expanded to include all individuals who will be affected by the initiative. Consider a case in which there are three agents who could undertake an initiative and two of the three judge that it would be best to do so. However, millions of others will be affected by the initiative and almost all of them judge that the initiative has net disvalue. In this case, it might seem morally preferable to hold (or imagine) a vote among all who will be affected by the initiative rather than limiting the vote to the three agents capable of undertaking it.\n\nA more specific problem with excluding individuals who are incapable of undertaking the initiative is that this might seem to skew the vote. There might be some agents who are not capable of undertaking the initiative, but could have been capable of doing so; they are incapable only because they previously judged that undertaking the initiative would be a bad idea and thus ceased to develop the necessary capacities. Excluding these agents from a vote might seem to skew the vote in favor of those who deem the initiative to be valuable and who have thus sought to develop the capacities necessary to undertake it. Thus, limiting the vote to those capable of undertaking the initiative may be epistemically, as well as morally, problematic.\n\nAt the same time, it might be argued that some agents capable of undertaking the initiative should be *excluded* from the vote. 
Suppose that each of five nations is capable of undertaking some geoengineering project with worldwide consequences. Four agree to hold a majority vote among the five nations and to abide by the outcome of that vote. The fifth wishes to take part in the vote but is resolved to press ahead with the project regardless of the outcome of the vote. It might seem doubtful whether the first four nations should include the fifth in the vote. Arguably, deferring to a majority vote in unilateralist cases involves making a sacrifice. It involves giving away some of one’s autonomous decision-making authority. It might seem that it would be unfair for the fifth nation to exert an influence over the decisions of others by participating in a vote without also being prepared to make the same sacrifice that the others are prepared to make. This may count in favor of excluding the fifth nation. Excluding the fifth nation might also help to incentivize deference to majority votes in unilateralist situations.\n\nThere are thus arguments both for expanding and for restricting the group of agents given a vote in norms (2) and (3). We cannot assess these arguments here. We mention them only to flag them as topics for further discussion. However, it is worth noting that including all and only those agents who are capable of undertaking an initiative does at least have the virtue of picking out a group that would, in many cases, be relatively easy to identify.\n\nWe should end this section on the moral deference model with an important clarification: the model does not rely on a commitment to any particular moral theory. Proponents of a range of different moral theories could accept norms of the sort described above, though they would assign different statuses to them.\n\nA rule consequentialist, for example, might treat these norms as genuine moral principles—principles that determine which acts are right and which are wrong. According to one formulation of rule consequentialism, a rule of action is a genuine moral principle just in case it is part of the set of rules of action whose general acceptance can be expected to have consequences as good as the general acceptance of any alternative set of rules.[31](#EN0031) Given the risk of premature or erroneous action created by the unilateralist’s curse and the likelihood that most agents are not sophisticated enough belief-formers to apply our meta-rationality model, it is plausible that the optimal set of rules will contain a norm of the sort that we have discussed.\n\nOn some other moral theories, these norms would serve not as genuine moral principles, but as guidelines for helping agents to comply with such principles. Adherents of many moral theories, both consequentialist and deontological, could accept something like the following moral principle:\n\nAgents have moral reasons to undertake an initiative if and only if that initiative would contribute to the common good, and to spoil an initiative if and only if that initiative would detract from the common good.\n\nNorms of the sort discussed above could help agents to better comply with this principle in unilateralist situations.[32](#EN0032)\n\n\n4.  
Discussion\n--------------\n\nWe proposed:\n\n\n*The Principle of Conformity*\n\n\nWhen acting out of concern for the common good in a unilateralist situation, reduce your likelihood of unilaterally undertaking or spoiling the initiative to a level that *ex ante* would be expected to lift the curse.\n\nWe also outlined three different ways in which agents who find themselves in unilateralist situations might comply with this principle. We do not claim that any one of these models is superior to the others in all situations. Which model should be adopted will depend, among other things, on the sophistication of the agents, the degree of communication and coordination that is possible, and the nature of existing laws and conventions bearing on the decision.\n\nIn this section we discuss a concern that might be raised regarding our principle.\n\nAdoption of the principle of conformity is meant to make things better. Yet if we “backtest” the principle on historical experience, it is not at all clear that universal adoption of the principle of conformity would have had a net positive effect. It seems that, quite often, what is now widely recognized as important progress was instigated by the unilateral actions of mavericks, dissidents, and visionaries who undertook initiatives that most of their contemporaries would have viewed with hostility and that existing institutions sought to suppress. The benefits of iconoclasm and defiance of authority have been stated especially forcefully in the Enlightenment tradition and by proponents of scientific and technological progress. They are also evident in many cases of “whistleblowing.”\n\nConsider the case of Daniel Ellsberg, famous for leaking the Pentagon Papers, which revealed the hopelessness of the US military situation in Vietnam. Most of Ellsberg’s peers, who had the high-level security clearance required to access the relevant documents, presumably did not believe that leaking the material to the press would contribute positively to the common good. If Ellsberg had sought to follow the principle of conformity, for example by imagining a vote among all those in a position to leak the documents, it would seem he would have had to conclude that the documents ought not be leaked. This might seem an undesirable outcome.\n\nIt is possible that the appearance that unilateralism has historically been mostly for the good is illusory. Historical unilateralism might be more salient when it worked out well than when it worked out badly, perhaps because successes have been more extreme but less frequent than the failures.\n\nMoreover, it may be that, in some cases where the principle of conformity appears to recommend a net harmful course of action, this implication can be avoided by attending to how the group of (imaginary or actual) voters or epistemic peers is defined. For example, if one allows that these groups might be defined more broadly than the group of agents capable of undertaking an action, it may be possible to avoid the implication that Ellsberg should have refrained from whistleblowing. (Suppose that many “outsiders” would have voted in favor of his releasing the information.)\n\nHowever, even if unilateralism *has* historically provided a net benefit to humanity, this need not undermine our argument. 
The claim that the unilateralist curse is an important phenomenon and that we have reason to lift it is consistent with the claim that the curse has provided a net benefit to humanity.\n\nThe main effect of the curse is to produce a tendency towards unilateral initiatives, and if it has historically been the case that there have been other factors that have tended to strongly inhibit unilateral initiatives, then it could be the case that the curse has had the net effect of moving the overall amount of unilateralism closer to the optimal level. For example, it might be argued that the scholars of past ages were usually far too deferential to authority, for reasons independent of the factors discussed in this paper. Their failure to take into account our arguments might then have had the salutary effect of not further inhibiting whatever propensity remained to promote new thoughts.\n\n5.  Concluding Thoughts\n-----------------------\n\nWe have described a moral analog of the winner’s curse. The unilateralist’s curse arises when each of a group of agents can, regardless of the opposition of others, undertake or spoil an initiative that has significant effects on others. In such cases, if each agent decides whether to undertake (or spoil) the initiative based on his own independent naive assessment of its value, there will be a group-level bias towards undertaking (spoiling) the initiative. Importantly, this effect arises even if all the agents are assumed to be motivated solely by concern for the common good.\n\nWe proposed a principle—the principle of conformity—which instructs agents faced with a unilateralist situation to reduce their likelihood of unilaterally undertaking (or spoiling) the initiative. We then outlined three models for accomplishing this. They involved, respectively, (1) sharing information and reasoning before forming one’s evaluation of the initiative, (2) adjusting one’s evaluation in the light of the curse, and (3) deferring to the group in making one’s decision.\n\nAs we acknowledged in the previous section, there may be considerations that militate against the principle of conformity. For example, if there is already a group-level bias against unilateralism, then compliance with the principle would exacerbate this bias. However, we maintain that there is a *prima facie* case for complying with the principle. Moreover, since the level of bias due to such other factors towards or against unilateralism presumably varies across different contexts, it is likely that there will be some contexts in which the *prima facie* case for complying with the principle will be decisive. Those will be the contexts in which the group-level bias due to the unilateralist’s curse is greater than any countervailing bias against unilateralism.\n\nIt is also possible that, at least within the domain of science, the principle of conformity is more relevant today than it was, say, prior to the Enlightenment. At that time, there was, plausibly, a strong bias against thinking and acting independently in intellectual matters, at least where this would involve diverging from the views of the Church. Since the Enlightenment, however, there may have been a significant weakening of this bias. Independence of thought and action is now more widely regarded as a virtue in scientists and other intellectuals. Honors and prizes are won based on claims to originality and precedence. There may now be no bias, or only a weak bias, against unilateralism in science. 
Thus, the risk posed by the unilateralist’s curse in scientific contexts may be greater now than ever.\n\nTo resist the unilateralist’s curse, one first has to become aware of when one is in a curse situation. We hope this paper will help achieve that.\n\nFunding\n-------\n\nThis work was supported by The Oxford Martin School; The Wellcome Trust [grant number WT087211].\n\nBiographies\n-----------\n\n• Nick Bostrom is Professor of Philosophy and Director of the Future of Humanity Institute at the University of Oxford, Oxford, UK.\n\n• Thomas Douglas is Senior Research Fellow in Philosophy at the University of Oxford, Oxford, UK.\n\n• Anders Sandberg is James Martin Research Fellow at the University of Oxford, Oxford, UK.\n\nNotes\n-----\n\n1 We assume that the common good is determined in part by the wellbeing of all persons and other morally significant individuals. However, we remain neutral on precisely how individual wellbeing determines the common good. For example, we do not commit ourselves to the view that the common good is simply aggregate individual wellbeing; we allow that the distribution of wellbeing might be relevant. We also allow that factors besides individual wellbeing might influence the common good. For example, some initiatives might possess intrinsic value that is independent of their contribution to wellbeing, and we allow that this intrinsic value might be one element in the common good.\n\n2 The Progressive Magazine ([1979](#CIT0026)).\n\n3 Rotblat ([1985](#CIT0028)).\n\n4 Bowden ([2007](#CIT0006)).\n\n5 Williams ([1995](#CIT0030)).\n\n6 Oye et al. ([2014](#CIT0024)), Gurwitz ([2014](#CIT0015)), and Oye and Esvelt ([2014](#CIT0023)).\n\n7 Thaler ([1988](#CIT0029)).\n\n8 The probability that a particular agent will be wrong about the sign of the value of the outcome is Pr(*V*\* + *d* > 0) if *V*\* < 0 and Pr(*V*\* + *d* < 0) if *V*\* > 0. This is equal to 1 − *F*(−*V*\*) if *V*\* < 0 and *F*(−*V*\*) if *V*\* > 0. The probability that out of *N* agents at least one will be wrong about the sign is (1 − *F*(−*V*\*)^*N*) if *V*\* < 0 and (1 − (1 − *F*(−*V*\*))^*N*) if *V*\* > 0. However, even if errors are symmetric around 0, the expected outcome is not: in the *V*\* < 0 case it is enough that one agent acts for a negative value to be obtained, while in the *V*\* > 0 case all agents have to err on the side of caution for them to lose out on a positive value. The expected value obtained by naive agents is hence *V*\*(1 − *F*(−*V*\*)^*N*). For positive values this is close to *V*\* (for unbiased error distributions), and we will hence focus on the *V*\* < 0 case where unilateral action is a problem.\n\n9 Theorem: As *N* grows, the likelihood *P* of at least one agent proceeding incorrectly increases monotonically towards 1 unless *F*(−*V*\*) = 1 (i.e. unless there is an upper limit on the size of the deviations and *V*\* is more negative than this limit, in which case no agent will ever make a sufficiently bad mistake). Proof: If *F*(−*V*\*) = 1, *P* = 0 for all *N*. Otherwise 0 ≤ *F*(−*V*\*) < 1, and hence *F*(−*V*\*)^*N* approaches 0 as *N* → ∞, so *P* = 1 − *F*(−*V*\*)^*N* increases monotonically towards 1.\n\n10 There will also, of course, be cases where an agent’s decision whether to undertake an initiative affects others but the agents are motivated by self-interest rather than the common good. 
In these cases, there are two possible reasons for getting the wrong decision, from the point of view of the common good: (i) self-interest and the common good come apart—that is, one is judged to have positive value and the other negative value—and (ii) the agent overestimates the “self-interest” value of the initiative.\n\n11 If the distribution of errors is *skewed* such that the typical estimate is higher than the true value, for instance due to optimism bias, then the risk of erroneous action is increased: in that case, even a single agent might be likely to overestimate the value of the initiative sufficiently to undertake it even when the true expectation value of the initiative is strongly negative. But this is unrelated to the curse. In the case of estimates skewed towards safety—that is, where there is pessimism bias—any tail distribution allowing mistaken action will still produce a growing probability of going ahead as *N* grows. There may be intermediary cases where the curse would helpfully serve to balance out an opposite effect arising from pessimism bias.\n\n12 Condorcet ([1785](#CIT0009)).\n\n13 Cf. Snorri Sturluson’s *Gylfaginning*.\n\n14 For a list of UN Security Council vetoes, see (accessed February 23, 2015).\n\n15 See, for example, MacFarquhar ([2011](#CIT0021)).\n\n16 In addition to adhering to the principle of conformity in particular unilateralist situations, one might also have some moral reason to work at a more general level to counteract the unilateralist’s curse. One way to do this would be to promote awareness and adoption of the principle of conformity. Another way would be to promote the development of institutions that make unilateralist situations less likely to arise, especially in regards to matters of global significance where the effects of the curse can be particularly devastating.\n\n17 Bostrom ([2011](#CIT0005)).\n\n18 Aumann ([1976](#CIT0003)).\n\n19 For discussion, see e.g. Christensen ([2009](#CIT0008)) and Feldman and Warfield ([2010](#CIT0013)).\n\n20 Attempts to weaken this assumption have been made; see Hanson ([2006](#CIT0016)).\n\n21 Cox and Isaac ([1984](#CIT0011)).\n\n22 In actual cases, the other agents are likely to have different priors and non-independent information, plus uncertainty about the number of agents. This possibility can be included in our top equation, at the price of a far more complex model that needs several priors.\n\n23 Including self-deception about how meta-rational they are (Cowen and Hanson [2001](#CIT0010)).\n\n24 Another way of looking at the problem is through the lens of game theory. Each agent needs to choose a (pure or mixed) strategy mapping their observations into actions, trying to maximize expected utility. We assume that all agents share a single utility function, i.e. they are all working for the common good. Since the agents know they are identical and will not be able to communicate, they will be using the same strategy. It can then be shown that if there is any local maximum in their utility function when they all use the same strategy *g*, then the general use of *g* is a Nash equilibrium. (See Armstrong ([2012](#CIT0001)) for further details.) The equilibrium can be non-strict under some conditions: a single agent is free to follow a different strategy without changing the outcome. Because the general use of *g* is a Nash equilibrium, no agent will be able to realize higher expected value by pursuing a different strategy. Note that optimal strategies can be probabilistic (i.e. mixed). 
For example, suppose the information each agent received is either a red light or a green light (indicating whether the initiative should be undertaken), but the green light is only correct 75% of the time. For multiple agents, always undertaking the initiative when a green light is received produces a worse outcome than only acting on a green light with a probability less than one. As the number of agents goes up, this probability should become lower, exploiting the fact that, in the case where the action does have a positive outcome, the likelihood of at least one agent acting remains high enough. Calculating the optimal probability requires an estimate of the number of agents and the probability of erroneous information, again requiring Bayesian priors. Game theory mainly tells us that a solution exists, but finding it requires the meta-rationality approach.\n\n25 The norm does not deal with “spoiler” cases, where one agent can prevent an initiative from taking place. However, an analogous norm could be adopted to lift the unilateralist curse in those cases.\n\n26 Jackson et al. ([2001](#CIT0018)) and Cello, Paul, and Wimmer ([2002](#CIT0007)).\n\n27 See Atlas et al. ([2003](#CIT0002)) and Journal Editors and Authors Group ([2003](#CIT0019)). This procedure was invoked in the wake of two recent studies which demonstrated how to make avian influenza transmissible by air between ferrets. See, for discussion, Perez ([2012](#CIT0025)), Faden and Karron ([2012](#CIT0012)) and Osterholm and Henderson ([2012](#CIT0022)).\n\n28 See, for a brief description of the case, Berg ([2008](#CIT0004)).\n\n29 The assumptions of the Condorcet theorem can be weakened in many ways. In particular, agent competence only has to be on average above 50% (Grofman, Owen, and Feld [1983](#CIT0014)), and a certain level of voting correlation does not reduce majority voting performance (Ladha [1992](#CIT0020)).\n\n30 For similar reasons, an analogous norm would tend to reduce the likelihood that those who evaluate an initiative most negatively will *spoil* it.\n\n31 See, for example, Hooker ([2002](#CIT0017)).\n\n32 A parallel can be drawn to one prominent justification for the authority of the law, due to Joseph Raz. That justification appeals to the same kind of consideration that we suggest could ground a norm against unilateral action: “The normal and primary way to establish that a person should be acknowledged to have authority over another person involves showing that the alleged subject is likely better to comply with reasons which apply to him (other than the alleged authoritative directives) if he accepts the directives of the alleged authority as authoritatively binding, and tries to follow them, than if he tries to follow the reasons which apply to him directly” (Raz [1994](#CIT0027), 214).\n\n33 In this case, the maximum likelihood estimate is simply the average of their individual estimates.\n\nAcknowledgment\n--------------\n\nWe would like to thank Toby Ord, Stuart Armstrong, and an audience at the Future of Humanity Institute, Oxford for their comments on earlier versions of this article.\n\nDisclosure statement\n--------------------\n\nNo potential conflict of interest was reported by the authors.\n\nReferences\n----------\n\n* Armstrong S. *Nash Equilibrium of Identical Agents Facing the Unilateralist’s Curse*. Oxford, UK: Future of Humanity Institute, Oxford University; 2012, pp. 1–5. 
* Atlas Ronald, Campbell Philip, Cozzarelli Nicholas R., Curfman Greg, Enquist Lynn, Fink Gerald, Flanagin Annette, et al. Statement on Scientific Publication and Security. *Science*. 2003;299(5610):1149.\n* Aumann Robert J. Agreeing to Disagree. *The Annals of Statistics*. 1976;4(6):1236–1239. doi: 10.1214/aos/1176343654.\n* Berg P. Meetings That Changed the World: Asilomar 1975: DNA Modification Secured. *Nature*. 2008;455(7211):290–291. doi: 10.1038/455290a.\n* Bostrom Nick. Information Hazards: A Typology of Potential Harms from Knowledge. *Review of Contemporary Philosophy*. 2011;10:44–79.\n* Bowden C. Our Wall. *National Geographic*. 2007;211:115–139.\n* Cello J., Paul A. V., Wimmer E. Chemical Synthesis of Poliovirus cDNA: Generation of Infectious Virus in the Absence of Natural Template. *Science*. 2002;297(5583):1016–1018. doi: 10.1126/science.1072266.\n* Christensen D. Disagreement as Evidence: The Epistemology of Controversy. *Philosophy Compass*. 2009;4(5):756–767. doi: 10.1111/phco.2009.4.issue-5.\n* Condorcet Marquis de. *Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix* [Essay on the Application of Analysis to the Probability of Majority Decisions]. Paris: Imprimerie Royale; 1785.\n* Cowen Tyler, Hanson Robin. *Disagreement as Self-Deception about Meta-Rationality*. 2001.\n* Cox James C., Isaac R. Mark. In Search of the Winner’s Curse. *Economic Inquiry*. 1984;22(4):579–592.\n* Faden Ruth R., Karron Ruth A. The Obligation to Prevent the Next Dual-Use Controversy. *Science*. 2012;335(6070):802–804.\n* Feldman R., Warfield T., editors. *Disagreement*. Oxford: Oxford University Press; 2010.\n* Grofman Bernard, Owen Guillermo, Feld Scott L. Thirteen Theorems in Search of the Truth. *Theory & Decision*. 1983;15:261–278.\n* Gurwitz D. Gene Drives Raise Dual-Use Concerns. *Science*. 2014;345(6200):1010. doi: 10.1126/science.345.6200.1010-b.\n* Hanson Robin. Uncommon Priors Require Origin Disputes. *Theory and Decision*. 2006;61(4):319–328. doi: 10.1007/s11238-006-9004-4.\n* Hooker B. *Ideal Code, Real World*. Oxford: Clarendon Press; 2002. doi: 10.1093/0199256578.001.0001.\n* Jackson R. J., Ramsay A. J., Christensen C. D., Beaton S., Hall D. F., Ramshaw I. A. Expression of Mouse Interleukin-4 by a Recombinant Ectromelia Virus Suppresses Cytolytic Lymphocyte Responses and Overcomes Genetic Resistance to Mousepox. *Journal of Virology*. 2001;75(3):1205–1210. doi: 10.1128/JVI.75.3.1205-1210.2001.\n* Journal Editors and Authors Group. Uncensored Exchange of Scientific Results. *Proceedings of the National Academy of Sciences*. 2003;100(4):1464.\n* Ladha Krishna K. The Condorcet Jury Theorem, Free Speech, and Correlated Votes. *American Journal of Political Science*. 1992;36(3):617–634. doi: 10.2307/2111584.\n* MacFarquhar N. U.S. Blocks Security Council Censure of Israeli Settlements. *The New York Times*. February 18, 2011 (accessed February 23, 2015).\n* Osterholm Michael T., Henderson Donald A. Life Sciences at a Crossroads: Respiratory Transmissible H5N1. *Science*. 2012;335(6070):801–802.\n* Oye K. A., Esvelt K. Gene Drives Raise Dual-Use Concerns—Response. *Science*. 2014;345(6200):1010–1011.\n* Oye K. A., Esvelt K., Appleton E., Catteruccia F., Church G., Kuiken T., Lightfoot S. B., McNamara J., Smidler A., Collins J. P. Regulating Gene Drives. *Science*. 2014;345(6197):626–628. doi: 10.1126/science.1254287.\n* Perez Daniel R. H5N1 Debates: Hung up on the Wrong Questions. *Science*. 2012;335(6070):799–801.\n* The Progressive Magazine. The H-Bomb Secret: How We Got It and Why We’re Telling It. *The Progressive Magazine*. November 1979 (full issue).\n* Raz Joseph. ‘Authority, Law, and Morality’, in his *Ethics in the Public Domain*. Oxford: Clarendon Press; 1994.\n* Rotblat J. Leaving the Bomb Project. *Bulletin of the Atomic Scientists*. 1985;41(7):16–19.\n* Thaler Richard H. Anomalies: The Winner’s Curse. *The Journal of Economic Perspectives*. 1988;2(1):191–202.\n* Williams K., Australia Bureau of Resource Sciences, CSIRO Division of Wildlife and Ecology. *Managing Vertebrate Pests: Rabbits*. Canberra: Australian Government Publishing Service; 1995.\n\n---\n\nArticles from Social Epistemology are provided here courtesy of **Taylor & Francis**\n\n---", "url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4959137/", "title": "The Unilateralist’s Curse and the Case for a Principle of Conformity", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2016-07-02T22:00:00Z", "authors": ["Nick Bostrom", "Thomas Douglas", "Anders Sandberg"], "summary": [], "id": "bd0faf78abe3523cbd7ea0aaa0319c40"} {"text": "Glob Policy. 2020 May; 11(3): 271–282. Published online 2020 Jan 24. 
doi: [10.1111/1758-5899.12786](//doi.org/10.1111%2F1758-5899.12786). PMCID: PMC7228299. PMID: [32427180](https://pubmed.ncbi.nlm.nih.gov/32427180).\n\nDefence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter\n====================================================================================================\n\nOwen Cotton‐Barratt, Max Daniel, and Anders Sandberg (University of Oxford)\n\nCopyright © 2020 The Authors. *Global Policy* published by Durham University and John Wiley & Sons Ltd. This is an open access article under the terms of the [http://creativecommons.org/licenses/by-nc/4.0/](https://creativecommons.org/licenses/by-nc/4.0/) License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.\n\nData Availability Statement: Data sharing is not applicable to this article as no new data were created or analysed.\n\nAbstract\n--------\n\nWe look at classifying extinction risks in three different ways, which affect how we can intervene to reduce risk. First, how does it start causing damage? Second, how does it reach the scale of a global catastrophe? Third, how does it reach everyone? In all of these three phases there is a defence layer that blocks most risks: First, we can prevent catastrophes from occurring. Second, we can respond to catastrophes before they reach a global scale. Third, humanity is resilient against extinction even in the face of global catastrophes. The largest probability of extinction is posed when all of these defences are weak, that is, by risks we are unlikely to prevent, unlikely to successfully respond to, and unlikely to be resilient against. We find that it’s usually best to invest significantly into strengthening all three defence layers. We also suggest ways to do so tailored to the classes of risk we identify. Lastly, we discuss the importance of underlying risk factors – events or structural conditions that may weaken the defence layers even without posing a risk of immediate extinction themselves.\n\n### Policy implications\n\n* We can usually best reduce extinction risk by splitting our budget between all defence layers.\n* We should include measures that reduce whole classes of risks, such as research uncovering currently unseen risk. We should also address risk factors that would not cause extinction themselves but weaken our defences, for example, bad global governance.\n* Future research should identify synergies between reducing extinction and other risks. 
For example, research on climate change adaptation and mitigation should assess how we can best preserve our ability to prevent, respond to, and be resilient against extinction risks.\n\nOur framework for discussing extinction risks\n---------------------------------------------\n\nHuman extinction would be a tragedy. For many moral views it would be far worse than merely the deaths entailed, because it would curtail our potential by wiping out all future generations and all value they could have produced (Bostrom, [2013](#gpol12786-bib-0013); Parfit, [1984](#gpol12786-bib-0045); Rees, [2003](#gpol12786-bib-0047), [2018](#gpol12786-bib-0048)).\n\nHuman extinction is also possible, even this century. Both the total risk of extinction by 2100 and the probabilities of specific potential causes have been estimated using a variety of methods including trend extrapolation, mathematical modelling, and expert elicitation; see Rowe and Beard ([2018](#gpol12786-bib-0050)) for a review, as well as Tonn and Stiefel ([2013](#gpol12786-bib-0059)) for methodological recommendations. For example, Pamlin and Armstrong ([2015](#gpol12786-bib-0044)) give probabilities between 0.00003% and 5% for different scenarios that could eventually cause irreversible civilisational collapse.\n\nTo guide research and policymaking in these areas, it may be important to understand what kind of processes could lead to our premature extinction. People have considered and studied possibilities such as asteroid impacts (Matheny, [2007](#gpol12786-bib-0038)), nuclear war (Turco et al., [1983](#gpol12786-bib-0065)), and engineered pandemics (Millett and Snyder‐Beattie, [2017](#gpol12786-bib-0039)). In this article we will consider three different ways of classifying such risks.\n\nThe motivating question behind the classifications we present is ‘How might this affect policy towards these risks?’ We proceed by identifying three phases in an extinction process at which people may intervene. For each phase, we ask how people could stop the process, because the different failure modes may be best addressed in different ways. For this reason we do not try to classify risks by the kind of natural process they represent, or which life support system they undermine (unlike e.g. Avin et al., [2018](#gpol12786-bib-0004)).\n\n### Three broad defence layers against human extinction\n\nAn event causing human extinction would be unprecedented, so it is likely to have some feature or combination of features that is without precedent in human history. Now, we see events with *some* unprecedented property all of the time – whether they are natural, accidental, or deliberate – and many of these will be bad for people. However, a large majority of those pose essentially zero risk of causing our extinction.\n\nWhy is it that some damaging processes pose risks of extinction, but many do not? By understanding the key differences we may be better placed to identify new risks and to form risk management strategies that attack their causes as well as other factors behind their destructive potential.\n\nWe suggest that much of the difference can usefully be explained by three broad defence layers (Figure [1](/pmc/articles/PMC7228299/figure/gpol12786-fig-0001/)):\n\n\n1. First layer: prevention. Processes – natural or human – which help people are liable to be recognised and scaled up (barring defeaters such as coordination problems). In contrast, processes which harm people tend to be avoided and dissuaded. 
In order to be bad for significant numbers of people, a process must either require minimal assistance from people, or otherwise bypass this avoidance mechanism.\n2. Second layer: response.[1](#gpol12786-note-1001) If a process is recognised to be causing great harm (and perhaps pose a risk of extinction), people may cooperate to reduce or mitigate its impact. In order to cause large global damage, it must impede this response, or have enough momentum that there is nothing people can do.\n3. Third layer: resilience. People are scattered widely over the planet. Some are isolated from external contact for months at a time, or have several years’ worth of stored food. Even if a process manages to kill most of humanity, a surviving few might be able to rebuild. In order to cause human extinction, a catastrophe must kill everybody, or prevent a long‐term recovery.\n\n\n![Figure 1](/pmc/articles/PMC7228299/bin/GPOL-11-271-g004.jpg)\n\n[Figure 1](/pmc/articles/PMC7228299/figure/gpol12786-fig-0001/): Three broad defence layers.\n\nThe boundaries between these different types of risk‐reducing activity aren’t crisp, and one activity may help at multiple stages. But it seems that often activities will help primarily at one stage. We characterise *prevention* as reducing the likelihood that catastrophe strikes at all; it is necessarily done in advance. We characterise *response* as reducing the likelihood that a catastrophe becomes a severe global catastrophe (at the level which might threaten the future of civilisation). This includes reducing the impact of the catastrophe after it is causing obvious and significant damage, but the response layer might also be bolstered by mitigation work which is done in advance. Finally, we characterise *resilience* as reducing the likelihood that a severe global catastrophe eventually causes human extinction.[2](#gpol12786-note-1002)\n\nSuccessfully avoiding extinction could happen at each of these defence layers. In the rest of the article we explore two consequences of this.\n\nFirst, we can classify damaging processes by the way in which we could stop them at the defence layers. In section [2](#gpol12786-sec-0009), we’ll look at a classification of risks by their origin: understanding different ways in which we could succeed at the prevention layer. In section [3](#gpol12786-sec-0016), we’ll look at the features which may allow us to block them at the response layer. In section 4, we’ll classify risks by the way in which we could stop them from finishing everybody. We conclude each section with policy implications.\n\nEach risk will thus belong to three classes – one per defence layer. For example, consider a terrorist group releasing an engineered virus that grows into a pandemic and eventually kills everyone. In our classification, we’ll call this prospect a *malicious risk* with respect to its origin; a *cascading risk* with respect to its scaling mechanism of becoming a global catastrophe; and a *vector risk* in the last phase we’ve called endgame. 
Second, we present implications of our framework distinguishing three layers. In section 5, we discuss how to allocate resources between the three defence layers, concluding that in most cases all of prevention, response, and resilience should receive substantial funding and attention. In section [6](#gpol12786-sec-0028), we highlight that risk management, in addition to monitoring specific hazards, must protect its defence layers by fostering favourable structural conditions such as good global governance.\n\n### Related work\n\nAvin et al. ([2018](#gpol12786-bib-0004)) have recently presented a classification of risks to the lives of a significant proportion of the human population. They classify such risks based on ‘critical systems affected, global spread mechanism, and prevention and mitigation failure’. Our framework differs from theirs in two major ways. First, with extinction risks we focus on a narrower type of risk. This allows us, in section [4](#gpol12786-sec-0021), to discuss what might stop global catastrophes from causing extinction, a question specific to extinction risks. Second, even where the classifications cover the same temporal phase of a global catastrophe, they are motivated by different questions. Avin et al. attempt a comprehensive survey of the natural, technological, and social systems that may be affected by a disaster, for example listing 45 critical systems in their second section. By contrast, we ask why a risk might break through a defence layer, and look for answers that abstract away from the specific system affected. For instance, in section [2](#gpol12786-sec-0009), we’ll distinguish between unforeseen, expected but unintended, and intended harms.\n\nWe believe the two classifications complement each other well. Avin and colleagues’ (2018) discussion of prevention and response failures is congenial to our section 6 on underlying risk factors. Their extensive catalogues of critical systems, spread mechanisms and prevention failures highlight the wide range of relevant scientific disciplines and stakeholders, and can help identify fault points relevant to a particularly large number of risks. Conversely, we hope that our coarser typology can guide the search for additional critical systems and spread mechanisms. We believe that our classification also usefully highlights different ways of protecting the same systems. For example, the risks from natural and engineered pandemics might best be reduced by different policy levers even if both affected the same critical systems and spread by the same mechanisms. Lastly, our classification can help identify risk management strategies that would reduce whole clusters of risks. For example, restricting access to dangerous information may prevent many risks from malicious groups, irrespective of the critical system that would be targeted.\n\nOur classification also overlaps with the one by Liu et al. ([2018](#gpol12786-bib-0035)), for example when they distinguish intended from other vulnerabilities or emphasise the importance of resilience. While the classifications otherwise differ, we believe ours contributes to their goal of digging ‘beyond hazards’ and surfacing a variety of intervention points.\n\nBoth the risks discussed by Avin et al. ([2018](#gpol12786-bib-0004)) and extinction risks by definition involve the risk of a massive loss of lives. This sets them apart from other risks where the adverse outcome would also have global scale but could be limited to less severe damage such as economic losses. Such risks are being studied by a growing literature on ‘global systemic risk’ (Centeno et al., [2015](#gpol12786-bib-0018)). Rather than reviewing that literature here, we’ll point out throughout the article where we believe it contains useful lessons for the study of extinction risks.\n\nFinally, it’s worth keeping in mind that extinction is not the only outcome that would permanently curtail humanity’s potential; see Bostrom ([2013](#gpol12786-bib-0013)) for other ways in which this could happen. A classification of these other *existential risks* is beyond the scope of this article, as is a more comprehensive survey of the large literature on global risks (e.g. Baum and Barrett, [2018](#gpol12786-bib-0008); Baum and Handoh, [2014](#gpol12786-bib-0010); Bostrom and Ćirković, [2008](#gpol12786-bib-0015); Posner, [2004](#gpol12786-bib-0046)).\n\nClassification by origin: types of prevention failures\n------------------------------------------------------\n\nAvoiding catastrophe altogether is the most desirable outcome. The origin of a risk determines how it passes through the prevention layer, and hence the kind of steps society can take to strengthen prevention (Figure [2](/pmc/articles/PMC7228299/figure/gpol12786-fig-0002/)).\n\n[Figure 2](/pmc/articles/PMC7228299/figure/gpol12786-fig-0002/): Classification of risks by origin.\n\n### Natural risks\n\nThe simplest explanation for how a risk bypasses our background prevention of harm-creating activities is that its origin lies outside of human control: a *natural risk*. Examples include a large enough asteroid striking the earth, or a naturally occurring but particularly deadly pandemic.\n\nWe can sometimes take steps to avoid natural risks. For example, we may be able to develop methods for deflecting asteroids. Preventing natural risks generally requires proactive understanding and perhaps detection, for instance scanning for asteroids on earth-intersecting orbits. 
Such risks share important properties with anthropogenic risks, as any explanation for how they might materialise must include an explanation of why the human‐controlled prevention layer failed.\n\n### Anthropogenic risks\n\nAll non‐natural risks are in some sense *anthropogenic*, but we can classify them further. Some may have a localised origin, needing relatively small numbers of people to trigger them. Others require large‐scale and widespread activity. In each case there are at least a couple of ways that it could get through the prevention layer.\n\nNote that there is a spectrum in terms of the number of people who are needed to produce different risks, so the division between ‘few people’ and ‘many people’ is not crisp. We might think of the boundary as being around one hundred thousand or one million people, and things close to this boundary will have properties of both classes. However, it appears to us that for many of the plausible risks the number required is either much smaller (e.g., an individual or a cohesive group of people such as a company or military unit) or much larger than this (e.g., the population of a major power or even the whole world), so the qualitative distinction between ‘few people’ and ‘many people’ (and the different implications of these for responding) seems to us a useful one.\n\nAlso potentially relevant are the knowledge and intentions of the people conducting the risky activity. They may be ignorant of or aware of the possible harm; if the latter, they may or may not intend it.[3](#gpol12786-note-1003)\n\n\n### Anthropogenic risks from small groups\n\nThe case of a risk where relatively few people are involved in triggering and they are unaware of the potential harm is an *unseen risk*.[4](#gpol12786-note-1004) This is likely to involve a new kind of activity; it is most plausible with the development of unprecedented technologies (GPP, [2015](#gpol12786-bib-0027)), such as perhaps advanced artificial intelligence (Bostrom, [2014](#gpol12786-bib-0014)), nanotechnology (Auplat, [2012](#gpol12786-bib-0002), [2013](#gpol12786-bib-0003); Umbrello and Baum, [2018](#gpol12786-bib-0066)), or high‐energy physics experiments (Ord et al., [2010](#gpol12786-bib-0042)).\n\nThe case of a localised unintentional trigger which was foreseen as a possibility (and the dynamics somewhat understood) is an *accident risk*. This could include a nuclear war starting because of a fault in a system or human error, or the escape of an engineered pathogen from an experiment despite safety precautions.\n\nIf the harm was known and intended, we have a *malicious risk*. This is a scenario where a small group of people wants to do widespread damage;[5](#gpol12786-note-1005) see Torres ([2016](#gpol12786-bib-0061), [2018b](#gpol12786-bib-0063)) for a typology and examples. Malicious risks tend to be extreme forms of terrorism, where there is a threat which could cause global damage.\n\n### Anthropogenic risks from large groups\n\nTurning to scenarios where many people are involved, we ask why so many would pursue an activity which causes global damage. Perhaps they do not know about the damage. This is a *latent risk*. For them to remain ignorant for long enough, it is likely that the damage is caused in an indirect or delayed manner. We have seen latent risks realised before, but not ones that threatened extinction. For example, asbestos was used in a widespread manner before it was realised that it caused health problems. 
And it was many decades after we scaled up the burning of fossil fuels that we realised this contributed to climate change. If our climate turns out to be more sensitive than expected (Nordhaus, [2011](#gpol12786-bib-0040); Wagner and Weitzman, [2015](#gpol12786-bib-0068); Weitzman, [2009](#gpol12786-bib-0070)), and continued fossil fuel use triggers a truly catastrophic shift in climate, then this could be a latent risk today.\n\nIn some cases people may be aware of the damage and engage in the activity anyway. This failure to internalise negative externalities is typified by ‘tragedy of the commons’ scenarios, so we can call this a *commons risk*. For example, failure to act together to tackle global warming may be a commons risk (but lack of understanding of the dynamics causes a blur with latent risk). In general, commons risks require some coordination failure. They are therefore more likely if features of the risk inhibit coordination; see for example Barrett ([2016](#gpol12786-bib-0005)) and Sandler ([2016](#gpol12786-bib-0052)) for game-theoretic analyses of such features.\n\nFinally, there are cases where a large number of people engage in an activity to cause deliberate harm: *conflict risk*. This could include wars and genocides. Wars share some features with commons risk: there are solutions which are better for everybody but are not reached. In most conflicts, actors are intentionally causing harm, but only as an instrumental goal.\n\n### Risk creators and risk reducers\n\nIn the above we classify risks according to who creates the risk and their state of knowledge. We have done this because if we want to prevent risk it will often be most effective to go to the source. But we could also ask who is in a position to take actions to avoid the risk. In many cases those creating it have the most leverage, but in principle almost any actor could take steps to reduce the occurrence rate. If risk prevention is underprovided, this is likely to be a tragedy of the commons scenario, and to share characteristics with commons risk.\n\nFrom a moral and legal standpoint, intentionality often matters. The possibility of being found culpable is an important incentive for avoiding risk-causing activities and part of risk management in most societies. If creating or hiding potential catastrophic risks is made more blameworthy, prevention will likely be more effective. Unfortunately, the threat of blame also often motivates concealment that can create or aggravate risk; see Chernov and Sornette ([2015](#gpol12786-bib-0019)) for case studies of how this misincentive can weaken prevention and response. This shows the importance of making accountability effectively enforceable.\n\n### Policy implications for preventing extinction risk\n\n* To be able to prevent *natural risks,* we need research aimed at identifying potential hazards, understanding their dynamics, and eventually developing ways to reduce their rate of occurrence.\n* To avoid *unseen* and *latent risks,* we can promote norms such as appropriate risk management principles at institutions that engage in plausibly risky activities; note that there is an extensive literature on rivalling risk management principles (e.g. 
Foster et al., [2000](#gpol12786-bib-0025); O'Riordan and Cameron, [1994](#gpol12786-bib-0043); Sandin, [1999](#gpol12786-bib-0051); Sunstein, [2005](#gpol12786-bib-0053); Wiener, [2011](#gpol12786-bib-0071)), especially in the face of catastrophic risks (Baum, [2015](#gpol12786-bib-0006); Bostrom, [2013](#gpol12786-bib-0013); Buchholz and Schymura, [2012](#gpol12786-bib-0017); Sunstein, [2007](#gpol12786-bib-0054), [2009](#gpol12786-bib-0055); Tonn, [2009](#gpol12786-bib-0057); Tonn and Stiefel, [2014](#gpol12786-bib-0060)) – advocating for any particular principle is beyond the scope of this article. See also Jebari ([2015](#gpol12786-bib-0031)) for a discussion of how heuristics from engineering safety may help prevent unseen, latent, and accident risks. Regular horizon scanning may identify previously unknown risks, enabling us to develop targeted prevention measures. Organisations must be set up in such a way that warnings of newly discovered risks reach decision‐makers (see Clarke and Eddy, [2017](#gpol12786-bib-0020), for case studies where this failed).\n* *Accidents* may be prevented by general safety norms that also help reduce unseen risk. In addition, building on our understanding of specific accident scenarios, we can design failsafe systems or follow operational routines that minimise accident risk. In some cases, we may want to eschew an accident‐prone technology altogether in favour of safer alternatives. Accident prevention may benefit from research on high reliability organisations (Roberts and Bea, [2001](#gpol12786-bib-0049)) and lessons learnt from historical accidents. Where effective prevention measures have been identified, it may be beneficial to codify them through norms and law at the national and international levels. Alternatively, if we can internalise the expected damages of accidents through mechanisms such as insurance, we can leverage market incentives.[6](#gpol12786-note-1006)\n* Solving the coordination problems at the heart of *commons* and *conflict risks* is sometimes possible by fostering national or international cooperation, be it through building dedicated institutions or through establishing beneficial customs.[7](#gpol12786-note-1007) One idea is to give a stronger political voice to future generations (Jones et al., [2018](#gpol12786-bib-0033); Tonn, [1991](#gpol12786-bib-0056), [2018](#gpol12786-bib-0058)).\n* Lastly, we can prevent *malicious risks* by combating extremism. Technical (Trask, [2017](#gpol12786-bib-0064)) as well as institutional (Lewis, [2018](#gpol12786-bib-0034)) innovations may help with governance challenges in this area, a survey of which is beyond the scope of this article.\n* Note that our classification by origin is aimed at identifying policies that would – if successfully implemented – reduce a broad class of risks. Developing policy solutions is, however, just one step toward effective prevention. We must then also actually implement them – which may not happen due to, for example, free‐riding incentives. Our classification does not speak to this implementation step. Avin et al. ([2018](#gpol12786-bib-0004)) congenially address just this challenge in their classification of prevention and mitigation failures.\n\n\n\n\nClassification by scaling mechanism: types of response failure\n--------------------------------------------------------------\n\nFor a catastrophe to become a global catastrophe, it must eventually have large effects despite our response aimed at stopping it. 
To understand how this can happen, it’s useful to look at the time when we could first react. Effects must then either already be large or scale up by a large factor afterwards (Figure [3](/pmc/articles/PMC7228299/figure/gpol12786-fig-0003/)).\n\n[Figure 3](/pmc/articles/PMC7228299/figure/gpol12786-fig-0003/): Classification of risks by scaling mechanism.\n\nIf the initial effects are large, we will simply say that the risk is *large*. If not, we can look at the scaling process. If massive scaling happens in a small number of steps, we say there is *leverage* in play. If scaling in all steps is moderate, there must be quite a lot of such steps – in this case we say that the risk is *cascading*.\n\n### Large risks\n\nParadigm examples of catastrophes of an immediately global scale are large sudden-onset natural disasters such as asteroid strikes. Since we cannot respond to them at a smaller-scale stage, mitigation measures we can take in advance (part of the second defence layer as they would reduce damage after it has started) and the other defence layers of prevention and resilience are particularly important to reduce such risks. Prevention and mitigation may benefit from detecting a threat – say, an asteroid – early, but in our classification this is different from responding after there has been some actual small-scale damage.\n\n### Leverage risks\n\nLeverage points for rapid one-step scaling can be located in natural systems, for example if the extinction of a key species caused an ecosystem to collapse. However, it seems to us that leverage points are more common in technological or social systems that were designed to concentrate power or control.\n\nRisks of both natural and anthropogenic origin may interact with such systems. For instance, a tsunami triggered the 2011 disaster at the Fukushima Daiichi nuclear power plant. Anthropogenic examples include nuclear war (which a few individuals linked to a larger chain of command and control could trigger) or attacks on weak points in key global infrastructure.\n\nResponding to leverage risks is challenging because there are only a few opportunities to intervene. On the other hand, blocking even one step of leveraged growth would be highly impactful. This suggests that response measures may be worthwhile if they can be targeted at the leverage points.\n\n### Cascading risks\n\nWith the major exception of escalating conflicts, cascading risks normally cascade in a way which does not rely on humans deciding to further the effects. A typical example is the self-propagating growth of an epidemic. As automation becomes more widespread, there will be larger systems without humans in the loop, and thus perhaps more opportunities for different kinds of cascading risk.\n\nSince cascading risks are those whose effects keep growing substantially after we are able to interact with them, it seems likely that they will typically give us more opportunities to respond, and that response will therefore be an important component of risk reduction. For risks which cascade exponentially (such as epidemics), an earlier response may be much more effective than a later one. Reducing the rate of propagation is also effective if there exist other interventions that can eventually stop or revert the damage.\n\n
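A toy calculation illustrates how strongly timing matters for an exponentially cascading process. The numbers below are made up for illustration (a three-day doubling time and a response that cuts the growth rate by 80%) and are not calibrated to any real epidemic; the point is only the qualitative shape of the result.

```python
import math

def relative_damage(doubling_days: float, response_day: float,
                    containment: float, horizon_days: float = 100.0) -> float:
    """Toy model of an exponentially cascading risk.

    Damage grows with the given doubling time until `response_day`, after
    which the growth rate is reduced by the factor `containment` (0..1).
    Returns the damage at `horizon_days`, relative to the initial damage.
    """
    rate = math.log(2) / doubling_days
    at_response = math.exp(rate * response_day)
    return at_response * math.exp(rate * (1 - containment) * (horizon_days - response_day))

# Assumed numbers: 3-day doubling time, response cuts growth by 80%.
early = relative_damage(doubling_days=3, response_day=20, containment=0.8)
late = relative_damage(doubling_days=3, response_day=40, containment=0.8)
print(f"Responding on day 40 instead of day 20 leaves ~{late / early:.0f}x more damage.")
```

Under these assumptions, a 20-day delay translates into roughly a forty-fold difference in eventual damage, which is why early detection and short response lags feature prominently in the policy implications below.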
However, there are a few secondary risk-enabling properties that can weaken the response layer and therefore help damage cascade into a global catastrophe that we could otherwise have stopped. For example, a cascading risk may:\n\n* Impede cooperation: by preventing a coordinated response, it increases the likelihood of a global catastrophe. Cooperation is harder when communication is limited, when it is hard to observe defection, or when there is decreased trust.\n* Not obviously present a risk: the longer a cascading risk is under-recognised, the more it can develop before any real response. For example, long-incubation pathogens can spread further before their hazard becomes apparent.\n* Be on extreme timescales: if the risk presents and cascades very fast, there is little opportunity for any response. Johnson et al. ([2012](#gpol12786-bib-0032)) analyse such ‘ultrafast’ events, using rapid changes in stock prices driven by trading algorithms as an example (Braun et al., [2018](#gpol12786-bib-0016), however, find that most of these ‘mini flash crashes’ are dominated by a single large order rather than being the result of a cascade). Note, however, that which timescales count as relevantly ‘fast’ depends on our response capabilities – technological and institutional progress may result in faster-cascading threats but also in opportunities to respond faster. On the other hand, people may be bad at addressing problems that won’t manifest for generations, as is the case for some impacts of global warming.\n\n### Policy implications for responding to extinction risk\n\n* By their nature, we cannot respond to *large* risks before they become a global catastrophe. Of particular importance for such risks are therefore mitigation that can be done in advance, and the defence layers of prevention and resilience.\n* *Leverage* risks provide us with the opportunity of a leveraged response: we can identify leverage points in advance and target our responses at them.\n* While the details of responses to *cascading* risks must be tailored to each specific case, we can highlight three general recommendations. First, detect damage early, when a catastrophe is still easy to contain. Second, reduce the time lag between detection and response, for example, by continuously maintaining response capabilities and having rapidly executable contingency plans in place. Third, ensure that planned responses won’t be stymied by the cascading process itself – for example, don’t store contingency plans for how to respond to a power outage on computers.[8](#gpol12786-note-1008)\n\nClassification by endgame: types of resilience failure\n------------------------------------------------------\n\nFor a global catastrophe to cause human extinction, it must in the end prevent the continued survival of the species. 
This could be *direct*: killing everyone;[9](#gpol12786-note-1009) or *indirect*: removing our ability to continue flourishing over a longer period (Figure [4](/pmc/articles/PMC7228299/figure/gpol12786-fig-0004/)).\n\n[Figure 4](/pmc/articles/PMC7228299/figure/gpol12786-fig-0004/): Classification of risks by endgame.\n\n### Direct risks\n\nIn order to kill everyone, the catastrophe must reach everyone. We can further classify direct risks by how they reach everyone.\n\nThe simplest way this could happen is if the catastrophe reaches everywhere that people are or could plausibly be: a *ubiquity risk*. If the entire planet is struck by a deadly gamma ray burst, or enough of a deadly toxin is dispersed through the atmosphere, this could plausibly kill everyone.\n\nIf it doesn’t reach everywhere people might be, a direct risk must at least reach everywhere that people in fact are. This might occur when people have carried it along with them: a *vector risk*. This includes risk from pandemics (if they are sufficiently deadly and have a long enough incubation period that they are spread everywhere) or perhaps risks which are spread by memes (Dawkins, [1976](#gpol12786-bib-0021)), or which come from some technological artefacts which we carry everywhere. Note that to directly cause extinction, a vector would need to impact hard-to-reach populations including ‘disaster shelters, people working on submarines, and isolated peoples’ (Beckstead, [2015a](#gpol12786-bib-0011), p. 36).\n\nIf not ubiquitous and not carried with the people, we would have to be extraordinarily unlucky for it to reach everyone by chance. Setting this aside as too unlikely, we are left with *agency risk*: deliberate actors trying to reach everybody. The actors could be humans or nonhuman intelligence (perhaps machine intelligence or even aliens). Agency risk probably means someone deliberately trying to ensure nobody survives, which may make it easier for the risk to get through the resilience layer by allowing anticipation of and response to possible survival plans. In principle agency risk includes cases where someone is deliberately trying to reach everyone, and only by accident does so in a way that kills them.\n\n### Indirect risks\n\nIf the risk threatens extinction without killing everyone, it must reduce our long-term ability to survive as a species. This could include a very broad range of effects, but we can break them up according to the kind of ability it impedes.\n\n*Habitat risks* make long-term survival impossible by altering or destroying the environment we live in so that it cannot easily support human life. For example, a large enough asteroid impact might throw up dust which could prevent us from growing food for many years – if this was long enough, it could lead to human extinction. Alternatively, an environmental change which lowered the average number of viable offspring to below replacement rates could pose a habitat risk.\n\n*Capability risks* knock us back in a way that permanently removes an important societal capability, leading in the long run to extinction. One example might be moving to a social structure which precluded the ability to adapt to new circumstances.\n\nWe are gesturing towards a distinction between habitat risks and capability risks, rather than drawing a sharp line. Habitat risks work through damage to an external environment, whereas capability risks work through damage to more internal social systems (or even biological or psychological factors). Capability risks are also even less direct than habitat risks, perhaps taking hundreds or thousands of years to lead to extinction. Indeed, there is not a clear line between capability risks and events which damage our capabilities but are not extinction risks (cf. sections [Classification by origin: types of prevention failures](#gpol12786-sec-0005) and [Underlying risk factors: risks to the defence layers](#gpol12786-sec-0024)). Nonetheless, when considering risks of human extinction it may be important to account for events which could cause the loss of fragile but important capabilities.\n\nAn important type of capability risk may be civilisational collapse. It is possible that killing enough people and destroying enough infrastructure could lead to a collapse of civilisation without causing immediate extinction. If this happens, it is then plausible that civilisation might never recover, or recover in a less robust form, and be wiped out by some subsequent risk. It is an open and important question how likely this permanent loss of capability is (Beckstead, [2015b](#gpol12786-bib-0012)). If it is likely, the resilience layer may therefore be particularly important to reinforce, perhaps along the lines proposed by Maher and Baum ([2013](#gpol12786-bib-0036)). On the other hand, if even large amounts of destruction have only small effects on the chances of eventual extinction, it becomes more important to focus on risks which can otherwise get past the resilience layer.\n\n### Classifying example risks by each of origin, scaling, and endgame\n\nWe finally illustrate our completed classification scheme by applying it to examples, which we summarise in Table [1](/pmc/articles/PMC7228299/table/gpol12786-tbl-0001/).\n\nThroughout the text, we’ve repeatedly referred to an asteroid strike that might cause extinction due to an ensuing impact winter. We’ve called this a *natural risk* regarding its origin; a *large* risk regarding scale, with no opportunity to intervene between the asteroid impact and its damage affecting the whole globe; and, if we assume that humanity dies out because climatic changes remove the ability to grow crops, a *habitat risk* in the endgame phase.\n\nOur next pair of examples illustrates that risks with the same salient central mechanism – in this case nuclear war – may well differ during other phases. Consider first a nuclear war precipitated by a malfunctioning early warning system – that is, a nuclear power launching what turns out to be a first strike because it falsely believed that its nuclear destruction was imminent. Suppose further that this causes a nuclear winter, leading to human extinction. This would be an *accident* that scales via *leverage,* and finally manifests as a *habitat risk.* Contrast this with the intentional use of nuclear weapons in an escalating conventional war, and assume further that this either doesn’t cause a nuclear winter or that some humans are able to survive despite adverse climatic conditions. 
Instead, humanity never recovers from widespread destruction, and is eventually wiped out by some other catastrophe that could have easily been avoided by a technologically advanced civilisation. This second scenario would be a *conflict* that again scaled via the *leverage* associated with nuclear weapons, but then finished off humanity by removing a crucial *capability* rather than via damage to its habitat.\n\nWe close by applying our classification to a more speculative risk we might face this century. Some scholars (e.g. Bostrom, [2014](#gpol12786-bib-0014)) have warned that progress in artificial intelligence (AI) could at some point allow unforeseen rapid self‐improvement in some AI system, perhaps one that uses machine learning and can autonomously acquire additional training data via sensors or simulation. The concern is that this could result in a powerful AI agent that deliberately wipes out humanity to pre‐empt interference with its objectives (see Omohundro, [2008](#gpol12786-bib-0041), for an argument why such pre‐emption might be plausible). To the extent that we currently don’t know of any machine learning algorithms that could exhibit such behaviour, this would be an *unseen risk;* the scaling would be via *leverage* if we assume a discrete algorithmic improvement as trigger, or alternatively the risk could be rapidly *cascading;* in the endgame, this scenario would present an *agency risk.*\n\n\n### Policy implications for resilience against extinction\n\n\n\n\n* To guard against what today would be *ubiquity risks,* we may in the future be able to establish human settlements on other planets (Armstrong and Sandberg, [2013](#gpol12786-bib-0001)).[10](#gpol12786-note-1010)\n* *Vector risks* may not reach people in isolated and self‐sufficient communities. Establishing disaster shelters may hence be an attractive option. Self‐sufficient shelters can also reduce *habitat risk*. Jebari ([2015](#gpol12786-bib-0031)) discusses how to maximise the resilience benefits from shelters, while Beckstead ([2015a](#gpol12786-bib-0011)) has argued that their marginal effect would be limited due to the presence of isolated peoples, submarine crews, and existing shelters.\n* Resilience against *vector* and *agency risks* may be increased by late‐stage response measures that work even in the event of widespread damage to infrastructure and the breakdown of social structure. An example might be the ‘isolated, self‐sufficient, and continuously manned underground refuges’ suggested by Jebari ([2015](#gpol12786-bib-0031), p. 541).\n\n\n\n\nAllocating resources between defence layers\n-------------------------------------------\n\nIn this section we will use our guiding idea of three defence layers to present a way of calculating the extinction probability posed by a given risk. We’ll draw three high‐level conclusions: first, the most severe risks are those which have a high probability of breaking through all three defence layers. Second, when allocating resources between the defence layers, rather than comparing absolute changes in these probabilities we should assess how often we can halve the probability of a risk getting through each layer. 
Third, it’s best to distribute a sufficiently large budget across all three defence layers.\n\nWe are interested in the probability *p* that a given risk *R* will cause human extinction in a specific timeframe, say by 2100. Whichever three classes *R* belongs to, in order to cause extinction it needs to get past all three defence layers; its associated extinction probability *p* is therefore equal to the product of three factors:\n\n1. The probability *c* of *R* getting past the first barrier and causing a catastrophe;\n2. The conditional probability *g* that *R* gets past the second barrier to cause a global catastrophe, *given* that it has passed the first barrier; and\n3. The conditional probability *e* that *R* gets past the third barrier to cause human extinction, *given* that it has passed the second barrier.\n\nIn short: *p* = *c*·*g*·*e*.\n\nEach of *c*, *g*, and *e* can get extremely small for some risks. But the extinction probability *p* will be highest when all three terms are non-negligible. Hence we get our (somewhat obvious) first conclusion that the most concerning risks are those which can plausibly get past all three defence layers.\n\nHowever, ‘most concerning’ doesn’t necessarily translate into ‘most valuable to act on’. Suppose we’d like to invest additional resources into reducing risk *R*. We could use them to strengthen any of the three defences, which would make it less likely that *R* passes that defence. We should then compare *relative* rather than absolute changes to these probabilities, which is our second conclusion. That is, to minimise the extinction probability *p* we should ask which of *c*, *g*, and *e* we can halve most often. This is because the same relative change of each probability will have the same effect on the extinction probability *p* – halving any of *c*, *g*, or *e* will halve *p*. By contrast, the effect of the same absolute change will vary depending on the other two probabilities; for instance, reducing *c* by 0.1 reduces *p* by 0.1·*g*·*e*. In particular, a given absolute change will be more valuable if the other two probabilities are large.\n\n
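A short numerical sketch, using made-up probabilities purely for illustration, makes the contrast between relative and absolute changes concrete:

```python
def extinction_probability(c: float, g: float, e: float) -> float:
    """p = c * g * e: catastrophe, global catastrophe given catastrophe,
    and extinction given global catastrophe."""
    return c * g * e

# Illustrative (made-up) values for one risk.
c, g, e = 0.1, 0.5, 0.2
p = extinction_probability(c, g, e)  # = 0.1 * 0.5 * 0.2 = 0.01

# Halving any one factor halves p, no matter which factor we pick.
assert abs(extinction_probability(c / 2, g, e) - p / 2) < 1e-12
assert abs(extinction_probability(c, g, e / 2) - p / 2) < 1e-12

# The same *absolute* reduction of 0.05 is worth different amounts,
# depending on how large the other two factors are.
print(p - extinction_probability(c - 0.05, g, e))  # 0.05 * g * e = 0.005
print(p - extinction_probability(c, g, e - 0.05))  # 0.05 * c * g = 0.0025
```

With these assumed numbers, shaving 0.05 off *c* removes twice as much extinction probability as shaving 0.05 off *e*, even though halving either factor would have had exactly the same effect on *p*.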
When one of *c*, *g*, or *e* is close to 100%, it may be much harder to reduce it to 50% than it would be to halve a smaller probability. The principle of comparing how often we can halve *c*, *g*, and *e* then implies that we’re better off reducing probabilities not close to 100%. For example, consider a large asteroid striking the Earth. We could take steps to avoid it (for example by scanning and deflecting), and we could take steps to increase our resilience (for example by securing food production). But if a large asteroid does cause a catastrophe, it seems very likely to cause a global catastrophe, and it is unclear that there is much to be done in reducing the risk at the scaling stage. In other words, the probability *g* is close to 1 and prohibitively hard to substantially reduce. We therefore shouldn’t invest resources into futile responses, but instead use them to strengthen both prevention and resilience.\n\nWhat if each defence layer has a decent chance of stopping a risk? We’ll then be best off by allocating a non-zero chunk of funding to all three of them – a strategy of defence in depth, our third conclusion. The reason is just the familiar phenomenon of diminishing marginal returns of resources. It may initially be best to strengthen a particular layer – but once we’ve taken the low-hanging fruit there, investing in another layer (or in reducing another risk) will become equally cost-effective. Of course, our budget might be exhausted earlier. Defending in depth therefore tends to be optimal if and only if we can spend relatively much in total.\n\nWe close by discussing some limitations of our analysis. First, we remain silent on the optimal allocation of resources *between* different risks (rather than between different layers for a fixed risk or basket of risks); indeed, as we’ll argue in sections [Classification by origin: types of prevention failures](#gpol12786-sec-0005) and [Underlying risk factors: risks to the defence layers](#gpol12786-sec-0024), comprehensively answering the question of how to optimally allocate resources intended for extinction risk reduction requires us to look beyond even the full set of extinction risks. We do hope that our work could prove foundational for further research that investigates the allocation between risks and between defence layers simultaneously. Indeed, it would be straightforward to consider several risks *p*ᵢ = *c*ᵢ·*g*ᵢ·*e*ᵢ, *i* = 1, …, *n*; assuming specific functional forms for how the probabilities *c*ᵢ, *g*ᵢ, and *e*ᵢ change in response to invested resources could then yield valuable insights.\n\nSecond, we have not considered interactions between different defence layers or different risks (Graham et al., [1995](#gpol12786-bib-0028); Baum, [2019](#gpol12786-bib-0007); Baum and Barrett, [2017](#gpol12786-bib-0009); Martin and Pindyck, [2015](#gpol12786-bib-0037)). These can present either as tradeoffs or as synergies. For example, traffic restrictions in response to a pandemic might slow down research on a treatment that would render the disease non-fatal, thus harming the resilience layer; on the other hand, they may inadvertently help with preventing malicious risk or being resilient against agency risk.\n\n### Policy implications for resource allocation within risk management\n\n* The most important extinction risks to act on are those that have a non-negligible chance of breaking through all three defence layers – risks where we have a realistic chance of failing to prevent, a realistic chance of failing to successfully respond to, *and* a realistic chance of failing to be resilient against.\n* Due to diminishing marginal returns, when budgets are high enough it will often be best to maintain a portfolio of significant investment into each of prevention, response, and resilience.\n\nUnderlying risk factors: risks to the defence layers\n----------------------------------------------------\n\nIn sections [Classification by origin: types of prevention failures](#gpol12786-sec-0005), [Classification by scaling mechanism: types of response failure](#gpol12786-sec-0012), and [Classification by endgame: types of resilience failure](#gpol12786-sec-0017) we have considered ways of classifying threats that may cause human extinction and the pathways through which they may do so. Our classification was based on the three defence layers of prevention, response, and resilience.\n\nGiving centre stage to the defence layers provides the following useful lens for extinction risk management. If our main goal is to reduce the likelihood of extinction, we can equivalently express this by saying that we should aim to strengthen the defence layers. 
Indeed, extinction can only become less likely if at least one particular extinction risk is made less likely; in turn this requires that it has a smaller chance of making it past at least one of the defence layers.\n\nThis is significant because there is a spectrum of ways to improve our defences depending on how narrowly our measures are tailored to specific risks. At one extreme, we can increase our capacity to prevent, respond to, or be resilient against one risk; for example, we can research methods to deflect asteroids. In between are measures to defend against a particular class of risk, as we’ve highlighted in our policy recommendations. At the other extreme is the reduction of *underlying risk factors* that weaken our capacity to defend against many classes of risks.\n\nRisk factors need not be associated with any potential proximate cause of extinction. For example, consider regional wars; even when they don’t escalate to a global catastrophe, they could hinder global cooperation and thus impede many defences.\n\nGlobal catastrophes constitute one important type of risk factor. We already discussed the possibility of them making earth uninhabitable or removing a capability that would be crucial for long‐term survival. But even if they do neither of these, they can severely damage our defence layers. In particular, getting hit by a global catastrophe followed in short succession by another might be enough to cause extinction when neither alone would have done so. There are significant historic examples of such *compound risks* below the extinction level. For instance, the deadliest accident in aviation history occurred when two planes collided on an airport runway; this was only possible because a previous terrorist attack on another airport had caused congestion due to rerouted planes, which disabled the prevention measure of using separate routes for taxiing and takeoff (Weick, [1990](#gpol12786-bib-0069)). When considering catastrophes we should therefore pay particular attention to negative impacts they may have on the defence layers.\n\nOur capacity to defend also depends on various structural properties that can change in gradual ways even in the absence of particularly conspicuous events. For example, the resilience layer may be weakened by continuous increases in specialisation and global interdependence. This can be compared with the model of synchronous failure suggested by Homer‐Dixon et al. ([2015](#gpol12786-bib-0030)). They describe how the slow accumulation of multiple simultaneous stresses makes a system vulnerable to a cascading failure.\n\nIt is beyond the scope of this article to attempt a complete survey of risk factors; we merely emphasise that they should be considered. We do hope that our classifications in sections [Classification by origin: types of prevention failures](#gpol12786-sec-0005), [Classification by scaling mechanism: types of response failure](#gpol12786-sec-0012), [Classification by endgame: types of resilience failure](#gpol12786-sec-0017) may be helpful in identifying risk factors. 
For example, thinking about preventing conflict and commons risks may point us to global governance, while having identified vector and agency risks may highlight the importance of interdependence (even though, upon further scrutiny, these risk factors turn out to be relevant for many other classes of risk as well).\n\nWe conclude that the allocation of resources between layers defending against specific risks, which we investigated in the previous section, is not necessarily the most central task of extinction risk management. It is an open and important question whether reducing specific risks, clusters of risks, or underlying risk factors is most effective on the margin.\n\n### Policy implications from underlying risk drivers\n\n* Research on smaller-scale risks should pay particular attention to how they might damage the three defence layers against extinction risks. Risk management should aim to mitigate such damage.\n* Conversely, the study of extinction risks cannot be limited to individual triggers such as asteroids or specific technologies. It would be desirable to better understand which underlying risk factors contribute to extinction risk by weakening our defences. For example, in what ways does global interdependence make extinction from a global catastrophe more likely, and are there interventions to mitigate this effect?\n\nConclusions\n-----------\n\nThe study and management of extinction risks are challenging for several reasons. Cognitive biases make it hard to appreciate the scale and probability of human extinction (Wiener, [2016](#gpol12786-bib-0072); Yudkowsky, [2008](#gpol12786-bib-0073)). Most potential people affected are in future generations, whose interests aren’t well represented in our political systems. Hazards can arise and scale in many different ways, requiring a variety of disciplines and stakeholders to understand and stop them. And since there is no precedent for human extinction, we struggle with a lack of data.\n\nFaced with such difficult terrain, we have considered the problem from a reasonably high level of abstraction; we hope thereby to focus attention on the most crucial aspects. If this work is useful, it will be as a foundation for future work or decisions. In some cases our classification might provoke thoughts that are helpful directly for decision-makers who engage with specific risks. However, we anticipate that our work will be most useful in informing the design of systems for analysing and prioritising between several extinction risks, or in informing the direction of future research.\n\nBiographies\n-----------\n\n• **Owen Cotton-Barratt** is a Mathematician at the Future of Humanity Institute, University of Oxford. His research concerns high-stakes decision-making in cases of deep uncertainty, including normative uncertainty, future technological developments, unprecedented accidents, and untested social responses.\n\n• **Max Daniel** is a Senior Research Scholar at the Future of Humanity Institute, University of Oxford. His research interests include existential risks, the governance of risks from transformative artificial intelligence, and foundational questions regarding our obligations and abilities to help future generations.\n\n• **Anders Sandberg** is a Senior Research Fellow at the Future of Humanity Institute, University of Oxford. 
His research deals with the management of low-probability high-impact risks, societal and ethical issues surrounding human enhancement, estimating the capabilities of future technologies, and very long-range futures.\n\nNotes\n-----\n\nWe are particularly indebted to Toby Ord for several very helpful comments and conversations. We also thank Scott Janzwood, Sebastian Farquhar, Martina Kunz, Huw Price, Seán Ó hÉigeartaigh, Shahar Avin, the audience at a seminar at Cambridge’s Centre for the Study of Existential Risk (CSER), and two anonymous reviewers for helpful comments on earlier drafts of this article. We’re also grateful to Eva-Maria Nag for comments on our policy suggestions. The contributions of Owen Cotton-Barratt and Anders Sandberg to this article are part of a project that has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 669751).\n\n1. In the terminology of the United Nations Office for Disaster Risk Reduction (UNDRR, [2016](#gpol12786-bib-0067)), response denotes the provision of emergency services and public assistance during and immediately after a disaster. In our usage, we include any steps which may prevent a catastrophe scaling to a global catastrophe. This could include work traditionally referred to as mitigation.\n\n2. The concept of resilience, originally coined in ecology (Holling, [1973](#gpol12786-bib-0029)), is today widely used in the analysis of risks of many types (e.g. Folke et al., [2010](#gpol12786-bib-0024)). In UNDRR (2016) terminology, resilience refers to ‘[t]he ability of a system, community or society exposed to hazards to resist, absorb, accommodate, adapt to, transform and recover from the effects of a hazard in a timely and efficient manner, including through the preservation and restoration of its essential basic structures and functions through risk management.’ In this article, we usually use resilience to specifically denote the ability of humanity as a whole to recover from a global catastrophe in a way that enables its long-term survival. This ability may in turn depend on the resilience of many smaller natural, technical, and socio-ecological systems.\n\n3. Strictly, knowledge and intentionality are two separate dimensions; however, it is essentially impossible to intend the harm without being aware of the possibility, so we treat it as a spectrum with ignorance at one end, intent at the other end, and knowledge without intent in the middle. Again, there is some blur between these: there are degrees of awareness about a risk, and an intention of harm may be more or less central to an action.\n\n4. There are degrees of lack of foresight of the risk. Cases where the people performing the activity are substantially unaware of the risks have many of the relevant features of this category, even if they have suspicions about the risks, or other people are aware of the risks.\n\n5. They may not intend for that damage to cause human extinction – for the purposes of acting on this classification it’s more useful to know whether they were trying to cause harm.\n\n6. We thank an anonymous reviewer for suggesting the policy responses of avoiding dangerous technologies and mandating insurance.\n\n7. Global coordination more broadly may, however, be a double-edged tool, since increased interdependency, if not well managed, can also increase the chance of systemic risks (Goldin and Mariathasan, [2014](#gpol12786-bib-0026)).\n\n8. We thank an anonymous reviewer for suggesting both the third general recommendation and the example.\n\n9. What about a risk that directly kills, say, 99.9999% of people? Technically this poses only an indirect risk, since to cause extinction it needs to remove the capability of the survivors to recover. However, if the proportion threatened is high enough then we can reason that it must also have a way of reaching essentially everyone, so the analysis of direct risks will also be relevant.\n\n10. Some scholars have argued that humanity expanding into space would increase other risks; see, for example, an interview (Deudney, [n.d.](#gpol12786-bib-0022)) and an upcoming book (Deudney, [forthcoming](#gpol12786-bib-0023)) by political scientist Daniel Deudney, as well as Torres ([2018a](#gpol12786-bib-0062)). Assessing the overall desirability of space colonisation is beyond the scope of this article.\n\nData availability statement\n---------------------------\n\nData sharing is not applicable to this article as no new data were created or analysed.\n\nReferences\n----------\n\n* Armstrong, S. and Sandberg, A. (2013) ‘Eternity in Six Hours: Intergalactic Spreading of Intelligent Life and Sharpening the Fermi Paradox’, Acta Astronautica, 89, pp. 1–13.\n* Auplat, C. A. (2012) ‘The Challenges of Nanotechnology Policy Making PART 1. Discussing Mandatory Frameworks’, Global Policy, 3 (4), pp. 492–500.\n* Auplat, C. A. (2013) ‘The Challenges of Nanotechnology Policy Making PART 2. Discussing Voluntary Frameworks and Options’, Global Policy, 4 (1), pp. 101–107.\n* Avin, S., Wintle, B. C., Weitzdörfer, J., Ó hÉigeartaigh, S. S., Sutherland, W. J. and Rees, M. J. (2018) ‘Classifying Global Catastrophic Risks’, Futures, 102, pp. 20–26.
\n* Barrett, S. (2016) ‘Collective Action to Avoid Catastrophe: When Countries Succeed, When They Fail, and Why’, Global Policy, 7 (S1), pp. 45–55.\n* Baum, S. D. (2015) ‘Risk and Resilience for Unknown, Unquantifiable, Systemic, and Unlikely/catastrophic Threats’, Environment Systems and Decisions, 35 (2), pp. 229–236.\n* Baum, S. D. (2019) ‘Risk-risk Tradeoff Analysis of Nuclear Explosives for Asteroid Deflection’, Risk Analysis, 39 (11), pp. 2427–2442.\n* Baum, S. and Barrett, A. (2017) ‘Towards an Integrated Assessment of Global Catastrophic Risk’, in Garrick, B. J. (ed.), Catastrophic and Existential Risk: Proceedings of the First Colloquium. Los Angeles, CA: Garrick Institute for the Risk Sciences, University of California, pp. 41–62.\n* Baum, S. D. and Barrett, A. M. (2018) ‘Global Catastrophes: The Most Extreme Risks’, in Bier, V. (ed.), Risk in Extreme Environments: Preparing, Avoiding, Mitigating, and Managing. New York, NY: Routledge, pp. 174–184.\n* Baum, S. D. and Handoh, I. C. (2014) ‘Integrating the Planetary Boundaries and Global Catastrophic Risk Paradigms’, Ecological Economics, 107, pp. 13–21.\n* Beckstead, N. (2015a) ‘How Much Could Refuges Help us Recover from a Global Catastrophe?’, Futures, 72, pp. 36–44.\n* Beckstead, N. (2015b) ‘The Long-term Significance of Reducing Global Catastrophic Risks’, The GiveWell Blog, 2015-08-13 [online]. Available from: [Accessed 3 August 2018].\n* Bostrom, N. (2013) ‘Existential Risk Prevention as Global Priority’, Global Policy, 4 (1), pp. 15–31.\n* Bostrom, N. (2014) Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.\n* Bostrom, N. and Ćirković, M. M. (eds.) (2008) Global Catastrophic Risks. Oxford: Oxford University Press.\n* Braun, T., Fiegen, J. A., Wagner, D. C., Krause, S. M. and Guhr, T. (2018) ‘Impact and Recovery Process of Mini Flash Crashes: An Empirical Study’, PLoS ONE, 13 (5), e0196920.\n* Buchholz, W. and Schymura, M. (2012) ‘Expected Utility Theory and the Tyranny of Catastrophic Risks’, Ecological Economics, 77, pp. 234–239.\n* Centeno, M. A., Nag, M., Patterson, T. S., Shaver, A. and Windawi, A. J. (2015) ‘The Emergence of Global Systemic Risk’, Annual Review of Sociology, 41 (1), pp. 65–85.\n* Chernov, D. and Sornette, D. (2015) Man-made Catastrophes and Risk Information Concealment: Case Studies of Major Disasters and Human Fallibility. Cham, Heidelberg, New York, Dordrecht, London: Springer.\n* Clarke, R. A. and Eddy, R. P. (2017) Warnings: Finding Cassandras to Stop Catastrophes. New York: Harper Collins.\n* Dawkins, R. (1976) The Selfish Gene. Oxford: Oxford University Press.\n* Deudney, D. (n.d.) ‘An Interview With Daniel Deudney’ [online]. Available from: [Accessed 08 August 2018].\n* Deudney, D. (forthcoming) Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity. Oxford: Oxford University Press.\n* Folke, C., Carpenter, S. R., Walker, B., Scheffer, M., Chapin, T. and Rockström, J. (2010) ‘Resilience Thinking: Integrating Resilience, Adaptability and Transformability’, Ecology and Society [online], 15 (4), art. 20.\n* Foster, K. R., Vecchia, P. and Repacholi, M. H. (2000) ‘Science and the Precautionary Principle’, Science, 288 (5468), pp. 979–981.\n* Goldin, I. and Mariathasan, M. (2014) The Butterfly Defect: How Globalization Creates Systemic Risks, and What to Do About It. Princeton, NJ: Princeton University Press.\n* GPP (Global Priorities Project) (2015) ‘Policy Brief: Unprecedented Technological Risks’ [online]. Available from: [https://www.fhi.ox.ac.uk/wp-content/uploads/Unprecedented-Technological-Risks.pdf](http://www.fhi.ox.ac.uk/wp-content/uploads/Unprecedented-Technological-Risks.pdf) [Accessed 08 August 2018].\n* Graham, J. D., Wiener, J. B. and Sunstein, C. R. (eds.) (1995) Risk vs. Risk. Cambridge, MA: Harvard University Press.\n* Holling, C. S. (1973) ‘Resilience and Stability of Ecological Systems’, Annual Review of Ecology and Systematics, 4 (1), pp. 1–23.\n* Homer-Dixon, T., Walker, B., Biggs, R., Crépin, A. S., Folke, C., Lambin, E. F. et al. (2015) ‘Synchronous Failure: The Emerging Causal Architecture of Global Crisis’, Ecology and Society [online], 20 (3), art. 6.\n* Jebari, K. (2015) ‘Existential Risks: Exploring a Robust Risk Reduction Strategy’, Science and Engineering Ethics, 21 (3), pp. 541–554.\n* Johnson, N., Zhao, G., Hunsader, E., Meng, J., Ravindar, A., Carran, S. and Tivnan, B. (2012) ‘Financial Black Swans Driven by Ultrafast Machine Ecology’, arXiv preprint arXiv:1202.1448.\n* Jones, H., O’Brien, M. and Ryan, T. (2018) ‘Representation of Future Generations in United Kingdom Policy-making’, Futures, 102, pp. 153–163.\n* Lewis, G. (2018) ‘Horsepox Synthesis: A Case of the Unilateralist’s Curse?’ [online]. Available from: [Accessed 08 August 2018].\n* Liu, H., Lauta, K. C. and Maas, M. M. (2018) ‘Governing Boring Apocalypses: A New Typology of Existential Vulnerabilities and Exposures for Existential Risk Research’, Futures, 102, pp. 6–19.\n* Maher, T. M. and Baum, S. D. (2013) ‘Adaptation to and Recovery from Global Catastrophe’, Sustainability, 5 (4), pp. 1461–1479.\n* Martin, I. W. and Pindyck, R. S. (2015) ‘Averting Catastrophes: The Strange Economics of Scylla and Charybdis’, American Economic Review, 105 (10), pp. 2947–2985.\n* Matheny, J. G. (2007) ‘Reducing the Risk of Human Extinction’, Risk Analysis, 27 (5), pp. 1335–1344.\n* Millett, P. and Snyder-Beattie, A. (2017) ‘Existential Risk and Cost-Effective Biosecurity’, Health Security, 15 (4), pp. 373–383.\n* Nordhaus, W. D. (2011) ‘The Economics of Tail Events with an Application to Climate Change’, Review of Environmental Economics and Policy, 5 (2), pp. 240–257.\n* Omohundro, S. M. (2008) ‘The Basic AI Drives’, in Wang, P., Goertzel, B. and Franklin, S. (eds), Artificial General Intelligence 2008: Proceedings of the First AGI Conference. Frontiers in Artificial Intelligence and Applications 171. Amsterdam: IOS, pp. 483–492.\n* Ord, T., Hillerbrand, R. and Sandberg, A. (2010) ‘Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes’, Journal of Risk Research, 13 (2), pp. 191–205. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Journal+of+Risk+Research&title=Probing+the+Improbable:+Methodological+Challenges+for+Risks+with+Low+Probabilities+and+High+Stakes&volume=13&issue=2&publication_year=2010&pages=191-205&)]\n* O'Riordan, T.\n and \nCameron, J.\n (eds) (1994) Interpreting the Precautionary Principle. London: Earthscan. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Interpreting+the+Precautionary+Principle&publication_year=1994&)]\n* Pamlin, D.\n and \nArmstrong, S.\n (2015) Global challenges: 12 Risks That Threaten Human Civilization. Stockholm: Global Challenges Foundation. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Global+challenges:+12+Risks+That+Threaten+Human+Civilization&publication_year=2015&)]\n* Parfit, D.\n (1984) Reasons and Persons. Oxford: Oxford University Press. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Reasons+and+Persons&publication_year=1984&)]\n* Posner, R. A.\n (2004) Catastrophe: Risk and Response. Oxford: Oxford University Press. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Catastrophe:+Risk+and+Response&publication_year=2004&)]\n* Rees, M. J.\n (2003) Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future in This Century – on Earth and Beyond. New York: Basic Books (AZ). [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Our+Final+Hour:+A+Scientist's+Warning:+How+Terror,+Error,+and+Environmental+Disaster+Threaten+Humankind's+Future+in+This+Century+–+on+Earth+and+Beyond&publication_year=2003&)]\n* Rees, M.\n (2018) On the Future: Prospects for Humanity. Princeton, NJ: Princeton University Press. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=On+the+Future:+Prospects+for+Humanity&publication_year=2018&)]\n* Roberts, K. H.\n and \nBea, R.\n (2001) ‘Must Accidents Happen? Lessons from High‐reliability Organizations’, Academy of Management Perspectives, 15 (3), pp. 70–78. [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Academy+of+Management+Perspectives&title=Must+Accidents+Happen?+Lessons+from+High‐reliability+Organizations&volume=15&issue=3&publication_year=2001&pages=70-78&)]\n* Rowe, T.\n and \nBeard, S.\n (2018) Probabilities, methodologies and the evidence base in existential risk assessments. Working paper, Centre for the Study of Existential Risk, Cambridge, UK. Available from: [Accessed 08 August 2018]. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Probabilities,+methodologies+and+the+evidence+base+in+existential+risk+assessments&publication_year=2018&)]\n* Sandin, P.\n (1999) ‘Dimensions of the Precautionary Principle’, Human and Ecological Risk Assessment: An International Journal, 5 (5), pp. 889–907. [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Human+and+Ecological+Risk+Assessment:+An+International+Journal&title=Dimensions+of+the+Precautionary+Principle&volume=5&issue=5&publication_year=1999&pages=889-907&)]\n* Sandler, T.\n (2016) ‘Strategic Aspects of Difficult Global Challenges’, Global Policy, 7 (S1), pp. 33–44. [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Global+Policy&title=Strategic+Aspects+of+Difficult+Global+Challenges&volume=7&issue=S1&publication_year=2016&pages=33-44&)]\n* Sunstein, C. R.\n (2005) Laws of Fear: Beyond the Precautionary Principle, vol 6. Cambridge: Cambridge University Press. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?title=Laws+of+Fear:+Beyond+the+Precautionary+Principle&publication_year=2005&)]\n* Sunstein, C. R.\n (2007) ‘The Catastrophic Harm Precautionary Principle’, Issues in Legal Scholarship [online], 6 (3). Available from: [Accessed 08 August 2018] [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Issues+in+Legal+Scholarship+[online]&title=The+Catastrophic+Harm+Precautionary+Principle&volume=6&issue=3&publication_year=2007&)]\n* Sunstein, C. R.\n (2009) Worst‐case Scenarios. Cambridge, MA: Harvard University Press. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Worst‐case+Scenarios&publication_year=2009&)]\n* Tonn, B. E.\n (1991) ‘The Court of Generations: A Proposed Amendment to the US Constitution’, Futures, 23 (5), pp. 482–498. [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Futures&title=The+Court+of+Generations:+A+Proposed+Amendment+to+the+US+Constitution&volume=23&issue=5&publication_year=1991&pages=482-498&)]\n* Tonn, B. E.\n (2009) ‘Obligations to Future Generations and Acceptable Risks of Human Extinction’, Futures, 41 (7), pp. 427–435. [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Futures&title=Obligations+to+Future+Generations+and+Acceptable+Risks+of+Human+Extinction&volume=41&issue=7&publication_year=2009&pages=427-435&)]\n* Tonn, B. E.\n (2018) ‘Philosophical, Institutional, and Decision Making Frameworks for Meeting Obligations to Future Generations’, Futures, 95, pp. 44–57. [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Futures&title=Philosophical,+Institutional,+and+Decision+Making+Frameworks+for+Meeting+Obligations+to+Future+Generations&volume=95&publication_year=2018&pages=44-57&)]\n* Tonn, B.\n and \nStiefel, D.\n (2013) ‘Evaluating Methods for Estimating Existential Risks’, Risk Analysis, 33 (10), pp. 1772–1787. [[PubMed](https://pubmed.ncbi.nlm.nih.gov/23551083)] [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Risk+Analysis&title=Evaluating+Methods+for+Estimating+Existential+Risks&volume=33&issue=10&publication_year=2013&pages=1772-1787&pmid=23551083&)]\n* Tonn, B.\n and \nStiefel, D.\n (2014) ‘Human Extinction Risk and Uncertainty: Assessing Conditions for Action’, Futures, 63, pp. 134–144. [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Futures&title=Human+Extinction+Risk+and+Uncertainty:+Assessing+Conditions+for+Action&volume=63&publication_year=2014&pages=134-144&)]\n* Torres, P.\n (2016) ‘Agential Risks: A Comprehensive Introduction’, Journal of Evolution and Technology, 26 (2), pp. 31–47. [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Journal+of+Evolution+and+Technology&title=Agential+Risks:+A+Comprehensive+Introduction&volume=26&issue=2&publication_year=2016&pages=31-47&)]\n* Torres, P.\n (2018a) ‘Space Colonization and Suffering Risks: Reassessing the “Maxipok Rule”’, Futures, 100, pp. 74–85. [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Futures&title=Space+Colonization+and+Suffering+Risks:+Reassessing+the+“Maxipok+Rule”&volume=100&publication_year=2018a&pages=74-85&)]\n* Torres, P.\n (2018b) ‘Agential Risks and Information Hazards: An Unavoidable But Dangerous Topic?’, Futures, 95, pp. 86–97. 
[[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Futures&title=Agential+Risks+and+Information+Hazards:+An+Unavoidable+But+Dangerous+Topic?&volume=95&publication_year=2018b&pages=86-97&)]\n* Trask, A.\n (2017) ‘Safe Crime Prediction: Homomorphic Encryption and Deep Learning for More Effective, Less Intrusive Digital Surveillance’ [online]. Available from: [Accessed 8 Auguest 2018]. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Safe+Crime+Prediction:+Homomorphic+Encryption+and+Deep+Learning+for+More+Effective,+Less+Intrusive+Digital+Surveillance’+[online]&publication_year=2017&)]\n* Turco, R. P.\n, \nToon, O. B.\n, \nAckerman, T. P.\n, \nPollack, J. B.\n and \nSagan, C.\n (1983) ‘Nuclear Winter: Global Consequences of Multiple Nuclear Explosions’, Science, 222 (4630), pp. 1283–1292. [[PubMed](https://pubmed.ncbi.nlm.nih.gov/17773320)] [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Science&title=Nuclear+Winter:+Global+Consequences+of+Multiple+Nuclear+Explosions&volume=222&issue=4630&publication_year=1983&pages=1283-1292&pmid=17773320&)]\n* Umbrello, S.\n and \nBaum, S. D.\n (2018) ‘Evaluating Future Nanotechnology: The Net Societal Impacts of Atomically Precise Manufacturing’, Futures, 100, pp. 63–73. [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Futures&title=Evaluating+Future+Nanotechnology:+The+Net+Societal+Impacts+of+Atomically+Precise+Manufacturing&volume=100&publication_year=2018&pages=63-73&)]\n* UNDRR (United Nations Office for Disaster Risk Reduction)\n(2016) ‘Report of the open‐ended intergovernmental expert working group on indicators and terminology relating to disaster risk reduction’. Document symbol A/71/644 [online]. Available from: [Accessed 08 August 2018]. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Report+of+the+open‐ended+intergovernmental+expert+working+group+on+indicators+and+terminology+relating+to+disaster+risk+reduction’&publication_year=2016&)]\n* Wagner, G.\n and \nWeitzman, M. L.\n (2015) Climate Shock: The Economic Consequences of a Hotter Planet. Princeton, NJ: Princeton University Press. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Climate+Shock:+The+Economic+Consequences+of+a+Hotter+Planet&publication_year=2015&)]\n* Weick, K. E.\n (1990) ‘The Vulnerable System: An Analysis of the Tenerife Air Disaster’, Journal of Management, 16 (3), pp. 571–593. [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Journal+of+Management&title=The+Vulnerable+System:+An+Analysis+of+the+Tenerife+Air+Disaster&volume=16&issue=3&publication_year=1990&pages=571-593&)]\n* Weitzman, M. L.\n (2009) ‘On Modeling and Interpreting the Economics of Catastrophic Climate Change’, The Review of Economics and Statistics, 91 (1), pp. 1–19. [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=The+Review+of+Economics+and+Statistics&title=On+Modeling+and+Interpreting+the+Economics+of+Catastrophic+Climate+Change&volume=91&issue=1&publication_year=2009&pages=1-19&)]\n* Wiener, J. B.\n (2011) ‘The Rhetoric of Precaution’, in Wiener J. B., Rogers M. D., Hammitt J. K. and Sand P. H. (eds) The Reality of Precaution: Comparing Risk Regulation in the United States and Europe. Abingdon: Earthscan, pp. 3–35. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=The+Reality+of+Precaution:+Comparing+Risk+Regulation+in+the+United+States+and+Europe&publication_year=2011&)]\n* Wiener, J. 
B.\n (2016) ‘The Tragedy of the Uncommons: On the Politics of Apocalypse’, Global Policy, 7 (S1), pp. 67–80. [[Google Scholar](https://scholar.google.com/scholar_lookup?journal=Global+Policy&title=The+Tragedy+of+the+Uncommons:+On+the+Politics+of+Apocalypse&volume=7+&issue=S1&publication_year=2016&pages=67-80&)]\n* Yudkowsky, E.\n (2008) ‘Cognitive Biases Potentially Affecting Judgment of Global Risks’, in Bostrom N. and Ćirković M. M. (eds) Global Catastrophic Risks. New York: Oxford University Press, pp. 91–119. [[Google Scholar](https://scholar.google.com/scholar_lookup?title=Global+Catastrophic+Risks&publication_year=2008&)]\n\n\n---\n\nArticles from Global Policy are provided here courtesy of **Wiley-Blackwell**\n\n---", "url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7228299/", "title": "Global challenges: 12 risks that threaten human civilization", "source": "html_articles", "source_type": "journalArticle", "source_filetype": "pdf", "date_published": "2014-12-31T23:00:00Z", "authors": ["Dennis Pamlin", "Stuart Armstrong"], "summary": [], "id": "9bb47c9aad4bb67a79312cdca5e97043"} {"text": "Published: December 08, 2020 | by [Luke Muehlhauser](/about/team/luke-muehlhauser) \nWhen the Soviet Union began to fracture in 1991, the world was forced to reckon with the first collapse of a nuclear superpower in history.[1](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote1_ugsbl61 \"My thanks to Nathan Calvin for his help researching and drafting these opening paragraphs about the Nunn-Lugar Act.\")The USSR was home to more than 27,000 nuclear weapons, more than one million citizens working at nuclear facilities, and over 600 metric tons of nuclear fissile materials.[2](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote2_w0xguq6 \"On Soviet nuclear stockpile numbers, see Carter et al (1991) at pp. i & 29. On Soviet citizens working at nuclear facilities, see Parker (2016): [Siegfried] Hecker and the rest of the Americans were deeply concerned about the one million-plus Russians who worked in nuclear facilities. Many faced severe financial pressure in an imploding society and thus constituted a huge potential security risk.\") It seemed inevitable that some of these weapons, experts, and materials would end up in terrorist cells or hostile states,[3](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote3_xy8ud03 \" See e.g. contemporary commentary from Carter et al (1991), p. i: Soviet nuclear command and control is at root a social and political creation. However successful its designers have been in insulating it from all the problems they could foresee, it cannot be assumed capable of standing apart from turmoil throughout the society within which it is embedded. And if even one hundredth of one percent of the nuclear weapons in the Soviet Stockpile falls into the wrong hands, destruction greater than the world has seen since Hiroshima and Nagasaki could result. 
Another example contemporary quote is from then Secretary of Defense Dick Cheney: If the Soviets do an excellent job at retaining control over their stockpile of nuclear weapons – let's assume they've got 25,000 to 30,000; that's a ballpark figure – and they are 99 percent successful, that would mean you could still have as many as 250 that they were not able to control.\")especially given a series of recent failed attempts at non-proliferation cooperation between the US and the USSR.[4](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote4_zsi8pd4 \"See e.g. the 1986 breakdown of negotiations between Gorbachev and Reagan at Reykjavik over disagreements in Reagan’s proposed Strategic Defense Initiative, and the US’s 1980 withdrawal from the SALT II treaty after the Soviet invasion of Afghanistan.\")\n\n\nSeeing the threat, the [Carnegie](https://en.wikipedia.org/wiki/Carnegie_Corporation_of_New_York) and [MacArthur](https://en.wikipedia.org/wiki/MacArthur_Foundation) foundations funded a Prevention of Proliferation Task Force, which (among other outputs) produced the influential report “[Soviet Nuclear Fission: Control of the Nuclear Arsenal in a Disintegrating Soviet Union](https://www.belfercenter.org/publication/soviet-nuclear-fission-control-nuclear-arsenal-disintegrating-soviet-union)” by Ash Carter and others.[5](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote5_llg8sz9 \"Kohler (2007).\") Shortly before the report’s publication, the authors presented their findings to Senators Sam Nunn (D-GA) and Richard Lugar (R-IN) at a meeting arranged by the president of the Carnegie foundation.[6](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote6_6uzrtmz \"Jones (2019), p. 28.\") In later remarks, Nunn described the report as having an “astounding effect” on him and other Senators.[7](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote7_f4u4rro \"From Nunn’s remarks at a 1995 White House Forum, discussing his role in Soviet Nuclear disarmament: Then, in early November, Ash Carter gave his report on nuclear weapons security in the USSR, which I understand was financed by Carnegie... That report had an astounding effect. Dick Lugar and I got together. I knew that Dick had tremendous influence on the Republican side, tremendous influence in the Senate, and in the country. We really formed a partnership. Ash Carter presented his report to us. We then brought in other senators, and within about three to four weeks we had built a consensus.\")\n\n\nLater that year, Nunn and Lugar introduced legislation (co-drafted with Carter and others[8](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote8_wrglapi \"Carter and Perry (2000), pp. 71-72: Carter briefed the senators on the Harvard study. It turned out that Senator Nunn and Senator Lugar and their staff members, Robert Bell, Ken Myers, and Richard Combs, were working on a similar scheme for joint action. After the meeting broke up, Carter, Bell, Myers, and Combs stayed behind to draft what became known as the Nunn-Lugar legislation.\")) to create the Cooperative Threat Reduction Program, also known as the Nunn-Lugar Act.[9](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote9_ptb36i1 \"Jones (2019), especially pp. 27-33.\") The bill provided hundreds of millions of dollars in funding and scientific expertise to help former Soviet Union states decommission their nuclear stockpiles. 
As of 2013,[10](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote10_o8zjro2 \"The Cooperative Threat Reduction Plan was revised several times over the years, and expanded to engage other types of weapons and other states besides Russia (Congressional Research Service 2015).\") the Nunn-Lugar Act had achieved the dismantling or elimination of over 7,616 nuclear warheads, 926 ICBMs, and 498 ICBM sites. In addition to removing weapons, the program also attempted to ensure that remaining nuclear materials in the former USSR were appropriately secured and accounted for.[11](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote11_c2piuab \"Nunn-Lugar Scorecard (2013).\") In 2012, President Obama [said](https://obamawhitehouse.archives.gov/the-press-office/2012/12/03/remarks-president-nunn-lugar-cooperative-threat-reduction-symposium) that Nunn-Lugar was one of America’s “smartest and most successful national security programs,” having previously called it “one of the most important investments we could have made to protect ourselves from catastrophe.”[12](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote12_dspzbxa \"The earlier quote is from The Audacity of Hope (2006), p. 311: The premise of what came to be known as the Nunn-Lugar program was simple: after the fall of the Soviet Union, the biggest threat to the United States — aside from an accidental launch — wasn’t a first strike ordered by Gorbachev or Yeltsin, but the migration of nuclear material or know-how into the hands of terrorists and rogue states, a possible result of Russia’s economic tailspin, corruption in the military, the impoverishment of Russian scientists, and security and control systems that had fallen into disrepair. Under Nunn-Lugar, America basically provided the resources to fix up these systems, and although the program caused some consternation to those accustomed to Cold War thinking, it has proved to be one of the most important investments we could have made to protect ourselves from catastrophe.\") President-Elect Joe Biden, a U.S. Senator at the time of Nunn-Lugar’s passage, called it “the most cost-effective national security expenditure in American history.”[13](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote13_bcjrwlk \"Jones (2019), p. 32.\")\n\n\nThe Nunn-Lugar program is an example of how technology governance can have a very large impact, specifically by reducing global catastrophic risks from technology. Stories like this help inspire and inform our own grantmaking related to mitigating potential catastrophic risks from another (albeit very different) class of high-stakes technologies, namely some advanced artificial intelligence (AI) capabilities that will be fielded in the coming decades, and in particular from what we call “transformative AI” (more below).[14](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote14_rta7rzg \"To be clear, I'm not using the Nunn-Lugar Act as anything more than an example of technology governance having a large impact by reducing a global catastrophic risk from technology. For example, I'm not using the Nunn-Lugar example to suggest that future AI risks are similar to the risk of \\\"loose nukes,\\\" nor that aggressive AI arms control measures should be an urgent priority.\")\n\n\nWe have previously described some of our grantmaking priorities related to technical work on “AI alignment” (e.g. 
[here](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/the-open-phil-ai-fellowship#examples)), but we haven’t yet said much about our grantmaking related to AI governance. In this post, I aim to clarify our priorities in AI governance, and summarize our AI governance grantmaking so far.\n\n\nOur priorities within AI governance\n-----------------------------------\n\n\nBy AI governance we mean local and global norms, policies, laws, processes, politics, and institutions (not just governments) that will affect social outcomes from the development and deployment of AI systems. We aim to support work related to both AI governance *research* (to improve our collective understanding of how to achieve beneficial and effective AI governance) and AI governance *practice* and *influence* (to improve the odds that good governance ideas are actually implemented by companies, governments, and other actors).\n\n\nWithin the large tent of “AI governance,” we focus on work that we think may increase the odds of eventual good outcomes from “[transformative AI](https://docs.google.com/document/d/15siOkHQAoSBl_Pu85UgEDWfmvXFotzub31ow3A11Xvo/edit),” especially by reducing potential catastrophic risks from transformative AI[15](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote15_ae9c6nl \"We focus on \\\"transformative AI\\\" (a term we introduced in a 2016 blog post) because our Potential Risks from Advanced AI focus area is part of our longtermism-motivated portfolio. For more on our reasons for this focus, see Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity. For more on longtermism, see e.g. Greaves & MacAskill (forthcoming); Ord (2020); Bostrom (2013). Because of this longtermist motivation, we focus on a subset of transformative AI scenarios that seems especially important from a longtermist perspective, for example (but not limited to) scenarios involving \\\"prepotent AI\\\" (Critch & Krueger 2020). However, in this blog post and elsewhere I often focus the discussion on \\\"transformative AI\\\" because this term is (I hope) more concrete than alternatives such as \\\"AI systems of likely longtermist importance,\\\" and because it helps to point readers in the direction of issues we focus on (i.e. those with extremely large stakes for the future of human civilization). Our priorities in the space overlap with, but aren't identical to, those articulated in Dafoe (2020).\") — regardless of whether that work is itself motivated by transformative AI concerns (see next section). By transformative AI, I mean software that has [at least as profound an impact on the world’s trajectory as the Industrial Revolution did](https://docs.google.com/document/d/15siOkHQAoSBl_Pu85UgEDWfmvXFotzub31ow3A11Xvo/edit#heading=h.lnzzqc1wopfc).[16](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote16_pont95i \"This phrasing is due to my colleague Ajeya Cotra, and is adapted from the definition of transformative AI introduced by Holden Karnofsky here.\") Importantly, this is a much larger scale of impact than others seem to mean when discussing “transformative technologies” or a “4th industrial revolution,” but it also doesn’t assume technological developments as radical as “artificial general intelligence” or “machine superintelligence” (see [here](https://docs.google.com/document/d/15siOkHQAoSBl_Pu85UgEDWfmvXFotzub31ow3A11Xvo/edit#)). 
Nor does it assume any particular AI architecture or suite of capabilities; it remains an open empirical question which architectures and capabilities would have such extreme (positive or negative) impact on society. For example, even a small set of AI systems with narrow and limited capabilities could — in theory, in a worst-case scenario — have industrial-revolution-scale (negative) impact if they were used to automate key parts of nuclear command and control in the U.S. and Russia, and this was the primary cause of an unintended large-scale nuclear war.[17](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote17_0qjbtss \"On risks from the interaction of AI and nuclear arsenals, including the automation of some parts of nuclear command and control, see e.g. Boulanin et al. (2020); Geist & Lohn (2018); Horowitz et al. (2019).\") (But this is only one example scenario and, one hopes, a very unlikely one.[18](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote18_g64yp5u \"This example might also be unrepresentative of the kind of possible catastrophic risk from transformative AI that we are likely to focus on, for example if it is difficult to affect with philanthropic dollars, or if even larger-scale nuclear war is unlikely to have much longtermist significance (see Ord 2020, ch. 4).\"))\n\n\nUnfortunately, it’s difficult to know which “intermediate goals” we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI. Would tighter regulation of AI technologies in the U.S. and Europe meaningfully reduce catastrophic risks, or would it increase them by (e.g.) privileging AI development in states that typically have lower safety standards and a less cooperative approach to technological development? Would broadly accelerating AI development increase the odds of good outcomes from transformative AI, e.g. because faster economic growth leads to more [positive-sum](https://www.britannica.com/topic/positive-sum-game) political dynamics, or would it increase catastrophic risk, e.g. because it would leave less time to develop, test, and deploy the technical and governance solutions needed to successfully manage transformative AI? For those examples and many others, we are not just uncertain about whether pursuing a particular intermediate goal would turn out to be *tractable* — we are also uncertain about whether *achieving* the intermediate goal would be good or bad for society, in the long run. Such “sign uncertainty” can dramatically reduce the expected value of pursuing some particular goal,[19](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote19_8t28yza \"For example, if we estimate that a $1 million grant has a 60% chance of having ~no impact and a 40% chance of creating +100 units of some social benefit, the expected value of the grant is (.6×0)+(.4×100) = 40 benefit units, for a return on investment (ROI) of one benefit unit per $25,000 spent. If instead we estimate that a $1 million grant has a 40% chance of having ~no impact, a 20% chance of creating -100 benefit units (i.e. a large harm), and (as with the other grant) a 40% chance of creating +100 benefit units, then even though we think the grant is twice as likely to create a large benefit as a large harm, the expected value of the grant is only (0.4×0)+(0.2×(-100))+(0.4×100) = 20 benefit units, for an ROI of one benefit unit per $50,000. 
In other words, our \\\"hits-based giving\\\" approach can accommodate more failure of the \\\"no impact\\\" variety than it can of the \\\"negative impact\\\" variety. (And to be clear, I'm not suggesting anything different from normal cost-benefit analysis.)\")often enough for us to not prioritize that goal.[20](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote20_sdl0szs \"That is, sign uncertainty can reduce the expected value of pursuing some particular goal below our threshold for how much benefit we hope to create on average per dollar spent. For more on our traditional \\\"100x bar\\\" for benefit produced per dollar, see GiveWell’s Top Charities Are (Increasingly) Hard to Beat, but also note that we are still thinking through what threshold to use for our longtermism-motivated grantmaking, per our current approach to \\\"worldview diversification\\\"; see here. The potential impact of sign uncertainty on expected value is universal, but I highlight it here because I have encountered sign uncertainty more commonly in our work on AI governance than in some other Open Philanthropy focus areas, for example in our grantmaking to machine learning researchers and engineers for technical work on AI alignment (though there can be some sign uncertainty for those grants too). For more on sign uncertainty in the context of attempts to do good cost-effectively, see e.g. Kokotajlo & Oprea (2020).\")\n\n\n
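To make the arithmetic in footnote 19 concrete, here is a minimal sketch of that calculation (purely illustrative: the probabilities, the ±100 benefit-unit outcomes, and the $1 million grant size are the made-up numbers from the footnote, and the function and variable names are my own, not part of any real grant-evaluation model we use):

```python
# Illustrative only: expected value of a hypothetical $1M grant with and without
# "sign uncertainty," using the made-up numbers from footnote 19.

def expected_value(outcomes):
    """outcomes: iterable of (probability, benefit_units) pairs."""
    return sum(p * v for p, v in outcomes)

GRANT_COST = 1_000_000  # dollars

# Grant A: can only have ~no impact or a large benefit.
grant_a = [(0.6, 0), (0.4, +100)]
# Grant B: same 40% chance of a large benefit, plus a 20% chance of a large harm.
grant_b = [(0.4, 0), (0.2, -100), (0.4, +100)]

for name, outcomes in [("A (no sign uncertainty)", grant_a),
                       ("B (sign uncertainty)", grant_b)]:
    ev = expected_value(outcomes)
    print(f"Grant {name}: {ev:.0f} benefit units, "
          f"one unit per ${GRANT_COST / ev:,.0f}")

# Grant A (no sign uncertainty): 40 benefit units, one unit per $25,000
# Grant B (sign uncertainty): 20 benefit units, one unit per $50,000
```

The point of the sketch is simply that a 20% chance of large harm halves the expected benefit per dollar, even though the grant is still twice as likely to help as to hurt; that is the sense in which sign uncertainty can push an otherwise promising intermediate goal below our threshold.
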
As such, our AI governance grantmaking tends to focus on…\n\n\n* …research that may be especially helpful for learning how AI technologies may develop over time, which AI capabilities could have industrial-revolution-scale impact, and which intermediate goals would, if achieved, have a positive impact on transformative AI outcomes, e.g. via our grants to [GovAI](https://www.openphilanthropy.org/grants/centre-for-the-governance-of-ai-general-support/).\n* …research and advocacy supporting intermediate goals that we’ve come to think will improve expected transformative AI outcomes,[21](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote21_qs3qabr \"We don't have a concrete list of such intermediate goals at this time, because our expectations about the likely flow-through effects from various possible intermediate goals to transformative AI outcomes are still in a great deal of flux as we learn more.\") such as more work on methods for gaining high assurance in advanced AI systems and greater awareness of the difficulty of achieving such high assurance, e.g. via our funding for [Lohn (2020)](https://arxiv.org/abs/2009.00802) and [Flournoy et al. (2020)](https://cset.georgetown.edu/wp-content/uploads/Building-Trust-Through-Testing.pdf).\n* …broad field-building activities, for example to identify and empower highly capable individuals with a passion for increasing the odds that transformative AI will result in long-lasting broad benefit, e.g. 
via [scholarships](https://www.openphilanthropy.org/grants/study-and-training-related-to-ai-policy-careers-scholarship-support/), our support for [career advice related to AI policy careers](https://80000hours.org/articles/us-ai-policy/), and grantees such as [GovAI](https://www.openphilanthropy.org/grants/centre-for-the-governance-of-ai-general-support/).[22](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote22_tua1p5o \"And more broadly, much of our work aims to build other assets for the field besides individuals, for example institutions, professional networks, credentialing methods, etc.\")\n* …better-informed AI governance training and advice for governments, companies, and other actors, especially on issues of likely relevance to transformative AI outcomes such as great power technology competition, e.g. via our grants to [CSET](https://www.openphilanthropy.org/grants/georgetown-university-center-for-security-and-emerging-technology/) and the [Wilson Center](https://www.openphilanthropy.org/grants/wilson-center-ai-policy-seminar-series-june-2020/).\n\n\nIn a footnote, I list all the grants we’ve made so far that were, at least in part, motivated by their hoped-for impact on AI governance.[23](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote23_7y04h21 \"Each grant below is linked to a page with more information about the size of the grant and its rationale. While some of our grants are entirely aimed at supporting AI governance work, many of our grants support a variety of work by a given grantee. For example, a grant might support both AI governance and (technical) AI alignment work (e.g. our grant to OpenAI), or it might support work on a variety of global catastrophic risks (e.g. our grants to FHI), with only some (typically unknown) portion of it supporting work on AI governance specifically. In the table below, I provide rough estimates about what fraction of each grant effectively supported AI governance work vs. other kinds of work at the grantee, but these are just guesses and don't currently play any official role in our budgeting. The cutoffs of 35% and 90% were chosen for grant classification convenience.Our annual spending in this area fluctuates greatly from year to year, depending on how much staff time we're able to devote to the area and especially on which opportunities happen to be available and discovered in a given year. Grant How much for AI governance? 
MIT, for Thompson (2020) >90% CSIS (2020) >90% CISAC at Stanford University (2020) >90% RHGM (2020) >90% CNAS, for Scharre (2020 #2) >90% CNAS, for Scharre (2020 #1) >90% FHI at Oxford University, for GovAI (2020) >90% FHI at Oxford University, renewal (2020) <35% World Economic Forum (2020) >90% Lohn (2020) >90% Wilson Center, expansion (2020) >90% Oxford University (2020) <35% 80,000 Hours, renewal (2020) <35% Wilson Center, renewal (2020) >90% WestExec (2020) >90% RAND, for Lohn (2020) >90% Scholarships (2020) >90% CSET at Georgetown University (2019) >90% 80,000 Hours, renewal (2019) <35% Wilson Center (2018) >90% FHI at Oxford University, for Dafoe (2018) >90% FHI at Oxford University, renewal (2018) <35% CNAS, for Danzig, renewal (2018) >90% AI Impacts, renewal (2018) 35%-90% Future of Life Institute, renewal (2018) <35% 80,000 Hours (2018) <35% Yale University, for Dafoe (2017) >90% UCLA, for Parson & Re (2017) >90% CNAS, for Danzig (2017) >90% OpenAI (2017) 35%-90% FHI at Oxford University (2017) <35% Future of Life Institute, renewal (2017) <35% AI Impacts (2016) 35%-90% Electronic Frontier Foundation (2016) 35%-90% George Mason University, for Hanson (2016) >90% Future of Life Institute, renewal (2016) <35% Future of Life Institute (2015) <35% \")\n\n\nExample work I’ve found helpful\n-------------------------------\n\n\nOur sense is that [relatively few people](https://www.openphilanthropy.org/research/potential-risks-from-advanced-artificial-intelligence-the-philanthropic-opportunity/#neglectedness) who work on AI governance share our focus on improving likely outcomes from transformative AI, for understandable reasons: such issues are speculative, beyond the planning horizon of most actors, may be intractable until a later time, may be [impossible to forecast](https://www.openphilanthropy.org/research/how-feasible-is-long-range-forecasting/) even in broad strokes, etc.\n\n\nNevertheless, there has been substantial AI governance work that I suspect has increased the odds of good outcomes from transformative AI,[24](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnote24_6q2ntcn \"Presumably by only a very small amount in each case, and typically with some remaining sign uncertainty.\") regardless of whether that work was itself motivated by transformative AI concerns, or has any connection to Open Philanthropy funding. I list some examples below, in no order:\n\n\n* The early exploration of concrete mechanisms that may help with AI-related cooperation in [Brundage et al. (2020)](https://arxiv.org/abs/2004.07213), [Imbrie & Kania (2019)](https://cset.georgetown.edu/research/ai-safety-security-and-stability-among-great-powers-options-challenges-and-lessons-learned-for-pragmatic-engagement/), [O’Keefe et al. (2020)](https://www.fhi.ox.ac.uk/wp-content/uploads/Windfall-Clause-Report.pdf), and [Horowitz et al. (2020)](https://www.sciencedirect.com/science/article/abs/pii/S0030438720300430).\n* Writings on how to manage AI competition to mitigate catastrophic accident risks, e.g. [Danzig (2018)](https://www.cnas.org/publications/reports/technology-roulette), [Scharre (2019)](https://www.foreignaffairs.com/articles/2019-04-16/killer-apps), and Clark’s chapter in [Bitounis & Price (2019)](https://www.amazon.com/Technology-National-Security-Maintaining-Americas/dp/0578427958/).\n* Analyses of the current state of deep learning assurance practices and how to improve them, e.g. [Lohn (2020)](https://arxiv.org/abs/2009.00802), [Flournoy et al. 
(2020)](https://cset.georgetown.edu/wp-content/uploads/Building-Trust-Through-Testing.pdf), and [Ashmore et al. (2019)](https://arxiv.org/abs/1905.04223).\n* Various pieces of speculative but thoughtful and data-informed analysis of some important possible trajectories for AI capabilities and governance challenges, e.g. [Thompson et al. (2020)](https://arxiv.org/abs/2007.05558), some reports from OpenAI ([1](https://openai.com/blog/ai-and-compute/), [2](https://openai.com/blog/ai-and-efficiency/), [3](https://arxiv.org/abs/2005.14165), [4](https://arxiv.org/abs/2010.14701)), and [Tucker et al. (2020)](https://dl.acm.org/doi/abs/10.1145/3375627.3375863).\n* Data-driven analyses of semiconductor supply chains, the AI workforce, and other key strategic variables — e.g. [Khan (2020)](https://cset.georgetown.edu/research/ai-chips-what-they-are-and-why-they-matter/) and [MacroPolo (2020)](https://macropolo.org/digital-projects/the-global-ai-talent-tracker/).\n* The well-informed coverage of important AI governance developments in Clark’s [Import AI](https://jack-clark.net/) newsletter, CSET’s [policy.ai](https://cset.georgetown.edu/newsletters/) newsletter, and Shah’s [Alignment Newsletter](https://rohinshah.com/alignment-newsletter/).\n* The Wilson Center’s [AI seminar series](https://www.wilsoncenter.org/artificial-intelligence-lab) for Congressional staff, Executive Branch staff, and some Members of Congress.\n* 80,000 Hours’ [career](https://80000hours.org/articles/ai-policy-guide/) [guides](https://80000hours.org/articles/us-ai-policy/), [podcast episodes](https://80000hours.org/podcast/), and [regularly updated job listings](https://80000hours.org/job-board/ai-safety-policy/) related to AI governance careers.\n* A variety of work that has gone into identifying, attracting, training, and networking (often junior) individuals who began working on these topics in the last few years, including the work of building institutions that do those things.\n\n\nIn the future, we hope to fund more work along these lines. As demonstrated by the examples above, some of the work we fund will involve explicit analysis of very long-run, potentially transformative impacts of AI, but much of the work we fund will be focused on more immediate, tractable issues of AI governance, so long as we are persuaded that the work has a decent chance of improving the odds of eventual good outcomes from transformative AI (and regardless of whether a given grantee has any interest in transformative AI).\n\n\nOf course, we might never fund anything in AI governance as impactful as the work that led to the Nunn-Lugar Act, but per our commitment to [hits-based giving](https://www.openphilanthropy.org/blog/hits-based-giving), we are willing to take that risk given the scale of impact we expect from transformative AI.\n\n\n\n\n[1.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref1_ugsbl61) My thanks to Nathan Calvin for his help researching and drafting these opening paragraphs about the Nunn-Lugar Act.\n\n\n[2.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref2_w0xguq6) On Soviet nuclear stockpile numbers, see [Carter et al (1991)](https://www.belfercenter.org/publication/soviet-nuclear-fission-control-nuclear-arsenal-disintegrating-soviet-union) at pp. i & 29. 
On Soviet citizens working at nuclear facilities, see [Parker (2016)](https://engineering.stanford.edu/magazine/article/why-soviet-nuclear-arsenal-stayed-secure-nation-collapsed):\n\n\n\n> [Siegfried] Hecker and the rest of the Americans were deeply concerned about the one million-plus Russians who worked in nuclear facilities. Many faced severe financial pressure in an imploding society and thus constituted a huge potential security risk.\n> \n> \n\n\n[3.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref3_xy8ud03) See e.g. contemporary commentary from [Carter et al (1991)](https://www.belfercenter.org/publication/soviet-nuclear-fission-control-nuclear-arsenal-disintegrating-soviet-union), p. i:\n\n\n\n> Soviet nuclear command and control is at root a social and political creation. However successful its designers have been in insulating it from all the problems they could foresee, it cannot be assumed capable of standing apart from turmoil throughout the society within which it is embedded. And if even one hundredth of one percent of the nuclear weapons in the Soviet Stockpile falls into the wrong hands, destruction greater than the world has seen since Hiroshima and Nagasaki could result.\n> \n> \n\n\nAnother example contemporary quote is from then Secretary of Defense [Dick Cheney](https://nsarchive.gwu.edu/briefing-book/nuclear-vault-nunn-lugar-russia-programs/2016-12-12/nunn-lugar-25th-anniversary-shows):\n\n\n\n> If the Soviets do an excellent job at retaining control over their stockpile of nuclear weapons – let’s assume they’ve got 25,000 to 30,000; that’s a ballpark figure – and they are 99 percent successful, that would mean you could still have as many as 250 that they were not able to control.\n> \n> \n\n\n[4.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref4_zsi8pd4) See e.g. the 1986 [breakdown of negotiations](https://www.atomicheritage.org/history/reagan-and-gorbachev-reykjavik-summit) between Gorbachev and Reagan at Reykjavik over disagreements in Reagan’s proposed Strategic Defense Initiative, and the US’s 1980 [withdrawal](https://www.cfr.org/timeline/us-russia-nuclear-arms-control) from the SALT II treaty after the Soviet invasion of Afghanistan.\n\n\n[5.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref5_llg8sz9) [Kohler (2007)](https://cspcs.sanford.duke.edu/sites/default/files/descriptive/cooperative_security_and_the_nunn-lugar_act.pdf).\n\n\n[6.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref6_6uzrtmz) [Jones (2019)](http://www.shfg.org/resources/Documents/3-Jones.pdf), p. 28.\n\n\n[7.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref7_f4u4rro) From Nunn’s remarks at a [1995 White House Forum](https://clintonwhitehouse3.archives.gov/WH/EOP/OSTP/forum/html/nunn.html), discussing his role in Soviet Nuclear disarmament:\n\n\n\n> Then, in early November, Ash Carter gave his report on nuclear weapons security in the USSR, which I understand was financed by Carnegie… That report had an astounding effect. Dick Lugar and I got together. I knew that Dick had tremendous influence on the Republican side, tremendous influence in the Senate, and in the country. We really formed a partnership. Ash Carter presented his report to us. 
We then brought in other senators, and within about three to four weeks we had built a consensus.\n> \n> \n\n\n[8.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref8_wrglapi) [Carter and Perry (2000)](https://www.brookings.edu/book/preventive-defense/), pp. 71-72:\n\n\n\n> Carter briefed the senators on the Harvard study. It turned out that Senator Nunn and Senator Lugar and their staff members, Robert Bell, Ken Myers, and Richard Combs, were working on a similar scheme for joint action. After the meeting broke up, Carter, Bell, Myers, and Combs stayed behind to draft what became known as the Nunn-Lugar legislation.\n> \n> \n\n\n[9.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref9_ptb36i1) [Jones (2019)](http://www.shfg.org/resources/Documents/3-Jones.pdf), especially pp. 27-33.\n\n\n[10.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref10_o8zjro2) The Cooperative Threat Reduction Plan was revised several times over the years, and expanded to engage other types of weapons and other states besides Russia ([Congressional Research Service 2015](https://crsreports.congress.gov/product/pdf/R/R43143)).\n\n\n[11.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref11_c2piuab) [Nunn-Lugar Scorecard (2013)](https://www.dtra.mil/Portals/61/Documents/CTR%20Scorecards/20130501_fy13_ctr-scorecard_slides_may13.pdf).\n\n\n[12.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref12_dspzbxa) The earlier quote is from [*The Audacity of Hope* (2006)](https://en.wikipedia.org/wiki/The_Audacity_of_Hope), p. 311:\n\n\n\n> The premise of what came to be known as the Nunn-Lugar program was simple: after the fall of the Soviet Union, the biggest threat to the United States — aside from an accidental launch — wasn’t a first strike ordered by Gorbachev or Yeltsin, but the migration of nuclear material or know-how into the hands of terrorists and rogue states, a possible result of Russia’s economic tailspin, corruption in the military, the impoverishment of Russian scientists, and security and control systems that had fallen into disrepair. Under Nunn-Lugar, America basically provided the resources to fix up these systems, and although the program caused some consternation to those accustomed to Cold War thinking, it has proved to be one of the most important investments we could have made to protect ourselves from catastrophe.\n> \n> \n\n\n[13.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref13_bcjrwlk) [Jones (2019)](http://www.shfg.org/resources/Documents/3-Jones.pdf), p. 32.\n\n\n[14.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref14_rta7rzg) To be clear, I’m not using the Nunn-Lugar Act as anything more than an example of technology governance having a large impact by reducing a global catastrophic risk from technology. 
For example, I’m not using the Nunn-Lugar example to suggest that future AI risks are similar to the risk of “loose nukes,” nor that aggressive AI arms control measures should be an urgent priority.\n\n\n[15.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref15_ae9c6nl) We focus on “[transformative AI](https://docs.google.com/document/d/15siOkHQAoSBl_Pu85UgEDWfmvXFotzub31ow3A11Xvo/edit)” (a term we introduced in [a 2016 blog post](https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence)) because our [Potential Risks from Advanced AI focus area](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence) is part of our [longtermism-motivated portfolio](https://www.openphilanthropy.org/blog/update-cause-prioritization-open-philanthropy#Long-termist_vs._near-termist_views). For more on our reasons for this focus, see [Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity](https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity). For more on longtermism, see e.g. [Greaves & MacAskill (forthcoming)](https://globalprioritiesinstitute.org/wp-content/uploads/2019/Greaves_MacAskill_The_Case_for_Strong_Longtermism.pdf); [Ord (2020)](https://www.amazon.com/Precipice-Existential-Risk-Future-Humanity-ebook/dp/B07V9GHKYP/); [Bostrom (2013)](https://www.existential-risk.org/concept.pdf). Because of this longtermist motivation, we focus on a subset of transformative AI scenarios that seems especially important from a longtermist perspective, for example (but not limited to) scenarios involving “prepotent AI” ([Critch & Krueger 2020](https://arxiv.org/abs/2006.04948)). However, in this blog post and elsewhere I often focus the discussion on “transformative AI” because this term is (I hope) more concrete than alternatives such as “AI systems of likely longtermist importance,” and because it helps to point readers in the direction of issues we focus on (i.e. those with extremely large stakes for the future of human civilization). Our priorities in the space overlap with, but aren’t identical to, those articulated in [Dafoe (2020)](https://www.allandafoe.com/opportunity).\n\n\n[16.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref16_pont95i) This phrasing is due to my colleague Ajeya Cotra, and is adapted from the definition of transformative AI introduced by Holden Karnofsky [here](https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence).\n\n\n[17.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref17_0qjbtss) On risks from the interaction of AI and nuclear arsenals, including the automation of some parts of nuclear command and control, see e.g. [Boulanin et al. (2020)](https://www.sipri.org/publications/2020/other-publications/artificial-intelligence-strategic-stability-and-nuclear-risk); [Geist & Lohn (2018)](https://www.rand.org/pubs/perspectives/PE296.html); [Horowitz et al. 
(2019)](https://arxiv.org/abs/1912.05291).\n\n\n[18.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref18_g64yp5u) This example might also be unrepresentative of the kind of possible catastrophic risk from transformative AI that we are likely to focus on, for example if it is difficult to affect with philanthropic dollars, or if even larger-scale nuclear war is unlikely to have much longtermist significance (see [Ord 2020](https://www.amazon.com/Precipice-Existential-Risk-Future-Humanity-ebook/dp/B07V9GHKYP/), ch. 4).\n\n\n[19.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref19_8t28yza) For example, if we estimate that a $1 million grant has a 60% chance of having ~no impact and a 40% chance of creating +100 units of some social benefit, the expected value of the grant is (.6×0)+(.4×100) = 40 benefit units, for a return on investment (ROI) of one benefit unit per $25,000 spent. If instead we estimate that a $1 million grant has a 40% chance of having ~no impact, a 20% chance of creating -100 benefit units (i.e. a large harm), and (as with the other grant) a 40% chance of creating +100 benefit units, then even though we think the grant is twice as likely to create a large benefit as a large harm, the expected value of the grant is only (0.4×0)+(0.2×(-100))+(0.4×100) = 20 benefit units, for an ROI of one benefit unit per $50,000. In other words, our “[hits-based giving](https://www.openphilanthropy.org/blog/hits-based-giving)” approach can accommodate more failure of the “no impact” variety than it can of the “negative impact” variety. (And to be clear, I’m not suggesting anything different from normal cost-benefit analysis.)\n\n\n[20.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref20_sdl0szs) That is, sign uncertainty can reduce the expected value of pursuing some particular goal below our threshold for how much benefit we hope to create on average per dollar spent. For more on our traditional “100x bar” for benefit produced per dollar, see [GiveWell’s Top Charities Are (Increasingly) Hard to Beat](https://www.openphilanthropy.org/blog/givewells-top-charities-are-increasingly-hard-beat), but also note that we are still thinking through what threshold to use for our longtermism-motivated grantmaking, per our current approach to “[worldview diversification](https://www.openphilanthropy.org/blog/worldview-diversification)”; see [here](https://www.openphilanthropy.org/blog/update-cause-prioritization-open-philanthropy#No_more_unified_benchmark). The potential impact of sign uncertainty on expected value is universal, but I highlight it here because I have encountered sign uncertainty more commonly in our work on AI governance than in some other Open Philanthropy focus areas, for example in our grantmaking to machine learning researchers and engineers for technical work on AI alignment (though there can be some sign uncertainty for those grants too). For more on sign uncertainty in the context of attempts to do good cost-effectively, see e.g. 
[Kokotajlo & Oprea (2020)](https://onlinelibrary.wiley.com/doi/abs/10.1111/phpe.12133).\n\n\n[21.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref21_qs3qabr) We don’t have a concrete list of such intermediate goals at this time, because our expectations about the likely flow-through effects from various possible intermediate goals to transformative AI outcomes are still in a great deal of flux as we learn more.\n\n\n[22.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref22_tua1p5o) And more broadly, much of our work aims to build other assets for the field besides individuals, for example institutions, professional networks, credentialing methods, etc.\n\n\n[23.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref23_7y04h21) Each grant below is linked to a page with more information about the size of the grant and its rationale. While some of our grants are entirely aimed at supporting AI governance work, many of our grants support a variety of work by a given grantee. For example, a grant might support both AI governance and (technical) AI alignment work (e.g. our grant to OpenAI), or it might support work on a variety of global catastrophic risks (e.g. our grants to FHI), with only some (typically unknown) portion of it supporting work on AI governance specifically. In the table below, I provide rough estimates about what fraction of each grant effectively supported AI governance work vs. other kinds of work at the grantee, but these are just guesses and don’t currently play any official role in our budgeting. The cutoffs of 35% and 90% were chosen for grant classification convenience.\n\n\nOur annual spending in this area fluctuates greatly from year to year, depending on how much staff time we’re able to devote to the area and especially on which opportunities happen to be available and discovered in a given year.\n\n\n\n\n\n| | |\n| --- | --- |\n| Grant | How much for AI governance? 
|\n| [MIT, for Thompson (2020)](https://www.openphilanthropy.org/focus/other-areas/massachusetts-institute-of-technology-ai-trends-and-impacts-research) | >90% |\n| [CSIS (2020)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competition) | >90% |\n| [CISAC at Stanford University (2020)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competition) | >90% |\n| [RHGM (2020)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-risk) | >90% |\n| [CNAS, for Scharre (2020 #2)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-and-security-projects) | >90% |\n| [CNAS, for Scharre (2020 #1)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-governance-projects) | >90% |\n| [FHI at Oxford University, for GovAI (2020)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/gov-ai-general-support) | >90% |\n| [FHI at Oxford University, renewal (2020)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-of-humanity-institute-research-scholars-programme) | <35% |\n| [World Economic Forum (2020)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/world-economic-forum-global-ai-council-workshop) | >90% |\n| [Lohn (2020)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/andrew-lohn-paper-machine-learning-model-robustness) | >90% |\n| [Wilson Center, expansion (2020)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-june-2020) | >90% |\n| [Oxford University (2020)](https://www.openphilanthropy.org/giving/grants/university-of-oxford-new-office-for-effective-altruism-organizations) | <35% |\n| [80,000 Hours, renewal (2020)](https://www.openphilanthropy.org/giving/grants/80000-hours-general-support-2020) | <35% |\n| [Wilson Center, renewal (2020)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-2020) | >90% |\n| [WestExec (2020)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/westexec-report-on-assurance-in-machine-learning-systems) | >90% |\n| [RAND, for Lohn (2020)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rand-corporation-research-on-the-state-of-ai-assurance-methods) | >90% |\n| [Scholarships (2020)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/study-and-training-related-to-ai-policy-careers) | >90% |\n| [CSET at Georgetown University (2019)](https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology) | >90% |\n| [80,000 Hours, renewal 
(2019)](https://www.openphilanthropy.org/giving/grants/80000-hours-general-support-2019) | <35% |\n| [Wilson Center (2018)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series) | >90% |\n| [FHI at Oxford University, for Dafoe (2018)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/oxford-university-global-politics-of-ai-dafoe) | >90% |\n| [FHI at Oxford University, renewal (2018)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-humanity-institute-work-on-global-catastrophic-risks) | <35% |\n| [CNAS, for Danzig, renewal (2018)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/center-for-a-new-american-security-richard-danzig-outreach-on-technological-risk-2018) | >90% |\n| [AI Impacts, renewal (2018)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2018) | 35%-90% |\n| [Future of Life Institute, renewal (2018)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2018) | <35% |\n| [80,000 Hours (2018)](https://www.openphilanthropy.org/giving/grants/80000-hours-general-support-2018) | <35% |\n| [Yale University, for Dafoe (2017)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoe) | >90% |\n| [UCLA, for Parson & Re (2017)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ucla-artificial-intelligence-governance) | >90% |\n| [CNAS, for Danzig (2017)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/center-for-a-new-american-security-richard-danzig-outreach-on-technological-risk) | >90% |\n| [OpenAI (2017)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support) | 35%-90% |\n| [FHI at Oxford University (2017)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support) | <35% |\n| [Future of Life Institute, renewal (2017)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2017) | <35% |\n| [AI Impacts (2016)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support) | 35%-90% |\n| [Electronic Frontier Foundation (2016)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/electronic-frontier-foundation-ai-social) | 35%-90% |\n| [George Mason University, for Hanson (2016)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/george-mason-university-research-future-artificial-intelligence-scenarios) | >90% |\n| [Future of Life Institute, renewal (2016)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support) | <35% |\n| [Future of Life Institute 
(2015)](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction) | <35% |\n\n\n\n[24.](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#footnoteref24_6q2ntcn) Presumably by only a very small amount in each case, and typically with some remaining sign uncertainty.", "url": "https://www.openphilanthropy.org/blog/ai-governance-grantmaking", "title": "Our AI governance grantmaking so far", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-12-15T23:00:00Z", "authors": ["Luke Muehlhauser"], "summary": [], "id": "243211c3fa1a6ebec9803e34f34882c6"} {"text": "![](https://www.openphilanthropy.org/wp-content/uploads/imageI-1.png)\n\n\nPublished: June 15, 2020 | by [David Roodman](/about/team/david-roodman) \nIn arriving at our [funding priorities](https://www.openphilanthropy.org/focus/)—including criminal justice reform, farm animal welfare, pandemic preparedness, health-related science, and artificial intelligence safety—Open Philanthropy has pondered profound questions. How much should we care about people who will live far in the future? Or about chickens today? What events could extinguish civilization? Could artificial intelligence (AI) surpass human intelligence?\n\n\nOne strand of analysis that has caught our attention is about the pattern of growth of human society over many millennia, as measured by number of people or value of economic production. Perhaps the mathematical shape of the past tells us about the shape of the future. I dug into that subject. A draft of my technical paper is [here](https://www.openphilanthropy.org/wp-content/uploads/Modeling-the-human-trajectory-2.pdf). (Comments welcome.) In this post, I’ll explain in less technical language what I learned.\n\n\nIt’s extraordinary that the larger the human economy has become—the more people and the more goods and services they produce—the faster it has grown on average. Now, especially if you’re reading quickly, you might think you know what I mean. And you might be wrong, because I’m not referring to exponential growth. That happens when, for example, the number of people carrying a virus doubles every week. Then the *growth rate* (100% increase per week) holds fixed. The human economy has grown *super*-exponentially. The bigger it has gotten, the faster it has doubled, on average. The global economy churned out $74 trillion in goods and services in 2019, twice as much as in 2000.[1](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote1_r83c3rd \"Figures are in purchasing power parity--adjusted dollars of 1990. See the Data section of the paper.\") Such a quick doubling was unthinkable in the Middle Ages and ancient times. Surely early doublings took millennia.\n\n\nIf global economic growth keeps accelerating, the future will differ from the present to a mind-boggling degree. The question is whether there might be *some* plausibility in such a prospect. That is what motivated my exploration of the mathematical patterns in the human past and how they could carry forward. Having now labored long on the task, I doubt I’ve gained much perspicacity. I did come to appreciate that any system whose rate of growth rises with its size is inherently unstable. The human future might be one of explosion, an economic upwelling that eclipses the industrial revolution as thoroughly as *it* eclipsed the agricultural revolution. 
Or the future could be one of implosion, in which environmental thresholds are crossed or the creative process that drives growth runs amok, as in an AI dystopia. More likely, these impulses will mix.\n\n\nI now understand more fully a view that shapes the work of Open Philanthropy. The range of possible futures is wide. So it is our task as citizens and funders, at this moment of potential leverage, to lower the odds of bad paths and raise the odds of good ones.\n\n\n1. The human past, coarsely quantified\n--------------------------------------\n\n\nHumans are better than viruses at multiplying. If a coronavirus particle sustains an advantageous mutation (lowering the virulence of the virus, one hopes), it cannot transmit that innovation to particles around the world. But humans have language, which is the medium of culture. When someone hits upon a new idea in science or political philosophy (lowering the virulence of humans, one hopes) that intellectual mutation can disseminate quickly. And some new ideas, such as the printing press and the World Wide Web, let other ideas spread even faster. Through most of human history, new insights about how to grow wheat or raise sheep ultimately translated into population increases. The material standard of living did not improve much and may even have declined. In the last century or so, the pattern has flipped. [In most of the world, women are having fewer children](https://ourworldindata.org/fertility-rate) while material standards are higher for many, enough that human economic activity, in aggregate, has continued to swell. When the global economy is larger, it has more capacity to innovate, and potentially to double even faster.\n\n\nTo the extent that superexponential growth is a good model for history, it comes with a strange corollary when projected forward: the human system will go infinite in finite time. Cyberneticist Heinz Von Foerster and colleagues highlighted this implication [in 1960](https://web.archive.org/web/20191210043602/http://www.bioinfo.rpi.edu/bystrc/courses/biol4961/Doomsday.pdf). They graphed world population since the birth of Jesus, fit a line to the data, projected it, and foretold an Armageddon of infinite population in 2026. They evidently did so tongue in cheek, for they dated the end times to Friday the 13th of November. As we close in on 2026, [the impossible prophecy is not looking more possible](https://slatestarcodex.com/2019/04/22/1960-the-year-the-singularity-was-cancelled/). In fact, the world population growth rate [peaked at 2.1%/year in 1968 and has since fallen by half.](https://web.archive.org/web/20200423194709if_/https://population.un.org/wpp/Download/Files/1_Indicators%20(Standard)/EXCEL_FILES/1_Population/WPP2019_POP_F01_1_TOTAL_POPULATION_BOTH_SEXES.xlsx)\n\n\nThat a grand projection went off track so fast should instill humility in anyone trying to predict the human trajectory. And it’s fine to laugh at the absurdity of an infinite doomsday. Nevertheless, those responses seem incomplete. What should we make of the fact that good models of the past project an impossible future? While population growth has slowed, growth in aggregate economic activity has not slackened as much. Historically poor countries such as China are catching up with wealthier ones, adding to the global totals. Of course, there is only so much catching up to do. And economically important ideas may be getting harder to find. 
For instance, keeping up with [Moore’s law](https://en.wikipedia.org/wiki/Moore%27s_law) of computer chip improvement is [getting more expensive](https://pubs.aeaweb.org/doi/pdfplus/10.1257/aer.20180338#page=15). But history records other slowdowns, each of which ended with a burst of innovation such as the European Enlightenment. Is this time different? It’s possible, to be sure. But it’s impossible to be sure.\n\n\nSince 1960, when Von Foerster and colleagues published, other analysts have worked the same vein—now including me. I was influenced by writings of [Michael Kremer in 1993](https://web.archive.org/web/20200502022901/http://faculty.cbpp.uaa.alaska.edu/elhowe/ECON_F04/kremer_93.pdf) and [Robin Hanson in 2000](https://web.archive.org/web/20200227163524/http://mason.gmu.edu/~rhanson/longgrow.pdf). Building on [work](https://doi.org/10.1080/08898488809525278) by demographer Ronald Lee, Kremer brought ideas about “endogenous technology” (explained below) to population data like that of Von Foerster and his coauthors. Except Kremer’s population numbers went back not 2,000 years, but a *million* years. Hanson was the first to look at economic output, rather than population, over such a stretch, relying mainly on [numbers from Brad De Long](https://web.archive.org/20061115050150/delong.typepad.com/print/20061012_LRWGDP.pdf).\n\n\nYou might wonder how anyone knows how many people lived in 5000 BCE and how much “gross product” they produced. Scholars have formed rough ideas from the available evidence. Ancient China and Rome conducted censuses, for example. McEvedy and Jones, whose historical population figures are widely used, [put it this way](https://web.archive.org/20181231031745/http://www.arabgeographers.net/up/uploads/14299936761.pdf):\n\n\n\n> [T]here is something more to statements about the size of classical and early medieval populations than simple speculation….[W]e wouldn’t attempt to disguise the hypothetical nature of our treatment of the earlier periods. But we haven’t just pulled numbers out of the sky. Well, not often.\n> \n> \n\n\nMeanwhile, until 1800 most people lived barely above subsistence; before then the story of GWP growth was mostly the story of population growth, which simplifies the task of estimating GWP through most of history.\n\n\nI focused on GWP from 10,000 BCE to 2019. I chose GWP over population because I think economic product is a better indicator of capacity for innovation, which seems central to economic history. And I prefer to start in 10,000 BCE rather than 1 million or 2 million years ago because the numbers become especially conjectural that far back. In addition, it seems problematic to start before the evolution of language 40,000–50,000 years ago. Arguably, it was then that the development of human society took on its modern character. Before, hominins had developed technologies such as handaxes, intellectual mutations that may have spread no faster than the descendants of those who wrought them. After, innovations could diffuse through human language, a novel medium of arbitrary expressiveness—one built on a verbal “alphabet” whose letters could be strung together in limitless, meaningful ways. Human language is the first new, arbitrarily expressive medium on Earth since DNA.[2](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote2_hpmn3ey \"On the definition and timing of this event, see Smith and Szathmáry, The Origins of Life: From the Birth of Life to the Origin of Language, p. 
141.\")\n\n\nHere is the data series I studied the most[3](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote3_oai6t4l \"See the Data section of the paper for details.\"):\n\n\n![Roodman_GWP_10,000_BCE-2019_1.png](https://www.openphilanthropy.org/sites/default/files/Roodman_GWP_10%2C000_BCE-2019_1.png)\n\n\nFor clarity, here is the same graph but with $1 billion, $10 billion, $100 billion, etc., equally spaced. When the vertical axis is scaled this way, exponentially growing quantities—ones with fixed doubling times—follow straight lines. So to show how poorly human history corresponds to exponential growth, I’ve also drawn a best-fit line:\n\n\n![Roodman_GWP_10,000_BCE-2019_2.png](https://www.openphilanthropy.org/sites/default/files/Roodman_GWP_10%2C000_BCE-2019_2.png) \n\nFinally, just as in that 1960 paper, I do something similar to the horizontal axis, so that 10,000, 1,000, 100, and 10 years before 2047 are equally spaced. (Below, I’ll explain how I chose 2047.) The horizontal stretching and compression changes the contour of the data once again. And it bends the line that represented exponential growth. But I’ve fit another line under the new scaling:\n\n\n![Roodman_GWP_10,000_BCE-2019_3.png](https://www.openphilanthropy.org/sites/default/files/Roodman_GWP_10%2C000_BCE-2019_3.png) \n\nThe new “power law” line follows the data points remarkably well. The most profound developments since language—the agricultural and industrial revolutions—shrink to gentle ripples on a long-term climb.\n\n\nThis graph raises two important questions. First, did those economic revolutions constitute major breaks with the past, which is how we usually think of them, or were they mere statistical noise within the longer-term pattern? And where does that straight line take us if we follow it forward?\n\n\nI’ll tackle the second question first. I’ve already extended the line on the graph to 10 years before 2047, i.e., 2037, at which time it has GWP reaching a stupendous $500 trillion. That is ten times the level of 2007. If like [Harold with his purple crayon](https://www.youtube.com/watch?v=yl94zwz8cKU) you extend the line across your computer screen, off the edge, and into the ether, you will come to 1 year before 2047, then 0.1 before, then 0.01…. Meanwhile GWP will grow horrifically: to $30.7 quadrillion at the start of 2046, to $1.9 quintillion 11 months later, and so on. Striving to reach 2047, you will drive GWP to infinity.[4](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote4_1mz8lgh \"The annual GWP figures are best interpreted as pertaining to the midpoint of each year, a nicety I neglect to simplify exposition.\") That was Von Foerster’s point back in 1960: explosion is an inevitable implication of the straight-line model of history in that last graph.\n\n\nYet the line fits so well. To grapple with this paradox, I took two main analytical approaches. I gained insight from each. But in the end the paradox essentially remained, and I think now that it is best interpreted in a non-mathematical way. I will discuss these ideas in turn.\n\n\n2. Capturing the randomness of history\n--------------------------------------\n\n\nAn old BBC documentary called the Midas Formula ([transcript](https://web.archive.org/web/20190813041240/https://www.bbc.co.uk/science/horizon/1999/midas_script.shtml)) tells how three economists in the early 1970s developed the *E = mc2* of finance. 
It is a way to estimate the value of *options* such as the right to buy a stock at a set price by a set date. Fischer Black and Myron Scholes first arrived, tentatively, at the formula, then consulted Robert Merton. Watch till 27:40, then keep reading this post!\n\n\n \n\n\n\n \n\nThe BBC documented the work of Black, Scholes, and Merton not only because they discovered an important formula, but also because they co-founded the hedge fund Long-Term Capital Management to apply some of their ideas, and the fund imploded spectacularly in 1998.\n\n\nIn thinking about the evolution of GWP over thousands of years, I experienced something like what Merton experienced, except for the bits about winning a Nobel and almost bringing down the global financial system. I realized I needed a certain kind of math, then discovered that it exists.\n\n\nThe calculus of Isaac Newton and Gottfried Leibniz excels at describing smooth arcs, such as the path of Halley’s Comet. Like the rocket in the BBC documentary, the comet’s mathematical situation is always changing. As it boomerangs across the solar system, it experiences a smoothly varying pull from the sun, strongest at the perihelion, weakest when the comet sails beyond Neptune. If at some moment the comet is hurtling by the sun at 50 kilometers per second, then a second later, or a nanosecond later, it won’t be, not exactly. And the rate at which the comet’s speed is changing is itself always changing.\n\n\nOne way to approximate the comet’s path is to program a computer. We could feed in a starting position and velocity, code formulas for where the object will be a nanosecond later given its velocity now, update its velocity at the new location to account for the sun’s pull, and repeat. This method is [widely used](https://en.wikipedia.org/wiki/Euler_method). The miracle in calculus lies in passing to the limit, treating paths through time and space as accumulations of infinitely many, infinitely small steps, which no computer could simulate because no computer is infinitely fast. Yet passing to the infinite limit often *simplifies* the math. For example, plotting the smooth lines and curves in the graphs above required no heavy-duty number crunching even though the contours represent growth processes in which the absolute increment, additional dollars of GWP, is always changing.\n\n\nBut classical calculus ignores randomness. It is great for modeling the fall of apples; not so much for the [price of Apple](https://www.google.com/search?q=appl&tbm=fin#scso=_objNXs7aK9ysytMP4uK12AY1:0). And not so much for rockets buffeted by turbulence, nor for the human trajectory, which has sustained shocks such as the fall of Rome, the Black Death, industrial take-off, world wars, depressions, and financial crises. It was Kiyoshi Itô who in the mid-20th century, more than anyone else, found a way to infuse randomness into calculus. The result is called the stochastic calculus, or the Itô calculus. (Though to listen to the BBC narrator, you’d think he invented the classical calculus rather than adding randomness to it.)\n\n\nThink of an apple falling toward the surface of a planet whose gravity is randomly fluctuating, jiggling the apple’s acceleration as it descends. Or think of a trillion molecules of dry ice vapor released to careen and scatter across a stage. Each drop of an apple or release of a molecule would initiate a unique course through space and time. We cannot predict the exact paths but we can estimate the *distribution* of possibilities. 
The apple, for example, might more likely land in the first second than in the 100th.\n\n\nI devised a stochastic model for the evolution of GWP. I borrowed [ideas from John Cox](https://doi.org/10.3905/jpm.1996.015), who as a young Ph.D. followed in the footsteps of Black, Scholes, and Merton. The stochastic approach intrigued me because it can express the randomness of human history, including the way that unexpected events send ripples into the future. Also, for technical reasons, stochastic models are better for data series with unevenly spaced data points. (In my GWP data, the first two numbers are 5,000 years apart, for 10,000 and 5,000 BCE, while the last two are nine apart, for 2010 and 2019.) Finally, I hoped that a stochastic model would soften the paradox of infinity: perhaps after fitting to the data, it would imply that infinite GWP in finite time was *possible* but not *inevitable*.\n\n\nThe equation for this stochastic model generalizes that implied by the straight “power law” line in the third graph above, the one we followed toward infinity in 2047.[5](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote5_t9oc7he \"See the section 3.1 of the paper.\") It preserves the possibility that growth can rise more than proportionally with the level of GWP, so that doublings will tend to come faster and faster. Here, I’ll skip the equations and stick to graphs.\n\n\nThe first graph shows twenty “rollouts” of the model after it has been calibrated to match GWP history.[6](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote6_yzi7euj \"See the Estimation section of the paper for specifics.\") All twenty paths start where the real data series starts, at $1.6 billion in 10,000 BCE. The real GWP series is in red. Arguably the rollouts meet the Goldilocks test: they resemble the original data series, but not so perfectly as to look contrived. Each represents an alternative history of humanity. Like the real series, the rollouts experience random ups and downs, woven into an overall tendency to rise at a gathering pace. I think of the downs as statistical Black Deaths. The randomness suffices to greatly affect the timing of economic takeoff: one rollout explodes by 3000 BCE while others do not do so even by 5000 CE. In a path that explodes early, I imagine, the wheel was invented a thousand years sooner, and the breakthroughs snowballed from there.\n\n\n![BernouPathsGWP12KDecnovBlog.png](https://www.openphilanthropy.org/sites/default/files/BernouPathsGWP12KDecnovBlog.png) \n\nThe second graph introduces a few changes. Instead of 20 rollouts, I run 10,000. Since that is too many to plot and perceive, I show percentiles. The black curve in the middle shows the median simulated GWP at each moment—the 50th percentile. Boundaries between grey bands mark the 5th, 10th, 15th, etc., percentiles. I also run 10,000 rollouts from the endpoint of the data series, which is $73.6 trillion in 2019, and depict those in the same way. 
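For a feel of how such rollouts can be produced, here is a deliberately simplified sketch in Python: a toy superexponential diffusion stepped forward with the Euler–Maruyama method. The growth law and the parameters a, B, s, and dt below are invented for illustration; this is not the paper’s actual model, data, or fitted estimates.

```python
import numpy as np

# Toy rollouts of a noisy superexponential growth law,
#   dY = a * Y**(1 + B) * dt + s * Y**(1 + B) * dW,
# with Y measured in units of its starting value. An illustrative stand-in,
# not the paper's specification.
rng = np.random.default_rng(seed=0)

def explosion_time(a=0.02, B=0.5, s=0.05, dt=0.1, t_max=400.0, y_cap=1e6):
    """Simulate one path; return the time it first exceeds y_cap, else None."""
    y, t = 1.0, 0.0
    while t < t_max:
        dw = rng.normal(0.0, np.sqrt(dt))   # Brownian increment over dt
        y += a * y ** (1 + B) * dt + s * y ** (1 + B) * dw
        y = max(y, 1e-9)                    # crude floor keeps the toy path positive
        t += dt
        if y > y_cap:
            return t                        # treat exceeding the cap as "explosion"
    return None

times = [explosion_time() for _ in range(200)]
exploded = [t for t in times if t is not None]
print(f"{len(exploded)} of 200 paths exploded; "
      f"median explosion time is about {np.median(exploded):.0f} time units")
```

Even with identical parameters, the noise alone spreads the explosion times widely across paths; the real exercise goes further and also redraws the model parameters for each path, as described next.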
And to take account of the uncertainty in the fitting of my model to the data, each path is generated under a slightly different version of the model.[7](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote7_719l1gh \"For each path, the four modeling parameters are drawn from the multivariate normal distribution implied by the covariance matrix of the maximum-likelihood estimates.\") So this graph contains two kinds of randomness: the randomness of history itself, and the imprecision in our measurement of it.[8](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote8_jasamh9 \"More precisely, it incorporates modeled stochasticity and parameter uncertainty, one source of which is measurement error. It does not incorporate uncertainty about the correct structure of the model.\")\n\n\n![BernouDistGWP12KDecLogBlog.png](https://www.openphilanthropy.org/sites/default/files/BernouDistGWP12KDecLogBlog.png) \n\nThe actual GWP series, still in red, meanders mainly between the 40th and 60th percentiles. This good fit is the stochastic analog to the good fit of the power law line in the third graph in the earlier triplet. As a result, this model is the best statistical representation I have seen of world economic history, as proxied by GWP. That and a dollar will buy you an apple.\n\n\nThrough the Itô calculus, I quantified the probability and timing of escalation to infinity (according to the fitted model). The probability that a path like those in the first of the two graphs just above will *not* eventually explode is a mere 1 in 100 million. The median year of explosion is 1527. Applying the same calculations starting from 2019—that is, incorporating the knowledge that GWP reached $73.6 trillion last year—the probability of no eventual explosion falls to 1 in 10^69, which is an atoms-in-the-universe sort of figure. (OK, there may be [10^86 atoms](https://www.universetoday.com/36302/atoms-in-the-universe/). But who’s counting?) The estimate of the median explosion year sharpens to 2047 (95% confidence range is ±16 years), which is why I used 2047 in the third graph of the post. In the mathematical world of the best-fit model, explosion is all but inevitable by the end of the century.[9](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote9_yofimul \"Figures from Table 3, column 4, of the paper.\")\n\n\nIncorporating randomness into the modeling does not after all soften the paradox of infinity. An even better mathematical description of the past still predicts an impossible future.\n\n\nI will put that conundrum back on hold for the moment and address the other question inspired by the power law’s excellent fit to the GWP numbers. Should the agricultural and industrial revolutions be viewed as ruptures in history or as routine, modest deviations around a longer-term trend? To assess whether GWP was surprisingly high in 1820, by which time the industrial revolution had built a head of steam, I fitted the model just to the data *before* 1820, i.e., through 1700. Then I generated many paths wiggling forward from 1700 to 1820. The 1820 GWP value of $741 billion places it in the 95th percentile of these simulated paths: the model is “surprised,” going by previous history, at how big GWP was in 1820. I repeat the whole exercise for other time points, back to 1600 and forward to 2019. 
This graph contains the results:\n\n\n![BernouDiffPredGWP12KDecBlog.png](https://www.openphilanthropy.org/sites/default/files/BernouDiffPredGWP12KDecBlog.png) \n\nThe model is also surprised by the data point after 1820, for 1870, despite “knowing” about the fast GWP growth leading up to 1820. And it is surprised again in 1913. Now, if my stochastic model for GWP is correct, then the 14 dots in this graph should be distributed roughly evenly across the vertical 0–100% range, with no correlation from one dot to the next. That’s not what we see. The three dots in a row above the 90th percentile strongly suggests that the economic growth of the 19th century broke with the past. The same goes for the four low values since 1990: recent global growth has been slower and steadier than the model predicts from previous history.\n\n\nIn sum, my stochastic model succeeds in expressing some of the randomness of history, along with the long-term propensity for growth to accelerate. But it is not accurate or flexible enough to fully accommodate events as large and sudden as the industrial revolution. Nevertheless, I think it is a virtue, and perhaps an inspiration for further work, that this rigorous model can quantify its own shortcomings.\n\n\n3. Land, labor, capital, and more\n---------------------------------\n\n\nTo this point I have represented economic growth as *univariate*. A single quantity, GWP, determines the rate of its own growth, if with randomness folded in. I have radically caricatured human history—the billions of people who have lived, and how they have made their livings. That is how models work, simplifying matters in order to foreground aspects few enough for the mind to embrace.\n\n\nA longstanding tradition in the study of economic growth is to move one notch in the direction of complexity, from one variable to several. Economic activity is cast as combining “factors of production.” Thus we have inherited from classical economists such as Adam Smith and David Ricardo the triad of land, labor, and capital. Modern factor lists may include other ingredients, such as “human capital,” which is investment in skills and education that raises the value of one’s labor. A stimulus to one factor can boost economic output, which can be reinvested in some or all of the factors: more office buildings, more college degrees, more kids even. In this way, factors can propel their own growth and each other’s, in a richer version of the univariate feedback loop contemplated above. And just as in the univariate model that fits GWP history so well, the percentage growth rate of the economy can increase with output.[10](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote10_i3q8qz5 \"Compare equations 1 and 18 in the paper.\")\n\n\nI studied multivariate models too[11](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote11_wjesac5 \"See section 2 of the paper.\") though I left for another day the technically daunting step of injecting them with randomness. 
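Since the univariate “power law” dynamic keeps reappearing, it may help to see it in symbols. Schematically (my notation, not the paper’s exact specification): if GWP, call it Y, grows as

$$\frac{dY}{dt} = a\,Y^{1+B}, \qquad a, B > 0,$$

so that the proportional growth rate a·Y^B rises with the level of Y, then separating variables gives

$$Y(t) = \left(Y_0^{-B} - a B\,t\right)^{-1/B},$$

a power law in the time remaining before t* = 1/(a·B·Y_0^B), when the bracketed term hits zero and Y diverges. Exponential growth is the boundary case B = 0; any B > 0, however small, makes doublings arrive faster and faster and produces a singularity at a finite date, which is the role 2047 plays in the graphs above.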
I learned a few things.\n\n\nFirst, the single-variable “power law” model—that straight line in my third graph up top—is, mathematically, a special case of standard models in economics, models that won at least one Nobel (for Robert Solow) and are taught to students every day somewhere on this Earth.[12](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote12_1jbajua \"I'm referring to Solow-Swan models with Cobb-Douglas production, fixed reinvestment rates in factors, and constant deprecation or appreciation of factors.\") In this sense, fitting the power law model to the GWP data and projecting forward is not as naive as it might appear.\n\n\nTo appreciate the concern about naiveté, think of the [IHME model of the spread of coronavirus in the United States](https://covid19.healthdata.org/united-states-of-america). It received much attention—including [criticism](https://www.vox.com/future-perfect/2020/5/2/21241261/coronavirus-modeling-us-deaths-ihme-pandemic) that it is an atheoretical “curve-fitting” exercise. The IHME model worked by synthesizing a hump-shaped contour from the experiences of Wuhan and Italy, then fitting the early section of the contour to U.S. data and projecting forward. The IHME exercise did not try to mathematically reconstruct what *underlay* the U.S. data, the speed at which the virus hopped from person to person, community to community. If “[the IHME projections are based not on transmission dynamics but on a statistical model with no epidemiologic basis](https://doi.org/10.7326/M20-1565),” the analogous charge cannot so easily be brought against the power law model for GWP. It is in a certain way rooted in established economics.\n\n\nThe second thing I learned constitutes a caveat that I just glossed over. By the mid-20th century, it [became clear to economists](https://www.nber.org/chapters/c5650.pdf#page=7) that reinvestment alone had not generated the economic growth of the industrial era. Yes, there were more workers and factories, but from any given amount of labor and capital, industrial countries extracted more value in 1950 than in 1870. As [Paul Romer put it in 1990](https://web.archive.org/web/20190913035557/https://web.stanford.edu/~klenow/Romer_1990.pdf),\n\n\n\n> The raw materials that we use have not changed, but…the instructions that we follow for combining raw materials have become vastly more sophisticated. One hundred years ago, all we could do to get visual stimulation from iron oxide was to use it as a pigment. Now we put it on plastic tape and use it to make videocassette recordings.\n> \n> \n\n\nSo in the 1950s economists [inserted another input into their models](https://www.jstor.org/stable/1926047): technology. As meant here, technology is knowledge rather than the physical manifestations thereof, the know-how to make a smartphone, not the phone itself.\n\n\nThe ethereal character of technology makes it alchemical too. One person’s use of a drill bit or farm plot tends to exclude others’ use of the same, while one person’s use of an idea does not. So a single discovery can raise the productivity of the entire global economy. I love Thomas Jefferson’s explanation, which I got from [Charles Jones](https://web.stanford.edu/~chadj/JonesHandbook2005.pdf#page=7):\n\n\n\n> Its peculiar character … is that no one possesses the less, because every other possesses the whole of it. 
He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density at any point.\n> \n> \n\n\nThat ideas can spread like flames from candle to candle seems to lie at the heart of the long-term speed-up of growth.\n\n\nAnd the tendency to speed up, expressed in a short equation, is also what generates the strange, superexponential implication that economic output could spiral to infinity in decades. Yet that implication is *not* conventional within economics, unsurprisingly. Since the 1950s, macroeconomic modeling has emphasized the achievement of “steady state,” meaning a constant economic growth rate such as 3% per year. Granted, even such exponential growth seems implausible if we look far enough ahead, just as the coronavirus case count can’t keep doubling forever. But, in their favor, models predicting steady growth cohered with the [relative stability](https://en.wikipedia.org/wiki/Kaldor%27s_facts) of per-person economic growth over the previous century in industrial countries (contrasting with the acceleration we see over longer stretches). And under exponential growth the economy merely keeps expanding; it does not reach infinity in finite time. “It is one thing to say that a quantity will eventually exceed any bound,” [Solow once quipped](https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.8.1.45#page=6). “It is quite another to say that it will exceed any stated bound before Christmas.”\n\n\nThe power law model that fits history so well, yet explodes before Christmas, is mathematical kin with Solow’s influential models. So how did he avoid the explosive tendencies? To understand, step over to my whiteboard, where I’ll diagram a [typical version of Solow’s model](https://eml.berkeley.edu/~dromer/papers/MRW_QJE1992.pdf#page=10). The economy is conceived as a giant factory with four inputs: labor, capital, human capital, and technology. It produces output, much of which is immediately eaten, drunk, watched, or otherwise consumed: \n\n![blog_economy_1.png](https://www.openphilanthropy.org/sites/default/files/blog_economy_1.png) \n\nSome of the output is not consumed, and is instead invested in factors—here, the capital of businesses, and the human capital that is skills in our brains: \n\n![blog_economy_2.png](https://www.openphilanthropy.org/sites/default/files/blog_economy_2.png) \n\nA final dynamic is depreciation: factories wear out, skills fade. And the more there are, the more wear out each year. So just below I’ve drawn little purple loops to the left of these factors with minus signs inside them. Fortunately, the reinvestment flowing in through the orange arrows can compensate for depreciation by effecting repairs and refreshing skills.\n\n\nNow, labor and technology can also depreciate, since workers age and die and innovations are occasionally lost too. But Solow put the sources of their replenishment *outside his model*. From the standpoint of the Solow model, they grow for opaque reasons. So they receive no orange arrows. 
And to convey their unexplained tendency to grow, I’ve drawn plus signs in their purple feedback loops: \n\n![blog_economy_3.png](https://www.openphilanthropy.org/sites/default/files/blog_economy_3.png) \n\nIn the language of economics, Solow made [technology](https://www.jstor.org/stable/1926047) and [labor](http://piketty.pse.ens.fr/files/Solow1956.pdf) *exogenous*. This choice constitutes another caveat to my claim that the power law model is rooted in standard economic models. If the factors growing at a fixed, exogenously determined rate are economically important enough, they can keep the whole system from exploding into infinite growth.\n\n\nFor Solow, the invocation of exogeneity had two virtues and a drawback. I’ll explain them with reference to technology, the more fateful factor.\n\n\nOne virtue was humility: it left for future research the mystery of what sets the pace of technological advance.[13](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote13_w49khet \"See Robert Solow, Growth Theory: An Exposition, pp. 98ff.\")\n\n\nThe other virtue was that defaulting to the simple assumption that technology—the efficiency of turning inputs into output—improved at a constant rate such as 1% per year led to the comfortable prediction that a market economy would converge to a “steady state” of constant growth. It was as if economic output were a ship and technology its anchor; and as if the anchor were not heavy enough to moor the ship, but its abrasion against the seabed capped the ship’s speed. In effect, Solow built the desired outcome of constant growth into his model.\n\n\nIn general, the drawback of casting technology as exogenous is that it leaves a story of long-term economic development incomplete. It does not explain or examine where technical advance comes from, nor its mathematical character, despite its centrality to history. On its face, taking the rate of technological advance as fixed implies, implausibly, that a society’s wealth has *zero* effect on its rate of technical advance. There is no orange arrow from Production to Technology. Yet in general, when societies become richer, they do invest more in research and development and other kinds of innovation. It was this observation that motivated Romer, among others, to reconfigure economic models to make technological advance *endogenous* (which eventually earned a Nobel too). Just as people can invest earnings into capital, people can invest in technology, not to mention labor (in the number, longevity, and health of workers). In the set-up on my whiteboard, making these links merely requires writing the same equations for technology and labor as for capital. It is like drawing the sixth branch of a snowflake just like the other five. It looks like this: \n\n![blog_economy_4.png](https://www.openphilanthropy.org/sites/default/files/blog_economy_4.png) \n\nI discovered that when you do this—when you allow technology and all the other factors to affect economic output and be affected by it—the modeled system is unstable. (I was hardly the first to figure that out.) As time passes, the amount of each factor either explodes to infinity in finite time or decays to zero in infinite time. And under broadly plausible (albeit rigid) assumptions about the rates at which that Production diamond transforms inputs into output and reinvestment, explosion is the norm. It can even happen when [ideas are getting harder to find](https://pubs.aeaweb.org/doi/pdfplus/10.1257/aer.20180338). 
For example, even though it is getting expensive to squeeze more speed out of silicon chips, the global capacity to invest in the pursuit has never been greater.[14](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote14_tz64ql1 \"I believe that Romer understood the implications of infinity in his work, anticipating that jab from Solow. But he appeared more interested in, and trusting of, near-term implications, such as for technology policy. Romer's first public iteration of endogenous growth avoided infinities by fiat, at the cost of avoiding explicit mathematical forms and complicating the exposition. The 1990 treatment is simpler and more mathematically explicit and does carry the implications. His 1994 JEP piece records the \\\"great deal of attention\\\" he paid to these issues of mathematical form in the first iteration.\")\n\n\nHere’s a demonstration of how endogenous technology creates explosive potential. Imagine an economy that begins with 1 unit each of labor, capital, human capital, and technology. Define the “units” how you please. A unit of capital could be a handaxe or a million factories. Suppose the economy then produces 1 unit of output per year. I’ve diagrammed that starting point by writing a 1 next to each factor as well as to the right of the Production diamond. To simplify, I’ve removed the purple depreciation loops: \n\n![blog_economy_5.png](https://www.openphilanthropy.org/sites/default/files/blog_economy_5.png) \n\nNow suppose that over a generation, enough output is reinvested in each factor other than technology that the stock of each increases to 2 units. Technology doesn’t change. Doubling the number of factories, workers, and diplomas they collectively hold is like duplicating the global economy: with all the inputs doubled, output should double too: \n\n![blog_economy_6.png](https://www.openphilanthropy.org/sites/default/files/blog_economy_6.png) \n\nNow suppose that in addition over this same generation, the world invests enough in R&D to double technology. Now the world economy extracts twice the economic value from given inputs—which themselves have doubled. So output *quadruples* in the first generation: \n\n![blog_economy_7.png](https://www.openphilanthropy.org/sites/default/files/blog_economy_7.png) \n\nWhat happens when the process repeats? Since output starts at 4 per year, instead of 1 as in the previous generation, total reinvestment into each input also quadruples. So where each factor stock climbed by 1 unit in the first generation, now each climbs by 4, from 2 to 6. In other words, each input triples by the end of this generation. And just as doubling each input, including technology, multiplied output by 2×2 = 4, the new cycle multiplies output by 3×3 = 9, raising it from 4 to 36: \n\n![blog_economy_8.png](https://www.openphilanthropy.org/sites/default/files/blog_economy_8.png) \n\nThe growth rate *accelerates*. The doubling time drops. And it drops ever more in succeeding generations.\n\n\nAgain, it is technology that drives this acceleration. If technology were stagnant, or if, as in Solow’s model, its growth rate were locked down, the system could not spiral upward so.\n\n\nIn the paper, I carry out a more intense version of this exercise, with 100 million steps, each representing 10 minutes. I imagine the economy to start in the Stone Age, so I endow it with a lot of labor (people) but primitive technology and little capital or human capital. 
I start population at 1 (which could represent 1 million) and the other factors lower. This graph shows how factor stocks and economic output (GWP) evolve over time: \n\n![lny_v_t_r0Blog.png](https://www.openphilanthropy.org/sites/default/files/lny_v_t_r0Blog.png) \n\nApparently my simulated economy could not support all the people I gave it at the start, at least given the fraction of its income I allowed it to invest in creating and sustaining life. So population falls at first, until after about 500 years the economy settles into something close to stasis. But it is not quite stasis, for eventually the economy starts to grow perceptibly, and within a few centuries its scale ascends to infinity. The sharp acceleration resembles history.\n\n\nIt turns out that a superexponential growth process not only fits the past well. It is rooted in conventional economic theory, once that theory is naturally generalized to allow for investment in technology.\n\n\n4. Interpreting infinity\n------------------------\n\n\nHow then are we to make sense of the fact that good models of the past predict an impossible future?\n\n\nOne explanation is simply that history need not repeat itself. The best model for the past may not be the best for the future. Perhaps technology can only progress so far. It has been half a century since men first stepped onto the moon and the 747 entered commercial service; contrast that with the previous half century of progress in aeronautics. As we saw, the world economy has grown more slowly and steadily in the last 50 years than the univariate model predicts. But it is hard to know whether any slowdown is permanent or merely a century-scale pause.\n\n\nA deeper take is that infinities are a sign not that a model is flatly wrong but that it loses accuracy outside a certain realm of possible states of the world. Beyond that realm, some factor once neglected no longer can be. Einstein used the fact that the speed of light is the same in all inertial reference frames to crack open classical physics. It turned out that when such great speeds were involved, the old equations become wrong. As Anders Johansen and Didier Sornette [have written](https://arxiv.org/pdf/cond-mat/0002075.pdf#page=6),\n\n\n\n> Singularities are always mathematical idealisations of natural phenomena: they are not present in reality but foreshadow an important transition or change of regime. In the present context, they must be interpreted as a kind of ‘critical point’ signaling a fundamental and abrupt change of regime similar to what occurs in phase transitions.\n> \n> \n\n\nWhat might be that factor once neglected that no longer can be? One candidate is a certain unrealism in calculus-based economic models. Calculus is great for predicting the path of comets, along which the sun’s pull really does change in each picosecond. All the simulations I’ve graphed here treat innovation analogously, as something that happens in infinitely many steps, each of infinitely small size, each diffused around the globe at infinite speed. But real innovations take time to adopt, and time lags forestall infinities. If you keep hand-cranking the model on my whiteboard, you *won’t* get to infinity by Christmas. You will just get really big numbers. 
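To make the hand-cranking concrete, here is a toy discrete-generation version of the whiteboard model in Python. It is my own sketch, assuming a constant-returns production function A × (L × K × H)^(1/3) and one unit of reinvestment per factor per unit of start-of-generation output; the paper’s own simulation is far finer-grained (100 million ten-minute steps) and uses its own production function and depreciation.

```python
# Toy "hand-cranked" whiteboard model (my own sketch, not the paper's code).
# Output is A * (L*K*H)**(1/3): constant returns to scale in the rival inputs
# (labor L, capital K, human capital H), multiplied by technology A. Each
# generation, every factor (including A) rises by one unit per unit of
# start-of-generation output, as in the 1 -> 4 -> 36 walk-through above.
A = L = K = H = 1.0
for generation in range(1, 9):
    Y = A * (L * K * H) ** (1 / 3)             # output at the start of the generation
    print(f"generation {generation}: output = {Y:.3g}")
    A, L, K, H = A + Y, L + Y, K + Y, H + Y    # one chunky, finite reinvestment step
```

A few generations in, the output column reproduces the 1, 4, 36 progression and then runs off into enormous but finite numbers.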
That is because the simulation will take a finite number of chunky steps, not an infinite number of infinitely small steps.\n\n\nThe upshot of recognizing the unrealism of calculus, however, seems only to be that while GWP won’t go to infinity, it could still get stupendously big. How might that happen? We have in hand machines whose fundamental operations proceed a million times faster than those of the brain. And researchers are [getting better at making such machines work like brains](https://www.eff.org/ai/metrics). Artificial intelligence might open major new production possibilities. More radically, if AI is doing the economic accounting a century from now, it may include the welfare of artificial minds in GWP. Their number would presumably dwarf the human population. As absurd as that may sound, a rise of AI could be seen as the next unfolding of possibilities that began with the evolution of talkative, toolmaking apes.\n\n\nAnother neglected factor is the flow of energy (more precisely, negative entropy) from the sun and the earth’s interior. As economists [Nicholas Georgescu-Roegen](https://www.hup.harvard.edu/catalog.php?isbn=9780674281653) and [Herman Daly](https://books.google.com/books?id=qmp5mPdJ474C&newbks=1&newbks_redir=0&lpg=PP1&dq=Steady-State%20Economics%20daly&pg=PP1#v=onepage&q&f=false) have emphasized, depictions of the economic process like my whiteboard diagrams obscure the role of energy and natural resources in converting capital and labor into output. For this reason, at the end of my paper, I add natural resources to the model, rather as the classical economists included land. Since sunlight is constantly replenishing the biosphere, I have natural resources appreciate rather than depreciate. And to capture how economic activity can deplete natural resources, I cast the “reinvestment” in resources as negative.[15](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnote15_9buulzd \"I construe \\\"natural resources\\\" to mean not proved reserves of oil, but the full wealth (or curse) of oil deposits in the earth's mantle; not farmland but the grasslands cleared to make it. In both pairs, the first item is produced through a combination of labor, tools, and true natural resources.\") This is conceptually awkward, but I don’t see a better way within this modeling structure. I indicate these dynamics with a positive sign in the purple loop for natural resources and a minus sign on its orange reinvestment arrow: \n\n![blog_economy_10.png](https://www.openphilanthropy.org/sites/default/files/blog_economy_10.png) \n\nIn the simulation, the stock of resources is taken as initially plentiful, so it too starts at 1 rather than a lower value. The slow, solar-powered increase in this economic input (in green) hastens the economic explosion by a thousand years. But because the growing economy depletes natural resources more rapidly, the take-off initiates a plunge in natural resources, which brings GWP down with it. In a flash, explosion turns into implosion. \n\n![lny_v_t_r1Blog.png](https://www.openphilanthropy.org/sites/default/files/lny_v_t_r1Blog.png) \n\nThe scenario is, one hopes, unrealistic. 
Its realism will depend on whether the human enterprise ultimately undermines itself by depleting a natural endowment such as safe water supplies or the greenhouse gas absorptive capacity of the atmosphere; or whether we skirt such limits by, for example, switching to climate-safe energy sources and using them to clean the water and store the carbon.\n\n\n…which points up another factor the model neglects: how people respond to changing circumstances by changing their behavior. While the model allows the amount of labor, capital, etc., to gyrate, it locks down the numbers that shape that evolution, such as the rate at which economic output translates into environmental harm. This is another reason to interpret the model’s behavior *directionally*, as suggesting a tendency to diverge, not as gesturing all the way to utopia or dystopia.\n\n\nStill, this run suffices to demonstrate that an accelerating-growth model can capture the explosiveness of long-term GWP history without predicting a permanently spiraling ascent. Thus the presence of infinities in the model *neglecting* natural resource degradation does not justify dismissing superexponential models as a group. This too I learned through multivariate modeling.\n\n\n5. Conclusion\n-------------\n\n\nI do not know whether most of the history of technological advance on Earth lies behind or ahead of us. I do know that it is far easier to imagine what has happened than what hasn’t. I think it would be a mistake to laugh off or dismiss the predictions of infinity emerging from good models of the past. Better to take them as stimulants to our imaginations. I believe the predictions of infinity tell us two key things. First, if the patterns of long-term history continue, some sort of economic explosion will take place again, the most plausible channel being AI. It wouldn’t reach infinity, but it could be big. Second, and more generally, I take the propensity for explosion as a sign of *instability* in the human trajectory. Gross world product, as a rough proxy for the scale of the human enterprise, might someday spike or plunge or follow a complicated path in between. The projections of explosion should be taken as indicators of the long-run tendency of the human system to diverge. They are hinting that realistic models of long-term development are unstable, and stable models of long-term development unrealistic. The credible range of future paths is indeed wide.\n\n\n*Data and code for the paper and for this post are on [GitHub](https://github.com/droodman/Modeling-Human-Trajectory). The code runs in Stata. Open Philanthropy’s [Tom Davidson](https://www.openphilanthropy.org/about/team/tom-davidson/) wrote a [Colab notebook](https://colab.research.google.com/drive/11oAdADbcd6GCslV0P5ESubqghaQlpyh2#scrollTo=qtbbF_9QCkxo) that lets you perform and modify the stochastic model fitting in Python. A comment draft of the paper is [here](https://www.openphilanthropy.org/wp-content/uploads/Modeling-the-human-trajectory-2.pdf).*\n\n\n\n\n[1.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref1_r83c3rd) Figures are in purchasing power parity–adjusted dollars of 1990. 
See the [Data section of the paper](https://www.openphilanthropy.org/wp-content/uploads/Modeling-the-human-trajectory-2.pdf).\n\n\n[2.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref2_hpmn3ey) On the definition and timing of this event, see Smith and Szathmáry, *[The Origins of Life: From the Birth of Life to the Origin of Language](https://www.google.com/books/edition/The_Origins_of_Life/3L-kfT7Py2MC?hl=en&gbpv=1&dq=The%20Origins%20of%20Life%2C%20From%20the%20Birth%20of%20Life%20to%20the%20Origin%20of%20Language&pg=PP1&printsec=frontcover)*, p. 141.\n\n\n[3.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref3_oai6t4l) See the [Data section of the paper](https://www.openphilanthropy.org/wp-content/uploads/Modeling-the-human-trajectory-2.pdf) for details.\n\n\n[4.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref4_1mz8lgh) The annual GWP figures are best interpreted as pertaining to the midpoint of each year, a nicety I neglect to simplify exposition.\n\n\n[5.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref5_t9oc7he) See the [section 3.1 of the paper](https://www.openphilanthropy.org/wp-content/uploads/Modeling-the-human-trajectory-2.pdf).\n\n\n[6.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref6_yzi7euj) See the [Estimation section of the paper](https://www.openphilanthropy.org/wp-content/uploads/BernouPathsGWP12KDecnovBlog.png) for specifics.\n\n\n[7.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref7_719l1gh) For each path, the four modeling parameters are drawn from the multivariate normal distribution implied by the covariance matrix of the maximum-likelihood estimates.\n\n\n[8.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref8_jasamh9) More precisely, it incorporates modeled stochasticity and parameter uncertainty, one source of which is measurement error. It does not incorporate uncertainty about the correct structure of the model.\n\n\n[9.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref9_yofimul) Figures from [Table 3, column 4, of the paper](https://www.openphilanthropy.org/wp-content/uploads/Modeling-the-human-trajectory-2.pdf).\n\n\n[10.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref10_i3q8qz5) Compare equations 1 and 18 in the [paper](https://www.openphilanthropy.org/wp-content/uploads/Modeling-the-human-trajectory-2.pdf).\n\n\n[11.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref11_wjesac5) See [section 2 of the paper](https://www.openphilanthropy.org/wp-content/uploads/Modeling-the-human-trajectory-2.pdf).\n\n\n[12.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref12_1jbajua) I’m referring to Solow-Swan models with Cobb-Douglas production, fixed reinvestment rates in factors, and constant deprecation or appreciation of factors.\n\n\n[13.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref13_w49khet) See Robert Solow, *[Growth Theory: An Exposition](https://www.amazon.com/Growth-Theory-Exposition-Robert-2000-01-13/dp/B019NDD2BI)*, pp. 98ff.\n\n\n[14.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref14_tz64ql1) I believe that Romer [understood the implications of infinity](https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.8.1.3#page=16) in his work, anticipating that jab from Solow. 
But he appeared more interested in, and trusting of, near-term implications, such as for technology policy. [Romer’s first public iteration](https://web.archive.org/web/20200212202446/http://www.dklevine.com/archive/refs42232.pdf) of endogenous growth avoided infinities by fiat, at the cost of avoiding explicit mathematical forms and complicating the exposition. The [1990 treatment](https://web.archive.org/web/20190913035557/https://web.stanford.edu/~klenow/Romer_1990.pdf) is simpler and more mathematically explicit and does carry the implications. His 1994 [*JEP*](https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.8.1.3) piece records the “great deal of attention” he paid to these issues of mathematical form in the first iteration.\n\n\n[15.](https://www.openphilanthropy.org/blog/modeling-human-trajectory#footnoteref15_9buulzd) I construe “natural resources” to mean not proved reserves of oil, but the full wealth (or curse) of oil deposits in the earth’s mantle; not farmland but the grasslands cleared to make it. In both pairs, the first item is produced through a combination of labor, tools, and true natural resources.\n\n\n---\n\n\nPublished: September 11, 2020 | by [Joseph Carlsmith](/about/team/joseph-carlsmith) \nOpen Philanthropy is interested in when AI systems will be able to perform [various tasks](https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/) that humans can perform (“AI timelines”). To inform our thinking, I investigated what evidence the human brain provides about the computational power sufficient to match its capabilities. This is the full report on what I learned. A medium-depth summary is available [here](https://www.openphilanthropy.org/research/new-report-on-how-much-computational-power-it-takes-to-match-the-human-brain/). The [executive summary](#section_1.1) below gives a shorter overview.\n\n\n \n\n\n1 Introduction\n--------------\n\n\n#### 1.1 Executive summary\n\n\nLet’s grant that in principle, sufficiently powerful computers can perform any cognitive task that the human brain can. How powerful is sufficiently powerful? I investigated what we can learn from the brain about this. I consulted with more than 30 experts, and considered four methods of generating estimates, focusing on [floating point operations per second](https://en.wikipedia.org/wiki/FLOPS) (FLOP/s) as a metric of computational power.\n\n\nThese methods were:\n\n\n1. Estimate the FLOP/s required to model the brain’s mechanisms at a level of detail adequate to replicate task-performance (the “[mechanistic method](#section_2)”).[1](https://www.openphilanthropy.org/brain-computation-report#footnote1_2nn9xh3 \"The names “mechanistic method” and “functional method” were suggested by our technical advisor Dr. Dario Amodei, though he uses a somewhat more specific conception of the mechanistic method. Sandberg and Bostrom (2008) also distinguish between “straightforward multiplicate estimates” and those that are based on “analogy or constraints” (p. 84, Appendix A).\")\n2. 
Identify a portion of the brain whose function we can already approximate with artificial systems, and then scale up to a FLOP/s estimate for the whole brain (the “[functional method](#section_3)”).\n3. Use the brain’s energy budget, together with physical limits set by [Landauer’s principle](https://en.wikipedia.org/wiki/Landauer%27s_principle), to upper-bound required FLOP/s (the “[limit method](#section_4)”).\n4. Use the communication bandwidth in the brain as evidence about its computational capacity (the “[communication method](#section_5)”). I discuss this method only briefly.\n\n\nNone of these methods are direct guides to the *minimum possible* FLOP/s budget, as the most efficient ways of performing tasks need not resemble the brain’s ways, or those of current artificial systems. But if sound, these methods would provide evidence that certain budgets are, at least, big enough (*if* you had the right software, which may be very hard to create – see discussion in [section 1.3](#section_1.3)).[2](https://www.openphilanthropy.org/brain-computation-report#footnote2_hqs4jxi \"Here I am using \\\"software\\\" in a way that includes trained models in addition to hand-coded programs. Some forms of hardware (including neuromorphic hardware -- see Mead (1989)) complicate traditional distinctions between hardware and software, but the broader consideration at stake here -- e.g., that task-performance requires organizing available computational power in the right way -- remains applicable.\")\n\n\nHere are some of the numbers these methods produce, plotted alongside the FLOP/s capacity of some current computers.\n\n\n \n\n\n\n[![FLOPsBudgets5.png](https://www.openphilanthropy.org/files/Blog/FLOPsBudgets5.png)](https://www.openphilanthropy.org/files/Blog/FLOPsBudgets5.png)**Figure 1: The report’s main estimates.** See the [conclusion](#section_6) for a list that describes them in more detail, and summarizes my evaluation of each.\n\n\nThese numbers should be held lightly. They are back-of-the-envelope calculations, offered alongside initial discussion of complications and objections. The science here is very far from settled.\n\n\nFor those open to speculation, though, here’s a summary of what I’m taking away from the investigation:\n\n\n* **Mechanistic method estimates suggesting that 10^13-10^17 FLOP/s is enough to match the human brain’s task-performance seem plausible to me**. This is partly because various experts are sympathetic to these estimates (others are more skeptical), and partly because of the direct arguments in their support. Some considerations from this method point to [higher numbers](#section_2.4.2); and some, to [lower numbers](#section_2.4.1). Of these, the latter seem to me stronger.[3](https://www.openphilanthropy.org/brain-computation-report#footnote3_10d52qz \"Though it also seems easier, in general, to show that X is enough, than that X is strictly required – an asymmetry present throughout the report.\")\n* **I give less weight to functional method estimates**, primarily due to uncertainties about (a) the FLOP/s required to fully replicate the functions in question, (b) what the relevant portion of the brain is doing (in the case of the visual cortex), and (c) differences between that portion and the rest of the brain (in the case of the retina). However, **I take estimates based on the visual cortex as some weak evidence that the mechanistic method range above (10^13-10^17 FLOP/s) isn’t much too low**. 
Some estimates based on recent deep neural network models of retinal neurons point to higher numbers, but I take these as even weaker evidence.\n* **I think it unlikely that the required number of FLOP/s exceeds the bounds suggested by the limit method**. However, I don’t think the method itself is airtight. Rather, I find some arguments in the vicinity persuasive (though not all of them rely directly on Landauer’s principle); various experts I spoke to (though not all) were quite confident in these arguments; and other methods seem to point to lower numbers.\n* **Communication method estimates may well prove informative**, but I haven’t vetted them. I discuss this method mostly in the hopes of prompting further work.\n\n\n**Overall, I think it more likely than not that 10^15 FLOP/s is enough to perform tasks as well as the human brain (given the right software, which may be very hard to create). And I think it unlikely (<10%) that more than 10^21 FLOP/s is required.**[4](https://www.openphilanthropy.org/brain-computation-report#footnote4_xne824i \"The probabilities reported here should be interpreted as subjective levels of confidence or “credences,” not as claims about objective frequencies, statistics, or “propensities” (see Peterson (2009), Chapter 7, for discussion of various alternative interpretations of probability judgments). One way of defining these credences is via preferences over lotteries - a definition I find useful (though not fully satisfactory). On such a definition, “I think it more likely than not\\" means that, for example, if I had the option to win $10,000 if 10^15 FLOP/s is sufficient, in principle, to match human-level task-performance, or the option to win $10,000 if 10^15 FLOP/s is not sufficient, I would choose the former option. Skepticism about my answer should go in proportion to confidence that 1e15 FLOP/s is not sufficient (e.g., those who disagree should prefer the latter option rather than the former), rather than with dissatisfaction with the evidence available either way (I too am quite dissatisfied in this regard), or disinclination to take real-world bets (why turn down a free chance at $10,000?). That said, for various reasons, I don't find this definition of subjective probability judgments fully satisfactory (in particular, it transforms probabilistic claims about the world into true/false claims about one's betting behavior-- and it's not clear exactly what sort of betting behavior is implied, or what consistency in such behavior assumed), so I offer it more as a gesture at a way of soliciting subjective credences than as an endorsed definition. See Peterson (2009), section 7.5, for discussion of lotteries of this type in the context of the literature on decision-theory. See also this blog post by Andrew Critch for more informal discussion; and see Muehlhauser (2017a), section 2, for discussion of some complexities involved in using these probabilities in practice.\") But I’m not a neuroscientist, and there’s no consensus in neuroscience (or elsewhere).\n\n\nI offer a few more specific probabilities, keyed to one specific type of brain model, in the [appendix](#section_7).[5](https://www.openphilanthropy.org/brain-computation-report#footnote5_376tjis \"I focus on this model in particular because I think it fits best with the mechanistic method evidence I’ve thought about most and take most seriously. 
Offering specific probabilities keyed to the minimum FLOP/s sufficient for task-performance, by contrast, requires answering further questions about the theoretical limits of algorithmic efficiency that I haven’t investigated.\") My current best-guess median for the FLOP/s required to run that particular type of model is around 10^15 (note that this is not an estimate of the FLOP/s uniquely “equivalent” to the brain – see discussion in [section 1.6](#section_1.6)).\n\n\nAs can be seen from the figure above, the FLOP/s capacities of current computers (e.g., a [V100](https://www.nvidia.com/en-us/data-center/v100/) at ~10^14 FLOP/s for ~$10,000, the [Fugaku supercomputer](https://en.wikipedia.org/wiki/Fugaku_(supercomputer)) at ~4×10^17 FLOP/s for ~$1 billion) cover the estimates I find most plausible.[6](https://www.openphilanthropy.org/brain-computation-report#footnote6_8qle859 \"See here for V100 prices (currently ~$8,799); and here the $1 billion Fugaku pricetag: “The six-year budget for the system and related technology development totaled about $1 billion, compared with the $600 million price tags for the biggest planned U.S. systems.” Fugaku FLOP/s performance is listed here, at around ~4×10^17 FLOP/s-5×10^17 FLOP/s. Google’s TPU supercomputer, which recently broke records in training ML systems, can also do ~4×10^17 FLOP/s, though I’m not sure the costs. See Kumar (2020): “In total, this system delivers over 430 PFLOPs of peak performance.” The A100, for ~$200,000, can do 5×10^15 FLOP/s -- see Mehar (2020). NVIDIA's newest SuperPOD can deliver ~7×10^17 of AI performance -- see Paikeday (2020).\") However:\n\n\n* Computers capable of matching the human brain’s task performance would also need to meet further constraints (for example, constraints related to memory and memory bandwidth).\n* Matching the human brain’s task-performance requires *actually creating* sufficiently capable and computationally efficient AI systems, and I do not discuss how hard this might be (though note that training an AI system to do X, in machine learning, is much more resource-intensive than using it to do X once trained).[7](https://www.openphilanthropy.org/brain-computation-report#footnote7_hqg8qk3 \"See discussion in Section 1.3 below.\")\n\n\nSo even if my best-guesses are right, this does not imply that we’ll see AI systems as capable as the human brain anytime soon.\n\n\n**Acknowledgements**: This report emerged out of Open Philanthropy’s engagement with some arguments suggested by one of our technical advisors, Dario Amodei, in the vein of the mechanistic/functional methods (see citations throughout the report for details). However, my discussion should not be treated as representative of Dr. Amodei’s views; the project eventually broadened considerably; and my conclusions are my own. My thanks to Dr. Amodei for prompting the investigation, and to Open Philanthropy’s technical advisors Paul Christiano and Adam Marblestone for help and discussion with respect to different aspects of the report. I am also grateful to the following external experts for talking with me. In neuroscience: Stephen Baccus, Rosa Cao, E.J. Chichilnisky, Erik De Schutter, Shaul Druckmann, Chris Eliasmith, davidad (David A. Dalrymple), Nick Hardy, Eric Jonas, Ilenna Jones, Ingmar Kanitscheider, Konrad Kording, Stephen Larson, Grace Lindsay, Eve Marder, Markus Meister, Won Mok Shim, Lars Muckli, Athanasia Papoutsi, Barak Pearlmutter, Blake Richards, Anders Sandberg, Dong Song, Kate Storrs, and Anthony Zador. 
In other fields: Eric Drexler, Owain Evans, Michael Frank, Robin Hanson, Jared Kaplan, Jess Riedel, David Wallace, and David Wolpert. My thanks to Dan Cantu, Nick Hardy, Stephen Larson, Grace Lindsay, Adam Marblestone, Jess Riedel, and David Wallace for commenting on early drafts (or parts of early drafts) of the report; to six other neuroscientists (unnamed) for reading/commenting on a later draft; to Ben Garfinkel, Catherine Olsson, Chris Sommerville, and Heather Youngs for discussion; to Nick Beckstead, Ajeya Cotra, Allan Dafoe, Tom Davidson, Owain Evans, Katja Grace, Holden Karnofsky, Michael Levine, Luke Muehlhauser, Zachary Robinson, David Roodman, Carl Shulman, Bastian Stern, and Jacob Trefethen for valuable comments and suggestions; to Charlie Giattino, for conducting some research on the scale of the human brain; to Asya Bergal for sharing with me some of her research on Landauer’s principle; to Jess Riedel for detailed help with the limit method section; to AI Impacts for sharing some unpublished research on brain-computer equivalence; to Rinad Alanakrih for help with image permissions; to Robert Geirhos, IEEE, and Sage Publications for granting image permissions; to Jacob Hilton and Gregory Toepperwein for help estimating the FLOP/s costs of different models; to Hannah Aldern and Anya Grenier for help with recruitment; to Eli Nathan for extensive help with the website and citations; to Nik Mitchell, Andrew Player, Taylor Smith, and Josh You for help with conversation notes; and to Nick Beckstead for guidance and support throughout the investigation.\n\n\n#### 1.2 Caveats\n\n\n*(This section discusses some caveats about the report’s epistemic status, and some notes on presentation. Those eager for the main content, however uncertain, can skip to**[section 1.3](#section_1.3)**.)*\n\n\nSome caveats:\n\n\n* Little if any of the evidence surveyed in this report is particularly conclusive. My aim is not to settle the question, but to inform analysis and decision-making that must proceed in the absence of conclusive evidence, and to lay groundwork for future work.\n* I am not an expert in neuroscience, computer science, or physics (my academic background is in philosophy).\n* I sought out a variety of expert perspectives, but I did not make a rigorous attempt to ensure that the experts I spoke to were a representative sample of opinion in the field. Various selection effects influencing who I interviewed plausibly correlate with sympathy towards lower FLOP/s requirements.[8](https://www.openphilanthropy.org/brain-computation-report#footnote8_i2q7fw5 \"Selection effects include: expertise related to an issue relevant to the report, willingness to talk with me about the subject, recommendation by one of the other experts I spoke with as a possible source of helpful input, and connection (sometimes a few steps removed) with the professional and social communities that intersect at Open Philanthropy. \")\n* For various reasons, the research approach used here differs from what might be expected in other contexts. 
Key differences include:\n\t+ I give weight to intuitions and speculations offered by experts, as well as to factual claims by experts that I have not independently verified (these are generally documented in conversation notes approved by the experts themselves).\n\t+ I report provisional impressions from initial research.\n\t+ My literature reviews on relevant sub-topics are not comprehensive.\n\t+ I discuss unpublished papers where they appear credible.\n\t+ My conclusions emerge from my own subjective synthesis of the evidence I engaged with.\n* There are ongoing questions about the baseline reliability of various kinds of published research in neuroscience and cognitive science.[9](https://www.openphilanthropy.org/brain-computation-report#footnote9_7pczl9i \"See Poldrack et al. (2017); Vul and Pashler (2017); Uttal (2012); Button et al. (2013); Szucs and P.A. Ioannidis (2017); and Carp (2012). And see also Muehlhauser (2017b), Appendix Z.8, for discussion of his reasons for default skepticism of published studies. My thanks to Luke Muehlhauser for suggesting this type of consideration and these references. \") I don’t engage with this issue explicitly, but it is an additional source of uncertainty.\n\n\nA few other notes on presentation:\n\n\n* I have tried to keep the report accessible to readers with a variety of backgrounds.\n* The endnotes are frequent and sometimes lengthy, and they contain more quotes and descriptions of my research process than is academically standard. This is out of an effort to make the report’s reasoning [*transparent*](https://www.openphilanthropy.org/research/reasoning-transparency/) to readers. However, the endnotes are not essential to the main content, and I suggest only reading them if you’re interested in more details about a particular point.\n* I draw heavily on non-verbatim notes from my conversations with experts, made public with their approval and cited/linked in endnotes. These notes are also available [here](https://www.openphilanthropy.org/research/?content-type=conversations&view-list=true#categories).\n* I occasionally use the word “compute” as a shorthand for “computational power.”\n* Throughout the rest of the report, I use a form of [scientific notation](https://en.wikipedia.org/wiki/Scientific_notation), in which “XeY” means “X×10^Y.” Thus, 1e6 means 1,000,000 (a million); 4e8 means 400,000,000 (four hundred million); and so on. I also round aggressively.\n\n\n#### 1.3 Context\n\n\n*(This section briefly describes what prompts Open Philanthropy’s interest in the topic of this report. 
Those primarily interested in the main content can skip to*[*Section 1.4*](#section_1.4)*.)*\n\n\nThis report is part of a broader effort at Open Philanthropy to investigate when advanced AI systems might be developed (“AI timelines”) – a question that we think decision-relevant for our grant-making related to [potential risks from advanced AI](https://www.openphilanthropy.org/focus/potential-risks-advanced-ai/).[10](https://www.openphilanthropy.org/brain-computation-report#footnote10_wg3tydz \"This effort is itself part of a project at Open Philanthropy currently called Worldview Investigations, which aims to investigate key questions informing our grant-making.\") But why would an interest in AI timelines prompt an interest in the topic of this report in particular?\n\n\nSome classic analyses of AI timelines (notably, by [Hans Moravec](https://frc.ri.cmu.edu/~hpm/book98/fig.ch3/p060.html) and [Ray Kurzweil](http://www.singularity.com/charts/page70.html)) emphasize forecasts about when available computer hardware will be “equivalent,” in some sense (see [section 1.6](#section_1.6) for discussion), to the human brain.[11](https://www.openphilanthropy.org/brain-computation-report#footnote11_w8x4o51 \"See, for example, Moravec (1998), chapter 2; and Kurzweil (2005), chapter 3. See this list from AI Impacts for related forecasts.\")\n\n\n\n[![Hardware-drawing.png](https://www.openphilanthropy.org/files/Blog/Hardware-drawing.png)](https://www.openphilanthropy.org/files/Blog/Hardware-drawing.png)**Figure 2: Graph schema for classic forecasts**. See real examples [here](https://frc.ri.cmu.edu/~hpm/book98/fig.ch3/p060.html) and [here](http://www.singularity.com/charts/page70.html).\n\n\nA basic objection to predicting AI timelines on this basis alone is that you need more than hardware to do what the brain does.[12](https://www.openphilanthropy.org/brain-computation-report#footnote12_7fre80b \"See, for example, Malcolm (2000); Lanier (2000) (“Belief # 5”); Russell (2019) (p. 78). AI Impacts offers a framework that I find helpful, which uses indifference curves indicating which combinations hardware and software capability yield the same overall task-performance. This framework (see especially Figure 3) makes clear that the first human-level AI systems could use much more or much less hardware than the amount “equivalent” to the human brain (at least assuming that this amount is not the absolute minimum) -- though see figure 4 for a scenario in which brain-equivalent hardware is a better basis for forecasts.\") In particular, you need software to run on your hardware, and creating the right software might be very hard (Moravec and Kurzweil both recognize this, and appeal to further arguments).[13](https://www.openphilanthropy.org/brain-computation-report#footnote13_cw60b9y \"Moravec argues here that “under current circumstances, I think computer power is the pacing factor for AI” (see his second reply to Robin Hanson). 
Kurzweil (2005) devotes all of Chapter 4 to the question of software.\")\n\n\n \n\n\nIn the context of machine learning, we can offer a more specific version of this objection: the hardware required to *run* an AI system isn’t enough; you also need the hardware required to *train* it (along with other resources, like data).[14](https://www.openphilanthropy.org/brain-computation-report#footnote14_m780czs \"For example: a ResNet-152 uses ~1e10 FLOP to classify an image, but took ~1e19 FLOP (a billion times more) to train, according to Hernandez and Amodei (2018) (see appendix, though see also Hernandez and Brown (2020) for discussion of decreasing training costs for vision models over time).\") And training a system requires running it a *lot*. DeepMind’s [AlphaGo Zero](https://www.nature.com/articles/nature24270), for example, trained on ~5 million games of Go.[15](https://www.openphilanthropy.org/brain-computation-report#footnote15_24im16f \"Silver et al. (2017): “Over the course of training, 4.9 million games of self-play were generated” (see “Empirical analysis of AlphaGo Zero training”). A bigger version of the model trained on 29 million games. See Kaplan et al. (2020) and Hestness et al. (2017) for more on the scaling properties for training in deep learning.\")\n\n\nNote, though, that depending on what sorts of task-performance will result from what sorts of training, a framework for thinking about AI timelines that incorporated training requirements would start, at least, to incorporate and quantify the difficulty of creating the right software more broadly.[16](https://www.openphilanthropy.org/brain-computation-report#footnote16_f17b6og \"The question of what sorts of task-performance will result from what sorts of training is centrally important in this context, and I am not here assuming any particular answers to it.\") This is because training *turns*computation, data, and other resources into software you wouldn’t otherwise know how to make.\n\n\nWhat’s more, the hardware required to train a system is *related*to the hardware required to run it.[17](https://www.openphilanthropy.org/brain-computation-report#footnote17_3oh6ac5 \"The fact that training a model requires running it a lot makes this clear. But there are also more complex relationships between e.g. model size and amount of training data. See Kaplan et al. (2020) and Hestness et al. (2017).\") This relationship is central to Open Philanthropy’s interest in the topic of this report, and to an investigation my colleague [Ajeya Cotra](https://www.openphilanthropy.org/about/team/ajeya-cotra/) has been conducting, which draws on my analysis. That investigation focuses on what brain-related FLOP/s estimates, along with other estimates and assumptions, might tell us about when it will be feasible to *train* different types of AI systems. I don’t discuss this question here, but it’s an important part of the context. And in that context, brain-related hardware estimates play a different role than they do in forecasts like Moravec’s and Kurzweil’s.\n\n\n#### 1.4 FLOP/s basics\n\n\n*(This section discusses what FLOP/s are, and why I chose to focus on them. 
Readers familiar with FLOP/s and happy with this choice can skip to [Section 1.5](#section_1.5).)*\n\n\nComputational power is multidimensional – encompassing, for example, the number and type of operations performed per second, the amount of memory stored at different levels of accessibility, and the speed with which information can be accessed and sent to different locations.[18](https://www.openphilanthropy.org/brain-computation-report#footnote18_baro8pn \"See e.g. Dongerra et al. (2003): “the performance of a computer is a complicated issue, a function of many interrelated quantities. These quantities include the application, the algorithm, the size of the problem, the high-level language, the implementation, the human level of effort used to optimize the program, the compiler’s ability to optimize, the age of the compiler, the operating system, the architecture of the computer and the hardware characteristics” (p. 805); Moravec (1988): “Any particular formula for estimating power may be grossly misled by an unlucky or diabolic counterexample. For instance, if a computer’s power were defined simply by how many additions per second it could do, an otherwise useless special circuit made of an array of fast adders, and nothing else, costing a few hundred dollars, could outperform a $10-million supercomputer” (p. 169); Nordhaus (2001): “Measuring computer power has bedeviled analysts because computer characteristics are multidimensional and evolve rapidly over time.” (p. 5).\")\n\n\nThis report focuses on operations per second, and in particular, on “[floating point operations](https://en.wikipedia.org/wiki/Floating-point_arithmetic).”[19](https://www.openphilanthropy.org/brain-computation-report#footnote19_0f829fd \"An operation, here, is an abstract mapping from inputs to outputs that can be implemented by a computer, and that is treated as basic for the purpose of the analysis in question (see Schneider and Gersting (2018) (p. 96-100)). A FLOP is itself composed out of a series of much simpler logic operations, which are in some contexts a more natural and basic computational unit. See e.g. Sipser (2013), section 9.3, for discussion of analyzing the complexity of algorithms in terms of the number of AND, OR, and NOT gates required to construct a functional circuit. The report’s analysis could in principle be converted into these units instead -- or, indeed, into any computational unit capable of simulating a FLOP. \") These are arithmetic operations (addition, subtraction, multiplication, division) performed on a pair of [floating point numbers](http://cstl-csm.semo.edu/xzhang/Class%20Folder/CS280/Workbook_HTML/FLOATING_tut.htm) – that is, numbers represented as a set of significant digits multiplied by some other number raised to some exponent (like [scientific notation](https://en.wikipedia.org/wiki/Scientific_notation)). I’ll use “FLOPs” to indicate floating point operations (plural), and “FLOP/s” to indicate floating point operations per second.\n\n\nMy central reason for focusing on FLOP/s is that various brain-related FLOP/s estimates are key inputs to the framework for thinking about training requirements, mentioned above, that my colleague Ajeya Cotra has been investigating, and they were the focus of Open Philanthropy’s initial exploration of this topic, out of which this report emerged. 
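As a concrete illustration of the unit itself, consider the FLOPs in a single dense matrix-vector product and the time it would take at a given FLOP/s rate; the sizes and the rate below are hypothetical choices of mine, with the rate set to roughly the V100 figure quoted in the executive summary.

```python
# Illustrative FLOP counting; sizes and rates are hypothetical, not figures from the report.

def matvec_flops(rows: int, cols: int) -> int:
    # Each of the `rows` outputs takes `cols` multiplications and (cols - 1) additions.
    return rows * (2 * cols - 1)

flops = matvec_flops(10_000, 10_000)   # roughly 2e8 FLOPs for one 10,000 x 10,000 matrix-vector product
rate = 1e14                            # a machine sustaining ~1e14 FLOP/s
print(f"{flops:.2e} FLOPs; about {flops / rate:.1e} seconds at {rate:.0e} FLOP/s")
```

FLOPs count the work to be done; FLOP/s say how quickly a given machine can get through it.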
Focusing on FLOP/s in particular also limits the scope of what is already a fairly broad investigation; and the availability of FLOP/s is one key contributor to recent progress in AI.[20](https://www.openphilanthropy.org/brain-computation-report#footnote20_ybj43f6 \"See e.g. Kahn and Mann (2020): “The success of modern AI techniques relies on computation on a scale unimaginable even a few years ago. Training a leading AI algorithm can require a month of computing time and cost $100 million” (p. 3); and Geoffrey Hinton’s comments in Lee (2016): “In deep learning, the algorithms we use now are versions of the algorithms we were developing in the 1980s, the 1990s. People were very optimistic about them, but it turns out they didn’t work too well. Now we know the reason is they didn’t work too well is that we didn’t have powerful enough computers, we didn’t have enough data sets to train them. If we want to approach the level of the human brain, we need much more computation, we need better hardware.” For more discussion of the compute burdens of contemporary AI applications, see e.g. Kaplan et al. (2020), Amodei and Hernandez (2018), and McCandlish et al. (2018). Note that the dominant costs here are from training the relevant systems, not from running them. However, the costs of training depend centrally on the costs of running (along with other factors). This relationship is central to my colleague Ajeya Cotra’s investigation.\")\n\n\nStill, the focus on FLOP/s is a key limitation of this analysis, as other computational resources are just as crucial to task-performance: if you can’t store the information you need, or get it where it needs to be fast enough, then the units in your system that perform FLOPs will be some combination of useless and inefficiently idle.[21](https://www.openphilanthropy.org/brain-computation-report#footnote21_yubq0il \"I say a little bit about communication bandwidth in Section 5. See Sandberg and Bostrom (2008) (p. 84-85), for a literature review of memory estimates. See Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone] (“FLOP/s”) for some discussion of other relevant factors.\") Indeed, my understanding is that FLOP/s are often not the relevant bottleneck in various contexts related to AI and brain modeling.[22](https://www.openphilanthropy.org/brain-computation-report#footnote22_0lgamzp \"Eugene Izhikevich, for example, reports that in running his brain simulation, he did not have the memory required to store all of the synaptic weights (10,000 terabytes), and so had to regenerate the anatomy of his simulated brain every time step; and Stephen Larson suggested that one of the motivations behind the Blue Brain project’s reliance on a supercomputer was the need to reduce latency between computation units (see Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Larson (p. 5)). See also Fathom Computing’s comment here: “Data movements, not math or logic operations, are the bottleneck in computing” (though this is hardly an unbiased source); Hollemans’ comments here: “The number of computations — whether you count them as MACCs or FLOPS — is only part of the story. Memory bandwidth is the other part, and most of the time is even more important!”; and various citations from AI Impacts, e.g. Angel et al. 
(2012), and Takahashi (2012).\") And further dimensions an AI system’s implementation, like hardware architecture, can introduce significant overheads, both in FLOP/s and other resources.[23](https://www.openphilanthropy.org/brain-computation-report#footnote23_je4jj2a \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “the architecture of a given computer (especially e.g. a standard von Neumann architecture) might create significant overhead. For example, the actual brain co-locates long-term memory and computing. If you had to store longer-term data in a conventional RAM instead, many additional operations might be necessary in order to locate, address, and update relevant variables” (p. 1). One option for reducing overheads might involve neuromorphic computing architectures (see Mead (1989), descriptions here, and papers here; Zaghloul and Boahen (2006) report a “100-fold improvement over conventional microprocessors” (p. 266)). There is also a growing industry of chips designed specifically for AI applications (see Khan (2020): “AI-specialized chip designs are an additional 10 to 1,000 more cost-effective for training AI algorithms than ordinary chips” (p. 2)).\")\n\n\nUltimately, though, once other computational resources are in place, and other overheads have mostly been eliminated or accounted for, you need to actually perform the FLOP/s that a given time-limited computation requires. In order to isolate this quantity, I proceed on the idealizing assumption that non-FLOP resources are available in amounts adequate to make full use of all of the FLOP/s in question (but not in unrealistically extreme abundance), without significant overheads.[24](https://www.openphilanthropy.org/brain-computation-report#footnote24_h8e70n1 \"An example of “unrealistically extreme abundance” would be the type of abundance of memory required by a giant look-up table. Even bracketing such obviously extreme scenarios, though, it seems possible that trade-offs between FLOP/s and other computational resources might complicate talk about the minimum FLOP/s sufficient to do X, absent further more specific constraints on the other resources available. I haven’t delved into this issue much: my hope is that insofar as it’s a problem in theory, the actual evidence surveyed in the report will still be useful in practice. \") All talk of the “FLOP/s sufficient to X” assumes this caveat.\n\n\nThis means you can’t draw conclusions about which concrete computers can replicate human-level task performance directly from the FLOP/s estimates in this report, even if you think those estimates credible. Such computers will need to meet further constraints.[25](https://www.openphilanthropy.org/brain-computation-report#footnote25_2tuoex6 \"See Ananthanarayanan et al. 
(2009) for discussion of the hardware complexities involved in brain simulation.\")\n\n\nNote, as well, that these estimates do not depend on the assumption that the brain performs operations analogous to FLOPs, or on any other similarities between brain architectures and computer architectures.[26](https://www.openphilanthropy.org/brain-computation-report#footnote26_d6hdhnd \"Objections focused on general differences between brains and various human-engineered computers (e.g., the brain lacks a standardized clock, the brain is very parallel, the brain is analog, the brain is stochastic, the brain is chaotic, the brain is embodied, the brain’s memory works differently, the brain lacks a sharp distinction between hardware and software, etc.) are therefore relevant only insofar as they are incompatible with particular claims in the report; they are not, as far as I can tell, incompatible with any underlying assumptions of the project as a whole (except insofar as they are taken to suggest that no human-engineered computer can perform the tasks the brain performs -- a form of skepticism the report does not attempt to address). See Marcus (2015) for discussion of some such objections. The different methods I consider rely on their own, somewhat more substantive assumptions.\") The report assumes that the tasks the brain performs can also be performed using a sufficient number of FLOP/s, but the causal structure in the brain that gives rise to task-performance could in principle take a wide variety of unfamiliar forms.\n\n\n#### 1.5 Neuroscience basics\n\n\n*(This section reviews some of the neural mechanisms I’ll be discussing, in an effort to make the report’s content accessible to readers without a background in neuroscience.[27](https://www.openphilanthropy.org/brain-computation-report#footnote27_tcs4hmd \"My impression is that the content reviewed here is basically settled science, though see Section 1.5.1 for discussion of various types of ongoing neuroscientific uncertainty.\") Those familiar with signaling mechanisms in the brain – neurons, neuromodulators, gap junctions – can skip to*[*Section 1.5.1*](#section_1.5.1)*).*\n\n\nThe human brain contains around 100 billion neurons, and roughly the same number of non-neuronal cells.[28](https://www.openphilanthropy.org/brain-computation-report#footnote28_redzbhw \"Azevedo et al. (2009): “We find that the adult male human brain contains on average 86.1 ± 8.1 billion NeuN-positive cells (“neurons”) and 84.6 ± 9.8 billion NeuN-negative (“nonneuronal”) cells” (532). My understanding is that the best available method of counting neurons is isotropic fractionation, which proceeds by dissolving brain structures into a kind of homogenous “brain soup,” and then counting cell nuclei (see Herculano-Houzel and Lent (2005) for a more technical description of the process, and Bartheld et al. (2016) for a history of cell-counting in the brain). Note that there may be substantial variation in cell counts between individuals (for example, according to Bartheld et al. (2016) (p. 9), citing Haug (1986) and Pakkenberg and Gundersen (1997), neocortical neuron count may vary by a factor of more than two, though I haven’t checked these further citations). At one point it was widely thought that the ratio of glial cells (a type of non-neuronal cell) to neurons in the brain was 10:1, but this is wrong (see Bartheld et al. 
(2016)).\") Neurons are cells specialized for sending and receiving various types of electrical and chemical signals, and other non-neuronal cells send and receive signals as well.[29](https://www.openphilanthropy.org/brain-computation-report#footnote29_awm3f8h \"I do not have a rigorous definition of “signaling” between cells, though there may be one available. A central example would be when one cell has a specialized mechanism for sending out a particular type of chemical to another cell, which in turn has a specialized receptor for receiving that chemical. See Lodish et al. (2008), ch. 15 and 16, for lengthy discussion of biological signaling mechanisms. For examples of signaling by non-neuronal cells, see the section on glia. Jess Riedel suggested a definition on which the functionally-structured impact of one cell on another counts as signaling if the impact on the second cell varies based on the state of the first (as opposed to, e.g., one cell sending the other one resources irrespective of the first cell’s state) -- a case in which the impact on the second cell provides information about the state of the first (see Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel, p. 5).\") These signals allow the brain, together with the rest of the nervous system, to receive and encode sensory information from the environment, to process and store this information, and to output the complex, structured motor behavior constitutive of task performance.[30](https://www.openphilanthropy.org/brain-computation-report#footnote30_jtcnpkq \"The texts I have engaged with in cognitive science and neuroscience do not attempt to give necessary and sufficient conditions for a physical system to count as “processing information,” and I will not attempt a rigorous definition here (see Piccinini and Scarantino (2011) for an attempt to disambiguate and evaluate a few possible interpretations, based on different possible conceptions of the relevant type of “information”). My impression, though, is that the intuitive notion is roughly as follows. The brain’s activity makes what you do sensitive to sensory input, past and present (someone throws a shoe at your head, and you duck; you see an old friend at a coffee shop, and you stop to chat). Such sensitivity requires that when the brain receives one set of sensory inputs, rather than another, this difference is reflected somehow in the state of the nervous system in a manner available, at least initially, to make a reliable difference between one macroscopically-specified behavioral response or another (though lots of information is quickly discarded). In this sense, the brain takes in or “encodes” information about sensory inputs using different biophysical variables (that is, aspects of the biophysical system that can be in different states). The brain then processes this information in the sense that the states of these variables serve as inputs to further causal processes in the brain which combine to create behavioral sensitivity to high-level properties of an organism’s environment and history. Thus, for example, if you want to set up a brain that causes an organism to run from a tiger, but not from a tree, you need to have more than a set of biophysical variables that correlate with the type of light hitting different parts of the eye -- you also need causal processes that “extract” from that light an answer to the question “is this a tiger or a tree?”, and then cause the relevant behavioral response. 
For more discussion in this vein, see e.g. London and Häusser (2005) (p. 209); Koch (1999) (p. 1); Hanson (2016) (p. 50); and Marr (1982) (p. 3). See this video for a vivid illustration of feature extraction; and this video for a nice example of neural information-processing.\")\n\n\n \n\n\n\n[![NeuronDiagram.png](https://www.openphilanthropy.org/files/Research/Brain_Compute/image2.png)](https://www.openphilanthropy.org/files/Research/Brain_Compute/image2.png)**Figure 3: Diagram of a neuron**. From OpenStax, “Anatomy and Physiology”, [Section 12.2](https://openstax.org/books/anatomy-and-physiology/pages/12-2-nervous-tissue#fig-ch12_02_01), unaltered. Licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).\n\n\n \n\n\nWe can divide a typical neuron into three main parts: the soma, the dendrites, and the axon.[31](https://www.openphilanthropy.org/brain-computation-report#footnote31_xzuh3db \"See the “anatomy of a neuron” section here for quick description. See Kandel et al. (2013), ch. 4-8, Lodish et al. (2008), ch. 23, and this series of videos, for detailed descriptions of basic neuron structure and function. \") The soma is the main body of the cell. The dendrites are extensions of the cell that branch off from the soma, and which typically receive signals from other neurons. The axon is a long, tail-like projection from the soma, which carries electrical impulses away from the cell body. The end of the axon splits into branches, the ends of which are known as *axon terminals*, which reach out to connect with other cells at locations called *synapses*. A typical synapse forms between the axon terminal of one neuron (the *presynaptic neuron*) and the dendrite of another (the *postsynaptic neuron*), with a thin zone of separation between them known as the *synaptic cleft*.[32](https://www.openphilanthropy.org/brain-computation-report#footnote32_0qequig \"Neurons can also synapse onto blood vessels, muscle cells, neuron cell bodies, axons, and axon terminals (at least according to the medical gallery of Blausen Medical 2014), but for simplicity, I will focus on synapses between axon terminals and dendrites in what follows.\")\n\n\nThe cell as a whole is enclosed in a [membrane](https://en.wikipedia.org/wiki/Membrane) that has various pumps that regulate the concentration of certain [ions](https://en.wikipedia.org/wiki/Ion) – such as sodium (Na+), potassium (K+) and chloride (Cl–) – inside it.[33](https://www.openphilanthropy.org/brain-computation-report#footnote33_xq4ycfp \"See Siegelbaum and Koester (2013a): “In addition to ion channels, nerve cells contain a second important class of proteins specialized for moving ions across cell membranes, the ion transporters or pumps. These proteins do not participate in rapid neuronal signaling but rather are important for establishing and maintaining the concentration gradients of physiologically important ions between the inside and outside of the cell” (p. 100). See also the section on “Where does the resting membrane potential come from?” here.\") This regulation creates different concentrations of these ions inside and outside the cell, resulting in a difference in the electrical potential across the membrane (the *[membrane potential](https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/the-membrane-potential)*).[34](https://www.openphilanthropy.org/brain-computation-report#footnote34_hdxry16 \"See Siegelbaum and Koester (2013c) (p. 
126-147); and the section “Where does the resting membrane potential come from?” here.\") The membrane also contains proteins known as *ion channels*, which, when open, allow certain types of ions to flow into and out of the cell.[35](https://www.openphilanthropy.org/brain-computation-report#footnote35_dbl2189 \"See Siegelbaum and Koester (2013a) (p. 100-124), for detailed description of ion channel dynamics.\")\n\n\nIf the membrane potential in a neuron reaches a certain threshold, then a particular set of voltage-gated ion channels open to allow ions to flow into the cell, creating a temporary *spike* in the membrane potential (an *[action potential](https://en.wikipedia.org/wiki/Action_potential)*).[36](https://www.openphilanthropy.org/brain-computation-report#footnote36_ra0rre0 \"See Kandel et al. (2013) (p. 31-35); and Siegelbaum and Koester (2013b) (p. 148-171), for description. See also here.\") This spike travels down the axon to the axon terminals, where it causes further *voltage-gated ion channels* to open, allowing an influx of calcium ions into the pre-synaptic axon terminal. This calcium can trigger the release of molecules known as *neurotransmitters*, which are stored in sacs called *vesicles* in the axon terminal.[37](https://www.openphilanthropy.org/brain-computation-report#footnote37_dmg140o \"See Siegelbaum and Koester (2013d) (p. 184-187); Siegelbaum et al. (2013c) (p. 260-287); and description here in the section “overview of transmission at chemical synapses”). See also Lodish et al. (2008) (p. 1020). Note that action potentials do not always trigger synaptic transmission: see section 2.1.1.2.2.\")\n\n\nThese vesicles merge with the cell membrane at the synapse, allowing the neurotransmitter they contain to diffuse across the synaptic cleft and bind to receptors on the post-synaptic neuron. These receptors can cause (directly or indirectly, depending on the type of receptor) ion channels on the post-synaptic neuron to open, thereby altering the membrane potential in that area of that cell.[38](https://www.openphilanthropy.org/brain-computation-report#footnote38_7e5dirw \" I’ll refer to the event of a spike arriving at a synapse as a “spike through synapse.” A network of interacting neurons is sometimes called a neural circuit. A series of spikes from a single neuron is sometimes called a spike train. From Khan Academy: “we can divide the receptor proteins that are activated by neurotransmitters into two broad classes: Ligand-activated ion channels: These receptors are membrane-spanning ion channel proteins that open directly in response to ligand binding. Metabotropic receptors: These receptors are not themselves ion channels. Neurotransmitter binding triggers a signaling pathway, which may indirectly open or close channels (or have some other effect entirely)” (see section “Two types of neurotransmitter receptors”). See Siegelbaum et al. (2013) (p. 210-235), for more on the first class of receptors; and Siegelbaum et al. (2013b) (p. 236-255), for more on the second.\")\n\n\n \n\n\n\n[![SynapseDiagram.png](https://www.openphilanthropy.org/files/Research/Brain_Compute/image3.png)](https://www.openphilanthropy.org/files/Research/Brain_Compute/image3.png)**Figure 4: Diagram of synaptic communication.**From OpenStax, “Anatomy and Physiology”, [Section 12.5](https://openstax.org/books/anatomy-and-physiology/pages/12-5-communication-between-neurons), unaltered. 
Licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).[39](https://www.openphilanthropy.org/brain-computation-report#footnote39_7iq0k42)\n\n\n \n\n\nThe expected size of the impact (excitatory or inhibitory) that a spike through a synapse will have on the post-synaptic membrane potential is often summarized via a parameter known as a *[synaptic weight](https://en.wikipedia.org/wiki/Synaptic_weight)*.[40](https://www.openphilanthropy.org/brain-computation-report#footnote40_a1zs2zb \"See Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann: \\\"Setting aside plasticity, most people assume that modeling the immediate impact of a pre-synaptic spike on the post-synaptic neuron is fairly simple. Specifically, you can use a single synaptic weight, which reflects the size of the impact of a spike through that synapse on the post-synaptic membrane potential\\\" (p. 1). Lahiri and Ganguli (2013) note that the theoretical models often treat synapses as “described solely by a single scalar value denoting the size of a post-synaptic potential” (p. 1), though they do not endorse this.\") This weight changes on various timescales, depending on the history of activity in the pre-synaptic and post-synaptic neuron, together with other factors. These changes, along with others that take place within synapses, are grouped under the term *[synaptic plasticity](http://www.scholarpedia.org/article/Models_of_synaptic_plasticity)*.[41](https://www.openphilanthropy.org/brain-computation-report#footnote41_i4amfgc \"See discussion and citations in Section 2.2 for more details.\") Other changes also occur in neurons on various timescales, affecting the manner in which neurons respond to synaptic inputs (some of these changes are grouped under the term *[intrinsic plasticity](http://www.scholarpedia.org/article/Intrinsic_plasticity)*).[42](https://www.openphilanthropy.org/brain-computation-report#footnote42_0gq6kl0 \"Cudmore and Desai (2008): “Intrinsic plasticity is the persistent modification of a neuron’s intrinsic electrical properties by neuronal or synaptic activity. It is mediated by changes in the expression level or biophysical properties of ion channels in the membrane, and can affect such diverse processes as synaptic integration, subthreshold signal propagation, spike generation, spike backpropagation, and meta-plasticity” (opening section).\") New synapses, dendritic spines, and neurons also grow over time, and old ones die.[43](https://www.openphilanthropy.org/brain-computation-report#footnote43_s5updbo \"See e.g. Munno and Syed (2003), Ming and Song (2011), Grutzendler et al. (2002), Holtmaat et al. (2005).\")\n\n\nThere are also a variety of other signaling mechanisms in the brain that this basic story does not include. For example:\n\n\n* [Other chemical signals](#section_2.3.1)*:* Neurons can also send and receive other types of chemical signals – for example, molecules known as *[neuropeptides](https://en.wikipedia.org/wiki/Neuropeptide)*, and gases like nitric oxide – that can diffuse more broadly through the space in between cells, across cell membranes, or via the blood.[44](https://www.openphilanthropy.org/brain-computation-report#footnote44_n6gow68 \"See Schwartz and Javitch (2013), (p. 297-301); Russo (2017); and Leng and Ludwig (2008): “Neurones use many different molecules to communicate with each other, acting in many different ways via specific receptors. 
Amongst these molecules are more than a hundred different peptides, expressed in different subpopulations of neurons, and many of these peptides are known for the distinctive effects on specific physiological functions that follow central administration of peptide agonists or antagonists.” (p. 5625). See also Mains and Eipper (1999). \") The chemicals neurons release that influence the activity of groups of neurons (or other cells) are known as *neuromodulators*.[45](https://www.openphilanthropy.org/brain-computation-report#footnote45_mt0wn09 \"Burrows (1996): “A neuromodulator is a messenger released from a neuron in the central nervous system, or in the periphery, that affects groups of neurons, or effector cells that have the appropriate receptors. It may not be released at synaptic sites, often acts through second messengers and can produce long-lasting effects. The release may be local so that only nearby neurons or effectors are influenced, or may be more widespread, which means that the distinction with a neurohormone can become very blurred. The act of neuromodulation, unlike that of neurotransmission, does not necessarily carry excitation of inhibition from one neuron to another, but instead alters either the cellular or synaptic properties of certain neurons so that neurotransmission between them is changed” (p. 195).\")\n* [Glial cells](#section_2.3.2)*:* Non-neuronal cells in the brain known as *[glia](https://en.wikipedia.org/wiki/Glia)* have traditionally been thought to mostly perform functions to do with maintenance of brain function, but they may be involved in task-performance as well.[46](https://www.openphilanthropy.org/brain-computation-report#footnote46_sgbsmsn \"Araque and Navarrete (2010) (p. 2375); Bullock et al. (2005), (p. 792); Mu et al. (2019); and the rest of the discussion in Section 2.3.2.\")\n* [Electrical synapses](#section_2.3.3)*:* In addition to the *[chemical synapses](https://en.wikipedia.org/wiki/Chemical_synapse)* discussed above, there are also *[electrical synapses](https://en.wikipedia.org/wiki/Electrical_synapse)* that allow direct, fast, and bi-directional exchange of electrical signals between neurons (and between other cells). The channels mediating this type of connection are known as *[gap junctions](https://en.wikipedia.org/wiki/Gap_junction)*.\n* [Ephaptic effects:](#section_2.3.4) Electrical activity in neurons creates electric fields that may impact the electrical properties of neighboring neurons.[47](https://www.openphilanthropy.org/brain-computation-report#footnote47_y6mh18x \"See e.g. Anastassiou et al. (2011) and Chang (2019), along with the other citations in Section 2.3.4. \")\n* [Other forms of axon signaling](#section_2.3.5)*:* The process of firing an action potential has traditionally been thought of as a binary decision.[48](https://www.openphilanthropy.org/brain-computation-report#footnote48_40cg5iw \"See Bullock et al. (2005), describing the history of early neuroscience: “physiological studies established that conduction of electrical activity along the neuronal axon involved brief, all-or-nothing, propagated changes in membrane potential called action potentials. It was thus often assumed that neuronal activity was correspondingly all-or-nothing and that action potentials spread over all parts of a neuron. The neuron was regarded as a single functional unit: It either was active and “firing” or was not” (p. 
791).\") However, some recent evidence indicates that processes within a neuron other than “to fire or not to fire” can matter for synaptic communication.[49](https://www.openphilanthropy.org/brain-computation-report#footnote49_qes5ysp \"See Zbili and Debanne (2019) for a review, together with the other citations in Section 2.3.5.\")\n* [Blood flow](#section_2.3.6)*:* Blood flow in the brain correlates with neural activity, which has led some to suggest that it might be playing a role in information-processing.[50](https://www.openphilanthropy.org/brain-computation-report#footnote50_crouzqe \"See Moore and Cao (2008): “we propose that hemodynamics also play a role in information processing through modulation of neural activity… We predict that hemodynamics alter the gain of local cortical circuits, modulating the detection and discrimination of sensory stimuli. This novel view of information processing—that includes hemodynamics as an active and significant participant— has implications for understanding neural representation and the construction of accurate brain models” (p. 2035).\")\n\n\nThis is not a complete list of all the possible signaling mechanisms that could in principle be operative in the brain.[51](https://www.openphilanthropy.org/brain-computation-report#footnote51_bar1658 \"A few others I am not discussing include: quantum dynamics (see endnote in section 1.6), the perineuronal net (see Tsien (2013) for discussion), and classical dynamics in microtubules (see Cantero et al. (2018)). I am leaving quantum dynamics aside mostly for the reasons listed in the endnote in section 1.6. I leave out the other two mechanisms partly because of time constraints, and partly because my impression is that they do not feature very prominently in the discourse on this topic. I bucket all the possible alternative mechanisms I am not discussing under the uncertainties discussed in Section 2.3.7.\") But these are some of the most prominent.\n\n\n#### 1.5.1 Uncertainty in neuroscience\n\n\nI want to emphasize one other meta-point about neuroscience: namely, that our current understanding of how the brain processes information is extremely limited.[52](https://www.openphilanthropy.org/brain-computation-report#footnote52_u3uo2xw \"A few representative summaries: Marcus (2015): “Neuroscience today is collection of facts, rather than ideas; what is missing is connective tissue. We know (or think we know) roughly what neurons do, and that they communicate with one another, but not what they are communicating. We know the identities of many of the molecules inside individual neurons and what they do. We know from neuroanatomy that there are many repeated structures (motifs) throughout the neocortex. Yet we know almost nothing about what those motifs are for, or how they work together to support complex real-world behavior. The truth is that we are still at a loss to explain how the brain does all but the most elementary things. We simply do not understand how the pieces fit together” (p. 205): Einevoll et al. (2015): “Despite decades of intense research efforts investigating the brain at the molecular, cell, circuit and system levels, the operating principles of the human brain, or any brain, remain largely unknown… At present we do not have any well-grounded, and certainly not generally accepted, theory about how networks of millions or billions of neurons work together to provide the salient brain functions in animals or humans. 
We do not even have a well-established model for how neurons in primary visual cortex of mammals work together to form the intriguing neuronal representations with, for example, orientation selectivity and direction selectivity that were discovered by Hubel and Wiesel sixty years ago (Hubel and Wiesel (1959)).” (p. 2, and p. 8).\") This was a consistent theme in my conversations with experts, and one of my clearest take-aways from the investigation as a whole.[53](https://www.openphilanthropy.org/brain-computation-report#footnote53_w6hc027 \"See especially Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas, Prof. Shaul Druckmann, Prof. Erik De Schutter, Prof. Konrad Kording; Prof. Eve Marder; Dr. Adam Marblestone; and Dr. Stephen Larson.\")\n\n\nOne problem is that we need better tools. For example:\n\n\n* Despite advances, we can only record the spiking activity of a limited number of neurons at the same time (techniques like [fMRI](https://en.wikipedia.org/wiki/Functional_magnetic_resonance_imaging#:~:text=Functional%20magnetic%20resonance%20imaging%20or,to%20that%20region%20also%20increases.) and [EEG](https://en.wikipedia.org/wiki/Electroencephalography) are much lower resolution).[54](https://www.openphilanthropy.org/brain-computation-report#footnote54_yjm233l \"Kleinfield et al. (2019), (p. 1005), for description of various techniques and their limitations. See also Marblestone et al. (2013): “Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience… Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters” (p. 1); and Adam (2019): “A technology that simultaneously records membrane potential from multiple neurons in behaving animals will have a transformative effect on neuroscience research” (p. 413), a quote which suggests that at the least, such a technology is at the cutting edge of what’s available (the paper appears to describe progress on this front). Stevenson and Kording (2011) found that “the number of simultaneously recorded single neurons has been growing rapidly, doubling approximately every 7 years. The trend described here predicts that in 15 years physiologists should be able to record from approximately 1,000 neurons” (p. 141). Their data shows that as of 2010, the maximum was a few hundred, though I’m not sure where it is now (see p. 140).\")\n* We can’t record from all of a neuron’s synapses or dendrites simultaneously, making it hard to know what patterns of overall synaptic input and dendritic activity actually occur *in vivo*.[55](https://www.openphilanthropy.org/brain-computation-report#footnote55_qjpsj9i \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “At this point, we have no way to reliably measure the input-output transformation of a neuron, where the input is defined as a specific spatio-temporal pattern of synaptic input. You can build models and test their input-output mappings, but you don’t really know how accurate these models are… In live imaging, it’s very difficult to see what’s happening at synapses. Some people do calcium imaging of pre-synaptic terminals, but this is only for one part of the overall synaptic input (and it may create artefacts). Currently, you cannot get a global picture of all the synaptic inputs to a single neuron. 
You can’t stain all the inputs, and for a big neuron you wouldn’t be able to image the whole relevant volume of space… you don’t actually know what the physiological pattern of inputs is.” See also Ujfalussy et al. (2018): “Our understanding of neuronal input integration remains limited because it is either based on data from in vitro experiments, studying neurons under highly simplified input conditions, or on in vivo approaches in which synaptic inputs were not observed or controlled, and thus a systematic characterization of the input-output transformation of neurons was not possible” (2018); and notes from Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann: “It is very difficult to tell what spatio-temporal patterns of inputs are actually arriving at a neuron’s synapses in vivo. You can use imaging techniques, but this is very messy” (p. 2) \")\n* We also can’t stimulate all of a neuron’s synapses and/or dendrites simultaneously, making it hard to know how the cell responds to different inputs (and hence, which models can capture these responses).[56](https://www.openphilanthropy.org/brain-computation-report#footnote56_f4g2u2f \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “Using glutamate uncaging, you can reliably activate single dendritic spines in vitro, and you can even do this in a sequence of spines, thereby generating patterns of synaptic input. However, even these patterns are limited. For example, you can’t actually activate synapses simultaneously, because your laser beam needs to move; there’s only so much you can do in a certain timeframe; and because it’s glutamate, you can only activate excitatory neurons” (p. 2). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann:\\\"it is very difficult to tell how a neuron responds to arbitrary patterns of synaptic input. You can stimulate a pre-synaptic neuron and observe the response, but you can’t stimulate all pre-synaptic neurons in different combinations. And you can only patch-clamp one dendrite while also patch-clamping the soma (and this already requires world-class skill)\\\" (p. 2).\")\n* Techniques for measuring many lower-level biophysical mechanisms and processes, such as possible forms of ion channel plasticity, remain very limited.[57](https://www.openphilanthropy.org/brain-computation-report#footnote57_mn29j0b \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann: “Technology for measuring the properties relevant to detailed biophysical modeling has improved very little in the past 20 years … Neurons can have a few dozen of some 200-300 types of ions channels, which are strongly non-linear, with large effects, and which are spread out across the neuron. These cannot be modeled based on recordings of neuron spiking activity alone, and staining neurons for these ion channels is very difficult” (p. 2). And from Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording: “current techniques are very bad at measuring ion channel plasticity. Neuroscientists don’t tend to focus on it for this reason” (p. 5).\")\n* Results in model animals may not generalize to e.g. humans.[58](https://www.openphilanthropy.org/brain-computation-report#footnote58_j1990lz \" From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas: “a lot of our animal models are wrong in clinically-relevant ways” (p. 5). 
And from Open Philanthropy's non-verbatim notes from a conversation with Prof. E.J. Chichilnisky: “There is variability in retinal function both across species and between individuals of the same species. Mouse retinas are very different from human retinas (a difference that is often ignored), and there is variability amongst monkey retinas as well” (p. 3).\")\n* Results obtained *in vitro* (that is, in a petri dish) may not generalize *in vivo* (that is, in a live functioning brain).[59](https://www.openphilanthropy.org/brain-computation-report#footnote59_2dbinr2 \"For example, spike-timing dependent plasticity -- a form of synaptic plasticity -- can be reliably elicited in vitro (see Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas (p. 3)), but Schulz argues that “Direct evidence for STDP in vivo is limited and suffers from the fact that the used protocols significantly deviate, more often than not, from the traditional pairing of single pre- and postsynaptic spikes (Shulz and Jacob (2010)). Thus, many studies use long-lasting large-amplitude postsynaptic potentials (PSP), and pairing usually involves multiple postsynaptic spikes or high repetition rates. Our own experience from cortico-striatal synaptic plasticity experiments indicates that classic STDP may be less effective in vivo than commonly expected (Schulz et al., 2010)” (p. 1).\")\n* The tasks we can give model animals like rats to perform are generally very simple, and so provide limited evidence about more complex behavior.[60](https://www.openphilanthropy.org/brain-computation-report#footnote60_xp03igt \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann: “The tasks that neuroscientists tend to study in model animals are very simple. Many, for example, are some variant on a two-alternative forced choice task (e.g., teaching an animal to act differently, depending on which of two stimuli it receives). This task is extremely easy to model, both with a small number of highly simplified neurons, and with models that do not look like neurons at all. In this sense, tasks like these provide very little evidence about what level of modeling detail is necessary for reproducing more interesting behavior.” (p. 2). And from Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas: “In an experiment with a model animal like a rat, which has a very complicated brain, the number of input/output bits we can control/observe is extremely small. This makes it very hard to do informative, high-throughput experiments. Even if you had a billion rats doing your experiment 24/7, you’d still only have a small number of bits going in and out” (p. 2).\")\n\n\nTools also constrain concepts. If we can’t see or manipulate something, it’s unlikely to feature in our theories.[61](https://www.openphilanthropy.org/brain-computation-report#footnote61_48zgkqe \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “Neuroscience is extremely limited by available tools. For example, we have the concept of a post-synaptic potential because we can patch-clamp the post-synaptic neuron and see a change in voltage. When we become able to see every individual dendritic spine, we might see that each has a different response; or when we become able to see molecules, we might see faster state transitions, more interesting spatial organization, or more complicated logic at the synapses. 
We don’t really know, because we haven’t been able to measure” (p. 9).\") And certain models of e.g. neurons may receive scant attention simply because they are too computation-intensive to work with, or too difficult to constrain with available data.[62](https://www.openphilanthropy.org/brain-computation-report#footnote62_uqg5ge3 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording: “current techniques are very bad at measuring ion channel plasticity. Neuroscientists don’t tend to focus on it for this reason” (p. 5). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann: “The history of neuroscience sometimes seems like a process in which even though some process or level of detail is important, if it is very difficult to understand it, the community often shifts away from that level, and moves on to another level.. … he thinks that people don’t do detailed modeling because these models are ill-constrained at the current level of data that can be collected and it would require major investment to get the relevant data.” (p. 7). \")\n\n\nBut tools aren’t the only problem. For example, when [Jonas and Kording (2017)](https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1005268&type=printable) examined a [simulated 6502 microprocessor](http://www.visual6502.org/) – a system whose processing they could observe and manipulate to arbitrary degrees – using analogues of standard neuroscientific approaches, they found that “the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor” (p. 1).[63](https://www.openphilanthropy.org/brain-computation-report#footnote63_gqq9trg \"Jonas and Kording (2017): “There is a popular belief in neuroscience that we are primarily data limited...here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data” (p. 1). Though see also Merel et al. (2020) (p. 2), who use a virtual rodent as a model system, and who take a more optimistic view.\") And artificial neural networks that perform complex tasks are difficult (though not necessarily impossible) to interpret, despite similarly ideal experimental access.[64](https://www.openphilanthropy.org/brain-computation-report#footnote64_8a8up8n \"See e.g. Lillicrap and Kording (2019): “...We can have a complete description of the network and its computations. And yet, neither we, nor anyone we know feels that they grasp how processing in these networks truly works. Said another way, besides gesturing to a network’s weights and elementary operations, we cannot say how it classifies an image as a cat or a dog, or how it chooses one Go move over another” (p. 1). 
That said, research on this topic is just getting underway, and some participants are optimistic. See e.g. Olah et al. (2020a): “thousands of hours of studying individual neurons have led us to believe the typical case is that neurons (or in some cases, other directions in the vector space of neuron activations) are understandable… our experience is that there’s usually a simple explanation behind these neurons, and that they’re actually doing something quite natural” (see “Claim 1: Features” and “Claim 2: Circuits”). Some of this work focuses on the type of feature detection that neuroscience already has some preliminary handle on, but efforts to explore the interpretability of other types of models are underway as well (see Greydanus (2017), Such et al. (2018), Rupprecht et al. (2019), here and OpenAI et al. (2019) (p. 30-35), for examples). Personally, I would not be at all surprised if this work ends up quite neuroscientifically informative.\")\n\n\nWe also don’t know what high-level task most neural circuits are performing, especially outside of peripheral sensory/motor systems. This makes it very hard to say what models of such circuits are adequate.[65](https://www.openphilanthropy.org/brain-computation-report#footnote65_yd8raxz \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eve Marder: “It’s been hard to make progress in understanding neural circuits, because in order to know what details matter, you have to know what the circuit is doing, and in most parts of the brain, we don’t know this...It’s not that you can’t make simplifying assumptions. It’s that absent knowledge of what a piece of nervous system needs to be able to do, you have no way of assessing whether you’ve lost something fundamental or not” (p. 4); from Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas: “One level of uncertainty comes from the difficulty of defining the high-level task that neural systems are trying to perform (e.g., the “computational level” in the hierarchy proposed by David Marr). Our attempts to capture cognitive tasks with objective functions we can fit machine learning models to are all extreme simplifications. For example, Prof. Jonas is fairly confident that the visual system is not classifying objects into one of k categories” (p. 1); and the notes from Open Philanthropy's non-verbatim notes from a conversation with Prof. E.J. Chichilnisky: “It’s hard to know when to stop fine-tuning the details of your model. A given model may be inaccurate to some extent, but we don’t know whether a given inaccuracy matters, or whether a human wouldn’t be able to tell the difference (though focusing on creating usable retinal prostheses can help with this)” (p. 3).\")\n\n\nIt would help if we had full functional models of the nervous systems of some simple animals. But we don’t.[66](https://www.openphilanthropy.org/brain-computation-report#footnote66_le47ua2 \"Dr. Stephen Larson suggested that one benefit of successfully simulating a simple nervous system would be that you could then bound the complexity necessary for such a simulation, and proceed with attempting to simplify it in a principled way (see Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Larson, p. 2). Prof. Shaul Druckmann (see here, p. 6) and Prof. Erik De Schutter appeared sympathetic to a similar research program. From Open Philanthropy's non-verbatim notes from a conversation with Prof. 
Erik De Schutter:\\\"The best way forward is to try to explore and understand the function of the brain’s underlying mechanisms -- a project that may eventually lead to an understanding of what can be simplified. But to try to simplify things too early, before you understand them, is a dangerous game\\\" (p. 1). Exactly what level of modeling success has been achieved by brain simulations as yet is a complicated issue, but many appear to lack any capacity for complex task-performance (Eliasmith et al. (2012) is one exception; see Open Philanthropy's non-verbatim notes from a conversation with Prof. Chris Eliasmith for some discussion). Example brain simulations include: Arkhipov et al. (2018), Bileh et al. (2020), Markram et al. (2015); Izhikevich and Edelman (2007); Ananthanarayanan et al. (2009), Howell et al. (2000), Medina et al. (2000), McLaughlin (2000). See Garis et al. (2010) and Sandberg and Bostrom (2008) for surveys.\") For example, the nematode worm *[Caenorhabditis elegans](https://en.wikipedia.org/wiki/Caenorhabditis_elegans) (C. elegans)* has only 302 neurons, and a map of the connections between these neurons (the connectome) has been available since 1986.[67](https://www.openphilanthropy.org/brain-computation-report#footnote67_2fs9wa7 \"See White et al. (1984). See Jabr (2012b) for some history, as well as Seung (2012): “Mapping the C. elegans nervous system took over a dozen years, though it contains only 7,000 connections” (“Introduction”).\") But we have yet to build a simulated *C. elegans* that behaves like the real worm across a wide range of contexts.[68](https://www.openphilanthropy.org/brain-computation-report#footnote68_gscesaz \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Larson, who works on the OpenWorm project: “Despite its small size, we do not yet have a model that captures even 50% of the biological behavior of the C. elegans nervous system. This is partly because we’re just getting to the point of being able to measure what the worm’s nervous system is doing well enough. It is possible to replicate certain kinds of worm behaviors, such as a crawling forward motion, using a very simple neural network. However, the same model cannot be used to make the worm shift into crawling backwards. Rather, you have to re-train it, and even then, you don’t know if the model makes the decision to crawl backward with the same frequency, and for the same reasons, that the real worm does. In general, evolution has equipped the worm to respond to a very wide range of conditions, and the worm’s biology has all of these intricate and complex mechanisms that could potentially be involved in the behaviors you care about” (p. 1). David Dalrymple, who used to work on emulating C. elegans, writes: “Contrary to popular belief, connectomes are not the biological equivalent of circuit schematics. Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires… What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.” Sarma et al. (2018), in an overview of OpenWorm’s progress, write: “The level of detail that we have incorporated to date is inadequate for biological research. 
A key remaining component is to complete the curation and parameter extraction of Hodgkin–Huxley models for ion channels to produce realistic dynamics in neurons and muscles” (Section 3). Merel et al. (2020) create a “virtual rodent,” but this is not a bottom up emulation of a rodent brain.\")\n\n\nAll this counsels pessimism about the robustness of FLOP/s estimates based on our current neuroscientific understanding. And it increases the relevance of where we place the burden of proof. If we start with a strong default view about the complexity of the brain’s task-performance, and then demand proof to the contrary, our standards are unlikely to be met.\n\n\nIndeed, my impression is that various “defaults” in this respect play a central role in how experts approach this topic. Some take simple models that have had some success as a default, and then ask whether we have strong reason to think additional complexity necessary;[69](https://www.openphilanthropy.org/brain-computation-report#footnote69_09dybag \"Example approaches in this vein include Prof. Markus Meister, see Open Philanthropy's non-verbatim notes from a conversation with Prof. Markus Meister: “It is theoretically possible that the brain’s task-performance draws on complex chemical computations, implemented by protein circuits, that would require models much more complicated than those that have been successful in the retina. But Prof. Meister’s approach is to ask: is there any evidence that forces us to think in this more complicated way? That is, he starts with the simplest possible explanation of the phenomena, and then adds to this explanation when necessary. Some neuroscientists take a different approach. That is, they ask “what is the most complicated way that this thing could work?”, and then assume that nature is doing that” (p. 4); and from Open Philanthropy's non-verbatim notes from a conversation with Prof. Chris Eliasmith: “Prof. Eliasmith’s general approach is to see what simple models are able to do, and to introduce additional complexity only when doing so becomes necessary. In his models, he has thus far been able to successfully replicate various types of high-level behavior, along with various types of neuro-physiological data, without recourse to highly complex neuron models -- a result that he thinks substantially less likely in worlds where the brain’s performance on these tasks proceeds via biophysical mechanisms his models do not include. However, this doesn’t mean that we won’t discover contexts in which greater complexity is necessary. And we are very far away from being able to test what is required to capture high-level behavior on the scale of the full human brain” (p. 2). \") others take the brain’s biophysical complexity as a default, and then ask if we have strong reason to think that a given type of simplification captures everything that matters.[70](https://www.openphilanthropy.org/brain-computation-report#footnote70_cy6o3yk \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Larson: “the jury is still out on how much simplification is available, and Dr. Larson thinks that in this kind of uncertain context, you should focus on the worst-case, most conservative compute estimates as your default. This means worrying about all of the information-processing present in cell biology. In general, in studying complex biological mechanisms, Dr. Larson thinks that the burden of proof is on those who want to say that a given type of simplification is possible” (p. 2). 
From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “Many common simplifications do not have solid scientific foundations, and are more at the level of “the way we do things.” The best way forward is to try to explore and understand the function of the brain’s underlying mechanisms -- a project that may eventually lead to an understanding of what can be simplified. But to try to simplify things too early, before you understand them, is a dangerous game … The brain was not engineered. Rather, it evolved, and evolution works by adding complexity, rather than by simplification. There are good reasons for this complexity. In order to evolve, you can’t have systems, at any level (proteins, channels, cells, brain regions), with unique functions. If you did, and a single mutation knocked out the function, the whole system would crash… Indeed, in general, many scientists who approach the brain from an engineering perspective end up on the wrong footing. Engineering is an appropriate paradigm for building AI systems, but if you want to understand the brain, you need to embrace the fact that it works because it is so complicated. Otherwise, it will be impossible to understand the system” (p. 1).\")\n\n\nNote the distinction, though, between how we should do neuroscience, and how we should bet now about where such science will ultimately lead, assuming we had to bet. The former question is most relevant to neuroscientists; but the latter is what matters here.\n\n\n\n#### 1.6 Clarifying the question\n\n\nConsider the set of cognitive tasks that the human brain can perform, where task performance is understood as the implementation of a specified type of relationship between a set of inputs and a set of outputs.[71](https://www.openphilanthropy.org/brain-computation-report#footnote71_mzce5ud \"I will not attempt a definition of which tasks count as “cognitive,” but the category should be construed as excluding tasks that are intuitively particular to the brain’s biological substrate -- for example, the task of implementing an input-output transformation that will serve as an effective means of predicting how the biological brain will respond to a certain kind of drug, or the task of serving as a good three-pound weight. LeCun and Bengio (2007) gesture at a somewhat similar subset of tasks, which they call the “AI-set”: “Among the set of all possible functions, we are particularly interested in a subset that contains all the tasks involved in intelligent behavior. Examples of such tasks include visual perception, auditory perception, planning, control, etc. The set does not just include specific visual perception tasks (e.g human face detection), but the set of all the tasks that an intelligent agent should be able to learn. In the following, we will call this set of functions the AI-set. Because we want to achieve AI, we prioritize those tasks that are in the AI-set” (p. 4-5). I am also excluding microscopically specified input-output relationships that an actual brain, operating in the type of noisy environments brains evolved in, cannot implement reliably.\") Examples of such tasks might include:\n\n\n* Reading an English-language description of a complex software problem, and, within an hour, outputting code that solves that problem.[72](https://www.openphilanthropy.org/brain-computation-report#footnote72_pw0wfc2 \"See Grace et al. 
(2018) for discussion of a simple version of this task, which involves writing “concise, efficient, and human-readable Python code to implement simple algorithms like quicksort” (p. 19). The median estimate by the experts she surveyed for when AI systems will be able to perform this task was 8.2 years from the time of the survey. GPT-3, a language model released by OpenAI in 2020, is capable of at least some forms of coding (see here for an especially vivid demonstration, here for another example, and here for more discussion).\")\n* Reading a randomly selected paper submitted to the journal *Nature*, and, within a week, outputting a review of the paper of quality comparable to an average peer-reviewer.[73](https://www.openphilanthropy.org/brain-computation-report#footnote73_ycpxatk \"Depending on one’s opinions of the peer review process, perhaps it is debatable whether GPT-3 can do this as well. See here for examples. I chose both the “complex software problem” task and the “review a nature paper” task before the GPT-3 results came out, and they were selected to be tasks that we couldn’t yet do with AI systems.\")\n* Reading newly-generated [Putnam Math competition](https://www.maa.org/math-competitions/putnam-competition) [problems](https://www.maa.org/sites/default/files/pdf/Putnam/Competition_Archive/2018PutnamProblems.pdf), and, within six hours, outputting answers that would receive a perfect score by standard judging criteria.[74](https://www.openphilanthropy.org/brain-computation-report#footnote74_ctrfimt \"See Grace et al. (2018) (p. 16), for discussion of a version of this task. The median estimate by the experts she surveyed for when AI systems will be able to perform this task was 33.8 years from the time of the survey.\")\n\n\nDefining tasks precisely can be arduous. I’ll assume such precision is attainable, but I won’t try to attain it, since little in what follows depends on the details of the tasks in question. I’ll also drop the adjective “cognitive” in what follows.\n\n\nI will also assume that sufficiently powerful computers can in principle perform these tasks (I focus solely on non-quantum computers – see endnote for discussion of quantum brain hypotheses).[75](https://www.openphilanthropy.org/brain-computation-report#footnote75_ab5n17s \"It has been occasionally hypothesized that some form of quantum-level information processing is occurring in the brain (see, for example, Hu and Wu (2004), Penrose and Hameroff (2011), and Fisher (2015) for suggestions in this vein, and see Tegmark (1999) and Litt et al. (2006) for counterarguments). My understanding, though, is that the large majority of experts believe that the brain’s information-processing is purely classical. For example, Sandberg and Bostrom (2008) write that: “Practically all neuroscientists subscribe to the dogma that neural activity is a phenomenon that occurs on a classical scale” (37). My impression is that the most influential arguments against quantum computation have been in the vein of Tegmark (1999), who argues that the timescales of quantum decoherence in the brain (~10^-13 to 10^-20 seconds) are too short to play a role in various possible methods of neural information processing, which proceed on much longer timescales (~10^-3 to 10^-1 seconds) (p. 1). 
That said, there is at least some evidence that non-trivial quantum dynamics play a role in some biological contexts (e.g., photosynthesis, enzyme catalysis, and avian navigation) where arguments that appeal solely to the fact that a biological system is warm/wet/noisy might have ruled them out (my thanks to Prof. David Wallace for suggesting I address this): see, e.g., McFadden and Al-Khalili (2018) for a review. Indeed, Fisher (2015) presents his hypothesis about quantum dynamics in the brain as immune to timescale-based objections. However, my impression at a glance is that his research at this stage is mostly at the level of establishing the theoretical possibility of some form of quantum computation in the brain, as opposed to verifying that such computation is actually occuring. Thus, for example, in this 2019 talk (36:40), he comments: “What I've offered is a story at this stage, if you want it's a partly formed picture puzzle, and what's needed are experiments to discern the precise shapes of the various pieces in this puzzle, and to see whether they actually exist as pieces, what shapes they are, and whether they start fitting together.” In general, the possibility of quantum computation in the brain is a further category of uncertainty; but it’s an additional can of worms, and because the hypothesis appears to play a comparatively small role in mainstream neuroscience, I’m not going to address it in depth.\") This assumption is widely shared both within the scientific community and beyond it. Some dispute it, but I won’t defend it here.[76](https://www.openphilanthropy.org/brain-computation-report#footnote76_trwaeux \"See Nicolesis and Circuel (2015), Lucas (1961), Dreyfus (1972) and Penrose (1994) for various forms of skepticism.\")\n\n\nThe aim of the report is to evaluate the extent to which the brain provides evidence, for some number of FLOP/s *F*, that for any task *T* that the human brain can perform, *T* can be performed with *F*.[77](https://www.openphilanthropy.org/brain-computation-report#footnote77_jyn925w \"Note that F does not need to be enough to match the task-performance of a “superbrain” trained and ready to perform any task that any human can perform: e.g., a brain that represents peak human performance on every task simultaneously. Einstein may do physics that requires x FLOP/s, and Toni Morrison may write novels that require y FLOP/s, but F only needs to be greater than or equal to both x and y: it doesn’t need to be greater than or equal to x+y.\") As a proxy for FLOP/s numbers with this property, I will sometimes talk about the FLOP/s sufficient to run a “task-functional model,” by which I mean a computational model that replicates a generic human brain’s task-performance. Of course, some brains can do things others can’t, but I’ll assume that at the level of precision relevant to this report, human brains are roughly similar, and hence that if *F* FLOP/s is enough to replicate the task performance of a generic human brain, roughly *F* is enough to replicate any task *T* the human brain can perform.[78](https://www.openphilanthropy.org/brain-computation-report#footnote78_3n859o0 \"Herculano-Houzel (2009) reports variation in neuron number within a species at around 10-50%. Reardon et al. 
(2018) write: “Brain size among normal humans varies as much as twofold.” Koch (2016) cites numbers ranging from 1,017 grams to 2,021 grams (though these are for post-mortem measures), and from 975 cm^3 to 1499 cm^3.\")\n\n\nThe project here is related to, but distinct from, directly estimating the *minimum* FLOP/s sufficient to perform any task the brain can perform*.* Here’s an analogy. Suppose you want to build a bridge across the local river, and you’re wondering if you have enough bricks. You know of only one such bridge (the “old bridge”), so it’s natural to look there for evidence. If the old bridge is made of bricks, you could count them. If it’s made of something else, like steel, you could try to figure out how many bricks you need to do what a given amount of steel does. If successful, you’ll end up confident that e.g. 100,000 bricks is enough to build such a bridge, and hence that the minimum is less than this. But how much less is still unclear. You studied an example bridge, but you didn’t derive theoretical limits on the efficiency of bridge-building.\n\n\nThat said, Dr. Paul Christiano expected there to be at least some tasks such that (a) the brain’s methods of performing them are close to maximally efficient, and (b) these methods use most of the brain’s resources (see endnote).[79](https://www.openphilanthropy.org/brain-computation-report#footnote79_g2sh3nw \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “If you include a sufficiently broad range of tasks that the human brain can perform, and require similarly useful task-performance across the full range of inputs to which the brain could be exposed, it is likely that for at least one of the tasks in the relevant profile, for some set of inputs, the brain’s method will (a) be close to maximally algorithmically efficient (e.g., within an order of magnitude or two), and (b) use a substantial portion of the computational resources that the brain has available. For example, if you take a computer from the 60s, and you look at all of the tasks it could perform, Dr. Christiano expects that many of the algorithms it was running (for example: sorting), were close to optimally efficient. As another example, there is a very inefficient algorithm for SAT solving, which takes 2^n time. For many inputs, we can improve on this algorithm by a huge amount, but we can’t for every input: indeed, there is a rough consensus amongst computer scientists that the very inefficient algorithm is close to the best one can do. Indeed, Dr. Christiano expects that for most algorithms, there will be some family of instances on which it does reasonably well. And given how large the space of possible tasks the brain performs is (we can imagine a very wide set of evaluation metrics and input regimes), the density of roughly-optimal-on-some-inputs algorithms doesn’t need to be that high for them to appear in the brain” (p. 7).\") I don’t investigate this claim here, but if true, it would make data about the brain more directly relevant to the minimum adequate FLOP/s budget.\n\n\nThe project here is also distinct from estimating the FLOP/s “equivalent” to the human brain. As I discuss in the report’s [appendix](#section_7), I think the notion of “the FLOP/s equivalent to the brain” requires clarification: there are a variety of importantly different concepts in the vicinity.\n\n\nTo get a flavor of this, consider the bridge analogy again, but assume that the old bridge is made of steel. 
What number of bricks would be “equivalent” to the old bridge? The question seems ill-posed. It’s not that bridges can’t be built from bricks. But we need to say more about what we want to know.\n\n\nI group the salient possible concepts of the “FLOP/s equivalent to the human brain” into four categories:\n\n\n1. [FLOP/s required for task-performance](#section_7.1), with no further constraints on *how* the tasks need to be performed.[80](https://www.openphilanthropy.org/brain-computation-report#footnote80_rkec41c \"It’s not entirely clear which concept Moravec and Kurzweil have in mind, but (1) has some support. See Moravec (1998): “How much further must this evolution proceed until our machines are powerful enough to approximate the human intellect?” (p. 52), and his reply to Anders Sandberg here: “It is the final computation that matters, not the fuss in doing it.” Kurzweil (2005): “if two methods achieve the same result but one uses more computation than the other, the more computationally intensive method will be considered to use only the amount of computation of the less intensive method” (p. 137).\")\n2. [FLOP/s required for task-performance + brain-like-ness constraints](#section_7.2)– that is, constraints on the similarity between how the AI system does it, and how the brain does it.\n3. [FLOP/s required for task-performance + findability constraints](#section_7.3) – that is, constraints on what sorts of training processes and engineering efforts would be able to create the AI system in question.\n4. [Other analogies with human-engineered computers](#section_7.4).\n\n\nAll these categories have their own problems (see section [A.5](#section_7.5) for a summary chart). The first is closest to the report’s focus, but as just noted, it’s hard (at least absent further assumptions) to estimate directly using example systems. The second faces the problem of identifying a non-arbitrary brain-like-ness constraint that picks out a unique number of FLOP/s, without becoming too much like the first. The third brings in a lot of additional questions about what sorts of systems are what sorts of findable. And the fourth, I suggest, either collapses into the first or second, or raises its own questions.\n\n\nIn the hopes of avoiding some of these problems, I have kept the report’s framework broad. The brain-based FLOP/s budgets I’m interested in don’t need to be uniquely “equivalent” to the brain, or as small as theoretically possible, or accommodating of any constraints on brain-like-ness or findability. They just need to be big enough, in principle, to perform the tasks in question.\n\n\nA few other clarifications:\n\n\n* Properties construed as consisting in something other than the implementation of a certain type of input-output relationship (for example, properties like phenomenal consciousness, moral patienthood, or continuity with a particular biological human’s personal identity – to the extent they are so construed) are not included in the definition of the type of task-performance I have in mind. Systems that replicate this type of task-performance may or may not also possess such properties, but what matters here are inputs and outputs.[81](https://www.openphilanthropy.org/brain-computation-report#footnote81_8l1haef \"See Sandberg and Bostrom (2008) (p. 11), for a taxonomy of possible brain-emulation success criteria. See Muehlhauser (2017) for an investigation at Open Philanthropy of consciousness and moral patienthood.\")\n* Many tasks require more than a brain. 
For example, they may require something like a body, or rely partly on information-processing taking place outside the brain.[82](https://www.openphilanthropy.org/brain-computation-report#footnote82_292smym \"There is a fairly widespread discourse related to the importance of “embodiment” in AI and cognitive science more broadly, which I have not engaged with in depth. At a glance, central points seem to be: (a) that the computation a brain performs is importantly adapted to the physical environment in which it operates, and the representations it employs are constrained by the body that implements them (see e.g. Hoffmann and Pfeifer (2012), and the discussion of “Body as constraint” in Wilson and Foglia (2015)), (b) that the morphology of body itself can contribute to control, perception, and computation proper, and that not all information-processing or storage takes place “inside the head” (Müller and Hoffmann (2017), the discussion of “Body as distributor” in Wilson and Foglia (2015), the literature on the “extended mind”), (c) that the body functions to coordinate/regulate the relationship between cognition and action (see “Body as Regulator” in Wilson and Foglia (2015)), and (d) that advanced AI systems won’t be developed until we make it possible for them to learn via engagement in with real-time, complex environments, possibly via robotic bodies (see Medlock (2017); Prof. Anthony Zador also suggested something like this in conversation, see here). These points may well be true, but I do not think they disrupt the conceptual foundations of the present investigation, which aims to estimate the compute sufficient to replicate the brain’s contribution to (possibly embodied) task-performance. If points related to embodiment are thought to extend to the claim that e.g. artificial systems without bodies are incapable, in principle, of solving software problems, competing in Math competitions, or reviewing science papers, then I simply disagree.\") In those cases, I’m interested in the FLOP/s sufficient to replicate the brain’s role.\n\n\n#### 1.7 Existing literature\n\n\n*(This section reviews existing literature.[83](https://www.openphilanthropy.org/brain-computation-report#footnote83_lcms571 \"This literature review draws from the reviews offered by Sandberg and Bostrom (2008) (p. 84-85); and Martins (2012), (p. 3-6). I have supplemented it with other estimates I encountered in my research. In order to limit its scope, I focus on direct attempts to estimate the computation sufficient to run a task-functional model.\") Those interested primarily in the report’s substantive content can skip to [Section 2](#section_2).)*\n\n\nA lot of existing research is relevant to estimating the FLOP/s sufficient to run a task-functional model. But efforts in the mainstream academic literature to address this question directly are comparatively rare (a fact that this report does not alter). Many existing estimates are informal, and they often do not attempt much justification of their methods or background assumptions. 
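\n\n\nTo give a flavor of the style of calculation reviewed in the next subsection, here is a minimal sketch of the most common “mechanistic” arithmetic. The synapse count, average firing rate, and per-spike FLOP budget below are illustrative assumptions only, chosen to fall in the general vicinity of the figures quoted in the footnotes that follow; they are not figures this report endorses:\n\n\n```python\n# Minimal sketch of a 'mechanistic method' estimate (illustrative assumptions only):\n# FLOP/s ~= (total synapses) * (average firing rate, Hz) * (FLOPs per spike through synapse)\n\nsynapse_count_range = (1e14, 1e15)   # assumed range for total synapses in the brain\nfiring_rate_hz_range = (0.1, 10.0)   # assumed range for average spikes per neuron per second\nflop_per_spike_through_synapse = 1   # assumed budget per spike through synapse\n\nlow = synapse_count_range[0] * firing_rate_hz_range[0] * flop_per_spike_through_synapse\nhigh = synapse_count_range[1] * firing_rate_hz_range[1] * flop_per_spike_through_synapse\nprint(f"~{low:.0e} to ~{high:.0e} FLOP/s")  # ~1e+13 to ~1e+16 FLOP/s\n```\n\n\nPlugging in the different synapse counts and firing rates quoted below reproduces most of the spread in the published numbers, since, as the next subsection notes, these estimates differ mainly in their inputs rather than in their method.\n\n\n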
The specific question these estimates consider also varies, and their credibility varies widely.[84](https://www.openphilanthropy.org/brain-computation-report#footnote84_wjleuqg \"The estimates that I think most worth taking seriously are generally the ones I discuss in the report itself.\")\n\n\n#### 1.7.1 Mechanistic method estimates\n\n\nThe most common approach assigns a unit of computation (such as a calculation, a number of bits, or a possibly brain-specific operation) to a spike through a synapse, and then estimates the rate of spikes through synapses by multiplying an estimate of the average firing rate by an estimate of the number of synapses.[85](https://www.openphilanthropy.org/brain-computation-report#footnote85_nhjltas \"Merkle (1989) attempts to estimate the number of spikes through synapses by estimating the energy dissipated by propagating a spike a certain distance, together with the number of synapses per unit distance, rather than counting spikes and synapses directly. He gets ~2e15 synaptic operations, assuming 1 synapse every millimeter, though it is unclear to me what grounds his estimate of synapses per unit distance: “To translate Ranvier ops (1-millimeter jumps) into synapse operations we must know the average distance between synapses, which is not normally given in neuroscience texts. We can estimate it: a human can recognize an image in about 100 milliseconds, which can take at most 100 one-millisecond synapse delays. A single signal probably travels 100 millimeters in that time (from the eye to the back of the brain, and then some). If it passes 100 synapses in 100 millimeters then it passes one synapse every millimeter--which means one synapse operation is about one Ranvier operation” (1989).\") Thus, [Merkle (1989)](https://www.merkle.com/brainLimits.html),[86](https://www.openphilanthropy.org/brain-computation-report#footnote86_tezkcp4 \"Merkle (1989): “We might count the number of synapses, guess their speed of operation, and determine synapse operations per second. There are roughly 10^15 synapses operating at about 10 impulses/second, giving roughly 10^16 synapse operations per second” (see “Other Estimates”).\") [Mead (1990)](https://web.stanford.edu/group/brainsinsilicon/documents/MeadNeuroMorphElectro.pdf),[87](https://www.openphilanthropy.org/brain-computation-report#footnote87_peg2hsk \"Mead (1990): “There are about 10^16 synapses in the brain. A nerve pulse arrives at each synapse about ten times/s, on average. So in rough numbers, the brain accomplishes 10^16 complex operations/s” (p. 1629). Some aspect of this estimate appears to be in error, however, as it seems to suggest the calculation 10^16 synapses × 10 spikes/sec = 10^16 spikes per synapse/sec.\") [Freitas (1996)](http://www.rfreitas.com/Nano/TheFutureOfComputers--Analog--March1996.htm),[88](https://www.openphilanthropy.org/brain-computation-report#footnote88_boayzm3 \"Freitas (1996): “A fair estimate is that the 1.5 kilogram organ has 10^10 neurons with 10^3 synapses firing an average 10 times per second, which is about 10^14 bits/second. Using 64-bit words like the largest supercomputers, that's about one teraflop” (see opening section).\") [Sarpeshkar (1997)](https://thesis.library.caltech.edu/3063/1/Sarpeshkar_R_1997.pdf),[89](https://www.openphilanthropy.org/brain-computation-report#footnote89_k50ybja \"Sarpeshkar (1997): “From the numbers in the first paragraph of Section 5.6.1, we know that there are about 2.4 × 10^14 synapses in each cortex of the brain. 
The average firing rate of cortex is about 5-10 Hz - we shall use 7.5 Hz. Assuming that each synapse is always operational and constantly computing, then the number of synaptic operations per second is 2 × 2.4 × 10^14 × 7.5 = 3.6 × 10^15” (p. 202-203).\") [Bostrom (1998)](https://nickbostrom.com/superintelligence.html),[90](https://www.openphilanthropy.org/brain-computation-report#footnote90_ujtenio \"Bostrom (1998): “The human brain contains about 10^11 neurons. Each neuron has about 5 × 10^3 synapses, and signals are transmitted along these synapses at an average frequency of about 10^2 Hz. Each signal contains, say, 5 bits. This equals 10^17 ops” (see “Hardware Requirements” section).\") [Kurzweil (1999)](https://www.amazon.com/Age-Spiritual-Machines-Computers-Intelligence/dp/B000OYDNBA),[91](https://www.openphilanthropy.org/brain-computation-report#footnote91_qyi74oy \"Kurzweil (1999): “With an estimated average of one thousand connections between each neuron and its neighbors, we have about 100 trillion connections, each capable of a simultaneous calculation... With 100 trillion connections, each computing at 200 calculations per second, we get 20 million billion calculations per second. This is a conservatively high estimate; other estimates are lower by one to three orders of magnitude” (see Chapter 6, section “Achieving the Hardware Capacity of the Human Brain”).\") [Dix (2005)](https://alandix.com/academic/papers/brain-and-web-2005/),[92](https://www.openphilanthropy.org/brain-computation-report#footnote92_a89cm02 \"Dix (2005): “At a simplified level each neuron’s level of activation is determined by pulses generated at the (1000 to 10,000) synapses connected to it. Some have a positive excitatory effect [sic] some are inhibitory. A crude model simply adds the weighted sum and 'fires' the neuron if the sum exceeds a value. The rate of this activity, the 'clock period' of the human brain is approximately 100 Hz - very slow compared to the GHz of even a home PC, but of course this happens simultaneously for all 10 billion neurons! If we think of the adding of the weighted synaptic value as a single neural operation (nuop) then each neuron has approximately 10,000 nuops per cycle, that is 1mega-nuop per second. In total the 10 billion neurons in the brain perform 10 peta-nuop per second.”\") [Malickas (2007)](https://www.aleph.se/Trans/Global/Uploading/gupload.html),[93](https://www.openphilanthropy.org/brain-computation-report#footnote93_cmsmgsb \"Malickas (2007): “The evaluation of the computational power of [sic] human brain [sic] very uncertain at this time. Some estimates of brain power could be based on the brain synapses number and neurons [sic] firing rate. The human brain have [sic] a 10^11 neurons and each neuron has [sic] average of 10^2 - 10^4 synapses. The average firing rate of brain neurons is about 100-1000 Hz. 
As result the brain modeling would require the computational power of 10^11 neurons × (10^2-10^4 synapses/neuron) × (100-1000 Hz) = 10^15 - 10^18 synapses/second” (see section “Computer”).\") and [Tegmark (2017)](https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1586106499&sr=8-1)[94](https://www.openphilanthropy.org/brain-computation-report#footnote94_h3c4bw8 \"Tegmark (2017): “Multiplying together about 10^11 neurons, about 10^4 connections per neuron and about one (10^0) firing per neuron each second might suggest that about 10^15 FLOPS (1 petaFLOPS) suffice to simulate a human brain, but there are many poorly understood complications, including the detailed timing of firings and the question of whether small parts of neurons and synapses need to be simulated too” (see endnote 58, p. 340). That said, Tegmark presents this less as an independent estimate of his own, and more as an example of a certain methodology.\") are all variations on this theme.[95](https://www.openphilanthropy.org/brain-computation-report#footnote95_o5m6su3 \"Sandberg and Bostrom (2008) also cite Fiala (2007) as estimating “10^14 synapses, identity coded by 48 bits plus 2 × 36 bits for pre‐and postsynaptic neuron id, 1 byte states. 10 ms update time… 256,000 terabytes/s” (p. 85), and Seitz (no date) as estimating “50-200 billion neurons, 20,000 shared synapses per neuron with 256 distinguishable levels, 40 Hz firing” (p. 85). However, I wasn’t able to find the original papers on a quick search. Adams (2013) estimates ~1e15 FLOP/s in a blog post, but his estimate of neuron count is off by two orders of magnitude.\") Their estimates range from ~1e12 to ~1e17 (though using different basic units of computation),[96](https://www.openphilanthropy.org/brain-computation-report#footnote96_m0qn8pp \"I haven't investigated comparisons between these different units and FLOP/s (though see Sandberg and Bostrom (2008), p. 91, for some discussion of the relationship between FLOP/s and MIPS).\") but the variation results mainly from differences in estimated synapse count and average firing rate, rather than differences in substantive assumptions about how to make estimates of this kind.[97](https://www.openphilanthropy.org/brain-computation-report#footnote97_kogm211 \"As I note in Section 2.1.1.1, many of these estimates rely on average spike rates that seem to me too high.\") In this sense, the helpfulness of these estimates is strongly correlated: if the basic approach is wrong, none of them are a good guide.\n\n\nOther estimates use a similar approach, but include more complexity. [Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) includes synaptic conductances (see discussion in [section 2.1.1.2.2](#section_2.1.1.2.2)), learning, and firing decisions in a lower bound estimate (6e16 FLOP/s);[98](https://www.openphilanthropy.org/brain-computation-report#footnote98_8j3fb59 \"Sarpeshkar (2010): “The brain’s neuronal cells output ~1ms pulses (spikes) at an average rate of 5 Hz [55]. The 240 trillion synaptic connections [1] amongst the brain’s neurons thus lead to a computational rate of at least 10^15 synaptic operations per second. A synapse implements multiplication and filtering operations on every spike and sophisticated learning operations over multiple spikes. 
If we assume that synaptic multiplication is at least one floating-point operation (FLOP), the 20 ms second-order filter impulse response due to each synapse is 40 FLOPS, and that synaptic learning requires at least 10 FLOPS per spike, a synapse implements at least 50 FLOPS of computation per spike. The nonlinear adaptation-and- thresholding computations in the somatic regions of a neuron implement almost 1200 floating-point operations (FLOPS) per spike [66]. Thus, the brain is performing at least 50 FLOPS × 5Hz × 240 × 1012 + 1200 FLOPS × 5Hz × 22 × 109 = [approximate] 6 × 1016 FLOPS per second” (p. 748-749).\") [Martins et al. (2012)](https://repositorium.sdum.uminho.pt/bitstream/1822/20756/1/NanoroboticBrainMonitoring2012_%20draft%20with%20page%20numbers.pdf) estimate the information-processing rate of different types of neurons in different regions, for a total of ~5e16 bits/sec in the whole brain;[99](https://www.openphilanthropy.org/brain-computation-report#footnote99_cp6rkin \"Martins et al. (2012): “These data may be combined using Eqns. (1) and (2) to yield an estimate of the synaptic-processed spike rate of Tss = (4.31 ± 0.86) × 1015 spikes/sec and the synaptic-processed bit rate of Tsb = (5.52 ± 1.13) × 1016 bits/sec for the entire human brain” (p. 14).\") and [Kurzweil (2005)](https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook/dp/B000QCSA7C) offers an upper bound estimate for a personality-level simulation of 1e19 calculations per second – an estimate that budgets 1e3 calculations per spike through synapse to capture nonlinear interactions in dendrites.[100](https://www.openphilanthropy.org/brain-computation-report#footnote100_44gf4xt \"Kurzweil (2005): “The ‘fan out’ (number of interneuronal connections) per neuron is estimated at 103. With an estimated 1011 neurons, that’s about 1014 connections. With a reset time of five milliseconds, that comes to about 1016 synaptic transactions per second. Neuron-model simulations indicate the need for about 103 calculations per synaptic transaction to capture the nonlinearities (complex interactions) in the dendrites and other neuron regions, resulting in an overall estimate of about 1019 cps for simulating the human brain at this level. We can therefore consider this an upper bound, but 1014 to 1016 cps to achieve functional equivalence of all brain regions is likely to be sufficient” (p. 124-125).\") Still others attempt estimates based on protein interactions ([Thagard (2002)](http://cogsci.uwaterloo.ca/Articles/molecules.html), 1e21 calculations/second);[101](https://www.openphilanthropy.org/brain-computation-report#footnote101_1c0argn \"Thagard (2002): “If we count the number of processors in the brain as not just the number of neurons in the brain, but the number of proteins in the brain, we get a figure of around a billion times 100 billion, or 1017. Even if it is not legitimate to count each protein as a processor all by itself, it is still evident from the discussion in Section 3 that the number of computational elements in the brain is more than the 1011 or 1012 neurons. Moreover, the discussion of hormones and other neuroregulators discussed in Section 5 shows that the number of computationally relevant causal connections is far greater than the thousand or so synaptic connections per neuron. I do not know how to estimate the number of neurons with hormonal receptors that can be influenced by a single neuron that secretes hormones or that activates glands which secrete hormones, but the number must be huge. 
If it is a million, and if every brain protein is viewed as a mini-processor, then the computational speed of the brain is on the order of 1023 calculations per second, far larger than the 1015 calculations per second that Kurzweil expects to be available by 2020, although less than where he expects computers to be by 2060. Thus quantitatively it appears that digital computers are much farther away than Kurzweil and Moravec estimate from reaching the raw computational power of the human brain” (see Section 7, “Artificial Intelligence”).\") microtubules ([Tuszynski (2006)](https://www.terasemjournals.org/GNJournal/GN0104/tuszynski_01e.html), 1e21 FLOP/s),[102](https://www.openphilanthropy.org/brain-computation-report#footnote102_i8dgg7m \"Tuszynski (2006): “There are four c-termini states per dimer because we have two states per monomer. There could be at least four states per electron inside the tubulin dimer, as they hop between two locations. There could be at least two computational changes due to the GTP hydrolysis. Thus there are 4 × 4 × 2, which is 32 states per dimer; thirteen dimers per ring; and 1,250 rings per midsize microtubule. If you do the math, the result is about 100 kilobytes per microtubule. Calculating the number of microtubules per neuron, you get one gigabyte of processing power per neuron. There are ten billion neurons. You have ten to the 19th bytes per brain and they oscillate or make transitions in this state on the order of nanoseconds, and ten to the 28th flops per brain” (p. 4-5 on the website).\") individual neurons ([von Neumann (1958)](https://www.amazon.com/Computer-Brain-Silliman-Memorial-Lectures/dp/0300181116), 1e11 bits/second);[103](https://www.openphilanthropy.org/brain-computation-report#footnote103_i1kkgg4 \"von Neumann (1958): “Thus the standard receptor would seem to accept about 14 distinct digital impressions per second, which can probably be reckoned as the same number of bits. Allowing 1010 nerve cells, assuming that each one of them is under suitable conditions essentially an (inner or outer) receptor, a total input of 14 × 1010 bits per second results” (p. 63).\") and possible computations performed by dendrites and other neural mechanisms ([Dettmers (2015)](https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/), 1e21 FLOP/s).[104](https://www.openphilanthropy.org/brain-computation-report#footnote104_qogkimq \"Dettmers (2015): “So my estimate would be 1.075×1021 FLOPS for the brain, the fastest computer on earth as of July 2013 has 0.58×1015 FLOPS for practical application (more about this below)” (see section “estimation of cerebellar input/output dimensions”).\")\n\n\nA related set of estimates comes from the literature on brain simulations. [Ananthanarayanan et al. (2009)](https://people.eecs.berkeley.edu/~demmel/cs267_Spr10/Lectures/RajAnanthanarayanan_SC09-a63.pdf) estimates >1e18 FLOP/s to run a real-time human brain simulation;[105](https://www.openphilanthropy.org/brain-computation-report#footnote105_baa2ca3 \"See Ananthanarayanan et al. (2009), Figure 8 (p. 10). 
Greenemeier (2009) cites IBM’s Dharmendra Modha (one of the authors on the paper) as estimating that a computer comparable to the human brain would need to perform 4e16 operations per second, but I’m not sure of his methodology.\") [Waldrop (2012)](https://www.nature.com/news/computer-modelling-brain-in-a-box-1.10066) cites Henry Markram as estimating 1e18 FLOP/s to run a very detailed simulation;[106](https://www.openphilanthropy.org/brain-computation-report#footnote106_ugg4eka \"Waldrop (2012): “The computer power required to run such a grand unified theory of the brain would be roughly an exaflop, or 1018 operations per second — hopeless in the 1990s. But Markram was undaunted: available computer power doubles roughly every 18 months, which meant that exascale computers could be available by the 2020s (see 'Far to go'). And in the meantime, he argued, neuroscientists ought to be getting ready for them” (see section “Markram’s big idea”). See also this chart. \") Markram, in a [2018 video (18:28)](https://youtu.be/DvE-nphgswY?t=1112), estimates that you’d need ~4e29 FLOP/s to run a “real-time molecular simulation of the human brain”;[107](https://www.openphilanthropy.org/brain-computation-report#footnote107_lgmt7y6 \"He also discusses a possible lower estimate around 19:43, but the video is too blurry for me to read the numbers.\") and Eugene Izhikevich estimates that a real-time brain simulation would require ~1e6 processors running at 384 GHz.[108](https://www.openphilanthropy.org/brain-computation-report#footnote108_c6ytj55 \"See here. See also Izhikevich and Edelman (2007).\")\n\n\n[Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) also estimate the FLOP/s requirements for brain emulations at different levels of detail. Their estimates range from 1e15 FLOP/s for an “analog network population model,” to 1e43 FLOP/s for emulating the “stochastic behavior of single molecules.”[109](https://www.openphilanthropy.org/brain-computation-report#footnote109_shkthll \"See Sandberg and Bostrom (2008) (p. 80-81). My impression is that these estimates were very rough, and their 1e18 estimate for a spiking neural network seems inconsistent with the estimate methodology they use elsewhere in the chart, since 1e15 entities × 10 FLOPs per entity × 1e3 time-steps per second = 1e19 FLOP/s.\") They report that in an informal poll of attendees at a workshop on whole brain emulation, the consensus appeared to be that the required level of resolution would fall between “Spiking neural network” (1e18 FLOP/s) and “Metabolome” (1e25 FLOP/s).[110](https://www.openphilanthropy.org/brain-computation-report#footnote110_kld6skz \"Strong selection effects were likely at work in determining who was present at the workshop.\")\n\n\nDespite their differences, I group all of these estimates under the broad heading of the “mechanistic method,” as all of them attempt to identify task-relevant causal structure in the brain’s biological mechanisms, and quantify it in some kind of computational unit.\n\n\n\n#### 1.7.2 Functional method estimates\n\n\nA different class of estimates focuses on the FLOP/s sufficient to replicate the function of some portion of the brain, and then attempts to scale up to an estimate for the brain as a whole (the “functional method”); a toy version of the underlying scale-up arithmetic is sketched below. 
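The sketch is purely illustrative: the function name and the specific inputs – the FLOP/s attributed to the modeled portion, and the fraction of the brain that portion is taken to represent – are placeholders of mine rather than figures from any particular estimate, and each estimate discussed next supplies its own versions of both.

```python
def functional_method_estimate(portion_flops, portion_fraction_of_brain):
    """Scale a FLOP/s figure for one portion of the brain up to the whole brain.

    portion_flops: FLOP/s judged sufficient to replicate the portion's function.
    portion_fraction_of_brain: fraction of the brain (by neurons, volume, mass,
        energy, etc. -- the choice of measure matters) the portion represents.
    """
    return portion_flops / portion_fraction_of_brain

# Placeholder numbers only: a portion modeled with ~1e11 FLOP/s that is taken
# to represent ~0.1% of the brain scales up to ~1e14 FLOP/s overall.
print(f"{functional_method_estimate(1e11, 1e-3):.1e} FLOP/s for the whole brain")
```

The force of any such estimate then turns on how well the modeled portion stands in for the rest of the brain, and on which measure of “fraction of the brain” is used.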
[Moravec (1988)](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2), for example, estimates the computation required to do what the retina does (1e9 calculations/second) and then scales up (1e14 calc/s).[111](https://www.openphilanthropy.org/brain-computation-report#footnote111_xu192cs \"See Moravec (1988), Chapter 2 (p. 51-74). See also Moravec (1988), Moravec (2008). I discuss this estimate in detail in Section 3.1.\") [Merkle (1989)](https://www.merkle.com/brainLimits.html) performs a similar retina-based calculation and gets 1e12-1e14 ops/sec.[112](https://www.openphilanthropy.org/brain-computation-report#footnote112_y2n1wbk \"Kurzweil (2005) also cites Zaghloul and Boahen (2006) as an example of replicating retinal functionality, but does not attempt a quantitative estimate using it (endnote 41, p. 532).\")\n\n\n[Kurzweil (2005)](https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook/dp/B000QCSA7C) offers a functional method estimate (1e14 calcs/s) based on work by Lloyd Watts on sound localization,[113](https://www.openphilanthropy.org/brain-computation-report#footnote113_77eoep9 \"Kurzweil (2005): “Another estimate comes from the work of Lloyd Watts and his colleagues on creating functional simulations of regions of the human auditory system, which I discuss further in chapter 4… Watts’s own group has created functionally equivalent re-creations of these brain regions derived from reverse engineering. He estimates that 1011 cps are required to achieve human-level localization of sounds. The auditory cortex regions responsible for this processing comprise at least 0.1 percent of the brain’s neurons. So we again arrive at a ballpark estimate of around 1014 cps (1011 cps × 103)” (p. 123).\") another (1e15 calcs/s) based on an cerebellar simulation at the University of Texas;[114](https://www.openphilanthropy.org/brain-computation-report#footnote114_m2kycos \"Kurzweil (2005): “Yet another estimate comes from a simulation at the University of Texas that represents the functionality of a cerebellum region containing 104 neurons; this required about 108 cps, or about 104 cps per neuron. Extrapolating this over an estimated 1011 neurons results in a figure of about 1015 cps for the entire brain” (p. 123).\") and a third (1e14 calcs/s), in his [2012 book](https://www.amazon.com/How-Create-Mind-Thought-Revealed-ebook/dp/B007V65UUG/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=&sr=), based on the FLOP/s he estimates is required to emulate what he calls a “pattern recognizer” in the neocortex.[115](https://www.openphilanthropy.org/brain-computation-report#footnote115_uziij9h \"Kurzweil (2012): “emulating one cycle in a single pattern recognizer in the biological brain’s neocortex would require about 3,000 calculations. Most simulations run at a fraction of this estimate. With the brain running at about 102 (100) cycles per second, that comes to 3 × 105 (300,000) calculations per second per pattern recognizer. Using my estimate of 3 × 108 (300 million) pattern recognizers, we get about 1014 (100 trillion) calculations per second” (p. 
195).\") [Drexler (2019)](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf) uses the FLOP/s required for various deep learning systems (specifically: [Google’s Inception architecture](https://arxiv.org/pdf/1409.4842.pdf), [Deep Speech 2](http://proceedings.mlr.press/v48/amodei16.pdf), and [Google’s neural machine translation model](https://arxiv.org/pdf/1609.08144.pdf)) to generate various estimates he takes to suggest that 1e15 FLOP/s is sufficient to match the brain’s functional capacity.[116](https://www.openphilanthropy.org/brain-computation-report#footnote116_ab1y3do \"Drexler (2019): “In light of the above comparisons, all of which yield values of RPFLOP in the 10 to 1000 range, it seems likely that 1 PFLOP/s machines equal or exceed the human brain in raw computation capacity. To draw the opposite conclusion would require that the equivalents of a wide range of seemingly substantial perceptual and cognitive tasks would consistently require no more than an implausibly small fraction of total neural activity” (p. 188).\")\n\n\n#### \n\n\n#### 1.7.3 Limit method estimates\n\n\n[Sandberg (2016)](https://arxiv.org/pdf/1602.04019.pdf) uses Landauer’s principle to generate an upper bound of ~2e22 irreversible operations per second in the brain – a methodology I consider in more detail in [Section 4](#section_4).[117](https://www.openphilanthropy.org/brain-computation-report#footnote117_nfzjc5y \"Sandberg (2016): “20 W divided by 1.3 × 10-21 J (the Landauer limit at body temperature) suggests a limit of no more than 1.6·1022 irreversible operations per second” (p. 5).\") [De Castro (2013)](https://link.springer.com/article/10.1007/s11023-013-9302-x) estimates a similar limit, also from Landauer’s principle, on perceptual operations performed by the parts of the brain involved in rapid, automatic inference (1e23 operations per second).[118](https://www.openphilanthropy.org/brain-computation-report#footnote118_gjxsb8g \"De Castro (2013): “If system 1 is considered to be a powerful computer operating at maximum Landauer efficiency—i.e., at a minimum energy cost equal to kBT ln(2)—that works at an average brain temperature, the number of perceptual operations per second that it could perform is on the order of 1023 (1/kB), depending on the idiosyncratic power of the brain” (p. 483).\") I have yet to encounter other attempts to bound the brain’s overall computation via Landauer’s principle,[119](https://www.openphilanthropy.org/brain-computation-report#footnote119_dl1zb98 \"Though there is some discussion of it on Metaculus.\") though many papers discuss related issues in the brain and in biological systems more broadly.[120](https://www.openphilanthropy.org/brain-computation-report#footnote120_k8el95a \"For example, Laughlin et al. (1998) estimate that “synapses and cells are using 105 to 108 times more energy than the thermodynamic minimum” (the minimum they have in mind is on the order of a kT per bit “observed”); and Levy et al. (2014) argue that once the costs of communication and computation in the brain are adequately distinguished, it is possible to identify places in which the energy efficiency of neural computation approaches the minimum set by Landauer. For more on the energy efficiency of neural computation, see also Laughlin (2001), Attwell and Laughlin (2001), Balasubramanian et al. (2001), Hasenstaub et al. (2010), Levy and Baxter (1996), Skora et al. (2017), Levy and Baxter (2002), Balasubramanian and Berry (2002), Niven et al. 
(2007), Lennie (2003), Howarth et al. (2010), and Sarpeshkar (2010), Chapter 23. For discussions of thermodynamics in the brain in particular, see Collel and Fauquet (2015), Varpula (2013), Deli et al. (2017), and Street (2016). Work on the “free energy principle” (see e.g. Friston (2010)) in the context of the brain also has connections to thermodynamics. In a not-specifically-neural context, Kempes et al. (2017) argue: “Here we show that the computational efficiency of translation, defined as free energy expended per amino acid operation, outperforms the best supercomputers by several orders of magnitude, and is only about an order of magnitude worse than the Landauer bound” (p. 1); and Wolpert (2016) attempts to extend a version of Landauer’s reasoning to derive the minimal free energy required by an organism to run a stochastic map from sensor inputs to actuator outputs. See also Ouldridge and ten Wolde (2017), Ouldridge (2017), Sartori et al. (2014), Mehta and Schwab (2012), and Mehta et al. (2016).\")\n\n\n\n#### 1.7.4 Communication method estimates\n\n\n[AI Impacts](https://aiimpacts.org/brain-performance-in-teps/) estimates the communication capacity of the brain (measured as “traversed edges per second” or [TEPS](https://en.wikipedia.org/wiki/Traversed_edges_per_second)), then combines this with an observed ratio of TEPS to FLOP/s in some human-engineered computers, to arrive at an estimate of brain FLOP/s (~1e16-3e17 FLOP/s).[121](https://www.openphilanthropy.org/brain-computation-report#footnote121_29ofeeo \"AI Impacts: “Among a small number of computers we compared4, FLOPS and TEPS seem to vary proportionally, at a rate of around 1.7 GTEPS/TFLOP. We also estimate that the human brain performs around 0.18 – 6.4 × 1014 TEPS. Thus if the FLOPS:TEPS ratio in brains is similar to that in computers, a brain would perform around 0.9 – 33.7 × 1016 FLOPS.5 We have not investigated how similar this ratio is likely to be.” (See section “Conversion from brain performance in TEPS”).\") I discuss methods in this broad category – what I call the “communication method” – in [Section 5](#section_5).\n\n\nLet’s turn now to evaluating the methods themselves. Rather than looking at all possible ways of applying them, my discussion will focus on what seem to me like the most plausible approaches I’m aware of, and the most important arguments/objections.\n\n\n\n2 The mechanistic method\n------------------------\n\n\nThe first method I’ll be discussing – the “mechanistic method” – attempts to estimate the computation required to model the brain’s biological mechanisms at a level of detail adequate to replicate task performance.\n\n\nSimulating the brain in extreme detail would require enormous amounts of computational power.[122](https://www.openphilanthropy.org/brain-computation-report#footnote122_g9dmi27 \"See e.g. the rough estimates from Sandberg and Bostrom (2008) (p. 80-81), to the effect that emulating the states of the protein complexes in the brain would require 1e27 FLOP/s, and that emulating the stochastic behavior of single molecules in the brain would require 1e43 FLOP/s. Henry Markram, in a 2018 video (18:28), estimates the FLOP/s burdens of running a “real-time molecular simulation of the human brain” at 4E29 FLOP/s. Today’s top supercomputers can do roughly 1e17 FLOP/s. 
Mike Frank projects that 1e21 FLOP/s would require more than a gigawatt of power in 2030 -- comparable to the power generated by the Hoover Dam -- and his chart suggests that physical limits would begin to cause serious problems for performing many orders of magnitude more than that on currently-reasonable amounts of power..\") Which details would need to be included in a computational model, and which, if any, could be left out or summarized?\n\n\nThe approach I’ll pursue focuses on signaling between cells. Here, the idea is that for a process occurring in a cell to matter to task-performance, it needs to affect the type of signals (e.g. neurotransmitters, neuromodulators, electrical signals at gap junctions, etc.) that cell sends to other cells.[123](https://www.openphilanthropy.org/brain-computation-report#footnote123_jzdna4c \"I first encountered the idea that the computational relevance of processes within the neuron are bottlenecked by intercellular signaling via one of our technical advisors, Dr. Dario Amodei. See also Open Philanthropy's non-verbatim notes from a conversation with Prof. Dong Song: “Prof. Song thinks that everyone should agree that neurons are the fundamental computational unit of the brain. If you can replicate all the neuron activity, you’ll probably be able to replicate brain function. Neurons communicate with each other via spikes. Variables internal to a neuron are important to determining the neuron’s spiking behavior in response to inputs, but the other neurons do not know or care about these internal variables. So as long as you can replicate the input-output mapping at the level of spiking, you are basically replicating the relevant function of a single neuron. So if you have a good spiking neuron model, and you connect your neurons correctly, you should be able to replicate brain function” (p. 2). Robin Hanson gestures at a similar idea in the beginning of his his 2017 TED talk. My general impression was that almost all of the neuroscientists I spoke to took something like this kind of paradigm for granted. \") Hence, a model of that cell that replicates its signaling behavior (that is, the process of receiving signals, “deciding” what signals to send out, and sending them) would replicate the cell’s role in task-performance, even if it leaves out or summarizes many other processes occuring in the cell. Do that for all the cells in the brain involved in task-performance, and you’ve got a task-functional model.\n\n\nI’ll divide the signaling processes that might need to be modeled into three categories:\n\n\n1. *Standard neuron signaling*.[124](https://www.openphilanthropy.org/brain-computation-report#footnote124_c3am1ey \"\\\"Standard\\\" here indicates “the type of neuron signaling people tend to focus on.” Whether it is the signaling method that the brain relies on most heavily is a more substantive question.\") I’ll divide this into two parts:\n\t* *Synaptic transmission*. The signaling process that occurs at a chemical synapse as a result of a spike.\n\t* *Firing decisions*. The processes that cause a neuron to spike or not spike, depending on input from chemical synapses and other variables.\n2. *Learning*. Processes involved in learning and memory formation (e.g., synaptic plasticity, intrinsic plasticity, and growth/death of cells and synapses), where not covered by (1).\n3. *Other signaling mechanisms*. Any other signaling mechanisms (neuromodulation, electrical synapses, ephaptic effects, glial signaling, etc.) 
not covered by (1) or (2).\n\n\nAs a first-pass framework, we can think of synaptic transmission as a function from spiking inputs at synapses to some sort of output impact on the post-synaptic neuron; and of firing decisions as (possibly quite complex) functions that take these impacts as inputs, and then produce spiking outputs – outputs which themselves serve as inputs to downstream synaptic transmission. Learning changes these functions over time (though it can involve other changes as well, like growing new neurons and synapses). Other signaling mechanisms do other things, and/or complicate this basic picture.\n\n\n\n[![mmbasicframeworklong2.png](https://www.openphilanthropy.org/files/Blog/mmbasicframeworklong2.png)](https://www.openphilanthropy.org/files/Blog/mmbasicframeworklong2.png)**Figure 5: Basic framework I use for the mechanistic method.**\n\n\nThis isn’t an ideal carving, but hopefully it’s helpful regardless.[125](https://www.openphilanthropy.org/brain-computation-report#footnote125_z0yp9du \"In particular, the categories plausibly overlap: much of the standard neuron signaling in the brain may be in the service of what would generally be folk-theoretically understood as “learning” (see Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “it might be that all of the neurons and synapses in the brain are there in order to make the brain more likely to converge on a solution while learning,” (p. 7)); various alternative signaling mechanisms (for example, neuromodulation, and signaling in certain types of glial cells) may themselves be central to learning as well.\") Here’s the mechanistic method formula that results:\n\n\n\n> Total FLOP/s = FLOP/s for standard neuron signaling + \n> \n> FLOP/s for learning + \n> \n> FLOP/s for other signaling mechanisms\n> \n> \n\n\nI’m particularly interested in the following argument:\n\n\n1. You can capture standard neuron signaling and learning with somewhere between ~1e13-1e17 FLOP/s overall.\n2. This is the bulk of the FLOP/s burden (other processes may be important to task-performance, but they won’t require comparable FLOP/s to capture).\n\n\nI’ll discuss why one might find (1) and (2) plausible in what follows. I don’t think it is at all clear that these claims are true, but they seem plausible to me, partly on the merits of various arguments I’ll discuss, and partly because some of the experts I engaged with were sympathetic (others were less so). I also discuss some ways this range could be too high, and too low.\n\n\n#### 2.1 Standard neuron signaling\n\n\nHere is the sub-formula for standard neuron signaling:\n\n\n\n> FLOP/s for standard neuron signaling = FLOP/s for synaptic transmission + FLOP/s for firing decisions\n> \n> \n\n\nI’ll budget for each in turn.\n\n\n#### 2.1.1 Synaptic transmission\n\n\nLet’s start with synaptic transmission. This occurs as a result of spikes through synapses, so I’ll base this budget on *spikes through synapses per second × FLOPs per spike through synapse* (I discuss some assumptions this involves below).\n\n\n\n#### 2.1.1.1 Spikes through synapses per second\n\n\nHow many spikes through synapses happen per second?\n\n\nAs noted above, the human brain has roughly 100 billion neurons.[126](https://www.openphilanthropy.org/brain-computation-report#footnote126_chajgdw \"Azevedo et al. 
(2009): “We find that the adult male human brain contains on average 86.1 ± 8.1 billion NeuN-positive cells (“neurons”) and 84.6 ± 9.8 billion NeuN-negative (“nonneuronal”) cells” (532). My understanding is that the best available method of counting neurons is isotropic fractionation, which proceeds by dissolving brain structures into a kind of homogenous “brain soup,” and then counting cell nuclei (see Herculano-Houzel and Lent (2005) for a more technical description of the process, and Bartheld et al. (2016) for a history of cell-counting in the brain). Note that there may be substantial variation in cell counts between individuals (for example, according to Bartheld et al. (2016) (p. 9), citing Haug (1986) and Pakkenberg and Gundersen (1997), neocortical neuron count may vary by a factor of more than two, though I haven’t checked these further citations).\") Synapse count appears to be more uncertain,[127](https://www.openphilanthropy.org/brain-computation-report#footnote127_o6sytwb \"See e.g. Pakkenberg et al. (2002): “Synapses have a diameter of 200–500 nm and can only be seen by electron microscopy. The primary problem in assessing the number of synapses in human brains is their lack of resistance to the decay starting shortly after death” (p. 98).\") but most estimates I’ve seen fall in the range of an average of 1,000-10,000 synapses per neuron, and between 1e14 and 1e15 overall.[128](https://www.openphilanthropy.org/brain-computation-report#footnote128_upexhe0 \"Kandel et al. (2013): “An average neuron forms and receives 1,000 to 10,000 synaptic connections. Thus 1014 to 1015 synaptic connections are formed in the brain” (p. 175). Henry Markram uses 1e15 total synapses in this video (18:31); AI Impacts suggests 1.8-3.2e14. A number of synapse estimates focus on the cerebral cortex, and in particular on the neocortex (the cerebral cortex is divided into two parts, the neocortex, and the allocortex, but Swenson (2006) suggests that “most of the cerebral cortex is neocortex”). For example: Tang et al. (2001), for example, write that “The average total number of synapses in the neocortex of five young male brains was 164 × 1012 (CV = 0.17)” (p. 258); Pakkenberg et al. (2003): “The total number of synapses in the human neocortex is approximately 0.15 × 1015 (0.15 quadrillion) … On average, the neocortical neurons thus have about 7000 synapses each for intracortical reception and exchange of information” (p. 95 and 98); Zador (1999) writes that “A pyramidal neuron in the cortex receives excitatory synaptic input from 1e3 to 1e4 other neurons” (p. 1219) (he cites Shepherd (1990) for this number, though I haven’t followed up on the citation); Ananthanarayanan et al. (2009): “Cognition and computation arise from the cerebral cortex; a truly complex system that contains roughly 20 billion neurons and 200 trillion synapses” (Section 6). AI Impacts suggests that their impression is that this focus on the neocortex derives “from the assumption that the neocortex contains the great bulk of synapses in the brain” -- an impression that I share. They suggest that this assumption may derive in part from the fact that the neocortex represents the bulk of the brain’s volume. The cerebral cortex contains a minority of the brain’s neurons (about 19%, according to Azevedo et al. (2009) (p. 536)), but almost all of the rest reside in the cerebellum, and about 50 billion of those are non-neocortical cerebellar granule cells (at least according to Llinás et al. (2004) (p. 
277)), which appear to have a comparatively small number of synapses each: “[Granule] cells are the most numerous in the CNS; there are about 5 × 1010 cerebellar granule cells in the human brain. Each cell has four or five short dendrites (each less than 30 μm long) that end in an expansion called a dendritic claw (see fig. 7.4C in chapter 7).” Wikipedia cites Llinás et al. (2004) as grounds for attributing 80-100 synaptic connections to granule cells, but I haven’t been able to find the relevant number. The cerebellum also contains Purkinje cells (up to 1.5e7, according to Llinás et al. (2004) (p. 276)), which can have over 100,000 synapses each, though I’m not sure average number (see Napper and Harvey (1988): “We conclude that there are some 175,000 parallel fiber synapses on an individual Purkinje cell dendritic tree in the cerebellar cortex of the rat” (abstract), though this is an old estimate). I have not attempted to estimate the synapses in the cerebellum in particular, and I am not sure the extent to which synapse counts for granule cells and Purkinje cells overlap (a possibility that could lead to double counting). AI Impacts, on the basis of energy consumption and volume estimates for the neocortex, guesses the number of synapses in the entire brain is “somewhere between 1.3 and 2.3 times the number in the cerebral cortex.”\")\n\n\nHow many spikes arrive at a given synapse per second, on average?\n\n\n* Maximum neuron firing rates can exceed 100 Hz,[129](https://www.openphilanthropy.org/brain-computation-report#footnote129_8jywjmc \"Wang et al. (2016): “By recording in human, monkey, and mouse neocortical slices, we revealed that FS neurons in human association cortices (mostly temporal) could generate APs at a maximal mean frequency (Fmean) of 338 Hz and a maximal instantaneous frequency (Finst) of 453 Hz, and they increase with age” (p. 1). Marblestone et al. (2013): “certain neurons spike at 500 Hz or faster (Gittis et al. (2010))” (section 2.2).\") but *in vivo* recordings suggest that neurons usually fire at lower rates – between 0.01 and 10 Hz.[130](https://www.openphilanthropy.org/brain-computation-report#footnote130_hqxkyyt \"Barth and Poulet (2012) (p. 4-5), list a large number firing rates overserved in rat neurons, almost all of which appear to be below 10 Hz. Buzaki and Mizuseki (2014): “Recent quantifications of firing patterns of cortical pyramidal neurons in the intact brain have shown that the mean spontaneous and evoked firing rates of individual neurons span at least four orders of magnitude and that the distribution of both stimulus-evoked and spontaneous activity in cortical neurons obeys a long-tailed, typically lognormal, pattern” (p. 266). I have not attempted to calculate mean rates using the numbers in Buzaki and Mizuseki (2014). See also the studies cited by AI impacts in the section titled “estimates of the rate of firing in non-human visual cortex.”\")\n* Experts I engaged with tended to use average firing rates of 1-10 Hz.[131](https://www.openphilanthropy.org/brain-computation-report#footnote131_63gjxz9 \"Anthony Zador used an average rate of 1 Hz (see Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador, p. 4). Konrad Kording suggested that neurons run at roughly 10 Hz (see Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording). Sarpeshkar (citing Attwell and Laughlin (2001)), uses 5 Hz. Ananthanarayanan et al. 
(2009) suggest that the average neural firing rate is “typically at least 1 Hz” (3.1.2).\")\n* Energy costs limit spiking. [Lennie (2003)](http://www2.bcs.rochester.edu/sites/plennie/pdfs/Lennie03a.pdf#page=3), for example, uses energy costs to estimate a 0.16 Hz average in the cortex, and 0.94 Hz “using parameters that all tend to underestimate the cost of spikes.”[132](https://www.openphilanthropy.org/brain-computation-report#footnote132_gla22en \"See p. 494-495.\") He also estimates that “to sustain an average rate of 1.8 spikes/s/neuron would use more energy than is normally consumed by the whole brain” (13 Hz would require more than the whole body).[133](https://www.openphilanthropy.org/brain-computation-report#footnote133_9jn7tcs \"P. 495.\")\n* Existing recording methods may bias towards active cells.[134](https://www.openphilanthropy.org/brain-computation-report#footnote134_4pub7il \"Barth and Poulet (2012): “accumulating experimental evidence, using non-selective methods to assess the activity of identified, individual neurons, indicates that traditional extracellular recordings may have been strongly biased by selection of the most active cells” (p. 1). Buzaki and Mizuseki (2014): “Each recording technique has some caveat. For example, patch-clamping of neurons may affect the firing patterns of neurons. Cell-attached methods are less invasive, but here the identity of the recorded cell often remains unknown and one might argue that the skewed distribution simply reflects the recording of large numbers of slow-firing pyramidal cells and a smaller number of faster-discharging interneurons. Furthermore, long-term recordings are technically difficult to obtain, and this may result in biased sampling of more-active neurons. Extracellular recording of spikes with sharp metal electrodes typically offers reliable single neuron isolation; however, as in cell-attached recordings, sampling of single neurons is often biased towards selecting fast-firing cells because neurons with low firing rates are often not detected during short recording sessions. Moreover, in many cases, only evoked firing patterns in very short time windows are examined. Chronic recordings with tetrodes and silicon probes can reduce such bias towards cells with a high firing rate, as the electrodes are moved infrequently and large numbers of neurons can be monitored from hours to days. In addition, one can separate the recorded population into excitatory and inhibitory neuron types in vivo through physiological characterization or by using optogenetic methods. Caveats of the extracellular probe methods include the lack of objective quantification of spike contamination and omission, the difficulty in isolating exceedingly slow-firing neurons and the lack of objective segregation of different neuron types. The left tail of the firing-rate distribution can especially vary across studies because neurons with low firing rates are often not detected during short recording sessions or because an arbitrary cut-off rate eliminates slow-firing cells. The differences in the right tail of the distribution across studies and species are probably the result of inadequate segregation of principal cells and interneurons” (p. 276).\") [Shoham et al. 
(2005)](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.457.6826&rep=rep1&type=pdf), for example, suggests that recordings may overlook large numbers of “silent” neurons that fire infrequently (on one estimate for the cat primary visual cortex, >90% of neurons may qualify as “silent”).[135](https://www.openphilanthropy.org/brain-computation-report#footnote135_u77pyc1 \"Shoham et al. (2005): “To summarize, the existence of large populations of silent neurons has been suggested recently by experimental evidence from diverse systems. Only some regions and neuron types show this phenomenon: as counterexamples, interneurons and cerebellar Purkinje cells are active most or all of the time. Nonetheless, the diversity of cases in which many neurons appear to be silent includes major neuron types in the mammalian neocortex and hippocampus, the cerebellum, and the zebra finch song system. Silent neurons may be a recurring principle of brain organization” (see Conclusion, p. 6). They also suggest that their estimate of the “recordable radius” around an electrode suggests “a silent fraction of at least 90%” of neurons in the cat primary visual cortex (see Conclusion, p. 6).\")\n\n\nSynthesizing evidence from a number of sources, [AI Impacts](https://aiimpacts.org/rate-of-neuron-firing/#:~:text=So%20based%20on%20this%20rough,less%20than%201.82%20per%20second.) offers a best guess average of 0.1-2 Hz. This sounds reasonable to me (I give most weight to the metabolic estimates). I’ll use 0.1-1 Hz, partly because [Lennie (2003)](http://www2.bcs.rochester.edu/sites/plennie/pdfs/Lennie03a.pdf#page=3) treats 0.94 Hz as an overestimate, and partly because I’m mostly sticking with order-of-magnitude level precision. This suggests an overall range of **~1e13-1e15 spikes through synapses per second** (1e14-1e15 synapses × 0.1-1 spikes per second).[136](https://www.openphilanthropy.org/brain-computation-report#footnote136_k3yrezi \"It’s also possible that the metabolic considerations could be used as evidence for the combinations of synapse count and average spiking rate that would be compatible with the brain’s energy budget. For example, it’s possible that 10,000 synapses per neuron is incompatible with higher average spiking rates. However, I have not investigated this. Thanks to Carl Shulman for suggesting this possibility.\")\n\n\nNote that many of the mechanistic method estimates reviewed in 1.7.1 assume a higher average spiking rate, often in the range of 100 Hz.[137](https://www.openphilanthropy.org/brain-computation-report#footnote137_65x5da1 \"Examples include: Bostrom (1998): “signals are transmitted along these synapses at an average frequency of about 102 Hz” (“Hardware requirements”); Mead (1990): “A nerve pulse arrives at each synapses about ten times/s, on average” (p. 1629); Merkle (1989): “There are roughly 1015 synapses operating at about 10 impulses/second”; Dix (2005): “The rate of this activity, the 'clock period' of the human brain is approximately 100 Hz”; Kurzweil (1999): “With 100 trillion connections, each computing at 200 calculations per second, we get 20 million billion calculations per second” (Chapter 6, “Achieving the Hardware Capacity of the Human Brain”).\") For the reasons listed above, I think 100 Hz is too high; the quick calculation below shows how much this assumption matters. 
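As a minimal sketch of that sensitivity (the synapse counts and candidate average rates below are just the ranges discussed above, not new data):

```python
# Spikes through synapses per second = synapse count x average firing rate.
# Synapse counts and candidate rates are the ranges discussed in the text.
synapse_count_range = (1e14, 1e15)
candidate_avg_rates_hz = [0.1, 1, 10, 100]

for rate in candidate_avg_rates_hz:
    low = synapse_count_range[0] * rate
    high = synapse_count_range[1] * rate
    print(f"{rate} Hz -> {low:.0e} to {high:.0e} spikes through synapses per second")

# 0.1-1 Hz gives ~1e13-1e15 (the range used in the text); 10 Hz gives
# ~1e15-1e16; the ~100 Hz assumed in several older estimates gives ~1e16-1e17.
```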
~10 Hz seems more possible (though it requires [Lennie (2003)](http://www2.bcs.rochester.edu/sites/plennie/pdfs/Lennie03a.pdf#page=3) to be off by 1-2 orders of magnitude, and my best guess is lower): in that case, we’d add an order of magnitude to the high-end estimates below.\n\n\n\n#### 2.1.1.2 FLOPs per spike through synapse\n\n\nHow many FLOPs do we need to capture what matters about the signaling that occurs when a spike arrives at a synapse?\n\n\n\n#### 2.1.1.2.1 A simple model\n\n\nA simple answer is: one FLOP. Why might one think this?\n\n\nOne argument is that in the context of standard neuron signaling (setting aside learning), what matters about a spike through a synapse is that it increases or decreases the post-synaptic membrane potential by a certain amount, corresponding to the synaptic weight. This could be modeled as a single addition operation (e.g., add the synaptic weight to the post-synaptic membrane potential). That is, one FLOP (of some precision, see below).[138](https://www.openphilanthropy.org/brain-computation-report#footnote138_xdjqs5r \"This model of synaptic transmission was suggested by our technical advisor, Dr. Dario Amodei. See also Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann: \\\"Setting aside plasticity, most people assume that modeling the immediate impact of a pre-synaptic spike on the post-synaptic neuron is fairly simple. Specifically, you can use a single synaptic weight, which reflects the size of the impact of a spike through that synapse on the post-synaptic membrane potential.\")\n\n\nWe can add several complications without changing this picture much:[139](https://www.openphilanthropy.org/brain-computation-report#footnote139_od5a01p \"The bullet points below were inspired by comments from Dr. Dario Amodei as well.\")\n\n\n* Some estimates treat a spike through a synapse as multiplication by a synaptic weight. But spikes are binary, so in a framework based on individual spikes, you’re really only “multiplying” the synaptic weight by 0 or 1 (e.g., if the neuron spikes, then multiply the weight by 1, and add it to the post-synaptic membrane potential; otherwise, multiply it by 0, and add the result – 0 – to the post-synaptic membrane potential).\n* In artificial neural networks, input neuron activations are sometimes analogized to non-binary spike rates (e.g., average numbers of spikes over some time interval), which are multiplied by synaptic weights and then summed.[140](https://www.openphilanthropy.org/brain-computation-report#footnote140_y4g0w5q \"See Matt Botvinick’s comments on this podcast: “The activity of units in a deep learning system is broadly analogous to the spike rate of a neuron” (see 57.20 here).\") This would be two FLOPs (or one [Multiply-Accumulate](https://en.wikipedia.org/wiki/Multiply%E2%80%93accumulate_operation)). But since such rates take multiple spikes to encode, this analogy plausibly suggests less than two FLOPs per spike through synapse.\n\n\nHow precise do these FLOPs need to be?[141](https://www.openphilanthropy.org/brain-computation-report#footnote141_1s1wnmp \"Precision, here, refers to the number of bits used to represent the floating point numbers in question.\") That depends on the number of distinguishable synaptic weights/membrane potentials; the toy sketch below shows where these weights and potentials enter the simple model. 
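As a toy illustration of the simple model – one add of a stored synaptic weight into a post-synaptic accumulator per spike through synapse, with state held at roughly 8-bit precision – consider the sketch below. The array sizes, the weight quantization, and the spike probability are arbitrary placeholders, and the code is meant to illustrate the counting assumption, not to describe how the brain or any particular simulator works.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 1000, 100  # placeholder numbers of pre- and post-synaptic cells
# Synaptic weights quantized to 8-bit integers (a stand-in for ~8-bit FLOPs).
weights = rng.integers(-128, 128, size=(n_pre, n_post), dtype=np.int8)
membrane = np.zeros(n_post, dtype=np.int32)  # post-synaptic potential accumulators

spikes = rng.random(n_pre) < 0.01            # which pre-synaptic cells spiked

# One add per spike through synapse: only rows with a spike touch the accumulators.
for i in np.flatnonzero(spikes):
    membrane += weights[i]                   # add each synaptic weight once

ops = int(spikes.sum()) * n_post             # adds actually performed this step
print(f"{ops} adds for {int(spikes.sum())} spikes x {n_post} synapses per cell")
```

What the sketch leaves open is how many distinguishable levels those weights and potentials actually need – that is, how many bits the relevant state requires.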
Here are some relevant estimates:\n\n\n* [Koch (1999)](https://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999) suggests “between 6 and 7 bits of resolution” for variables like neuron membrane potential.[142](https://www.openphilanthropy.org/brain-computation-report#footnote142_qngerpw \"Koch (1999): “It is doubtful whether the effective resolution, that is, the ratio of minimal change in any one variable, such as Vm or [Ca2+]i, relative to the noise amplitude associated with this variable, exceeds a factor of 100. Functionally, this corresponds to between 6 and 7 bits of resolution, a puny number compared to a standard 32-bit machine architecture” (p. 471).\")\n* [Bartol et al. (2015)](https://elifesciences.org/articles/10778) suggest a minimum of “4.7 bits of information at each synapse” (they don’t estimate a maximum).[143](https://www.openphilanthropy.org/brain-computation-report#footnote143_akdpn9p \"See Bartol et al. (2015) (abstract): “Signal detection theory holds that at a Signal-to-Noise Ratio (SNR) of 1, a common detection threshold used in psychophysical experiments, an ideal observer can correctly detect whether a signal is higher or lower than some threshold 69% of the time (Green and Swets (1966); Schultz (2007)). Put another way, if random samples are drawn from two Gaussian distributions whose areas overlap by 31%, an ideal observer will correctly assign a given sample to the correct distribution 69% of the time. Using this logic, we found that ~26 different mean synaptic strengths could span the entire range, assuming CV = 0.083 for each strength level, and a 69% discrimination threshold (Figure 8, see Materials and methods)” (this quote is from the “Results” section of the paper). The “e-life digest” for the paper also suggests that previous estimates were lower than this: “This estimate is markedly higher than previous suggestions. It implies that the total memory capacity of the brain – with its many trillions of synapses – may have been underestimated by an order of magnitude. Additional measurements in the same and other brain regions are needed to confirm this possibility” (see “e-life digest”).\")\n* [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) cite evidence for ~1 bit, 3-5 bits, and 0.25 bits stored at each synapse.[144](https://www.openphilanthropy.org/brain-computation-report#footnote144_ny7brcb \"Sandberg and Bostrom (2008): “Assumption on the order of one bit of information per synapse has some support on theoretical grounds. Models of associative neural networks have an information storage capacity slightly under 1 bit per synapse depending on what kind of information is encoded (Nadal (1991); Nadal and Toulouse (1990)). Extending the dynamics of synapses for storing sequence data does not increase this capacity (Rehn and Lansner (2004)). Geometrical and combinatorial considerations suggest 3‐5 bits per synapse (Stepanyants, Hof et al. (2002); Kalisman, Silberberg et al. (2005)). Fitting theoretical models to Purkinje cells suggests that they can reach 0.25 bits/synapse (Brunel, Hakim et al. (2004))” (p. 
84).\")\n* [Zador (2019)](http://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2019/08/A-critique-of-pure-learning-and-what-artificial-neuralnetworks-can-learn-from-animal-brains.pdf) suggests “a few” bits/synapse to specify graded synaptic strengths.[145](https://www.openphilanthropy.org/brain-computation-report#footnote145_e8r6qhq \"Zador (2019): “a few extra bits/synapse would be required to specify graded synaptic strengths. But because of synaptic noise and for other reasons, synaptic strength may not be specified very precisely” (p. 5).\")\n* [Lahiri and Ganguli (2013)](https://papers.nips.cc/paper/4872-a-memory-frontier-for-complex-synapses.pdf) suggest that the number of distinguishable synaptic strengths can be “as small as two”[146](https://www.openphilanthropy.org/brain-computation-report#footnote146_5u8l0dq \"Lahiri and Ganguli (2013): “recent experimental work has shown that many synapses are more digital than analog; they cannot robustly assume an infinite continuum of analog values, but rather can only take on a finite number of distinguishable strengths, a number than can be as small as two [4-6] (though see [7])”.\") (though they cite [Enoki et al. (2009)](https://www.cell.com/neuron/fulltext/S0896-6273(09)00204-9?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0896627309002049%3Fshowall%3Dtrue) as indicating greater precision).[147](https://www.openphilanthropy.org/brain-computation-report#footnote147_qtcr5g8 \"Enoki et al. (2009): “The results demonstrate that individual Schaffer collateral synapses on CA1 pyramidal neurons behave in an incremental rather than binary fashion, sustaining graded and bidirectional long-term plasticity” (“summary”).\")\n\n\nA standard FLOP is 32 bits, and half-precision is 16 – well in excess of these estimates. Some hardware uses even lower-precision operations, which may come closer. I’d guess that 8 bits would be adequate.\n\n\n**If we assume 1 (8-bit) FLOP per spike through synapse, we get an overall estimate of 1e13-1e15 (8-bit) FLOP/s for synaptic transmission**. I won’t continue to specify the precision I have in mind in what follows.\n\n\n#### \n\n\n#### 2.1.1.2.2 Possible complications\n\n\nHere are a few complications this simple model leaves out.\n\n\n*Stochasticity*\n\n\nReal chemical synaptic transmission is stochastic. Each vesicle of neurotransmitter has a certain probability of release, conditional on a spike arriving at the synapse, resulting in variation in synaptic efficacy across trials.[148](https://www.openphilanthropy.org/brain-computation-report#footnote148_ienw4ky \"Siegelbaum et al. (2013c): “The mean probability of transmitter release from a single active zone also varies widely among different presynaptic terminals, from less than 0.1 (that is, a 10% chance that a presynaptic action potential will trigger release of a vesicle) to greater than 0.9” ... “Thus central neurons vary widely in the efficacy and reliability of synaptic transmission. Synaptic reliability is defined as the probability that an action potential in a pre-synaptic cell leads to some measurable response in the post-synaptic cell -- that is, the probability that a presynaptic action potential releases one or more quanta of transmitter. Efficacy refers to the mean amplitude of the synaptic response, which depends on both the reliability of synaptic transmission and on the mean size of the response when synaptic transmission does occur” (p. 271). 
Koch (1999): “We have seen that single synapses in the mammalian cortex appear to be unreliable: release at single sites can occur as infrequently as one out of every 10 times (or even less) that an action potential invades the presynaptic terminal (Fig. 4.3)” (p. 327).\") This isn’t necessarily a design defect. Noise in the brain may have benefits,[149](https://www.openphilanthropy.org/brain-computation-report#footnote149_lmzpkhk \"See e.g. McDonnel and Ward (2011), Jonas (2014, unpublished), and Faisel et al. (2008) (p. 3) for discussion of the benefits of noise.\") and we know that the brain can make synapses reliable.[150](https://www.openphilanthropy.org/brain-computation-report#footnote150_3p2waz9 \"As Siegelbaum et al. (2013c) note, “in synaptic connections where a low probability of release is deleterious for function, this limitation is overcome by simply having many active zones [that is, neurotransmitter release sites] in one synapse” (p. 271). The fact that the brain can choose to have reliable synapses if necessary leads Koch (1999) to suggest that there may be some “computational advantage to having unreliable synapses” -- for example, increasing the number of distinguishable states a synapse can be in (p. 327).\")\n\n\nWould capturing the contribution of this stochasticity to task performance require many extra FLOP/s, relative to a deterministic model? My guess is no.\n\n\n* The relevant probability distribution (a binomial distribution, according to [Siegelbaum et al. (2013c)](https://neurology.mhmedical.com/content.aspx?bookid=1049§ionid=59138636), (p. 270)), appears to be fairly simple, and Dr. Paul Christiano, one of our technical advisors, thought that sampling from an approximation of such a distribution would be cheap.[151](https://www.openphilanthropy.org/brain-computation-report#footnote151_a3oqmr9 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: \\\"One way of modeling synaptic stochasticity is by assigning a fixed release probability to each synaptic vesicle, conditional on presynaptic activity. Dr. Christiano does not think that modeling spikes through synapses in this way would constitute a significant increase in required compute, relative to modeling each spike through synapse deterministically. Sampling from a normal distribution is cheap unless you need a lot of precision, and even then, Dr. Christiano believes that the cost is just linear in the number of bits of precision that you want. At 8 bits of precision and 10 vesicles, he expects that it would be possible to perform the relevant sampling with about the same amount of energy as a FLOP\\\" (p. 5).\")\n* My background impression is that in designing systems for processing information, adding noise is easy; *limiting* noise is hard (though this doesn’t translate directly into a FLOPs number).\n* Despite the possible benefits of noise, my guess is that the brain’s widespread use of stochastic synapses has a lot to do with resource constraints (more reliable synapses require more neurotransmitter release sites).[152](https://www.openphilanthropy.org/brain-computation-report#footnote152_hy7p3jc \"See Seigelbaum et al. (2013) quotes above. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “Some hypothesize that it’s about energy efficiency, but there is no proof of this.” (p. 
3).\")\n* Many neural network models don’t include this stochasticity.[153](https://www.openphilanthropy.org/brain-computation-report#footnote153_ut288pm \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “[synaptic stochasticity] is almost never included in neural network models” (p. 3). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Chris Eliasmith: ‘Pretty much everything Prof. Eliasmith does with his models works fine in a stochastic regime, but stochastic approaches require more synapses, so he does not bother with them. This decision is driven primarily by the availability of deterministic large-scale computational platforms. If there were cheap stochastic computers available, Prof. Eliasmith would probably use stochastic approaches” (p. 3).\")\n\n\nThat said, one expert I spoke with (Prof. Erik De Schutter) thought it an open question whether the brain manipulates synaptic stochasticity in computationally complex ways.[154](https://www.openphilanthropy.org/brain-computation-report#footnote154_iyyadpx \" From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “It’s an open question whether you could capture this stochasticity by drawing from a relatively simple distribution, or whether the brain manipulates synaptic stochasticity in more computationally complex ways” (p. 3).\")\n\n\n*Synaptic conductances*\n\n\nThe ease with which ions can flow into the post-synaptic cell at a given synapse (also known as the *synaptic conductance*) changes over time as the ion channels activated by synaptic transmission open and close.[155](https://www.openphilanthropy.org/brain-computation-report#footnote155_eeue7aw \"This change can be modeled in different ways (for example, as an exponential decay, or as a difference of exponentials), and different post-synaptic receptors exhibit different behaviors. See Dayan and Abbott (2001) (p. 182), Figure 5.15, and the pictures of different models here.\") The simple “addition” model above doesn’t include this – rather, it summarizes the impact of a spike through synapse as a single, instantaneous increase or decrease to post-synaptic membrane potential.\n\n\n[Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA), however, appears to treat the temporal dynamics of synaptic conductances as central to the computational function of synapses.[156](https://www.openphilanthropy.org/brain-computation-report#footnote156_irupfq7 \"Sarpeshkar (2010): “Synapses are effectively spike-dependent electrochemical gm generators [my understanding is that “gm” stands for conductance]. They convert the input digital spike impulse arriving from a presynaptic transmitting neuronal axon into an exponential analog impulse-response current on the receiving dendrite of the postsynaptic neuron” (p. 739).\") He assumes, as a lower bound, that “the 20 ms second-order filter response due to each synapse is 40 FLOPs,” and that such operations occur on every spike.[157](https://www.openphilanthropy.org/brain-computation-report#footnote157_919tkzq \"Sarpeshkar (2010): “A synapse implements multiplication and filtering operations on every spike and sophisticated learning operations over multiple spikes. 
If we assume that synaptic multiplication is at least one floating-point operation (FLOP), the 20 ms second-order filter impulse response due to each synapse is 40 FLOPS, and that synaptic learning requires at least 10 FLOPS per spike, a synapse implements at least 50 FLOPS of computation per spike” (p. 748-749).\")\n\n\nI’m not sure exactly what [Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) has in mind here, but it seems plausible to me that the temporal dynamics of a neuron’s synaptic conductances can influence membrane potential, and hence spike timing, in task-relevant ways.[158](https://www.openphilanthropy.org/brain-computation-report#footnote158_if5q4rq \"I’m partly influenced here by comments from Dr. Adam Marblestone, see Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “If you neglect this temporal shape, you’ll get the wrong output: it matters that incoming spikes coincide and add up properly” (p. 3).\") One expert also emphasized the complications to neuron behavior introduced by the conductance created by a particular type of post-synaptic receptor called an NMDA-receptor – conductances that [Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf) suggest may substantially increase the complexity of a neuron’s I/O (see discussion in [Section 2.1.1.2](#section_2.1.1.2)).[159](https://www.openphilanthropy.org/brain-computation-report#footnote159_88dbbzg \"See Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann: “the long time-constant of NMDA receptors increases the complexity of the neuron’s input-output transformation” (p. 3). Beniaguev et al. (2020): “Detailed studies of synaptic integration in dendrites of cortical pyramidal neurons suggested the primary role of the voltage-dependent current through synaptic NMDA receptors, including at the subthreshold and suprathreshold (the NMDA-spike) regimes (Polsky, Mel, and Schiller (2004); Branco, Clark, and Häusser (2010)). As NMDA receptors depend nonlinearly on voltage it is highly sensitive not only to the activity of the synapse in which the receptors are located but also to the activity of (and the voltage generated by) neighboring synapses and to their dendritic location. Moreover, the NMDA-current has slow dynamics, promoting integration over a time window of tens of milliseconds (Major, Larkum, and Schiller (2013); Doron et al. (2017))” (p. 8).\") That said, two experts thought it likely that synaptic conductances could either be summarized fairly easily or left out entirely.[160](https://www.openphilanthropy.org/brain-computation-report#footnote160_8z3ghmx \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador: “He does not think that … we need to include the details of synaptic conductances in our models” (p. 1). From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “Dr. Marblestone is not sure that you need the exact shape [of the synaptic conductance], or that it needs to be re-computed every time. Specialized hardware could also be helpful (though one can say this for everything). Overall, Dr. Marblestone expects it to be possible to either leave out or simplify this computation” (p. 3).\")
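To give a sense of what such a summary might look like, here is a minimal sketch that models the conductance change at a single synapse as a difference of exponentials, one of the simple forms mentioned in the footnotes above (following the general approach described in Dayan and Abbott (2001)). The time constants, time-step, and the treatment of the driving force are illustrative assumptions of mine, not values taken from the sources discussed.

```python
import numpy as np

# Illustrative sketch only: a difference-of-exponentials conductance response,
# one simple way to model the temporal dynamics discussed above. All constants
# here are assumptions chosen for readability, not values from the report.
DT = 0.1          # simulation time-step, ms (assumed)
TAU_RISE = 0.5    # rise time constant, ms (assumed)
TAU_DECAY = 5.0   # decay time constant, ms (assumed)

def conductance_kernel(duration_ms=20.0):
    """Conductance change following a single presynaptic spike."""
    t = np.arange(0.0, duration_ms, DT)
    kernel = np.exp(-t / TAU_DECAY) - np.exp(-t / TAU_RISE)
    return kernel / kernel.max()          # normalize the peak to 1

def total_conductance(spike_times_ms, weight, duration_ms=100.0):
    """Superimpose one kernel per incoming spike, scaled by the synaptic weight."""
    g = np.zeros(int(duration_ms / DT))
    kernel = conductance_kernel()
    for s in spike_times_ms:
        start = int(s / DT)
        end = min(start + len(kernel), len(g))
        g[start:end] += weight * kernel[: end - start]
    return g

# In a fuller model, g would then multiply a driving force (reversal potential
# minus membrane potential) to give the synaptic current; the simple "addition"
# model collapses all of this into a single instantaneous bump in membrane potential.
g_trace = total_conductance(spike_times_ms=[5.0, 12.0, 13.0], weight=0.8)
```

Whether the cost of a filter like this is counted per spike or per time-step is exactly the accounting question taken up next.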
3).\")\n\n\n*Sparse FLOPs and time-steps per synapse*\n\n\nEstimates based on spikes through synapses assume that you don’t need to budget any FLOPs for when a synapse *doesn’t* receive a spike, but could have. Call this the “sparse FLOPs assumption.”[161](https://www.openphilanthropy.org/brain-computation-report#footnote161_n5slur7 \"My discussion of this assumption is inspired by some comments from Dr. Dario Amodei.\") In current neural network implementations, the analogous situation (e.g., artificial neuron activations of 0) creates inefficiencies, which some new hardware designs aim to avoid.[162](https://www.openphilanthropy.org/brain-computation-report#footnote162_6hmdngt \"See, for example, the recent Cerebras whitepaper: “Multiplying by zero is a waste—a waste of silicon, power, and time, all while creating no new information. In deep learning, the data are often very sparse. Half to nearly all the elements in the vectors and matrices that are to be multiplied together are zeros. The source of the zeros are fundamental deep learning operations, such as the rectified linear unit nonlinearity (ReLU) and dropout, both of which introduce zeros into neural network tensors...when the data is 50 to 98% zeros, as it often is in neural networks, then 50 to 98% of your multiplications are wasted. Because the Cerebras SLA core was designed specifically for the sparse linear algebra of neural networks, it never multiplies by zero. To take advantage of this sparsity, the core has built-in, fine-grained dataflow scheduling, so compute is triggered by the data. The scheduling operates at the granularity of a single data value so only non-zero data triggers compute. All zeros are filtered out and can be skipped in the hardware. In other words, the SLA core never multiplies by zero and never propagates a zero across the fabric” (p. 5).\") But this seems more like an engineering challenge than a fundamental feature of the brain’s task-performance.\n\n\nNote, though, that for some types of brain simulation, budgets would be based on *time-steps per synapse* instead, regardless of what is actually happening at synapse over that time. Thus, for a simulation of a 1e14-1e15 synapses run at 1 ms resolution (1000 timesteps per second), you’d get 1e17-1e18 timesteps per synapse – a number that would then be multiplied by your FLOPs budget per time-step at each synapse; and smaller time-steps would yield higher numbers. Not all brain simulations do this (see, e.g., [Ananthanarayanan et al. (2009)](https://people.eecs.berkeley.edu/~demmel/cs267_Spr10/Lectures/RajAnanthanarayanan_SC09-a63.pdf), who simulate time-steps at neurons, but events at synapse),[163](https://www.openphilanthropy.org/brain-computation-report#footnote163_hybeweg \"Ananthanarayanan et al. (2009): “The basic algorithm of our cortical simulator C2 [2] is that neurons are simulated in a clock-driven fashion whereas synapses are simulated in an event-driven fashion. For every neuron, at every simulation time step (say 1 ms), we update the state of each neuron, and if the neuron fires, generate an event for each synapse that the neuron is post-synaptic to and presynaptic to. For every synapse, when it receives a pre- or post-synaptic event, we update its state and, if necessary, the state of the post-synaptic neuron” (p. 3).\") but various experts use it as a default methodology.[164](https://www.openphilanthropy.org/brain-computation-report#footnote164_x5musf2 \"See e.g. Sandberg and Bostrom (2008) (p. 
Going forward, I’ll assume that on simple models of synaptic transmission where the synaptic weight is not changing during time-steps without spikes, we don’t need to budget any FLOPs for those time-steps (the budgets for different forms of synaptic plasticity are a different story, and will be covered in the learning section). If this is wrong, though, it could increase budgets by a few orders of magnitude (see [Section 2.4.1](#section_2.4.1)).\n\n\n*Others*\n\n\nThere are likely many other candidate complications that the simple model discussed above does not include. There is intricate molecular machinery located at synapses, much of which is still not well-understood. Some of this may play a role in synaptic plasticity (see [Section 2.2](#section_2.2) below), or just in maintaining a single synaptic weight (itself a substantive task), but some may be relevant to standard neuron signaling as well.[165](https://www.openphilanthropy.org/brain-computation-report#footnote165_2soem9q \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Blake Richards: “Some neuroscientists are interested in the possibility that a lot of computation is occurring via molecular processes in the brain. For example, very complex interactions could be occurring in a structure known as the post-synaptic density, which involves molecular machinery that could in principle implicate many orders of magnitude of additional compute per synapse. We don’t yet know what this molecular machinery is doing, because we aren’t yet able to track the states of the synapses and molecules with adequate precision. There is evidence that perturbing the molecular processes within the synapse alters the dynamics of synaptic plasticity, but this doesn’t necessarily provide much evidence about whether these processes are playing a computational role. For example, their primary role might just be to maintain and control a single synaptic weight, which is itself a substantive task for a biological system” (p. 2). See also Bhalla (2014): “Neurons perform far more computations than the conventional framework of summation and propagation of electrical signals from dendrite to soma to axon. There is an enormous and largely hidden layer of molecular computation, and many aspects of neuronal plasticity have been modeled in chemical terms. Memorable events impinge on a neuron as special input patterns, and the neuron has to decide if it should ‘remember’ this event. This pattern-decoding decision is mediated by kinase cascades and signaling networks over millisecond to hour-long timescales. The process of cellular memory itself is rooted in molecular changes that give rise to life-long, stable physiological changes. Modeling studies show how cascades of synaptic molecular switches can achieve this, despite stochasticity and molecular turnover. Such biochemically detailed models form a valuable conceptual framework to assimilate the complexities of chemical signaling in neuronal computation” (abstract).\")\n\n\n*Higher-end estimate*\n\n\nI’ll use 100 FLOPs per spike through synapse as a higher-end FLOP/s budget for synaptic transmission. This would at least cover Sarpeshkar’s 40 FLOP estimate, and provide some cushion for other things I might be missing, including some more complex manipulations of synaptic stochasticity.\n\n\n**With 1 FLOP per spike through synapse as a low end, and 100 FLOPs as a high end, we get 1e13-1e17 FLOP/s overall**.
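Spelled out, that range is just the event-driven count from above multiplied by a per-spike budget; a minimal sketch of the arithmetic:

```python
# The range above, assembled from the event-driven accounting: spikes through
# synapses per second, times a FLOP budget per spike through synapse.
def synaptic_transmission_flops(n_synapses, avg_rate_hz, flops_per_spike):
    return n_synapses * avg_rate_hz * flops_per_spike

low_end  = synaptic_transmission_flops(1e14, 0.1, 1)    # ~1e13 FLOP/s
high_end = synaptic_transmission_flops(1e15, 1.0, 100)  # ~1e17 FLOP/s
```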
Firing rate models might suggest lower numbers; other complexities and unknowns, along with estimates based on time-steps rather than spikes, higher numbers.\n\n\n\n#### 2.1.2 Firing decisions\n\n\nThe other component of standard neuron signaling is firing decisions, understood as mappings from synaptic inputs to spiking outputs.\n\n\nOne might initially think these likely irrelevant: there are 3-4 orders of magnitude more synapses than neurons, so one might expect events at synapses to dominate the FLOP/s burden.[166](https://www.openphilanthropy.org/brain-computation-report#footnote166_9pm5y18 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Barak Pearlmutter: “Prof. Pearlmutter thought that the compute for firing decisions would be “in the noise” relative to compute for spikes through synapses, because there are so many fewer neurons than synapses” (p. 2). And from Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador: “There is a big difference, computationally, between processes that happen at every synapse, and processes that only happen at the soma, because there are orders of magnitude fewer somas than synapses” (p. 2).\") But as just noted, we’re counting FLOPs at synapses based on *spikes*, not time-steps. Depending on the temporal-resolution we use (this varies across models), the number of time-steps per second (often ≥1000) plausibly exceeds the average firing rate (~0.1-1 Hz) by 3-4 orders of magnitude as well. Thus, if we need to compute firing decisions every time-step, or just generally more frequently than the average firing rate, this could make up for the difference between neuron and synapse count (I discuss this more in [Section 2.1.2.5](#section_2.1.2.5)). And firing decisions could be more complex than synaptic transmission for other reasons as well.\n\n\nNeuroscientists implement firing decisions using neuron models that can vary enormously in their complexity and biological realism. [Herz et al. (2006)](http://www.ini.ethz.ch/~cwang/ModelingSingleNeuron.pdf) group these models into five rough categories:[167](https://www.openphilanthropy.org/brain-computation-report#footnote167_c8rjwso \"See Fig. 1. (p. 80).\")\n\n\n1. *Detailed compartmental models*. These attempt detailed reconstruction of a neuron’s physical structure and the electrical properties of its dendritic tree. This tree is modeled using many different “compartments” that can each have different membrane potentials.\n2. *Reduced compartmental models*. These include fewer distinct compartments, but still more than one.\n3. *Single compartment models*. These ignore the spatial structure of the neuron entirely and focus on the impact of input currents on the membrane potential in a single compartment.\n\t1. The [Hodgkin-Huxley model](https://en.wikipedia.org/wiki/Hodgkin%E2%80%93Huxley_model), a classic model in neuroscience, is a paradigm example of a single compartment model. It models different [ionic conductances](https://www.cvphysiology.com/Arrhythmias/A007a#:~:text=Ions%20move%20across%20the%20cell,change%20in%20the%20membrane%20potential.) in the neuron using a series of differential equations. According to [Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf), it requires ~120 FLOPs per 0.1 ms of simulation – ~1e6 FLOP/s overall.[168](https://www.openphilanthropy.org/brain-computation-report#footnote168_hyw5s0o \"See figure 2.\")\n\t2. 
My understanding is that “[integrate-and-fire](https://pubmed.ncbi.nlm.nih.gov/16622699/#:~:text=The%20integrate%2Dand%2Dfire%20neuron%20model%20is%20one%20of%20the,injected%20current%20that%20it%20receives.)”-type models – another classic neuron model, but much more simplified – would also fall into this category. [Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf) suggests that these require ~5-13 FLOPs per ms per cell, 5000-13,000 FLOP/s overall.[169](https://www.openphilanthropy.org/brain-computation-report#footnote169_beg4yu8 \"See figure 2. Integrate and fire models are roughly 5-15 FLOPs per ms: Hodgkin-Huxley is 1200.\")\n4. *Cascade models*. These models abstract away from ionic conductances, and instead attempt to model a neuron’s input-output mapping using a series of higher-level linear and non-linear mathematical operations, together with sources of noise. The “neurons” used in contemporary deep learning can be seen as variants of models in this category.[170](https://www.openphilanthropy.org/brain-computation-report#footnote170_bso0024 \" One expert I spoke to said this, though the comment didn’t end up in the conversation notes.\") These cascade models can also incorporate operations meant to capture transformations of synaptic inputs that occur in dendrites.[171](https://www.openphilanthropy.org/brain-computation-report#footnote171_yr22pja \"See Fig. 3. (p. 83), in Herz et al. (2006). The two-layer cascade model they discuss resembles the one suggested by Poirazi et al. (2003). See Section 2.1.2.2 for more discussion of dendritic computation in particular.\")\n5. *Black box models*. These neglect biological mechanisms altogether.\n\n\nProf. Erik De Schutter also mentioned that greater computing power has made even more biophysically realistic models available.[172](https://www.openphilanthropy.org/brain-computation-report#footnote172_b4piqbd \" From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “Old multi-compartmental models, based on cable theory, described voltage in one dimension, and the typical resolution was on the order of tens of microns per compartment. That is adequate for modeling voltage, but molecular events happen on much smaller scales. Researchers now have much more computing power available to them, and so can build more ambitious models. For instance, they can now use fully stochastic, three-dimensional \\\"mesh\\\" models with sub-micron resolution (typically on the order of 100 nanometers). These can incorporate molecular reactions, as well as features of cell biology like spatial models of synaptic vesicles” (p. 1-2).\") And models can in principle be arbitrarily detailed.\n\n\nWhich of these models (if any) would be adequate to capture what matters about firing decisions? I’ll consider four categories of evidence: the predictive success of different neuron models; some specific arguments about the computational power of dendrites; a collection of other considerations; and expert opinion/practice.
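As a reference point for the cheap end of the spectrum just surveyed, the sketch below shows a single Euler step of a leaky integrate-and-fire neuron; the parameter values are illustrative assumptions of mine, not taken from any of the models above.

```python
# Illustrative sketch of the cheap end of the model spectrum: one Euler update of
# a leaky integrate-and-fire neuron. Parameter values are assumptions chosen for
# readability, not values from the sources discussed in this report.
V_REST, V_THRESHOLD, V_RESET = -65.0, -50.0, -65.0   # mV (assumed)
TAU_MEMBRANE = 20.0                                  # membrane time constant, ms (assumed)
DT = 1.0                                             # time-step, ms

def lif_step(v, input_current):
    """Advance the membrane potential by one time-step; return (new_v, spiked)."""
    v = v + DT * (-(v - V_REST) + input_current) / TAU_MEMBRANE   # a handful of FLOPs
    if v >= V_THRESHOLD:
        return V_RESET, True
    return v, False
```

Per millisecond of simulated time this is on the order of the ~5-13 FLOPs that Izhikevich (2004) attributes to integrate-and-fire variants, versus roughly 1200 FLOPs per ms for a Hodgkin-Huxley update with its gating-variable differential equations; detailed compartmental models sit further up the scale still.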
#### 2.1.2.1 Predicting neuron behavior\n\n\nLet’s first look at the success different models have had in predicting neuron spike patterns.\n\n\n\n#### 2.1.2.1.1 Standards of accuracy\n\n\nHow accurate do these predictions need to be? The question is still open.\n\n\nIn particular, debate in neuroscience continues about whether and when to focus on spike rates (e.g., the average number of spikes over a given period), vs. the timings of individual spikes.[173](https://www.openphilanthropy.org/brain-computation-report#footnote173_aasbfz6 \"From a review article by Brette (2015): “Do individual spikes matter or can neural computation be essentially described in terms of rates, with spikes physically instantiating this description? This contentious question has generated considerable debate in neuroscience, and is still unsettled” (p. 1). Brette lists a large number of citations relevant to the debate. It’s also possible that something else altogether matters as well (see, e.g., the discussion of other forms of axon signaling in Section 2.3.5).\")\n\n\n* Many results in neuroscience focus on rates,[174](https://www.openphilanthropy.org/brain-computation-report#footnote174_5k465h6 \"Koch (1999) describes a standard procedure: “In a typical physiological experiment, the same stimulus is presented multiple times to a neuron and its response is recorded (Fig. 14.1). One immediately notices that the detailed response of the cell changes from trial to trial….Given the pulselike nature of spike trains, the standard procedure to quantify the neuronal response is to count how many spikes arrived within some sampling window Δt and to divide this number by the number of presentations” (p. 331). One example of a plausible role of firing rates comes from neurons in the visual cortex, whose firing rates correlate with features of visual images. Classic results in this respect include motion-sensitive neurons in the frog visual system (sometimes characterized as “bug-detectors”) (see Maturana et al. (1960) (p. 148), and Yuste (2015), in the section on “History of the neuron doctrine”) and the orientation-selectivity of neurons in V1 (Hubel and Wiesel (1959), also see video here). Maheswaranathan et al. (2019) also discuss various computations performed in the retina, all of which are expressed in terms of spike rates. Examples include Latency Coding, Motion Reversal, Motion Anticipation, and the Omitted Stimulus Response. See (p. 14). See also Surya Ganguli’s description of the results at 4:56 here. Markus Meister, in a 2016 talk (34:04), also discusses a retinal ganglion cell whose firing rate appears to respond to the average of the center of the images in a naturalistic movie (its firing rate remains roughly the same when the entire movie is reduced to this simple summary)\") as do certain neural prostheses.[175](https://www.openphilanthropy.org/brain-computation-report#footnote175_50ysqpj \"See e.g. Hochberg (2012): “Raw neural signals for each channel were sampled at 30 kHz and fed through custom Simulink (Mathworks Inc., Natick, MA) software in 100 ms bins (S3) or 20 ms bins (T2) to extract threshold crossing rates; these threshold crossing rates were used as the neural features for real-time decoding and for filter calibration” (p. 5). See also this discussion at (1:02:00-1:05:00) the Neuralink Launch Event on July 16, 2019. \")\n* In some contexts, it’s fairly clear that spike timings can be temporally precise.[176](https://www.openphilanthropy.org/brain-computation-report#footnote176_7zk9r1h \"See e.g. Weiss et al. 
(2018): “many sensory systems use millisecond or even sub-millisecond precise spike timing across sensory neurons to rapidly encode stimulus features (e.g., visual patterns in salamanders [Gollisch and Meister (2008)], direction of sound in barn owls [Carr and Konishi (1990)], and touch location in leeches [Thomson and Kristan (2006)])” (p. 76). Zuo et al. (2015), in a discussion of perceptual decisions in the rat somatosensory cortex: “These results indicate that spike timing makes crucial contributions to tactile perception, complementing and surpassing those made by rate” (abstract). See Funabiki et al. (2011) for very temporally precise in vivo sensitivity in the auditory system of owls, though this could emerge from combining many imprecise inputs: “In owls, NL neurons change their firing rates with changes in ITD of <10 μs (Carr and Konishi (1990); Peña et al. (1996)), far below the spike duration of the neurons (e.g., ∼1 ms).”\")\n* One common argument for rates appeals to variability in a neuron’s response to repeated exposure to the same stimulus.[177](https://www.openphilanthropy.org/brain-computation-report#footnote177_03l045b \"Brette (2015): “Perhaps the most used argument against spike-based theories is the fact that spike trains in vivo are variable both temporally and over trials (Shadlen and Newsome (1998)), and yet this might well be the least relevant argument. This assertion is what philosophers call a ‘category error’, when things of one kind are presented as if they belonged to another. Specifically, it presents the question as if it were about variability vs. reproducibility. I will explain how variability can arise in spike-based theories, but first an important point to make is that the rate-based view does not explain variability, but rather it simply states that there is variability” (see section on “Assertion #2”). Brette goes on to list a number of objections to appeals to variability as evidence for rate-based theories. \") My impression is that this argument is not straightforward to make rigorous, but it seems generally plausible to me that if rates are less variable than timings, they are also better suited to information-processing.[178](https://www.openphilanthropy.org/brain-computation-report#footnote178_0t746zz \"One expert suggested this type of thought.\")\n* A related argument is that in networks of artificial spiking neurons, adding a single spike results in very different overall behavior.[179](https://www.openphilanthropy.org/brain-computation-report#footnote179_18tt7id \"See e.g. Izhikevich and Edelman (2007), in the context of a neural network simulation: “We perturbed a single spike (34, 35) in this regime (out of millions) and showed that the network completely reorganized its firing activity within half a second. It is not clear, however, how to interpret this sensitivity in response to perturbations (Fig. 5). On one hand, one could say that this sensitivity indicates that only firing patterns in a statistical sense should be considered, and individual spikes are too volatile. On the other hand, one could say that this result demonstrates that every spike of every neuron counts in shaping the state of the brain, and hence the details of the behavior, at any particular moment. 
This conclusion would be consistent with the experimental observations that microstimulation of a single tactile afferent is detectable in human subjects (36), and that microstimulation of single neurons in somatosensory cortex of rats affects behavioral responses in detection tasks (37)” (p. 3597).\") This plausibly speaks against very precisely-timed spiking in the brain, since the brain is robust to forms of noise that can shift spike timings[180](https://www.openphilanthropy.org/brain-computation-report#footnote180_nzmknxi \"E.g., stochastic processes in the brain can cause a neuron to spike at one time, rather than another, without the brain's cognitive processing breaking down. See Faisal et al. (2008) for discussion of a number of these processes.\") as well as to our adding spikes to biological networks.[181](https://www.openphilanthropy.org/brain-computation-report#footnote181_7nq6g36 \"See Doose et al. (2016) for one study of in vivo stimulation in rats. Sandberg (2013) argues for a more general point in this vicinity: “Brains sensitive to microscale properties for their functioning would exhibit erratic and non-adaptive behavior” (p. 260). See also Hanson (2011) for comments in a somewhat similar vein. Though note that single impulse stimulation to nerve fibers can result in sensory responses in humans: Vallbo et al. (1984): “It was confirmed that a single impulse in a single FA I unit may elicit a sensory response in the attending subject, whereas a much larger input was required from SA I units, which are also less sensitive to mechanical stimuli. This was one of several findings supporting the impression that differential receptive properties, even within a group of afferents, were associated with different sensory responses. It was concluded that a train of impulses in a single tactile unit may produce within the brain of the subject a construct which specifies with great accuracy the skin area of the unit's terminals as well as a tactile subquality which is related to unit properties” (abstract).\")\n\n\nMy current guess is that in many contexts, but not all, spike rates are sufficient.\n\n\nEven if we settled this debate, though, we’d still need to know how accurately the relevant rates/timings would need to be predicted.[182](https://www.openphilanthropy.org/brain-computation-report#footnote182_0w89g43 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Chris Eliasmith: There is no “magical answer” to the question of how accurate a model of neuron spiking needs to be. In experiments fitting neuron models to spike timing data, neuroscientists pick a metric, optimize their model according to that metric, and then evaluate the model according to that metric as well, leaving ongoing uncertainty about the importance of the aspects of neural activity that the relevant metric doesn’t capture” (p. 2). \") Here, a basic problem is that in many cases, we don’t know what tasks a neuron is involved in performing, or what role it’s playing. So we can’t validate a model by showing that it suffices to reproduce a given neuron’s role in task-performance – the test we actually care about.[183](https://www.openphilanthropy.org/brain-computation-report#footnote183_bsq2qbl \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. 
Eve Marder: “It’s been hard to make progress in understanding neural circuits, because in order to know what details matter, you have to know what the circuit is doing, and in most parts of the brain, we don’t know this...It’s not that you can’t make simplifying assumptions. It’s that absent knowledge of what a piece of nervous system needs to be able to do, you have no way of assessing whether you’ve lost something fundamental or not” (p. 4); and the notes from Open Philanthropy's non-verbatim notes from a conversation with Prof. E.J. Chichilnisky: “It’s hard to know when to stop fine-tuning the details of your model. A given model may be inaccurate to some extent, but we don’t know whether a given inaccuracy matters, or whether a human wouldn’t be able to tell the difference (though focusing on creating usable retinal prostheses can help with this)” (p. 3).\")\n\n\nIn the absence of such validation, one approach is to try to limit the model’s prediction error to within the trial-by-trial variability exhibited by the biological neuron.[184](https://www.openphilanthropy.org/brain-computation-report#footnote184_nko203c \"Keat et al. (2001): “Is this level of accuracy sufficient? In the real world, the visual system operates exclusively on single trials, without the luxury of improving resolution by averaging many responses to identical stimuli. Nor is there much opportunity to average across equivalent cells, because neurons in the early visual system tend to tile the visual field with little redundancy. Consequently, operation of the visual system under natural conditions does not require the properties of these neurons to be specified more precisely than their trial-to-trial fluctuations. To understand a neuron's role in visual behavior, we therefore suggest that a model of the light response can be deemed successful if its systematic errors are as small as the neuron's random errors” (p. 810). See also Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Baccus: “Prof. Baccus expects that there would be consensus in the field that if a model’s correlation with an individual cell’s response to a stimulus matches the correlation between that cell’s responses across different trials with that stimulus, and the model also captures all of the higher-order correlations across different cells, this would suffice to capture everything that the retina is communicating to the brain. Indeed, it would do so almost by definition” (p. 2).\") But if you can’t identify and control all task-relevant inputs to the cell, it’s not always clear what variability is or is not task-relevant.[185](https://www.openphilanthropy.org/brain-computation-report#footnote185_mplj43e \"Brette (2015): “The lack of reproducibility of neural responses to sensory stimuli does not imply that neurons respond randomly to those stimuli. There are a number of sensible arguments supporting the hypothesis that a large part of this variability reflects changes in the state of the neuron or of its neighbors, changes that are functionally meaningful” (see the section on the “State-Dependence”). See also the discussion in Faisal (2012): “The question whether this neuronal trial-to-trial variability is[:] Indeed just noise (defined in the following as individually unpredictable, random events that corrupt signals) [;] Results because the brain is to [sic] complex to control the conditions across trials (e.g. 
the organisms may become increasingly hungry or tired across trials) [;] Or rather the reflection of a highly efficient way of coding information [;] cannot easily be answered. In fact, being able to decide whether we are measuring the neuronal activity that is underlying the logical reasoning and not just meaning- less noise is a fundamental problem in neuroscience, with striking resemblance to finding the underlying message in cryptographic code breaking efforts (Rieke et al. (1997))” (p. 231).\")\n\n\nNor is it clear how much progress a given degree of predictive success represents.[186](https://www.openphilanthropy.org/brain-computation-report#footnote186_ata0g89 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Baccus: “various correlation coefficient measures and information theory measures do not address the importance of the meaning of a given signal. For example, if your model misses a tiger hiding in the bushes, that’s pretty important, even though the difference might account for only a very small fraction of the correlation coefficient between your model and the retina’s response” (p. 2).\") Consider an analogy with human speech. I might be able to predict many aspects of human conversation using high-level statistics about common sounds, volume variations, turn-taking, and so forth, without actually being able to replicate or generate meaningful sentences. Neuron models with some predictive success might be similarly off the mark (and similar meanings could also presumably be encoded in different ways: e.g., “hello,” “good day,” “greetings,” etc.).[187](https://www.openphilanthropy.org/brain-computation-report#footnote187_m22okig \"My thanks to Carl Shulman and Katja Grace for discussion of this analogy.\")\n\n\n#### 2.1.2.1.2 Existing results\n\n\nWith these uncertainties in mind, let’s look at some existing efforts to predict neuron spiking behavior with computational models (these are only samples from a very large literature, which I do not attempt to survey).[188](https://www.openphilanthropy.org/brain-computation-report#footnote188_2jreu87 \"Naud and Gerstner (2012a) and Herz et al. (2006) for overviews of various models; and Guo et al. (2014) for a review of retinal models in particular.\")\n\n\nMany of these come with important additional caveats:\n\n\n* Many model *in vitro* neuron behavior, which may differ from *in vivo* behavior in important ways.[189](https://www.openphilanthropy.org/brain-computation-report#footnote189_tz98ekg \"See e.g. Schulz (2010): “the network state in vitro is fundamentally different from the in vivo situation. In acute slices in particular, background synaptic activity is almost absent.”\")\n* Some use simpler models to predict the behavior of more detailed models. But we don’t really know how good the detailed models are, either.[190](https://www.openphilanthropy.org/brain-computation-report#footnote190_9cl0xl9 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann: “Prof. Druckmann does not think it obvious that the kind of multi-compartmental biophysical models neuroscientists generally use are adequate to capture what a neuron does, as these models, too, involve a huge amount of simplification. Calcium dynamics are the most egregious example. Real neurons clearly do things with calcium, which moves around the cell in a manner that has consequences for e.g. calcium-dependent ion channels. 
Most biophysical models, however, simplify this a lot, and in general, they treat ions just as concentrations affected by currents.” (p. 4).\")\n* We are very limited in our ability to collect *in vivo* data about the spatio-temporal input patterns at dendrites. This makes it hard to tell how models respond to realistic input patterns.[191](https://www.openphilanthropy.org/brain-computation-report#footnote191_5j8hayw \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “At this point, we have no way to reliably measure the input-output transformation of a neuron, where the input is defined as a specific spatio-temporal pattern of synaptic input. You can build models and test their input-output mappings, but you don’t really know how accurate these models are… In live imaging, it’s very difficult to see what’s happening at synapses. Some people do calcium imaging of pre-synaptic terminals, but this is only for one part of the overall synaptic input (and it may create artefacts). Currently, you cannot get a global picture of all the synaptic inputs to a single neuron. You can’t stain all the inputs, and for a big neuron you wouldn’t be able to image the whole relevant volume of space… you don’t actually know what the physiological pattern of inputs is.” See also Ujfalussy et al. (2018): “Our understanding of neuronal input integration remains limited because it is either based on data from in vitro experiments, studying neurons under highly simplified input conditions, or on in vivo approaches in which synaptic inputs were not observed or controlled, and thus a systematic characterization of the input-output transformation of neurons was not possible” (2018); and Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann: “It is very difficult to tell what spatio-temporal patterns of inputs are actually arriving at a neuron’s synapses in vivo. You can use imaging techniques, but this is very messy” (p. 2)\") And we know that certain behaviors (for example, dendritic non-linearities) are only triggered by specific input patterns.[192](https://www.openphilanthropy.org/brain-computation-report#footnote192_l73dgkh \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann: “many dendritic non-linearities contribute more strongly when triggered by synaptic inputs arriving at similar times to similar dendritic locations (“clustering”), and there is evidence that such clustering occurs in vivo. In this sense, a random input regime is unrepresentative, more weakly non-linear than it should be and therefore may be particularly easy to model.” (p. 3).\")\n* We can’t stimulate neurons with arbitrary input patterns. This makes it hard to test their full range of behavior.[193](https://www.openphilanthropy.org/brain-computation-report#footnote193_2hel8di \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “Using glutamate uncaging, you can reliably activate single dendritic spines in vitro, and you can even do this in a sequence of spines, thereby generating patterns of synaptic input. However, even these patterns are limited. For example, you can’t actually activate synapses simultaneously, because your laser beam needs to move; there’s only so much you can do in a certain timeframe; and because it’s glutamate, you can only activate excitatory neurons” (p. 2). 
From Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann: “It is very difficult to tell how a neuron responds to arbitrary patterns of synaptic input. You can stimulate a pre-synaptic neuron and observe the response, but you can’t stimulate all pre-synaptic neurons in different combinations. And you can only patch-clamp one dendrite while also patch-clamping the soma (and this already requires world-class skill)” (p. 2).\")\n* Models that predict spiking based on current injection into the soma skip whatever complexity might be involved in capturing processing that occurs in dendrites.[194](https://www.openphilanthropy.org/brain-computation-report#footnote194_cnpaq20 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “There is a tradition of integrate and fire modeling that achieves very accurate fits of neuron firings in response to noisy current injection into the soma (more accurate, indeed, than could be achieved by current biophysical models). However, this is a very specific type of experiment, which doesn’t tell you anything about what happens to synaptic input in the dendrites” (p. 2). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann: “One neuron modeling competition proceeded by assuming that dendritic inputs are randomly distributed, and that dendrites just integrate inputs linearly -- assumptions used to create a pattern of current to be injected into the soma of the neurons whose spikes were recorded. If these assumptions are true, then there is good reason to think that fairly simple models are adequate. However, these assumptions are very friendly to the possibility of non-detailed modeling. The point of complex models is to capture the possibly non-linear dendritic dynamics that determine what current goes into the soma: after that point, modeling is much easier. And we don’t know to what extent non-random inputs trigger these dendritic dynamics. There were also a few other aspects of this neuron modeling competition that were not optimal. For example, it was fairly easy to game the function used to evaluate the models” (p. 4). \")\n\n\nA number of the results I looked at come from the retina, a thin layer of neural tissue in the eye, responsible for the first stage of visual processing. This processing is largely (though not entirely) feedforward:[195](https://www.openphilanthropy.org/brain-computation-report#footnote195_dd98n7e \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Markus Meister: “Information in the retina also flows in an almost exclusively feedforward direction (though there are some feedback signals, and it is an interesting question what those fibers do)” (p. 3).\") the retina receives light signals via a layer of ~100 million photoreceptor cells (rods and cones),[196](https://www.openphilanthropy.org/brain-computation-report#footnote196_nt21qt4 \"See Meister et al. (2013) (p. 577-578). Note also that photoreceptor cells do not spike. Meister et al. (2013): “Photoreceptors do not fire action potentials; like bipolar cells they release neurotransmitter in a graded fashion using a specialized structure, the ribbon synapse” (p. 
592).\") processes them in two further cell layers, and sends the results to the rest of the brain via spike patterns in the optic nerve – a bundle of roughly a million axons of neurons called *retinal ganglion cells*.[197](https://www.openphilanthropy.org/brain-computation-report#footnote197_y5bocf2 \"Meister et al. (2013): “The retina is a thin sheet of neurons, a few hundred micrometers thick, composed of five major cell types that are arranged in three cellular layers separated by two synaptic layers” (p. 577). See Meister et al. (2013) (p. 578). The optic nerve also contains glial cells (see Butt et al. (2004)).\")\n\n\n \n\n\n\n[![RetinaGanglionCells.png](https://www.openphilanthropy.org/files/Research/Brain_Compute/image5.png)](https://www.openphilanthropy.org/files/Research/Brain_Compute/image5.png)Figure 6: Diagram of the retina. From [Dowling (2007)](http://www.scholarpedia.org/article/Retina), unaltered. Licensed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).1[98](https://www.openphilanthropy.org/brain-computation-report#footnote198_0j0bxpc)\n\n\n \n\n\nI focused on the retina in particular partly because it’s the subject of a prominent functional method estimate in the literature (see [Section 3.1.1](#section_3.1.1)*)*, and partly because it offers advantages most other neural circuits don’t: we know, broadly, what task it’s performing (initial visual processing); we know what the relevant inputs (light signals) and outputs (optic nerve spike trains) are; and we can measure/manipulate these inputs/outputs with comparative ease.[199](https://www.openphilanthropy.org/brain-computation-report#footnote199_ehchkaf \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Markus Meister: “Information in the retina also flows in an almost exclusively feedforward direction (though there are some feedback signals, and it is an interesting question what those fibers do)” (p. 3)\\\"\") That said, as I discuss in [Section 3.1.2](#section_3.1.2), it may also be an imperfect guide to the brain as a whole.\n\n\nHere’s a table with various modeling results that purport to have achieved some degree of success. Most of these I haven’t investigated in detail, and don’t have a clear sense of the significance of the quoted results. And as I discuss in later sections, some of the deep neural network models (e.g., [Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf), [Maheswaranathan et al. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf), [Batty et al. (2017)](https://openreview.net/pdf?id=HkEI22jeg)) are very FLOP/s intensive (~1e7-1e10 FLOP/s per cell).[200](https://www.openphilanthropy.org/brain-computation-report#footnote200_y8kcqb3 \"See Section 2.1.2.2 for discussion of Beniaguev et al. (2020); and see Section 3.1 for discussion of Maheswaranathan et al. (2019) and Batty et al. (2017)).\") A more exhaustive investigation could estimate the FLOP/s costs of all the listed models, but I won’t do that here.\n\n\n \n\n\n\n\n\n\n\n| SOURCE | MODEL TYPE | THING PREDICTED | STIMULI | RESULTS |\n| --- | --- | --- | --- | --- |\n| [Beniaguev et al. 
(2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf) | Temporally convolutional network with 7 layers and 128 channels per layer | Spike timing and membrane potential of a detailed model of a Layer 5 cortical pyramidal cell | Random synaptic inputs | “accurately, and very efficiently, capture[s] the I/O of this neuron at the millisecond resolution … For binary spike prediction (Fig. 2D), the AUC is 0.9911. For somatic voltage prediction (Fig. 2E), the RMSE is 0.71mV and 94.6% of the variance is explained by this model” |\n| [Maheswaranathan et al. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf) | Three-layer convolutional neural network | Retinal ganglion cell (RGC) spiking in isolated salamander retina | Naturalistic images | >0.7 correlation coefficient (retinal reliability is 0.8) |\n| [Ujfalussy et al. (2018)](https://www.sciencedirect.com/science/article/pii/S0896627318307372) | Hierarchical cascade of linear-nonlinear subunits | Membrane potential of in-vivo validated biophysical model of L2/3 pyramidal cell | *In vivo*-like input patterns | “Linear input integration with a single global dendritic nonlinearity achieved above 90% prediction accuracy.” |\n| [Batty et al. (2017)](https://openreview.net/pdf?id=HkEI22jeg) | Shared two-layer recurrent network | RGC spiking in isolated primate retina | Natural images | 80% of explainable variance. |\n| [2016 talk (39:05)](https://youtu.be/2UpiWMukZeI?t=2344) by Markus Meister | Linear-non-linear | RGC spiking (not sure of experimental details) | Naturalistic movie | 80% correlation with real response (cross-trial correlation of real responses was around 85-90%). |\n| [Naud et al. (2014)](https://www.frontiersin.org/articles/10.3389/fncom.2014.00090/full) | Two compartments, each modeled with a pair of non-linear differential equations and a small number of parameters that approximate the Hodgkin-Huxley equations | *In vitro* spike timings of layer 5 pyramidal cell | Noisy current injection into the soma and apical dendrite | “The predicted spike trains achieved an averaged coincidence rate of 50%. The scaled coincidence rate obtained by dividing by the intrinsic reliability ([Jolivet et al. (2008a)](https://www.sciencedirect.com/science/article/abs/pii/S0165027007005535?via%3Dihub); [Naud and Gerstner (2012b)](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.381.6258&rep=rep1&type=pdf)) was 72%, which is comparable to the state-of-the performance for purely somatic current injection which reaches up to 76% ([Naud et al. (2009)](https://pdfs.semanticscholar.org/cb2c/7a2ff006349e763b08d7067de00f0308657d.pdf)).” |\n| [Bomash et al. 
(2013)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3544815/) | Linear-non-linear | RGC spiking in isolated mouse retina | Naturalistic and artificial | “the model cells carry the same amount of information,” “the quality of the information is the same.” |\n| [Nirenberg and Pandarinath (2012)](https://www.pnas.org/content/pnas/early/2012/08/08/1207035109.full.pdf) | Linear-non-linear | RGC spiking in isolated mouse retina | Natural scenes movie | “The firing patterns … closely match those of the normal retina,”; brain would map the artificial spike trains to the same images “90% of the time.” |\n| [Naud and Gerstner (2012a)](https://www.researchgate.net/publication/264893074_The_Performance_and_Limits_of_Simple_Neuron_Models_Generalizations_of_the_Leaky_Integrate-and-Fire_Model) | Review of a number of simplified neuron models, including Adaptive Exponential Integrate and Fire (AdEx) and Spike Response Model (SRM) | *In vitro* spike timings of various neuron types | Simulating realistic conditions *in vitro* by injecting a fluctuating current into the soma | “Performances are very close to optimal,” considering variation in real neuron responses. “For models like the AdEx or the SRM, [the percentage of predictable spikes predicted] ranged from 60% to 82% for pyramidal neurons, and from 60% to 100% for fast-spiking interneurons.” |\n| [Gerstner and Naud (2009)](https://science.sciencemag.org/content/326/5951/379.long) | Threshold model | *In vivo* spiking activity of neuron in the lateral geniculate nucleus (LGN) | Visual stimulation of the retina | Predicted 90.5% of spiking activity |\n| [Gerstner and Naud (2009)](https://science.sciencemag.org/content/326/5951/379.long) | Integrate-and-fire model with moving threshold | *In vitro* spike timings of (a) a pyramidal cell, and (b) an interneuron | Random current injection | 59.6% of pyramidal cell spikes, 81.6% of interneuron spikes. |\n| [Song et al. (2007)](https://bmsr.usc.edu/files/2012/09/1053.pdf) | Multi-input multi-output model | Spike trains in the CA3 region of the rat hippocampus while it was performing a memory task | Input spike trains recorded from rat hippocampus | “The model predicts CA3 output on a msec-to-msec basis according to the past history (temporal pattern) of dentate input, and it does so for essentially all known physiological dentate inputs and with approximately 95% accuracy.” |\n| [Pillow et al. (2005)](https://www.jneurosci.org/content/25/47/11003) | Leaky integrate and fire model | RGC spiking in *in vitro* macaque retina | Artificial (“pseudo-random stimulus”) | “The fitted model predicts the detailed time structure of responses to novel stimuli, accurately capturing the interaction between the spiking history and sensory stimulus selectivity.” |\n| [Brette and Gerstner (2005)](https://www.ncbi.nlm.nih.gov/pubmed/16014787) | Adaptive Exponential Integrate-and-fire Model | Spike timings for detailed, conductance-based neuron model | Injection of noisy synaptic conductances | “Our simple model predicts correctly the timing of 96% of the spikes (+/- 2 ms)…” |\n| [Rauch et al. (2003)](https://journals.physiology.org/doi/pdf/10.1152/jn.00293.2003) | Integrate-and-fire model with spike-frequency-dependent adaptation/facilitation | *In vitro* firing of rat neocortical pyramidal cells | *In vivo*-like noisy current injection into the soma. 
| “the integrate-and-fire model with spike-frequency-dependent adaptation/facilitation is an adequate model reduction of cortical cells when the mean spike frequency response to *in vivo*–like currents with stationary statistics is considered.” |\n| [Poirazi et al. (2003)](https://www.sciencedirect.com/science/article/pii/S0896627303001491) | Two-layer neural network | Detailed biophysical model of a pyramidal neuron | “An extremely varied, spatially heterogeneous set of synaptic activation patterns” | 94% of variance explained (a single-layer network explained 82%) |\n| [Keat et al. (2001)](https://www.sciencedirect.com/science/article/pii/S0896627301003221) | Linear-non-linear | RGC spiking in salamander and rabbit isolated retinas, and retina/LGN spiking in anesthetized cat | Artificial (“random flicker stimulus”) | “The simulated spike trains are about as close to the real spike trains as the real spike trains are across trials.” |\n\n\n\n \n\n\n**Figure 7: List of some efforts to predict neuron behavior that appear to have had some amount of success.** \n\n\nWhat should we take away from these results? Without much of an understanding of the details here, my current high-level take-away is that it seems like some models do pretty well in some conditions, but in many cases, these conditions aren’t clearly informative about *in vivo* behavior across the brain, and absent better functional understanding and experimental access, it’s hard to say exactly what level of predictive accuracy is required, in response to what types of inputs. There are also incentives to present research in an optimistic light, and contexts in which our models do much worse won’t have ended up on the list (though note, as well, that additional predictive accuracy need not require additional FLOP/s – it may be that we just haven’t found the right models yet).\n\n\nLet’s look at some other considerations.\n\n\n#### 2.1.2.2 Dendritic computation\n\n\nSome neuron models don’t include dendrites. Rather, they treat dendrites as directly relaying synaptic inputs to the soma.\n\n\nA common objection to such models is that dendrites can do more than this.[201](https://www.openphilanthropy.org/brain-computation-report#footnote201_9wwwpxb \"See e.g. London and Häusser (2005): “In this review we argue that this model is oversimplified in view of the properties of real neurons and the computations they perform. Rather, additional linear and nonlinear mechanisms in the dendritic tree are likely to serve as computational building blocks, which combined together play a key role in the overall computation performed by the neuron” (p. 504).\") For example:\n\n\n* The passive membrane properties of dendrites (e.g. resistance, capacitance, and geometry) can create nonlinear interactions between synaptic inputs.[202](https://www.openphilanthropy.org/brain-computation-report#footnote202_r43rc3s \"Stuart and Spruston (2015): “Rall and others found that the passive membrane properties of dendrites, that is, their resistance and capacitance as well as their geometry, influence the way neurons integrate synaptic inputs in complex ways, enabling a wide range of nonlinear operations” (p. 1713). For example: if you inject a high-frequency current into a dendrite, the local voltage response in that dendrite will be higher frequency and larger amplitude than the response recorded in the soma (see London and Häusser (2005) (p. 
508)); when multiple inputs arrive in a similar dendritic location at the same time, the impact on the membrane potential of the first can reduce the size of the impact on the membrane potential of the other (see London and Häusser (2005) (p. 507)); and when excitatory and inhibitory inputs arrive at a similar location in the dendrite, the inhibitory input can “shunt” the excitatory input, reducing its impact on somatic membrane potential in a manner distinct from a linear sum, and perhaps even cancelling the excitatory signal entirely (see London and Häusser (2005) (p. 509)).\")\n* Active, voltage-dependent channels can create action potentials within dendrites, some of which can backpropagate through the dendritic tree.[203](https://www.openphilanthropy.org/brain-computation-report#footnote203_1iu6pxc \"See London and Häusser (2005) (p. 509-516), and Stuart and Spruston (2015) (p. 1713-1714). If a back-propagating action potential occurs at the same time as a certain type of input to the dendrite, this can trigger a burst of somatic action potentials (see London and Häusser (2005) (p. 509)). A new class of calcium-mediated dendritic action-potentials (dCaAPs) was recently discovered in humans, and shown to make possible a type of input-output relation previously thought to require a network of neurons. Gidon et al. (2020): “we investigated the dendrites of layer 2 and 3 (L2/3) pyramidal neurons of the human cerebral cortex ex vivo. In these neurons, we discovered a class of calcium-mediated dendritic action potentials (dCaAPs) whose waveform and effects on neuronal output have not been previously described…. These dCaAPs enabled the dendrites of individual human neocortical pyramidal neurons to classify linearly non-separable inputs—a computation conventionally thought to require multilayered networks” (from the abstract).\")\n\n\nEffects like these are sometimes called “dendritic computation.”[204](https://www.openphilanthropy.org/brain-computation-report#footnote204_1yiousr \"See Reyes (2001), London and Häusser (2005), Stuart and Spruston (2015), Payeur et al. (2019), and Poirazi and Papoutsi (2020) for reviews.\")\n\n\nMy impression is that the importance of dendritic computation to task-performance remains somewhat unclear: many results are *in vitro*, and some may require specific patterns of synaptic input.[205](https://www.openphilanthropy.org/brain-computation-report#footnote205_cukag7x \"See discussion of synaptic clustering on p. 310 of Poirazi and Papoutsi (2020), though they also suggest that “The above predictions suggest that dendritic — and, consequently, somatic — spiking is not necessarily facilitated by synaptic clustering, as was previously assumed” (p. 310).\") That said, one set of *in vivo* measurements found very active dendrites: specifically, dendritic spike rates 5-10x larger than somatic spike rates,[206](https://www.openphilanthropy.org/brain-computation-report#footnote206_xtzfmqx \"Moore et al. (2017): “The dendritic spike rates, however, were fivefold greater than the somatic spike rates of pyramidal neurons during slow-wave sleep and 10-fold greater during exploration. The high stability of dendritic signals suggested that these large rates are unlikely to arise due to the injury caused by the electrodes” (p. 1 of “Research Article Summary”).\") which the authors take to suggest that dendritic spiking might dominate the brain’s energy consumption.[207](https://www.openphilanthropy.org/brain-computation-report#footnote207_pjfgr2b \"Moore et al. 
(2017): “the total energy consumption in neural tissue ... could be dominated by the dendritic spikes” (p. 8). The Science summary here also notes that dendrites occupy more than 90% of neuronal tissue.\") Energy is scarce, so if true, this would suggest that dendritic spikes are important for something. And dendritic dynamics appear to be task-relevant in a number of neural circuits.[208](https://www.openphilanthropy.org/brain-computation-report#footnote208_8oeycmd \"See London and Häusser (2005) (p. 516-524), and Payeur et al. (2019) for examples. See also Schmidt-Hiever et al. (2017): “Our results suggest that active dendrites may therefore constitute a key cellular mechanism for ensuring reliable spatial navigation” (abstract).\")\n\n\nHow many extra FLOP/s do you need to capture dendritic computation, relative to “point neuron models” that don’t include dendrites? Some considerations suggest fairly small increases:\n\n\n* A number of experts thought that models incorporating a small number of additional dendritic sub-units or compartments would likely be adequate.[209](https://www.openphilanthropy.org/brain-computation-report#footnote209_y7u1r72 \"Stephen Baccus recalled estimates from Bartlett Mel to the effect that something in the range of five dendritic sub-units would be sufficient (see Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Baccus, p. 3). Markus Meister also suggested that models of cortical pyramidal cells that include two point neurons -- one for the dynamics at the soma, and the other for the dynamics in the apical tuft -- can account for a lot of what’s going on (see Open Philanthropy's non-verbatim notes from a conversation with Prof. Markus Meister, p. 4). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador: \\\"Much of Prof. Zador’s PhD work was devoted to the hypothesis that dendritic computation is the key difference between artificial neural networks and real brains. However, at the end of the day, he was led to the conclusion that dendritic computation does not make a qualitative difference to the computational capacity of a neuron. There is some computational boost, but the same effect could be achieved by replacing each biological neuron with a handful of artificial neurons\\\" (p. 3). See also Naud et al. (2014): “We conclude that a simple two-compartment model can predict spike times of pyramidal cells stimulated in the soma and dendrites simultaneously. Our results support that regenerating activity in the apical dendritic is required to properly account for the dynamics of layer 5 pyramidal cells under in-vivo-like conditions” (abstract). See also Ujfalussy et al. (2018), though I’m not sure exactly how complex their model was: “We used the hLN to predict the somatic membrane potential of an in vivo-validated detailed biophysical model of a L2/3 pyramidal cell. Linear input integration with a single global dendritic nonlinearity achieved above 90% prediction accuracy.” (abstract).\")\n* It may be possible to capture what matters about dendritic computation using a “point neuron” model.[210](https://www.openphilanthropy.org/brain-computation-report#footnote210_y3olldq \"See Li et al. (2019): “We derive an effective point neuron model, which incorporates an additional synaptic integration current arising from the nonlinear interaction between synaptic currents across spatial dendrites. 
Our model captures the somatic voltage response of a neuron with complex dendrites and is capable of performing rich dendritic computations” (p. 15246).\")\n* Some active dendritic mechanisms may function to “linearize” the impact at the soma of synaptic inputs that would otherwise decay, creating an overall result that looks more like direct current injection.[211](https://www.openphilanthropy.org/brain-computation-report#footnote211_ztde5jk \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Chris Eliasmith: “There are also arguments that certain forms of active dendritic computation function to “linearize” the inputs -- e.g., to combat the attenuation of an input signal as it travels through the dendritic tree, such that the overall result looks more like direct injection into the soma” (p. 3-4).\")\n* Successful efforts to predict neuron responses to task-relevant inputs (e.g., retinal responses to natural movies) would cover dendritic computation automatically (though at least some prominent forms of dendritic computation don’t happen in the retina).[212](https://www.openphilanthropy.org/brain-computation-report#footnote212_lamjoq9 \"For example, various results explore the computational role of active computation in the apical dendrite of cortical pyramidal cells (see London and Häusser (2005) for examples). For results related to dendritic computation that does happen in the retina, see Taylor et al. (2000) and Hanson et al. (2019).\")\n\n\n*Tree structure*\n\n\nOne of Open Philanthropy’s technical advisors (Dr. Dario Amodei) also suggests a more general constraint. Many forms of dendritic computation, he suggests, essentially amount to non-linear operations performed on sums of subsets of a neuron’s synaptic inputs.[213](https://www.openphilanthropy.org/brain-computation-report#footnote213_r4bf1x7 \"I'm not sure exactly what grounds this suggestion, but it is consistent with a number of abstract models of dendritic computation. See Poirazi et al. (2003); Tzilivaki et al. (2019); Jadi et al. (2014); and Ujfalussy et al. (2018). All of these use sigmoidal non-linearities in dendritic subunits. See e.g. Ujfalussy et al. (2018): \\\"We chose a sigmoid nonlinearity for several reasons. First, the sigmoid has been proposed elsewhere as an appropriate dendritic nonlinearity (Poirazi et al., 2003a, Polsky et al., 2004). Second, under different parameter settings and input statistics, the sigmoid is sufficiently flexible to capture purely linear, sublinear, and supralinear behavior, as well as combinations thereof.\\\"\") Because dendrites are structured as a branching tree, the number of such non-linearities cannot exceed the number of inputs,[214](https://www.openphilanthropy.org/brain-computation-report#footnote214_hsy7kch \"It is possible to formulate and prove this sort of limitation using graph theory. However, the proof is quite long, and I won’t include it here.\") and thus the FLOP/s costs they can impose are limited.[215](https://www.openphilanthropy.org/brain-computation-report#footnote215_xfzl14q \"Some assumption is required here to the effect that the non-linearities themselves can’t be that expensive, and/or performed many times in a row. I haven’t explored this much, but I could imagine questions about the interchangeability of nonlinearities in artificial neural networks being relevant (see discussion in next section). Poirazi et al. (2003), Tzilivaki et al. (2019), Jadi et al. (2014), and Ujfalussy et al. 
(2018) all use sigmoidal non-linearities, a standard version of which (y = 1 / (1 + exp-x)) appears to be ~4 FLOPs (see “Activation Functions” here).\") Feedbacks created by active dendritic spiking could complicate this picture, but the tree structure will still limit communication between branches. Various experts I spoke with were sympathetic to this kind of argument,[216](https://www.openphilanthropy.org/brain-computation-report#footnote216_tc37def \"See the notes from Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone (p. 5): As Dr. Marblestone understands this argument, the idea is that while there may well be dendritic non-linearities, you should expect a tree-like structure of local interactions, and activity in one part of the tree can’t exert fast, long-range influence on activity in another part. This rules out scenarios where, for example, any synapse can communicate with any other -- a scenario in which required compute could scale with the square of the number of synapses. This argument is consistent with Dr. Marblestone’s perspective, and he thinks it is very interesting, though it would be nice to formalize it more precisely. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Barak Pearlmutter (p. 2): Prof. Pearlmutter was sympathetic to the idea that the tree-structure of dendrites would limit the compute burdens that dendritic computation could introduce. There is an important distinction between causal models that are tree-structured and ones that are not tree-structured. Non-tree structured causal model can have cycles that quickly become very computationally expensive, whereas tree structured models are comparatively easy to compute. He suggested that this type of consideration applies to dendrites as well (including in the context of feedbacks between the dendrites and the soma). Prof. Pearlmutter thought it a fairly good intuition that dendritic computation would only implicate a small constant factor increase in required compute, though very complicated local interactions could introduce uncertainty. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Chris Eliasmith (p. 3): Prof. Eliasmith believes that neurons probably have non-linearities in their dendrites. In attempting to construct models of attention, for example, he has found that he needs more model neurons than seem biologically realistic, and the neuron count would go way down if he had certain kinds of non-linearities in the dendrites. Including these non-linearities would not drastically increase compute burdens (it might be equivalent to a 2× increase). A simple version would basically involve treating a single neuron as a two-layer neural network, in which dendrites collect inputs and then perform a non-linearity before passing the output to the soma. Prof. Eliasmith is sympathetic to the idea that the tree-structure of dendrites limits the additional complexity that dendritic computation could implicate in the context of such multi-layer networks (e.g., the tree-structure limits the outgoing connections of a dendritic sub-unit, and additional non-linearities in the neuron do not themselves add much compute in a regime where spikes through synapses are already the dominant compute burden). 
That said, there are many mechanisms in neurons that could in principle make everything more complicated.\") though one was skeptical.[217](https://www.openphilanthropy.org/brain-computation-report#footnote217_suebofg \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann: \\\"Prof. Druckmann does not think that appeals to the manageable compute burdens of modeling of dendrites as comparatively small multi-layer neural networks (for example, with each dendritic sub-unit performing its own non-linearity on a subset synaptic inputs) definitively address the possibility that modeling dendritic non-linearities requires very large amounts of compute. Small multi-layer network models are really just a guess about what’s required to capture the neuron’s response to realistic inputs. For example, in a recent unpublished paper, David Beniaguev, Idan Segev, and Michael London found that adding NMDA currents to the detailed model increased the size of the neural network required to replicate its outputs to seven layers (the long time-constant of NMDA receptors increases the complexity of the neuron’s input-output transformation). Adding in other neuron features could require many more layers than this. 10 layers might be manageable, but 500 is a pain, and the true number is not known\\\" (p. 3).\")\n\n\nHere’s a toy illustration of this idea.[218](https://www.openphilanthropy.org/brain-computation-report#footnote218_e4dml2i \"This type of illustration was also suggested by Dr. Amodei.\") Consider a point neuron model that adds up 1000 synaptic inputs, and then passes them through a non-linearity. To capture the role of dendrites, you might modify this model by adding, say, 10 dendritic subunits, each performing a non-linearity on the sum of 100 synaptic inputs, the outputs of which are summed at the soma and then passed through a final non-linearity (multi-layer approaches in this broad vicinity are fairly common).[219](https://www.openphilanthropy.org/brain-computation-report#footnote219_x4qj9fw \" See Poirazi et al. (2003); Tzilivaki et al. (2019); Jadi et al. (2014); and Ujfalussy et al. (2018).\")\n\n\n \n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/subunitdiagram2-e1645205565247.png)**Figure 8: Contrasting a point neuron model with a tree-structured dendritic sub-unit model.**\n\n \n\n\n \n\n\nIf we budget 1 FLOP per addition operation, and 10 per non-linearity (this is substantial overkill for certain non-linearities, like a ReLU),[220](https://www.openphilanthropy.org/brain-computation-report#footnote220_y9dib6e \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “A ReLU costs less than a FLOP. Indeed, it can be performed with many fewer transistors than a multiply of equivalent precision” (p. 6). See here for some discussion of the FLOPs costs of a tanh, and here for discussion of exponentials. A standard sigmoid activation (y = 1 / (1 + exp-x)) appears to be ~4 FLOPs (see “Activation Functions” here). Poirazi et al. 
(2003) use various sigmoids in this vein, see Figure 5.\") we get the following budgets:\n\n\n\n> **Point neuron model**: \n> \n> Soma: 1000 FLOPs (additions) + 10 FLOPs (non-linearity) \n> \n> Total: 1010 FLOPs \n> \n> **Sub-unit model**: \n> \n> Dendrites: 10 (subunits) × (100 FLOPs (additions) + 10 FLOPs (non-linearity)) \n> \n> Soma: 10 FLOPs (additions) + 10 FLOPs (non-linearity) \n> \n> Total: 1120 FLOPs\n> \n> \n\n\nThe totals aren’t that different (in general, the sub-unit model requires 11 additional FLOPs per sub-unit), even if the sub-unit model can do more interesting things. And if the tree-structure caps the number of non-linearities (and hence, sub-units) at the number of inputs, then the maximum increase is a factor of ~11×.[221](https://www.openphilanthropy.org/brain-computation-report#footnote221_fakywoz \"This factor is centrally determined by the ratio of FLOPs per input to FLOPs per non-linearity. This is 10x in the example above, but this is on the high end for non-linearities in ANNs.\") This story would alter if, for example, subunits could be fully connected, with each receiving all synaptic inputs, or all the outputs from subunits in a previous layer. But this fits poorly with a tree structured physiology.\n\n\nNote, though, that the main upshot of this argument is that dendritic non-linearities won’t add that much computation *relative to a model that budgets 1 FLOP per input connection per time-step*. Our budget for synaptic transmission above, however, was based on spikes through synapses per second, not time-steps per synapse per second. In that context, if we assume that dendritic non-linearities need to be computed every time-step, then adding e.g. 100 or 1000 extra dendritic non-linearities per neuron could easily increase our FLOP/s budget by 100 or 1000x (see endnote for an example).[222](https://www.openphilanthropy.org/brain-computation-report#footnote222_au7fn0r \"Thus, for example, assuming 1000 inputs and a 1 Hz average firing rate, on average there will be one spike through synapse per 1 ms timestep. If we budget 1 FLOP per spike through synapse, but assume 100 dendritic sub-units, each performing non-linearities on 10 synaptic input connections each, and we assume that everything but spikes through synapses must be computed every time-step, we get the following budget per 1 ms timestep:  Point neuron model (assuming sparse FLOP/s for synaptic transmission):   Soma: 1 FLOPs (average number of input spikes per ms) + 10 FLOPs (non-linearity)   Total: 11 FLOPs  Sub-unit model:   Dendrites: 100 (subunits) × (.01 FLOPs (average number spikes through synapse per 10 synapses per ms) + 10 FLOPs (non-linearity))   Soma: 100 FLOPs (additions from sub-unit outputs) + 10 FLOPs (non-linearity)   Total: ~1110 FLOPs \") That said, my impression is that many actual ANN models of dendritic computation use fewer sub-units, and it may be possible to avoid computing firing decisions/dendritic non-linearities every time-step as well – see brief discussion in [section 2.1.2.](#section_2.1.2.5)[5](https://www.openphilanthropy.org/brain-computation-report#OverallFlopsforFiringDecisions).\n\n\n*Cortical neurons as deep neural networks*\n\n\nWhat about evidence for larger FLOP/s costs from dendritic computation? One interesting example is [Beniaguev et al. 
(2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf), who found that they needed a very large deep neural network (7 layers, 128 channels per layer) to accurately predict the outputs of a detailed biophysical model of a cortical neuron, once they added conductances from a particular type of receptor ([NMDA receptors](https://en.wikipedia.org/wiki/NMDA_receptor)).[223](https://www.openphilanthropy.org/brain-computation-report#footnote223_122mthx \"Beniaguev et al. (2020): “A thorough search of configurations of deep and wide fully-connected neural network architectures (FCNs) have failed to provide a good fit to the I/O characteristics of the L5PC model. These failures suggest a substantial increase in the complexity of I/O transformation compared to that of I&F. Indeed, only temporally convolutional network architecture (TCN) with 7 layers and 128 channels per layer, provided a good fit (Fig. 2B, C Fig. S5)” (p. 7).\") Without these conductances, they could do it with a much smaller network (a fully connected DNN with 128 hidden units and only one hidden layer), suggesting that it’s the dynamics introduced by NMDA-conductances in particular, as opposed to the behavior of the detailed biophysical model more broadly, that make the task hard.[224](https://www.openphilanthropy.org/brain-computation-report#footnote224_smjhlmp \"Beniaguev et al. (2020): “We hypothesized that removing NMDA dependent synaptic currents from our L5PC model will significantly decrease the size of the respective DNN… after removing the NMDA voltage dependent conductance, such that the excitatory input relies only on AMPA mediated conductances, we have managed to achieve a similar quality fit as in Fig. 2 when using a much smaller network - a fully connected DNN (FCN) with 128 hidden units and only a single hidden layer (Fig. 3B). This significant reduction in complexity is due to the ablation of NMDA channels” (p. 8-10).\")\n\n\nThis 7-layer network requires a *lot* of FLOPs: roughly 2e10 FLOP/s per cell.[225](https://www.openphilanthropy.org/brain-computation-report#footnote225_nitsi6h \"Here's my estimate, which the lead author tells me looks about right. 1st layer: 1278 synaptic inputs × 35 × 128 = 5.7 million MACCs (from line 140 and lines 179-180 here); Next 6 layers: 6 layers × 128 × 35 × 128 = 3.4 million MACCs. Total per ms: ~ 10 million MACCs. Total per second: ~10 billion MACCs. Multiplied by 2 to count individual FLOPs (see “It’s dot products all the way down” here) = ~20 billion FLOP/s per cell. Though the authors also note that “the accuracy of the model was insensitive to the temporal kernel sizes of the different DNN layers when keeping the total temporal extent of the entire network fixed, so the temporal extent of the first layer was selected to be larger than subsequent layers mainly for visualization purposes” (p. 7). I’m not sure what kind of difference this might make. Note also that this is still less than the biophysical model itself, which they say ran several orders of magnitude slower: “Note that, despite its seemingly large size, the resulting TCN represents a substantial decrease in computational resources relative to a full simulation of a detailed biophysical model (involving numerical integration of thousands of nonlinear differential equations), as indicated by a speedup of simulation time by several orders of magnitude” (p. 8).\") Scaled up by 1e11 neurons, this would be **~2e21 FLOP/s overall**. 
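For concreteness, here is a minimal sketch of the arithmetic behind this estimate, using the figures reported in the paper and its accounting in the endnote (1278 synaptic inputs, seven temporally convolutional layers of 128 channels each, kernels spanning 35 one-millisecond time-steps, 2 FLOPs per multiply-accumulate); the ~1e11 neuron count and the choice to scale linearly across neurons are my simplifying assumptions, not results from the paper.

```python
# Rough reconstruction of the FLOP/s arithmetic for the 7-layer temporally
# convolutional network (TCN) in Beniaguev et al. (2020), following the
# accounting in the endnote. Order-of-magnitude sketch only.

SYNAPTIC_INPUTS = 1278       # inputs to the first layer
CHANNELS = 128               # channels per layer
KERNEL_STEPS = 35            # temporal extent of each layer's kernel (time-steps)
LATER_LAYERS = 6             # layers after the first
TIMESTEPS_PER_SECOND = 1000  # 1 ms resolution
FLOPS_PER_MACC = 2           # count each multiply-accumulate as 2 FLOPs
NEURONS = 1e11               # assumption: scale linearly across ~1e11 neurons

maccs_first = SYNAPTIC_INPUTS * KERNEL_STEPS * CHANNELS          # ~5.7e6 per ms
maccs_rest = LATER_LAYERS * CHANNELS * KERNEL_STEPS * CHANNELS   # ~3.4e6 per ms
flops_per_cell = (maccs_first + maccs_rest) * TIMESTEPS_PER_SECOND * FLOPS_PER_MACC

print(f"per cell:   ~{flops_per_cell:.1e} FLOP/s")            # ~2e10
print(f"brain-wide: ~{flops_per_cell * NEURONS:.1e} FLOP/s")  # ~2e21
```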
And these numbers could yet be too small: perhaps you need greater temporal/spatial resolution, greater prediction accuracy, a more complex biophysical model, etc., not to mention learning and other signaling mechanisms, in order to capture what matters.\n\n\nI think that this is an interesting example of positive evidence for very high FLOP/s estimates. But I don’t treat it as strong evidence on its own. This is partly out of general caution about updating on single studies (or even a few studies) I haven’t examined in depth, especially in a field as uncertain as neuroscience. But there are also a few more specific ways these numbers could be too high:\n\n\n* It may be possible to use a smaller network, given a more thorough search. Indeed, the authors suggest that this is likely, and have made data available to facilitate further efforts.[226](https://www.openphilanthropy.org/brain-computation-report#footnote226_5j8664a \"Beniaguev et al. (2020) (p. 15): It is important to emphasize that, due to optimization, the complexity measure described above is an upper bound of the true computational complexity of the I/O of a single neuron, i.e., it is possible that there exists a much smaller neural network that could mimic the biophysical neuron with a similar degree of accuracy but the training process we used could not find it. Additionally, we note that we have limited our architecture search space only to fully connected (FCN) and temporally convolutional (TCN) neural network architectures. It is likely that additional architectural search could yield even simpler and more compact models for any desired degree of prediction accuracy. In order to facilitate this search inthe [sic] scientific community, we hereby release our large readymade [sic] dataset of simulated inputs and outputs of a fully complex single layer 5 cortical neuron in an invivo [sic] like regime so that the community can focus on modelling various aspects of this endeavour and avoid running the simulations themselves.\")\n* They focus on predicting both membrane potential and individual spikes very precisely.\n* This is new (and thus far unpublished) work, and I’m not aware of other results of this kind.\n\n\nThe authors also suggest an interestingly concrete way to validate their hypothesis: namely, teach a cortical L5 pyramidal neuron to implement a function that this kind of 7-layer network can implement, such as classifying handwritten digits.[227](https://www.openphilanthropy.org/brain-computation-report#footnote227_i4s9tjl \"Beniaguev et al. (2020): “now that we estimate that a cortical L5 pyramidal neuron is equivalent to a deep network with 7 hidden layers, this DNN could be used to teach the respective neuron to implement a function which is in the scope of the capabilities of such a network, such as classifying hand written digits or a sequence of auditory sounds. One can then both validate the hypothesis that single neurons could perform complex computational tasks and investigate how these neurons can implement such complex tasks” (p. 16).\") If biological neurons can perform useful computational tasks thought to require very large neural networks to perform, this would indeed be very strong evidence for capacities exceeding what simple models countenance.[228](https://www.openphilanthropy.org/brain-computation-report#footnote228_eq9x5h6 \"Though see Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “Dr. 
Christiano is very skeptical of the hypothesis that a single, biological cortical neuron could be used to classify handwritten digits” (p. 6). \") That said, “X is needed to predict the behavior of Y” does not imply that “Y can do anything X can do” (consider, for example, a supercomputer and a hurricane).\n\n\nOverall, I think that dendritic computation is probably the largest source of uncertainty about the FLOP/s costs of firing decisions. I find the [Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf) results suggestive of possible lurking complexity; but I’m also moved somewhat by the relative simplicity of some common abstract models of dendritic computation, by the tree-structure argument above, and by experts who thought dendrites unlikely to imply a substantial increase in FLOP/s.\n\n\n\n#### 2.1.2.3 Crabs, locusts, and other considerations\n\n\nHere are some other considerations relevant to the FLOP/s costs of firing decisions.\n\n\n*Other experimentally accessible circuits*\n\n\nThe retina is not the only circuit where we have (a) some sense of what task it’s performing, and (b) relatively good experimental access. Here are two others I looked at that seem amenable to simplified modeling.\n\n\n* [A collection of ~30 neurons](http://www.scholarpedia.org/article/Stomatogastric_ganglion) in the [decapod](https://en.wikipedia.org/wiki/Decapoda) crustacean stomach create rhythmic firing patterns that control muscle movements. Plausibly, maintaining these rhythms is the circuit’s high-level task.[229](https://www.openphilanthropy.org/brain-computation-report#footnote229_gukt45d \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eve Marder: “You can see maintaining these rhythms as the high-level function that the circuit is performing at a given time (transitions between modes of operation are discussed below). Neuroscientists had a wiring diagram for the pyloric rhythm in 1980, and there was a fairly good first-principles idea of how it worked back then. It is not too difficult to model tri-phasic rhythm” (p. 1).\") Such rhythms can be modeled well using single-compartment, Hodgkin-Huxley-type neuron models.[230](https://www.openphilanthropy.org/brain-computation-report#footnote230_b1k70h2 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eve Marder: “Prof. Marder and her collaborators have used single-compartment conductance models to replicate the rhythms in the stomatogastric ganglion” (p. 4). And from Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “These neurons create oscillations that can be very well modeled and understood using Hodgkin-Huxley type neuron models” (p. 4).\") And naively, it seems to me like they could be re-implemented directly without using neuron models at all.[231](https://www.openphilanthropy.org/brain-computation-report#footnote231_6b3e6kd \"E.g., if what matters about these rhythms is that just that units activate in a certain regular, rhythmic sequence (I'm not sure about the details here, and the full range of dynamics that matter could be much more complicated), it seems possible to create this sort of sequence in a very non-brain-like way. 
That said, achieving the brain's level of robustness and flexibility in maintaining these rhythms across different circumstances is a different story.\") What’s more, very different biophysical parameters (for example, synapse strengths and intrinsic neuron properties) result in very similar overall network behavior, suggesting that replicating task-performance does not require replicating a single set of such parameters precisely.[232](https://www.openphilanthropy.org/brain-computation-report#footnote232_ototbnh \"Prinz et al. (2004): “To determine how tightly neuronal properties and synaptic strengths need to be tuned to produce a given network output, we simulated more than 20 million versions of a three-cell model of the pyloric network of the crustacean stomatogastric ganglion using different combinations of synapse strengths and neuron properties. We found that virtually indistinguishable network activity can arise from widely disparate sets of underlying mechanisms, suggesting that there could be considerable animal-to-animal variability in many of the parameters that control network activity, and that many different combinations of synaptic strengths and intrinsic membrane properties can be consistent with appropriate network performance” (p. 1345). See also Marder and Goaillard (2006) for review of other related findings, for example Figure 2, “Neurons with similar intrinsic properties have different ratios of conductances” (p. 566), Figure 4, “Similar network behavior with different underlying conductances” (p. 569) and Figure 6, “Constancy of network performance despite major size changes during growth” (p. 571). See also the non-verbatim notes from Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “There are important molecular mechanisms at work, but these function to make the circuit robust. For example, across crabs, gene expression levels in equivalent stomatogastric neurons vary a lot, but they are correlated within a given crab, suggesting that there are many different gene expression solutions that can create the same functioning network, and that the cell’s mechanisms are set up to make sure the neurons find such a solution. This system has many different possible states, which can be induced by different neuromodulators. But in any given one of those states, the real-time, fast computation is fairly understandable. Perhaps the whole brain is like that” (p. 4).\") That said, Prof. Eve Marder, an expert on this circuit, noted that the circuit’s biophysical mechanisms function in part to ensure smooth transitions between modes of operation – transitions that most computational models cannot capture.[233](https://www.openphilanthropy.org/brain-computation-report#footnote233_6ucbfmw \" From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eve Marder: “Biology has found a series of mechanisms that allow the system to transition smoothly between different modes of operation. For example, you can walk slowly or quickly. Although eventually you will change gait. Prof. Marder believes that such smooth transitions are centrally important to understanding brains, especially big brains. The mechanisms involved allow brains to avoid having to fine-tune or find singular solutions. However, most computational models don’t capture these transitions. For example, if you want to capture the behavior of an eight channel neuron with a three channel model, you’ll hit nasty bifurcations. 
Indeed, one hypothesis is that neurons have many ion channels with overlapping functions because this facilitates smooth transitions between states” (p. 2).\")\n* In a circuit involved in [locust collision avoidance](https://www.researchgate.net/profile/Haleh_Fotowat/publication/50362225_Collision_Detection_as_a_Model_for_Sensory-Motor_Integration/links/00b4953a9717f744e2000000.pdf), low-level biophysical dynamics in the dendrites and cell body of a task-relevant neuron are thought to implement high-level mathematical operations (logarithm, multiplication, addition) that a computational model could replicate directly.[234](https://www.openphilanthropy.org/brain-computation-report#footnote234_9hza8a8 \"Locusts jump out of the way when you show them a “looming stimulus” -- that is, a visual stimulus that grows in size in a manner that mimics an object on a collision course with the locust (see videos here and slower-motion here). In a particular locust neuron known as the lobula giant movement detector (LGMD), the firing rate of this neuron increases, peaks, and decreases as collision with the object appears to become imminent, and the peak firing rate occurs with a fixed delay after the object reaches a particular threshold angular size on the retina (See Fotowat and Gabbiani (2011) (p. 4)). Gabbiani et al. (2002) hypothesize that this angular size “might be the imaged-based retinal variable used to trigger escape responses in the face of an impending collision. Indeed, a leg flexion (presumably in preparation for an escape jump) has been shown to follow the peak LGMD firing rate with a fixed delay” (p. 320). The LGMD also synapses onto a further neuron -- the descending contralateral movement detector (DCMD) -- that connects to motor neurons responsible for jumping, and which itself fires every time the LGMD fires. The timing of take-off can be very well predicted from the peak firing rate of the DCMD (see Fotowat and Gabbiani (2011) (p. 12)). What’s more, examination of the physiology of the neuron supports a particular hypothesis about how its biological hardware implements this function. The dendritic tree of the LGMD can be divided into two portions -- an excitatory portion and an inhibitory portion. The excitatory portion receives input from the visual system roughly proportionate to the angular velocity (that is, the rate of change of the angular size) of the stimulus raised to the power of two to three, and then outputs positive current roughly proportionate to the logarithm of angular velocity. The inhibitory portion, by contrast, receives input roughly proportionate to the square of the angular size of the stimulus, and outputs negative current in an approximately linear relationship to the angular size of the stimulus (the relationship is actually best described by a sigmoid, but it is treated as linear in the overall model). These positive and negative currents then combine at the spike initiation zone in a manner that results in an overall membrane potential that reflects the sum of the positive and negative currents. 
The average spiking rate of the neuron is then proportionate to the membrane potential raised to the power three, which is roughly equivalent to an exponential at the relevant scales (see Jones and Gabbiani (2012), Figure 8, for a description of this hypothesis, together with Christof Koch’s discussion here).\")\n\n\nI expect that further examination of the literature would reveal other examples in this vein.[235](https://www.openphilanthropy.org/brain-computation-report#footnote235_8iz3kq7 \"See Fig 1 in Jadi et al. (2014) for some other examples of circuit models using point neuron models. They cite Raymond et al. (1996) for cerebellar circuit models; Raphael et al. (2010) for a model of the spinal cord; and Crick (1984) for a model of attention. Grid cells might be another example, and the Jeffress model of auditory coincidence detection. See also Open Philanthropy's non-verbatim notes from a conversation with Prof. Eve Marder: “There are also some circuits in leeches, C. elegans, flies, and electric fish that are relatively well-characterized” (p. 4).\")\n\n\n*Selection effects*\n\n\nNeuroscientific success stories might be subject to selection effects.[236](https://www.openphilanthropy.org/brain-computation-report#footnote236_f3atlod \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Larson: “There may be selection bias at work in appeals to the success of simple models in some contexts as evidence for their adequacy in general. With respect to phenomena that simple models have thus far failed to explain, such explanation might not be possible” (p. 4).\") For example, the inference “A, B, and C can be captured with simple models, therefore probably X, Y, and Z can too” is bad if the reason X, Y, and Z haven’t yet been so captured is that they can’t be.\n\n\nHowever, other explanations may also be available. For example, it seems plausible to me we’ve had more success in peripheral sensory/motor systems than deeper in the cortex because of differences in the ease with which task-relevant inputs and outputs can be identified, measured, and manipulated, rather than differences in the computation required to run adequate models of neurons in those areas.[237](https://www.openphilanthropy.org/brain-computation-report#footnote237_n3kinnk \"I’m partly influenced here by discussions with Dr. Adam Marblestone, see Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “Dr. Marblestone does not think that selection effects nullify the evidence provided by our understanding of peripheral sensory and motor systems. E.g., it’s not that we did experiments on a bunch of systems, and some of them we couldn’t figure out, and some of them we could. Rather, the distribution of neuroscientific success has more to do with our experimental access to peripheral sensory/motor systems, together with differences in the types of theories you would need to have in order to explain more architecturally-complex circuits deeper in the brain. Similarly, Dr. Marblestone does not think that the fact that we can’t simulate C. elegans is a good argument for any kind of special computation taking place within C. elegans neurons. Lots of other explanations are available: notably, that it’s very difficult to figure out the right parameters” (p. 8). See also the section in the notes from Open Philanthropy's non-verbatim notes from a conversation with Prof. Markus Meister entitled “Scientific advantages of peripheral systems” (p. 
2-3), as well as Open Philanthropy's non-verbatim notes from a conversation with Prof. Eve Marder (p. 4), section title: “The epistemic barriers to understanding circuits.”\") And FLOP/s requirements do not seem to be the major barrier to e.g. *C. elegans* simulation.[238](https://www.openphilanthropy.org/brain-computation-report#footnote238_t18s8xs \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Larson, who works on the OpenWorm project: “Despite its small size, we do not yet have a model that captures even 50% of the biological behavior of the C. elegans nervous system. This is partly because we’re just getting to the point of being able to measure what the worm’s nervous system is doing well enough” (p. 1). David Dalrymple, who used to work on emulating C. elegans, writes: “What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.” Sarma et al. (2018), in an overview of OpenWorm’s progress, write: “The level of detail that we have incorporated to date is inadequate for biological research. A key remaining component is to complete the curation and parameter extraction of Hodgkin–Huxley models for ion channels to produce realistic dynamics in neurons and muscles” (Section 3).\")\n\n\n*Evolutionary history*\n\n\nTwo experts (one physicist, one neuroscientist) mentioned the evolutionary history of neurons as a reason to think that they don’t implement extremely complex computations. The basic thought here seemed to be something like: (a) neurons early in evolutionary history seem likely to have been doing something very simple (e.g., basic stimulus-response behavior), (b) we should expect evolution to tweak and recombine these relatively simple components, rather than to add a lot of complex computation internal to the cells, and (c) indeed, neurons in the human brain don’t seem that different from neurons in very simple organisms.[239](https://www.openphilanthropy.org/brain-computation-report#footnote239_6xq37ft \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “Some neural circuits, like ones in the spinal cord, are very simple. And one can imagine primitive synapses, involved in primitive computations like “if you get some dopamine, move this part of the jellyfish like so.” Genetic programs build these machines on the basis of relatively simple specifications, and you have to be able to reliably repurpose these machines without every molecule mattering. Dr. Marblestone expects that evolution proceeded by reusing and recombining these relatively simple, reliable components” (p. 4-5). See also Open Philanthropy's non-verbatim notes from a conversation with Prof. Jared Kaplan: “It is theoretically possible that there is a large amount of additional computation taking place within neurons, but this seems very implausible, and Prof. Kaplan finds it difficult to evaluate arguments that condition on this possibility. One reason this seems implausible is that neurons aren’t that different across species, and it does not seem plausible to Prof. Kaplan that in simple species with very few neurons, large amounts of computation are taking place inside the neurons. 
One would need a story about when this complex internal computation developed in the evolutionary history of neurons” (p. 2-3).\") I haven’t looked into this, but it seems like an interesting angle.[240](https://www.openphilanthropy.org/brain-computation-report#footnote240_gwqlnq7 \"Though see also comments from Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “The brain was not engineered. Rather, it evolved, and evolution works by adding complexity, rather than by simplification. There are good reasons for this complexity. In order to evolve, you can’t have systems, at any level (proteins, channels, cells, brain regions), with unique functions. If you did, and a single mutation knocked out the function, the whole system would crash. Whereas if you have overlapping functions, performance suffers somewhat, but something else can take over. If you don’t allow for this, you can’t evolve, since evolution works by random mutations, and most mutations are not positive” (p. 4).\")\n\n\n*Communication bottlenecks*\n\n\nA number of experts mentioned limitations on the bits that a neuron receives as input and sends as output (limitations imposed by e.g. firing precision, the number of distinguishable synaptic states, etc.) as suggestive of a relatively simple input-output mapping.[241](https://www.openphilanthropy.org/brain-computation-report#footnote241_17adylm \"Dr. Dario Amodei suggests considerations in this vein, though I’m not sure I’ve understood what he has in mind. See also Open Philanthropy's non-verbatim notes from a conversation with Prof. Jared Kaplan: “most of his probability mass on the hypothesis that most of the computation performed by the brain is visible as information transferred between synapses… It is theoretically possible that there is a large amount of additional computation taking place within neurons, but this seems very implausible” (p. 2); and my discussions of the communication method with Dr. Paul Christiano, see Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano. That said, Amodei, Christiano, and Kaplan all work at the same organization (OpenAI), so their beliefs and arguments may be correlated due to internal discussion.\")\n\n\nI’m not sure exactly how this argument works (though I discuss one possibility in the communication method section). In theory, very large amounts of computation can be required to map a relatively small number of possible inputs (e.g., [the product of two primes](https://en.wikipedia.org/wiki/RSA_numbers#:~:text=Opteron%2Dbased%20computer.-,RSA%2D240,Emmanuel%20Thom%C3%A9%20and%20Paul%20Zimmermann.), [a boolean formula](https://en.wikipedia.org/wiki/Boolean_satisfiability_problem)) to a small number of possible outputs (e.g., the prime factors, a bit indicating whether the formula is satisfiable).[242](https://www.openphilanthropy.org/brain-computation-report#footnote242_czywoue \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “Neurons receive only a limited number of bits in, and they output only a limited number of bits. However, in principle, you can imagine computational elements receiving encodings of computationally intensive problems via their synaptic inputs (e.g., “is this boolean formula satisfiable?”), and then outputting one of a comparatively small set of difficult-to-arrive-at answers.” (p. 
6).\") For example, [RSA-240](https://en.wikipedia.org/wiki/RSA_numbers#:~:text=Opteron%2Dbased%20computer.-,RSA%2D240,Emmanuel%20Thom%C3%A9%20and%20Paul%20Zimmermann.) is ~800 bits (if we assume 1000-10,000 input synapses, each receiving 1 spike/s in 1 of 1000 bins, a neuron would be receiving ~10-100k bits/s),[243](https://www.openphilanthropy.org/brain-computation-report#footnote243_mhyh6e0 \"Here I’m using a rough estimation method suggested by Dr. Paul Christiano, from Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “You can roughly estimate the bandwidth of axon communication by dividing the firing rate by the temporal resolution of spiking. Thus, for example, if the temporal precision is 1 ms, and neurons are spiking at roughly 1 Hz, then each spike would communicate ~10 bits of information (e.g., log2(1000)). If you increase the temporal precision to every microsecond, that’s only a factor of two difference (e.g., log2(1,000,000) = ~20 bits)” (p. 2). There is a large literature on the information carried by action potentials that I’m not engaging with. See Dayan and Abbott (2001), Chapter 4 (p. 123-150); Zador (1998); Tsubo et al. (2012), Fuhrmann et al. (2001), Mainen and Sejnowski (1995), and van Steveninck et al. (1997).\") but it took ~900 [core years](https://www.computecanada.ca/research-portal/accessing-resources/glossary/#:~:text=Core%20year%3A%20The%20equivalent%20of,based%20on%20core%20year%20allocations.) on a 2.1 Ghz CPU to factor.[244](https://www.openphilanthropy.org/brain-computation-report#footnote244_3jt96bk \"See here, and more discussion of the difficulties here.\") And the bits that the human brain as a whole receives and outputs may also be quite limited relative to the complexity of its information-processing (Prof. Markus Meister suggested ~10-40 bits per second for various motor outputs).[245](https://www.openphilanthropy.org/brain-computation-report#footnote245_jsg0ogd \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Markus Meister: “Prof. Meister thinks that people often overestimate the sophistication of the tasks that humans perform, which tend to involve low-bandwidth outputs. People have measured the bits per second involved in different types of motor outputs (e.g., typing, playing piano, athletics, speaking speed, etc.), and the numbers are in the range of 10-40 bits per second. Similarly, people have tried to measure the information rate of human thought (for example, by seeing how much information humans can retain per second in reading), and it’s in the same ballpark” (p. 5).\")\n\n\nOf course, naively, neurons (indeed, brains) don’t seem to be factorizing integers. Indeed, in general, I think this may well be a good argument, and I welcome attempts to make it more explicit and quantified. Suppose, for example, that a neuron receives ~10-100k bits/s and outputs ~10 bits/s. 
What would this suggest about the FLOP/s required to reproduce the mapping, and why?\n\n\n*Ability to replicate known types of neuron behavior*\n\n\nAccording to [Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf), some neuron models, such as simple integrate-and-fire models, can’t replicate known types of neuron behaviors, some of which (like adaptations in spike frequency over time, and spike delays that depend on the strength of the inputs)[246](https://www.openphilanthropy.org/brain-computation-report#footnote246_drpfhd2 \"Izhikevich (2004): “The most common type of excitatory neuron in mammalian neocortex, namely the regular spiking (RS) cell, fires tonic spikes with decreasing frequency, as in Fig. 1(f). That is, the frequency is relatively high at the onset of stimulation, and then it adapts. Low-threshold spiking (LTS) inhibitory neurons also have this property. The interspike frequency of such cells may encode the time elapsed since the onset of the input” (p. 1064); “Most cortical neurons fire spikes with a delay that depends on the strength of the input signal. For a relatively weak but superthreshold input, the delay, also called spike latency, can be quite large, as in Fig. 1(i). The RS cells in mammalian cortex can have latencies of tens of ms. Such latencies provide a spike-timing mechanism to encode the strength of the input” (p. 1065).\") seem to me plausibly important to task-performance:[247](https://www.openphilanthropy.org/brain-computation-report#footnote247_wz798tg \"Izhikevich (2004): “The most efficient is the I&F model. However, the model cannot exhibit even the most fundamental properties of cortical spiking neurons, and for this reason it should be avoided by all means. The only advantage of the I&F model is that it is linear, and hence amenable to mathematical analysis. If no attempts to derive analytical results are made, then there is no excuse for using this model in simulations” (p. 1069). See also Jolivet et al. (2008b): “What follows from the results of challenge A displayed in Tables 1 and 2 is that standard leaky integrate-and-fire models or other off-the-shelf methods are not sufficient to account for the variety of firing patterns and firing rates generated by a single neuron. The conclusion is that one has to include some dynamics in the threshold so as to achieve two things: first, to account in some rough fashion for neuronal refractoriness, and, second, to gain some flexibility in matching the mean firing rates across different stimulation paradigms. We had already shown that predicting subthreshold membrane voltage is relatively easy (Jolivet et al. (2006a)). Predicting the exact timing of spikes is where the difficulty resides” (p. 425).\")\n\n\n \n\n\n\n[![model chart](https://www.openphilanthropy.org/files/Research/Brain_Compute/image7.png)](https://www.openphilanthropy.org/files/Research/Brain_Compute/image7.png)**Figure 9: Diagram of which behaviors different models can capture**. © 2004 IEEE. Reprinted, with permission, from Izhikevich, Eugene. “[Which model to use for cortical spiking neurons?](https://www.izhikevich.org/publications/whichmod.pdf)”. IEEE Transactions on Neural Networks, Vol. 15, No. 5, 2004. Original caption: “Comparison of the neuro-computational properties of spiking and bursting models; see Fig. 1. ‘#of FLOPS’ is an approximate number of floating point operations (addition, multiplication, etc.) needed to simulate the model during a 1 ms time span. 
Each empty square indicates the property that the model should exhibit in principle (in theory) if the parameters are chosen appropriately, but the author failed to find the parameters within a reasonable period of time.”\n\n\nNote, though, that Izhikevich suggests that his own model can capture these behaviors, for 13 FLOPs per ms.\n\n\n*Simplifying the Hodgkin-Huxley model*\n\n\nSome experts argue that the Hodgkin-Huxley model can be simplified:\n\n\n* Prof. Dong Song noted that the functional impacts of its ion channel dynamics are highly redundant, suggesting that you can replicate the same behavior with fewer equations.[248](https://www.openphilanthropy.org/brain-computation-report#footnote248_4yfu91t \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Dong Song: “The functional impact of ion channel dynamics in the context of a Hodgkin-Huxley model is highly redundant. This makes Prof. Song think that Hodgkin-Huxley models can be simplified -- e.g. you can replicate the input-output behavior of the Hodgkin-Huxley model, with fewer equations. Indeed, this almost has to be the case. There are also studies that show that many different combinations of ionic channels can generate the same overall behavior, both for a single neuron and a small neuronal circuit” (p. 2).\")\n* [Izhikevich (2003)](https://www.izhikevich.org/publications/spikes.pdf) claims that “[His simplified neuron model] consists of only two equations and has only one nonlinear term, i.e., *v2*. Yet … the difference between it and a whole class of biophysically detailed and accurate Hodgkin–Huxley-type models, including those consisting of enormous number of equations and taking into account all possible information about ionic currents, is just a matter of coordinate change.”[249](https://www.openphilanthropy.org/brain-computation-report#footnote249_525djbd \"He cites Hoppensteadt and Izhikevich (2001), in which he goes into more detail: “Briefly, a model is canonical for a family if there is a continuous change of variables that transforms any other model from the family into this one, as we illustrate in Figure 1. For example, the entire family of weakly coupled oscillators of the form (1) can be converted into the canonical phase model (6), where Hij depend on the particulars of the functions fi and gij. The change of variables does not have to [be] invertible, so the canonical model is usually lower-dimensional, simple, and tractable. Yet, it retains many important features of the family. For example, if the canonical model has multiple attractors, then each member of the family has multiple attractors..” (p. 1). \")\n\n\n*ANNs and interchangeable non-linearities*\n\n\nArtificial neural networks (ANNs) have led to breakthroughs in AI, and we know they can perform very complex tasks.[250](https://www.openphilanthropy.org/brain-computation-report#footnote250_zhg31x8 \"Here is a summary of recent AI progress from Hassabis et al. (2017): “In AI, the pace of recent research has been remarkable. Artificial systems now match human performance in challenging object recognition tasks (Krizhevsky et al. (2012)) and outperform expert humans in dynamic, adversarial environments such as Atari video games (Mnih et al. (2015)), the ancient board game of Go (Silver et al. (2016)), and imperfect information games such as heads-up poker (Moravčík et al. (2017)). 
Machines can autonomously generate synthetic natural images and simulations of human speech that are almost indistinguishable from their real-world counterparts (Lake et al. (2015), van den Oord et al. (2016)), translate between multiple languages (Wu et al. (2016)), and create “neural art” in the style of well-known painters (Gatys et al. (2015))” (p. 250). See also LeCun et al. (2015) for a review of deep learning progress. Other recent advances include OpenAI et al. (2019), Vinyals et al. (2019), Radford et al. (2019), Brown et al. (2020).\") Yet the individual neuron-like units are very simple: they sum weighted inputs, and their “firing decisions” are simple non-linear operations, like a [ReLU](https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/).[251](https://www.openphilanthropy.org/brain-computation-report#footnote251_82ztpws \"See Kriegeskorte (2015) and Nielsen’s “Neural Networks and Deep Learning” for general introductions.\")\n\n\nThe success of ANNs is quite compatible with the biological neurons doing something very different. And comparisons between brains and exciting computational paradigms can be over-eager.[252](https://www.openphilanthropy.org/brain-computation-report#footnote252_mfzf562 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas: “Prof. Jonas does not think that there is a clear meaning to the claim that the brain is a deep learning system, and he is unconvinced by the argument that ‘the brain is doing optimization, and what is deep learning but optimization?’. He also has a long-term prior that researchers are too quick to believe that the brain is doing whatever is currently popular in machine learning, and he doesn’t think we’ve found the right paradigm yet” (p. 3).\") Still, knowing that ANN-like units are useful computational building-blocks makes salient the possibility that biological neurons are useful for similar reasons. Alternative models, including ones that incorporate biophysical complications that ANNs ignore, cannot boast similar practical success.\n\n\nWhat’s more, the non-linear operations used in artificial neurons are, at least to some extent, interchangeable.[253](https://www.openphilanthropy.org/brain-computation-report#footnote253_4ukxme1 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador: “In the early days of neural networks, people thought you needed sigmoid activation functions, and that piecewise linear models could not work because they are not differentiable. But it turns out that computers can handle the function having one non-differentiable point, so the two are largely interchangeable, and it’s fine to go with the more convenient option. The main constraint is that the function needs to be monotonically increasing. This is an example of a case in which the precise function generating a neuron’s output does not matter” (p. 2). See also Kriegeskorte (2015): “The particular shape of the nonlinear activation function does not matter to the class of input–output mappings that can be represented” (p. 422); and Tegmark (2017): “It’s been proven that almost any function will suffice as long as it’s not linear (a straight line)” (p. 72, endnote 5).\") That is, instead of a ReLU, you can use e.g., a sigmoid (though different operations have different pros and cons). 
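As a minimal illustration of this interchangeability, the sketch below implements an ANN-style unit that sums weighted inputs and applies whichever non-linearity it is handed; swapping a ReLU for a sigmoid changes only that final function, not the structure of the computation. The inputs, weights, and bias here are arbitrary placeholders.

```python
import numpy as np

# A minimal ANN-style "point neuron": sum weighted inputs, add a bias, then
# apply some non-linearity. Swapping the non-linearity (ReLU vs. sigmoid)
# leaves the rest of the computation unchanged. Values are arbitrary.

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def point_neuron(inputs, weights, bias, nonlinearity):
    return nonlinearity(np.dot(weights, inputs) + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=1000)           # stand-in for 1,000 synaptic inputs
w = rng.normal(size=1000) * 0.03    # stand-in synaptic weights
for f in (relu, sigmoid):
    print(f.__name__, point_neuron(x, w, bias=0.1, nonlinearity=f))
```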
If we pursue the analogy with firing decisions, this interchangeability might suggest that the detailed dynamics that give rise to spiking are less important than the basic function of passing synaptic inputs through some non-linearity or other.\n\n\nOn a [recent podcast](https://www.youtube.com/watch?v=3t06ajvBtl0), Dr. Matthew Botvinick also mentions a chain of results going back to the 1980s showing that the activity in the units of task-trained deep learning systems bears strong resemblance to the activity of neurons deep in the brain. I discuss a few recent visual cortex results in this vein in [Section 3.2](#section_3.2), and note a few other recent results in [Section 3.3](#section_3.3).[254](https://www.openphilanthropy.org/brain-computation-report#footnote254_xg5lnbd \"See Matthew Botvinick’s comments in this podcast: “I consider the networks we use in deep learning research to be a reasonable approximation to the mechanisms that carry information in the brain…If you go back to the 1980s, there’s an unbroken chain of research in which a particular strategy is taken, which is: hey, let’s train a deep learning system, let’s train a multi-layer neural network, on this task that we trained our rat on, or our monkey on, or this human being on, and let’s look at what the units deep in the system are doing, and let’s ask whether what they’re doing resembles what we know about what neurons deep in the brain are doing; and over and over and over and over and over, that strategy works, in the sense that, the learning algorithms that we have access to, which typically center on backpropagation, they give rise to patterns of activity, patterns of response, patterns of neuronal behavior in these artificial models, that look hauntingly similar to what you see in the brain. Is that a coincidence? … the circumstantial evidence is overwhelming” (see (53:00-1:00:00 here).\") Insofar as a much broader set of results in this vein is available, that seems like relevant evidence as well.\n\n\n*Intuitive usefulness*\n\n\nOne of our technical advisors, Dr. Paul Christiano, noted that from a computer science perspective, the Hodgkin-Huxley model just doesn’t look very useful. That is, it’s difficult to describe any function for which (a) this model is a useful computational building block, and (b) its usefulness arises from some property it has that simpler computational building blocks don’t.[255](https://www.openphilanthropy.org/brain-computation-report#footnote255_eybco2x \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano, (p. 6).\") Perhaps something similar could be said of even more detailed biophysical models.\n\n\nNote, though, advocates of large compute burdens need not argue that actual biophysical models themselves are strictly necessary; rather, they need only argue for the overall complexity of a neuron’s input-output transformation.\n\n\n*Noise bounds*\n\n\nVarious experts suggest that noise in the brain may provide an upper bound on the compute required to do what it does.[256](https://www.openphilanthropy.org/brain-computation-report#footnote256_5mfhs6m \"Sandberg (2013): “The noise level in the nervous system is fairly high, with spike-timing variability reaching milliseconds due to ion channel noise. Perceptual thresholds and motor precision are noise limited. Various noise management solutions such as redundant codes, averaging and bias have evolved (Faisal et al. (2008)). 
In synapses the presynaptic transient influx of calcium ions as a response to an action potential corresponds to just 13,000 ions (Koch (1999)) (p. 458), and on the postsynaptic side just 250 ions (Koch (1999))(p. 302). These numbers are so small that numeric noise begins to be significant, and the chemical dynamics can no longer be described as average concentrations. However, biological systems can resist the discretization noise through error correction mechanisms that lead to discrete attractor dynamics, in line with the evidence that synaptic plasticity involve discrete changes rather than graded response (Ajay and Bhalla (2006) Bhalla (2004) and Elliott (2011)). It is hence not implausible that there exist sufficient scale separation on the synaptic and neuronal level: information is transmitted in a discrete code (with a possible exception of timing) between discrete entities. At finer resolution thermal and chemical noise will be significant, suggesting that evolution would have promoted error correction and hence scale separation” (p. 261). See also Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording: “If you want upper bounds on required compute, you can look at the parts list of the computing elements in the brain, the noisiness of which will put physical limits on the amount of computation they can do. This might result in very high estimates. For example, it might say that every ion channel does a bit roughly every ten milliseconds. This approach doesn’t necessarily rule out molecules and proteins as possible avenues of computation. However, some molecules may equilibrate so fast that you can replace them with a variable that describes their average state (e.g., mean field theory is applicable). You can’t do this across a neuron: there are NMDA spikes and other complexities. So the question is: what is the compartment size where local averaging is possible? People disagree. Some think the brain has organized as itself to be mean-field modelable, but they have never shown much evidence for that. Still, at some length-scale (say, ten micrometers) and some time-scale (much faster than electrophysiology), everything will equilibrate” (p. 4).\") However, I’m not sure how to identify this bound, and haven’t tried.\n\n\n\n#### 2.1.2.4 Expert opinion and practice\n\n\nThere is no consensus in neuroscience about what models suffice to capture task-relevant neuron behavior.[257](https://www.openphilanthropy.org/brain-computation-report#footnote257_n1eglo4 \"Gerstner and Naud (2009): “Opinions strongly diverge on what constitutes a good model of a neuron” (p. 379). Herz et al. (2006): “Even today, it remains unclear which level of single-cell modeling is appropriate to understand the dynamics and computations carried out by such large systems (p. 83-4). Kriegeskorte (2015): “Opinions diverge as to whether more biologically detailed models will ultimately be needed” (see section: “What is meant by the term neural network?”). Gabriel Kreiman, in this talk (8:00): “What’s the exact resolution at which we should study neural systems is a fundamental open question, we don’t know what’s the right level of abstraction. There are people who think about brains in the context of blood flow and millions and millions of neurons averaged together. There are people who think we need to actually pay attention to the exact details of how every single dendrite integrates information and so on. 
For many of us this is a sufficient level of abstraction, the notion that there is a neuron that can integrate information.” Dayan and Abbott (2001): “It is often difficult to identify the appropriate level of modeling for a particular problem” (p. xiii). See also the non-verbatim notes from Open Philanthropy's non-verbatim notes from a conversation with Prof. E.J. Chichilnisky: “Discussion of the compute sufficient to replicate the brain’s information-processing is very speculative. We don’t know enough about the brain to give answers with confidence, and different people with neuroscientific expertise will answer differently” (p. 1); from Open Philanthropy's non-verbatim notes from a conversation with Prof. Barak Pearlmutter: “Mr. Carlsmith asked Prof. Pearlmutter about his views about the level of modeling detail necessary to create brain models that can replicate task performance. Prof. Pearlmutter suggested that “the truth is: we don’t know,” and that while we may have intuitions, science has shown us that intuitions are not very reliable” (p. 1). See also Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording, Prof. Eric Jonas, and Prof. Erik De Schutter.\")\n\n\nA number of experts indicated that in practice, the field’s emphasis is currently on comparatively simple models, rather than on detailed modeling.[258](https://www.openphilanthropy.org/brain-computation-report#footnote258_yttl1ox \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “Modeling neural networks at the level of simple spiking neuron models or rate-based models is very popular. Prof. De Schutter thinks the field would benefit from a greater diversity of approaches” (p. 2); from Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann “The field has basically given up on detailed biophysical modeling. In the 1990s, there were many papers in top journals on the topic, but now there are almost none. Prof. Druckmann expects that the large majority of people who do not work in early sensory systems would say that detailed biophysical modeling is unnecessary for understanding the brain’s computation” (p. 7).\") But this evidence is indirect. After all, the central question a neuroscientist needs to ask is not (a) “what model is sufficient, in principle, to replicate task-relevant behavior?”, but rather (b) “what model will best serve the type of neuroscientific understanding I am aiming to advance, given my constraints?”.\n\n\nIndeed, much discussion of model complexity is practical: it is often said that biophysical models are difficult to compute, fit to data, and understand; that simpler models, while better on these fronts, come at the cost of biological realism; and that the model you need depends on the problem at hand.[259](https://www.openphilanthropy.org/brain-computation-report#footnote259_ep1jd3l \"Herz et al. (2006): “The appropriate level of description depends on the particular goal of the model. Indeed, finding the best abstraction level is often the key to success” (p. 80). Pozzorini et al. (2015): “Detailed biophysical models with stochastic ion channel dynamics can in principle account for every aspect of single-neuron activity; however, due to their complexity, they require high computational power… Overall, a reliable and efficient fitting procedure for detailed biophysical models is not known” (p. 2). 
Izhikevich (2004): “The [Hodgkin-Huxley] model is extremely expensive to implement… one can use the Hodgkin–Huxley formalism only to simulate a small number of neurons or when simulation time is not an issue” (p. 1069). Dayan and Abbott (2001): “A frequent mistake is to assume that a more detailed model is necessarily superior. Because models act as bridges between levels of understanding, they must be detailed enough to make contact with the lower level yet simple enough to provide clear results at the higher level” (p. xiii). Beniaguev et al. (2019): “Simulation of compartmental models entails numerically solving thousands of coupled nonlinear differential equations which is computationally intensive (Segev and Rall (1998); Burke (2000)). Moreover, while the simulation provides good fit to data, it is not optimized for providing conceptual understanding of the process by which it is achieved” (p. 14). Kobayashi et al. (2009): “It has recently become possible to use elaborate simulation platforms, such as NEURON (Hines and Carnevale (1997)) and GENESIS (Bower and Beeman (1995)), for reproducing experimental data. Because of nonlinearity and complexity, however, parameter optimization of the HH type models is a notoriously difficult problem (Achard and De Schutter (2006); Goldman et al. (2001); Huys et al. (2006)), and these models require a high computational cost, which hinders performing the simulation of a massively interconnected network” (p. 1).\") Thus, answers to (a) and (b) can come apart: you can think that ultimately, we’ll need complex models, but that simpler ones are more useful given present constraints; or that ultimately, simplifications are possible, but detailed modeling is required to identify them.[260](https://www.openphilanthropy.org/brain-computation-report#footnote260_rs9m7et \" From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “The best way forward is to try to explore and understand the function of the brain’s underlying mechanisms -- a project that may eventually lead to an understanding of what can be simplified. But to try to simplify things too early, before you understand them, is a dangerous game” (p. 1); from Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Larson: “OpenWorm’s approach is to throw as much complexity into the neuron models as they think is necessary (this is currently roughly at the level of a Hodgkin-Huxley model, plus some additional features), in an effort to really nail down that their model is capturing the worm’s behavior across many conditions and timescales. Success in such a project would allow you to bound the complexity necessary for such a simulation (indeed, this is one of Dr. Larson’s motivations for working on it). After that, you could attempt to simplify the model in a principled way. However, the jury is still out on how much simplification is available, and Dr. Larson thinks that in this kind of uncertain context, you should focus on the worst-case, most conservative compute estimates as your default” (p. 2).\")\n\n\nStill, some experts answer (a) explicitly. In particular:\n\n\n* A number of experts I spoke to expected comparatively simple models (e.g., simpler than Hodgkin-Huxley) to be adequate.[261](https://www.openphilanthropy.org/brain-computation-report#footnote261_p085zba \" From Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador (p. 1-2): Prof. 
Zador believes that integrate-and-fire neuron models, or something like them, are adequate to capture the contribution of a neuron to the brain’s information-processing. He does not think that Hodgkin-Huxley-type models are required, or that we need to include the details of synaptic conductances in our models. However, he believes that the temporal dynamics of spiking are important. That is, it matters that there are discrete spikes, occurring at particular moments in time, which are the conduit of information between neurons...That said, he does not think that the nuances of how these spikes are generated matter very much. The integrate and fire model is one mathematically tractable model, but there are others which, if more mathematically tractable, would be fine as well. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Dong Song (p. 1-2): In his view, to replicate intelligence at a level similar to humans (as opposed to some more detailed level of simulation accuracy), you don’t need to model quantum phenomena, or ionic channels, or even Hodgkin-Huxley-level dynamics. Rather, a spiking neuron model, with a rich array of input-output behavior, is sufficient. That said, certain simplified spiking neuron models are probably not sufficient. These included linear integrate-and-fire neurons, the Izhikevich model (a simplified version of the Hodgkin-Huxley model), and the models used in Prof. Song’s MIMO model. Prof. Chris Eliasmith, whose large-scale brain model SPAUN uses leaky-integrate-and-fire neurons (see p. 16 here), thought such neuron models likely adequate for task-performance (see Open Philanthropy's non-verbatim notes from a conversation with Prof. Chris Eliasmith (p. 5)): Prof. Eliasmith thinks that neuron models at roughly the level of detail he uses in SPAUN (possibly including some non-linearities in the dendrites), if scaled up to the size of the brain as a whole, would be able not just to replicate cognitive performance, but also to reflect a functional profile similar to biological neurons. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Markus Meister (p.1-4): The computations performed in the retina are fairly well-understood… If your goal is to predict the spiking outputs of the retina, you don’t need a highly intricate model (for example, you don’t have to simulate the details of every neuron using multi-compartmental models). Rather, you can use very compact models known as “point neuron models,” which you can connect together with simple synapses.… To create a functional model of the whole retina, in the extreme case you’d need a point-neuron model for every cell. However, you can probably get away with less than that, because there are a lot of regularities that can be simplified computationally.… Prof. Meister would be sympathetic to scaling up from the retina as a way of putting an upper limit on the difficulty of simulating the brain as a whole. Prof. Meister has not actually done this back-of-the-envelope calculation, but budgeting based on the rate at which action potentials arrive at synapses, multiplied by the number of synapses, seems like roughly the right approach. … There is evidence that single point neuron models are not sufficient to explain all neural phenomena. For example, in cortical pyramidal cells, the basal dendrites and soma operate with different dynamics than the apical tuft. Using two point-neuron models (one for the soma, and another for the apical tuft), you can capture this fairly well. 
These are more powerful models, but they are not dramatically more computationally complex: e.g., it’s basically a factor of two. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Baccus (p. 5): To build a functional computational model of the retina as a whole, you could use a linear filter and a threshold as a model unit, and you could have something like one model unit per cell in the retina. However, in some of Prof. Baccus’s models, they have less than this. Whether you’d need e.g. one model unit for every interneuron, or one for every two or three interneurons, isn’t clear, but it’s around that order of magnitude. Prof. Baccus does not think simulating more complex aspects of neuron biology, like dendrites, compartments and ion channels, would be necessary for replicating the retina’s input-output relationship…Prof. Baccus thinks the answer is “maybe” to the question of whether the compute necessary to model neurons in the retina will be similar to the compute necessary to model neurons in the cortex. You might expect a volume by volume comparison to work as a method of scaling up from the retina to the cortex. Dr. Adam Marblestone offered an estimate that seemed to assume that firing decisions would be in the noise. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone (p. 9): Dr. Marblestone is fairly comfortable with one FLOP per spike through synapse as a low-end estimate, and ~100 FLOPs per spike through synapse (roughly comparable to the estimate offered by Prof. Rahul Sarpeshkar) as a high-end estimate. His best guess is 10-100 FLOPs per spike through synapse. Prof. Barak Pearlmutter said something similar, and he was sympathetic to the idea that dendritic computation would add only a small constant factor. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Barak Pearlmutter (p. 2-4): Prof. Pearlmutter thought that the compute for firing decisions would be “in the noise” relative to compute for spikes through synapses, because there are so many fewer neurons than synapses… Prof. Pearlmutter thought it a fairly good intuition that dendritic computation would only implicate a small constant factor increase in required compute, though very complicated local interactions could introduce uncertainty… Overall, Prof. Pearlmutter thought that an estimate based on 100 FLOPs per spike through synapse, with a factor of two for learning, sounded fairly reasonable.\") I expect many computational neuroscientists who have formed opinions on the topic (as opposed to remaining agnostic) to share this view.[262](https://www.openphilanthropy.org/brain-computation-report#footnote262_bq51aob \"A number of experts we engaged with indicated that many in the field are sympathetic to the adequacy of models less compute-intensive than single-compartment Hodgkin-Huxley (though we have very few comments in this respect publicly documented), and it fits with my impressions more broadly. See also Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann “The field has basically given up on detailed biophysical modeling. In the 1990s, there were many papers in top journals on the topic, but now there are almost none. Prof. Druckmann expects that the large majority of people who do not work in early sensory systems would say that detailed biophysical modeling is unnecessary for understanding the brain’s computation” (p. 
7) (though whether Hodgkin-Huxley would fall under \\\"detailed\\\" biophysical modeling isn't totally clear to me).\")\n* Various experts suggest that some more detailed biophysical models are adequate.[263](https://www.openphilanthropy.org/brain-computation-report#footnote263_14y6joe \"Jonathan Pillow says in a lecture: “Obviously if I simulate the entire brain using multi-compartment Hodkin-Huxley models that describe the opening and closing of every channel, clearly that model has the capacity to do anything that the brain can do” (16:10). Pozzorini et al. (2015) write: “Detailed biophysical models with stochastic ion channel dynamics can in principle account for every aspect of single-neuron activity” (p. 2). Beniaguev et al. (2019): “Thanks to the introduction of compartmental models (Rall (1964)) and digital anatomical reconstructions, we can now account for nearly all those experimental phenomena, as well as explore conditions that are not accessible with current experimental technique. In that sense we have developed along the last 50 or so years a faithful model of the input-output transformation of neurons” (p. 14).\")\n* In an informal poll of participants at a 2007 workshop on Whole Brain Emulation, the consensus appeared to be that a level of detail somewhere between a “spiking neural network” and the “metabolome” would be adequate (strong selection effects likely influenced who was present).[264](https://www.openphilanthropy.org/brain-computation-report#footnote264_mgq61oi \"Workshop participants included: John Fiala, Robin Hanson, Kenneth Jeffrey Hayworth, Todd Huffman, Eugene Leitl, Bruce McCormick, Ralph Merkle, Toby Ord, Peter Passaro, Nick Shackel, Randall A. Koene, Robert A. Freitas Jr and Rebecca Roache. From a brief google, a number of these people appear to be involved in the Brain Preservation Foundation, and some (such as Toby Ord and Rebecca Roache) are philosophers rather than neuroscientists. Sandberg and Bostrom (2008): “An informal poll among workshop attendees produced a range of estimates of the required resolution for WBE is. The consensus appeared to be level 4‐6. Two participants were more optimistic about high level models, while two suggested that elements on level 8‐9 may be necessary at least initially (but that the bulk of mature emulation, once the basics were understood, could occur on level 4‐5).” (p 14).\")\n\n\nA number of other experts I spoke with expressed more uncertainty, agnosticism, and sympathy towards higher end estimates.[265](https://www.openphilanthropy.org/brain-computation-report#footnote265_xs4cr17 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Larson (p. 5): On the basis of his experience at OpenWorm thus far, Dr. Larson thinks it unlikely that very simplified neuron models (e.g., integrate-and-fire neurons, or models akin to the artificial neurons used in deep neural networks) are going to be sufficient to describe the information-processing dynamics involved in the worm’s behavior…. Dr. Larson does not think that there is strong evidence that spikes and synaptic inputs are the most informative processes for studying information-processing in the brain… Given the many uncertainties involved in estimates of this kind, Dr. Larson believes that the right conclusion is something like: there is insufficient evidence to justify concluding anything (as opposed to, e.g., “there is some moderate evidence in favor of X FLOP/s, so maybe let’s believe that?”). 
In statistics, for example, one wants a P value less than 0.05, and Dr. Larson is not sure we have anything like that for these FLOP/s estimates. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: Prof. De Schutter thinks that at this point, we simply are not in a position to place any limits on the level of biological detail that might be relevant to replicating the brain’s task-performance. Many common simplifications do not have solid scientific foundations, and are more at the level of ‘the way we do things.’ From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas (p. 6): many electrophysiologists would say that we don’t know what neurons are doing. And they would ask: how can we start making claims about the computational capacity of networks of neurons, if we don’t know how individual neurons work? Prof. Jonas is sympathetic to this. There are a variety of complexities that make the computations performed by a neuron extremely difficult to quantify. Examples include: dendritic spiking, the complex dynamics present in synapses (including large numbers of non-linearities), the diversity of ion-channel receptors, post-translational modification, alternative splicing, and various receptor trafficking regimes. Some people attempt to draw comparisons between neurons and transistors. However, even with a billion transistors, Prof. Jonas does not know how to create a reasonable simulation of a neuron. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording (p. 4): Examination of neurons reveals that they are actually very non-linear, and the computations involved in plasticity probably include a large number of factors distributed across the cell. In this sense, a neuron might be equivalent to a three-layer neural network, internally trained using backpropagation. In that case, you’d need to add another factor of roughly 105 to your compute estimate, for a total of 1020 multiplications per second. This would be much less manageable… The difference between the estimates generated by these different approaches is very large -- something like ten orders of magnitude. It’s unclear where the brain is on that spectrum … Prof. Kording’s hunch is that in order to replicate firing decisions in neurons, you’d need to break the neuron into pieces of something like ten microns (this would hundreds, maybe thousands of compartments per neuron). This hunch is grounded in a belief that neurons are very non-linear. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann (p. 3): We can distinguish between two approaches to the brain’s biophysical complexity. One camp argues: 'let’s not assume we need to include a given type of biophysical complexity in our models, until doing so becomes necessary.' The other argues: 'If this complexity were in fact important, we would not currently be able to tell.' Prof. Druckmann tends to be in this latter camp, though he thinks that the former is a fair and practical approach. Though note that: Prof. Druckmann would be extremely surprised if future working models of human intelligence incorporate large amounts of biophysical detail (e.g., molecular dynamics). He is confident that the type of non-linearities generated by real biophysics can be more efficiently emulated in different ways in a model. 
Therefore, these models will look more like giant networks of simple artificial neurons than giant networks of Hodgkin-Huxley models.\") And many (regardless of specific opinion) suggested that views about this topic (including, sometimes, their own) can emerge in part from gut feeling, a desire for one’s own research to be important/tractable, and/or from the tradition and assumptions one was trained in.[266](https://www.openphilanthropy.org/brain-computation-report#footnote266_h38ii39 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: Many common simplifications do not have solid scientific foundations, and are more at the level of ‘the way we do things.’ From Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording (p. 5): In general, people are often willing to take a philosophical position, without much evidence, if it makes their research more important. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador (p. 5): Prof. Zador’s views about the relative importance of different neural mechanisms are shaped centrally by gut feeling and scientific aesthetic. Neuroscientists have debated this issue for decades, and ultimately the proof is in the pudding. Prof. Zador expects that a lot of neuroscientists would say that just we don’t know what amount of compute would be required to match human-level task performance. There is also a wide diversity of views in the field, and many people’s views are centrally shaped by their research background. For example, people with backgrounds in biology are generally more excited about incorporating biological detail; people who study humans tend to focus on the importance of learning; and people who study small animals will like C. elegans or fruit flies focus less on learning and more on innate behaviors. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Dong Song (p. 2): It would be hard for Prof. Song to prove his view. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Barak Pearlmutter (p. 1): Prof. Pearlmutter suggested that ‘the truth is: we don’t know,’ and that while we may have intuitions, science has shown us that intuitions are not very reliable. From Open Philanthropy's non-verbatim notes from a conversation with Prof. E.J. Chichilnisky (p. 2): no one has been able to prove one way or another whether detailed biophysical modeling is necessary. It’s hard to know, and there isn’t a lot of evidence. There are high-quality experimental and computational efforts underway to understand this...People’s views about the right level of biophysical detail to focus on are sometimes shaped by what they’re good at (e.g., computational simplifications, vs. detailed biophysical analysis). And some people find just biophysical complexity intrinsically interesting. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Shaul Druckmann (p. 6): Prof. Druckmann believes that at our current conceptual understanding of neural computation, many statements in neuroscience to the effect that “we can reduce X to Y” are based mostly on personal opinion, sometimes influenced in part by what current technology allows us to do, rather than in well-justified, first-principles reasoning.\")\n\n\n \n\n\n#### 2.1.2.5 Overall FLOP/s for firing decisions\n\n\nWhere does this leave us in terms of overall FLOP/s for firing decisions? 
Here’s a chart with some examples of possible levels of complexity, scaled up to the brain as a whole:\n\n\n**Figure 10: FLOP/s budgets for different models of neuron firing decisions**\n\n| ANCHOR | FLOPS | SIZE OF TIMESTEP | FLOP/S FOR 1E11 NEURONS |\n| --- | --- | --- | --- |\n| ReLU | 1 FLOP per operation[267](https://www.openphilanthropy.org/brain-computation-report#footnote267_jgaaay6 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano:\\\"A ReLU costs less than a FLOP. Indeed, it can be performed with many fewer transistors than a multiply of equivalent precision\\\" (p. 6).\") | 10 ms[268](https://www.openphilanthropy.org/brain-computation-report#footnote268_jcaxjqs \"This number is just a ballpark for lower temporal resolutions. For example, it’s the resolution used by Maheswaranathan et al. (2019).\") | 1e13 |\n| Izhikevich spiking neuron model | 13 FLOPs per ms[269](https://www.openphilanthropy.org/brain-computation-report#footnote269_062xotz \"Izhikevich (2004), (p. 1068).\") | 1 ms[270](https://www.openphilanthropy.org/brain-computation-report#footnote270_8kwfcsn \"Izhikevich (2004) seems to be assuming at least 1000 time-steps per second: “It takes only 13 floating point operations to simulate 1 ms of the model, so it is quite efficient in large-scale simulations of cortical networks. When and (a,b,c,d) = (0.2, 2, -56, -16) and I = -99, the model has chaotic spiking activity, though the integration time step [here Izhikevich uses a symbol that google doc endnotes can’t reproduce] should be small to achieve adequate numerical precision” (p. 1068).\") | ~1e15 |\n| Single-compartment Hodgkin-Huxley model | 120 FLOPs per .1 ms[271](https://www.openphilanthropy.org/brain-computation-report#footnote271_llmwrmw \"Izhikevich (2004), (p. 1069).\") | .1 ms[272](https://www.openphilanthropy.org/brain-computation-report#footnote272_sbr4erq \"The FLOPs estimate for the Hodgkin-Huxley model given in Izhikevich (2004) appears to assume at least 10,000 timesteps/sec: “It takes 120 floating point operations to evaluate 0.1 ms of model time (assuming that each exponent takes only ten operations), hence, 1200 operations/1 ms” (p. 1069). I'm not entirely confident that the \\\".1 ms of model time\\\" Izhikevich is referring to corresponds with a .1 ms time-step, but this fits with his characterization of the model as consisting of tens of parameters and requiring at least 10 FLOPs for each exponent. And regardless, it seems unlikely that he has time-steps larger than .1 ms in mind, given that he budgets based on .1 ms increments.\") | ~1e17 |\n| [Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf) DNN | 1e7 FLOPs per ms[273](https://www.openphilanthropy.org/brain-computation-report#footnote273_n90n51a \"Here's my estimate, which the lead author of the paper tells me looks about right. 1st layer: 1278 synaptic inputs × 35 × 128 = 5.7 million MACCs (from line 140 and lines 179-180 here); Next 6 layers: 6 layers × 128 × 35 × 128 = 3.4 million MACCs. Total per ms: ~ 10 million MACCs. Total per second: ~10 billion MACCs. Multiplied by 2 to count individual FLOPs (see “It’s dot products all the way down” here) = ~20 billion FLOP/s per cell. 
Though the authors also note that “the accuracy of the model was insensitive to the temporal kernel sizes of the different DNN layers when keeping the total temporal extent of the entire network fixed, so the temporal extent of the first layer was selected to be larger than subsequent layers mainly for visualization purposes” (p. 7). I’m not sure what kind of difference this might make.\") | 1 ms | ~1e21 |\n| [Hay et al. (2011)](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002107) detailed L5PC model | 1e10 FLOPs per ms?[274](https://www.openphilanthropy.org/brain-computation-report#footnote274_jk1gull \"This is a very loose estimate, based on scaling up the estimate for the Beniaguev et al. (2020) DNN by ~1000x, on the basis of their reporting, in the 2019 version of the paper, that “In our tests we obtained a factor of ~2000 speed up when using the DNN instead of its compartmental-model counterpart” (p. 15). In the current paper they report a “a speedup of simulation time by several orders of magnitude” (p. 8).\") | ? | 1e24? |\n\n\n \n\n\nEven the lower-end numbers here are competitive with the budgets for synaptic transmission above (1e13-1e17 FLOP/s). This might seem surprising, given the difference in synapse and neuron count. But as I noted at the beginning of the section, the budgets for synaptic transmission were based on average firing rates; whereas I’m here assuming that firing decisions must be computed once per time-step (for some given size of time-step).[275](https://www.openphilanthropy.org/brain-computation-report#footnote275_y61agdw \"This is somewhat analogous to the approach taken by Ananthanarayanan et al. (2009): “The basic algorithm of our cortical simulator C2 [2] is that neurons are simulated in a clock-driven fashion whereas synapses are simulated in an event-driven fashion. For every neuron, at every simulation time step (say 1 ms), we update the state of each neuron, and if the neuron fires, generate an event for each synapse that the neuron is post-synaptic to and presynaptic to. For every synapse, when it receives a pre- or post-synaptic event, we update its state and, if necessary, the state of the post-synaptic neuron” (p. 3, Section 3).”\")\n\n\nThis assumption may be mistaken. Dr. Paul Christiano, for example, suggested that it would be possible to accumulate inputs over some set of time-steps, then calculate what the output spike pattern would have been over that period.[276](https://www.openphilanthropy.org/brain-computation-report#footnote276_zpiir2b \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “Dr. Christiano expects that in modeling a neuron’s input-output function, one would not need to compute, every time-step, whether or not the neuron fires during that time-step. Rather, you could accumulate information about the inputs to a neuron over a longer period, and then compute the timing of its spikes over that period all at once. This definitely holds in a purely feedforward context - e.g., for a given neuron, you could simply compute all of the times that the neuron fires, and then use this information to compute when all of the downstream neurons fire, and so on. The fact that the brain’s architecture is highly recurrent complicates this picture, as the firing pattern of a particular neuron may be able to influence the inputs that that same neuron receives. 
However, the time it takes for an action potential to propagate would be a lower bound on how long it would be possible to wait in accumulating synaptic inputs (since the timescale of a neuron’s influence on its own inputs is capped by the propagation time of its outgoing signals)” (p. 6).\") And [Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) appears to assume that the FLOP/s he budgets for firing decisions (enough for 1 ms of Hodgkin-Huxley model) need only be used every time the neuron spikes.[277](https://www.openphilanthropy.org/brain-computation-report#footnote277_wis0hrn \"Sarpeshkar (2010) employs what appears to be a single-compartment Hodgkin-Huxley model of firing decisions as a lower bound (he cites Izhikevich (2004), and uses an estimate of 1200 FLOPs per firing decision -- the number that Izhikevich gives for running a Hodgkin-Huxley model for one ms (see p. 1066)), but he assumes that the model only needs to be “run” every time a neuron spikes (he uses a 5 Hz average rate) (p. 747-8). My intuition, though, would’ve been that because you do not know ahead of time whether or not the synaptic inputs are sufficient to cause an action potential, you would need to calculate this more often than spiking actually occurs.\") If something like this is true, the numbers would be lower.\n\n\nOther caveats:\n\n\n* I’m leaning heavily on the FLOPs estimates in [Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf), which I haven’t verified.\n* Actual computation burdens for running e.g. a Hodgkin-Huxley model depend on implementation details like platform, programming language, integration method, etc.[278](https://www.openphilanthropy.org/brain-computation-report#footnote278_cinlses \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eve Marder: “the computational power necessary to run e.g. a full Hodgkin-Huxley model depends a lot on implementation: e.g., what platform you use, what language you’re using, what method of integration, and what time-step for integration (all of your compute time goes to integrations)” (p. 4-5).\")\n* In at least some conditions, simulations of integrate-and-fire neurons can require very fine grained temporal resolution (e.g., 0.001 ms) to capture various properties of network behavior.[279](https://www.openphilanthropy.org/brain-computation-report#footnote279_dp62lw2 \"See Hansel et al. (1998): “It is shown that very small time steps are required to reproduce correctly the synchronization properties of large networks of integrate-and-fire neurons when the differential system describing their dynamics is integrated with the standard Euler or second-order Runge-Kutta algorithms” (p. 467) ... An integration time step of t = 0.001 ms is actually required to evaluate correctly the coherence of the network in this regime” (p. ). Thanks to the expert who pointed me to this paper.\") Temporal resolutions like this would increase the numbers above considerably. However, various other simulations using simplified spiking neuron models, such as the leaky-integrate-and-fire simulations run by Prof. Chris Eliasmith (which actually perform tasks like recognizing numbers and predicting sequences of them), use lower resolutions.[280](https://www.openphilanthropy.org/brain-computation-report#footnote280_rznhfwp \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Chris Eliasmith: “Prof. 
Eliasmith typically uses 1 ms time-steps in the simulations he builds” (p. 3); and Eliasmith et al. (2012) use leaky-Integrate-and-fire models (see p. 16 of the supplementary materials). Izhikevich (2004) reports various types of collective neuron behavior in simulations using his 13 FLOP/ms model at 1 ms resolution, and others for a different simulation at 0.5 ms for neuron simulation and 1 ms for synaptic dynamics (see Izhikevich et al. (2004), “Neuronal Dynamics”). Ananthanarayanan et al. (2009) use 0.1-1 ms (see p. 3, Section 3.1.1) for “single-compartment phenomenological spiking neurons” (they cite Izhikevich et al. (2004), which suggests to me that they are using Izhikevich models as well).\")\n* The estimate above for [Hay et al. (2011)](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002107) is especially rough.[281](https://www.openphilanthropy.org/brain-computation-report#footnote281_xmqjwai \"It's based on scaling up the estimate for the Beniaguev et al. (2020) DNN by ~1000x, on the basis of their reporting, in the 2019 version of the paper, that “In our tests we obtained a factor of ~2000 speed up when using the DNN instead of its compartmental-model counterpart” (p. 15). In the current paper they report a “a speedup of simulation time by several orders of magnitude” (p. 8).\")\n* The high end of this chart is not an upper bound on modeling complexity. Biophysical modeling can in principle be arbitrarily detailed.\n\n\nOverall, my best guess is that the computation required to run single-compartment Hodgkin-Huxley models of every neuron in the brain (1e17 FLOP/S, on the estimate above) is overkill for capturing the task-relevant dimensions of firing decisions. This is centrally because:\n\n\n* Efforts to predict neuron behavior using simpler models (including simplified models of dendritic computation) appear to have had a decent amount of success (though these results also have many limitations, and I’m not in a great position to evaluate them).\n* With the exception of [Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf), I don’t see much positive evidence that dendritic computation alters this picture dramatically.\n* I find some of the considerations canvassed in [Section 2.1.2.3](#section_2.1.2.3) (other simple circuits; the success of ANNs with simple, interchangeable non-linearities) suggestive; and I think that others I don’t understand very well (e.g., communication bottlenecks, mathematical results showing that the Hodgkin-Huxley equations can be simplified) may well be quite persuasive on further investigation.\n* My impression is that a substantial fraction (maybe a majority?) of computational neuroscientists who have formed positive opinions about the topic (as opposed to remaining agnostic) would also think that single-compartment Hodgkin-Huxley is overkill for capturing task-performance (though it may be helpful for other forms of neuroscientific understanding).\n\n\nThus, **I’ll use 1e17 FLOP/s as a high-end estimate for firing decisions**.\n\n\n**The Izhikevich spiking neuron model estimate (1e15 FLOP/s) seems to me like a decent default estimate**, as it can capture more behaviors than a simple integrate-and-fire model, for roughly comparable FLOP/s (indeed, Izhikevich seems to argue that it can do anything a Hodgkin-Huxley model can). And if simpler operations (e.g., a ReLU) and/or lower time resolutions are adequate, we’d drop to something like 1e13 FLOP/s, possibly lower. 
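As a sanity check on the chart above, the right-hand column is just FLOPs per time-step × time-steps per second × ~1e11 neurons. Here is a minimal sketch of that arithmetic, using the per-model costs from Figure 10 (the 1 ms time-step used below for the Hay et al. (2011) row is a guess, since the chart leaves it unspecified):

```python
NEURONS = 1e11  # rough neuron count assumed in the chart

# (FLOPs per time-step, time-step in ms), following the rows of Figure 10.
models = {
    "ReLU": (1, 10),                                   # 1 FLOP per operation, 10 ms steps
    "Izhikevich spiking neuron model": (13, 1),        # 13 FLOPs per 1 ms step
    "Single-compartment Hodgkin-Huxley": (120, 0.1),   # 120 FLOPs per 0.1 ms step
    "Beniaguev et al. (2020) DNN": (1e7, 1),           # ~1e7 FLOPs per 1 ms step
    "Hay et al. (2011) L5PC (very rough)": (1e10, 1),  # time-step unspecified; 1 ms assumed
}

for name, (flops_per_step, step_ms) in models.items():
    steps_per_second = 1000.0 / step_ms
    print(f"{name}: ~{flops_per_step * steps_per_second * NEURONS:.1e} FLOP/s")

# Prints ~1.0e+13, 1.3e+15, 1.2e+17, 1.0e+21, and 1.0e+24 FLOP/s,
# in line with the right-hand column of the chart.
```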
**I’ll use 1e13 FLOP/s as a low end**, leaving us with an overall range similar to the range for synaptic transmission: 1e13 to 1e17 FLOP/s.\n\n\n#### 2.2 Learning\n\n\nThus far, we have been treating the synaptic weights and firing decision mappings as static over time. In reality, though, experience shapes neural signaling in a manner that improves task performance and stores task-relevant information. I’ll call these changes “learning.”\n\n\nSome of these may proceed via standard neuron signaling (for example, perhaps firing patterns in networks with static weights could store short-term memories).[282](https://www.openphilanthropy.org/brain-computation-report#footnote282_5rx71bi \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “It might be that all of the neurons and synapses in the brain are there in order to make the brain more likely to converge on a solution while learning, but that once learning has taken place, the brain implements a function that can be adequately approximated using much less compute” (p. 7).\") But the budgets thus far already cover this. Here I’ll focus on processes that we haven’t yet covered, but which are thought to be involved in learning. These include:\n\n\n* Synaptic weights change over time (“*synaptic plasticity*”). These changes are often divided into categories:\n\t+ Short-term plasticity (e.g., changes lasting from hundreds of milliseconds to a few seconds).\n\t+ Long-term plasticity (changes lasting longer).[283](https://www.openphilanthropy.org/brain-computation-report#footnote283_if3mwye \"Tsodyks and Wu (2013): “Compared with long-term plasticity (Bi and Poo (2001)), which is hypothesized as the neural substrate for experience-dependent modification of neural circuit, STP has a shorter time scale, typically on the order of hundreds to thousands of milliseconds.” See also Ghanbari et al. (2017), (p. 1), Bliss and Lømo (1973), and Citri and Malenka (2008). It is also possible to break these categories down more finely. Clopath (2012), for example, writes: “A change in synaptic strength can last for different lengths of time: we speak about short-term plasticity when the change lasts up to a few minutes, early-long-term plasticity when it lasts up to a few hours and late-long-term plasticity when it lasts beyond the experiment’s duration (which is often about 10 h) but is thought to last much longer even, possibly a life-time. This last type of plasticity is also called synaptic consolidation or maintenance” (p. 251). Sandberg and Bostrom (2008) suggest that short-term synaptic plasticity “likely plays a role in a variety of brain functions, such as temporal filtering (Fortune and Rose (2001)), auditory processing (Macleod, Horiuchi et al. (2007)) and motor control (Nadim and Manor (2000))” (p. 32). 
Types of synaptic plasticity can be further subdivided according to whether the relevant change increases (“facilitation”/”potentiation”) or decreases (“depression”) the size of the post-synaptic impact of a spike through that synapse: see Tosdyks and Wu (2013) and Yang and Calakos (2013).\")\n* The type of synaptic plasticity neurons exhibit can itself change (“*[meta-plasticity](http://www.scholarpedia.org/article/Metaplasticity)*”).\n* The electric properties of the neurons (for example, ion channel expression, spike threshold, resting membrane potential) also change (“*[intrinsic plasticity](http://www.scholarpedia.org/article/Intrinsic_plasticity)*”).[284](https://www.openphilanthropy.org/brain-computation-report#footnote284_8j6p0jx \"Cudmore and Desai (2008): “Intrinsic plasticity is the persistent modification of a neuron’s intrinsic electrical properties by neuronal or synaptic activity. It is mediated by changes in the expression level or biophysical properties of ion channels in the membrane, and can affect such diverse processes as synaptic integration, subthreshold signal propagation, spike generation, spike backpropagation, and meta-plasticity.” Indeed, it has been shown that a type of neuron in the cerebellum known as a cerebellar Purjinke cell can learn timed responses to inputs in a manner that does not rely on synaptic plasticity. Johansson et al. (2014): “The standard view of the mechanisms underlying learning is that they involve strengthening or weakening synaptic connections. Learned response timing is thought to combine such plasticity with temporally patterned inputs to the neuron. We show here that a cerebellar Purkinje cell in a ferret can learn to respond to a specific input with a temporal pattern of activity consisting of temporally specific increases and decreases in firing over hundreds of milliseconds without a temporally patterned input. Training Purkinje cells with direct stimulation of immediate afferents, the parallel fibers, and pharmacological blocking of interneurons shows that the timing mechanism is intrinsic to the cell itself. Purkinje cells can learn to respond not only with increased or decreased firing but also with an adaptively timed activity pattern” (p. 14930).\")\n* New neurons, synapses, and dendritic spines grow over time, and old ones die.[285](https://www.openphilanthropy.org/brain-computation-report#footnote285_f7c4itc \"See e.g. Munno and Syed (2003), Ming and Song (2011), Grutzendler et al. (2002), Holtmaat et al. (2005).\")\n\n\nSuch changes can be influenced by many factors, including pre-synaptic and post-synaptic spiking,[286](https://www.openphilanthropy.org/brain-computation-report#footnote286_6tlzb9w \"See e.g. Markram et al. (1997).\") receptor activity in the post-synaptic dendrite,[287](https://www.openphilanthropy.org/brain-computation-report#footnote287_306bfxn \"See Luscher and Malenka (2012).\") the presence or absence of various neuromodulators,[288](https://www.openphilanthropy.org/brain-computation-report#footnote288_2krzune \"See e.g. Gerstner et al. (2018), and Nadim and Bucher (2014).\") interactions with glial cells,[289](https://www.openphilanthropy.org/brain-computation-report#footnote289_ty2zt5r \"See Monday et al. (2018) (p. 
7-8).\") chemical signals from the post-synaptic neuron to the pre-synaptic neuron,[290](https://www.openphilanthropy.org/brain-computation-report#footnote290_4nty86n \"See Tao and Poo (2001).\") and gene expression.[291](https://www.openphilanthropy.org/brain-computation-report#footnote291_05a2564 \"See Yap and Greenberg (2018).\") There is a lot of intricate molecular machinery plausibly involved,[292](https://www.openphilanthropy.org/brain-computation-report#footnote292_0sg24zp \"See Bhalla (2014), Figure 1, for a diagram depicting some of this machinery.\") which we don’t understand well and which can be hard to access experimentally[293](https://www.openphilanthropy.org/brain-computation-report#footnote293_ndt82lx \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Blake Richards: “Some neuroscientists are interested in the possibility that a lot of computation is occurring via molecular processes in the brain. For example, very complex interactions could be occurring in a structure known as the post-synaptic density, which involves molecular machinery that could in principle implicate many orders of magnitude of additional compute per synapse. We don’t yet know what this molecular machinery is doing, because we aren’t yet able to track the states of the synapses and molecules with adequate precision. There is evidence that perturbing the molecular processes within the synapse alters the dynamics of synaptic plasticity, but this doesn’t necessarily provide much evidence about whether these processes are playing a computational role. For example, their primary role might just be to maintain and control a single synaptic weight, which is itself a substantive task for a biological system” (p. 2). Monday et al. (2018): ‘The cellular basis of learning and memory is one of the greatest unsolved mysteries in neuroscience … Despite significant advancements in the molecular basis of neurotransmission, exactly how transmitter release is modified in a long-term manner remains largely unclear” (p. 1-2).\") (though some recent learning models attempt to incorporate it).[294](https://www.openphilanthropy.org/brain-computation-report#footnote294_kfedsz3 \"Lahiri and Ganguli (2013): Lahiri and Ganguli (2013): “To understand the functional contribution of such molecular complexity to learning and memory, it is essential to expand our theoretical conception of a synapse from a single scalar to an entire dynamical system with many internal molecular functional states” (p. 1). Benna and Fusi (2016): “The molecular machinery responsible for memory consolidation at the level of synaptic connections is believed to employ a complex network of diverse biochemical processes that operate on different timescales. Understanding how these processes are orchestrated to preserve memories over a lifetime requires guiding principles to interpret the complex organization of the observed synaptic molecular interactions and explain its computational advantage. Here we present a class of synaptic models that can efficiently harness biological complexity to store and preserve a huge number of memories on long timescales, vastly outperforming all previous synaptic models of memory” (p. 1697). Kaplanis et al. (2018): “we show that by equipping tabular and deep reinforcement learning agents with a synaptic model that incorporates this biological complexity (Benna and Fusi (2016)), catastrophic forgetting can be mitigated at multiple timescales. 
In particular, we find that as well as enabling continual learning across sequential training of two simple tasks, it can also be used to overcome within-task forgetting by reducing the need for an experience replay database” (p. 1). Zenke et al. (2017): “In this study, we introduce intelligent synapses that bring some of this biological complexity into artificial neural networks. Each synapse accumulates task relevant information over time, and exploits this information to rapidly store new memories without forgetting old ones. We evaluate our approach on continual learning of classification tasks, and show that it dramatically reduces forgetting while maintaining computational efficiency” (abstract).\") And other changes in the brain could be relevant as well.[295](https://www.openphilanthropy.org/brain-computation-report#footnote295_qugpogh \"Activity-dependent myelination might be one example (see e.g. Faria et al. (2019)).\")\n\n\nOf course, many tasks (say, tying your shoes) don’t require much learning, once you know how to do them. And many tasks are over before some of the mechanisms above have had time to have effects, suggesting that such mechanisms can be left out of FLOP/s budgets for those tasks.[296](https://www.openphilanthropy.org/brain-computation-report#footnote296_icahuh0 \"Though short-term plasticity is both (a) fairly fast and (b) possibly involved in working memory, which many tasks require. See also Sandberg and Bostrom (2008): “Since neurogenesis occurs on fairly slow timescales (> 1 week) compared to brain activity and normal plasticity, it could probably be ignored in brain emulation if the goal is an emulation that is intended to function faithfully for only a few days and not to exhibit truly long‐term memory consolidation or adaptation” (p. 35).\")\n\n\nBut learning to perform new tasks, sometimes over long timescales, is itself a task that the brain can perform. So a FLOP/s estimate for any task that the brain can perform needs to budget FLOP/s for all forms of learning.\n\n\nHow many FLOP/s? Here are a few considerations.\n\n\n\n#### 2.2.1 Timescales\n\n\nSome of the changes involved in learning occur less frequently than spikes through synapses. Growing new neurons, synapses, and dendritic spines is an extreme example. At a glance, the number of new neurons per day in adult humans appears to be on the order of hundreds or less;[297](https://www.openphilanthropy.org/brain-computation-report#footnote297_n8o3a01 \"Sorrells et al. (2018): “In humans, some studies have suggested that hundreds of new neurons are added to the adult dentate gyrus every day, whereas other studies find many fewer putative new neurons.” See also Moreno-Jimenez et al. (2019): “we identified thousands of immature neurons in the DG of neurologically healthy human subjects up to the ninth decade of life” (abstract).\") and [Zuo et al. (2005)](https://pubmed.ncbi.nlm.nih.gov/15848798/) report that over two weeks, only 3%-5% of dendritic spines in adult mice were eliminated and formed (though Prof. Erik De Schutter noted that networks of neurons can rewire themselves over tens of minutes).[298](https://www.openphilanthropy.org/brain-computation-report#footnote298_3kmdprg \"Zuo et al. (2005): “In adult mice (4-6 months old), 3%-5% of spines were eliminated and formed over 2 weeks in various cortical regions. Over 18 months, only 26% of spines were eliminated and 19% formed in adult barrel cortex” (from the abstract). From Open Philanthropy's non-verbatim notes from a conversation with Prof. 
Erik De Schutter: “Networks of neurons can rewire themselves fairly quickly, over timescales of tens of minutes. These changes correlate with improvements in performance on tasks” (p. 3).\") Because these events are so comparatively rare, I expect modeling their role in task-performance to be quite cheap relative to e.g. 1e14 spikes through synapses/sec.[299](https://www.openphilanthropy.org/brain-computation-report#footnote299_0g0p2f7 \"Dr. Dario Amodei suggested considerations in this vein.\") This holds even if the number of FLOPs required per event is very large, which I don’t see strong reason to expect.\n\n\nSomething similar may apply to some other types of changes to e.g. synaptic weights and intrinsic neuron properties:\n\n\n* Some long-term changes require building new biochemical machinery (receptors, ion channels, etc.), which seems resource-intensive relative to e.g. synaptic transmission (though I don’t have numbers here).[300](https://www.openphilanthropy.org/brain-computation-report#footnote300_gmdpody \"See e.g. this diagram of a potentiated synapse, illustrating an increased number of post-synaptic receptors\") This suggests limitations on frequency.\n* If a given type of change lasts a long time *in vivo* (and hence, is not “reset” very frequently) or is triggered primarily by relatively rare events (e.g., sustained periods of high-frequency pre-synaptic spiking), this could also suggest such limitations.[301](https://www.openphilanthropy.org/brain-computation-report#footnote301_0lxnibf \"Thus, for example, Bliss and Lømo (1973), in an early result related to long-lasting synaptic potentiation, use conditioning spike trains of 10-15 secs, and 3-4 seconds (p. 331).\")\n* It seems plausible that some amount of stability is required for long-term information storage.[302](https://www.openphilanthropy.org/brain-computation-report#footnote302_ja6bkie \"See discussion of the “stability - plasticity dilemma,” e.g. Mermillod et al. (2013). One possible solution is to use multiple dynamical variables operating on different timescales -- see Benna and Fusi (2016).\")\n\n\nMore generally, some biochemical mechanisms involved in learning are relatively slow-moving. The signaling cascades triggered by some neuromodulators, for example, are limited by the speed of chemical diffusion, which [Koch (1999)](https://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999) suggests extends their timescales to seconds or longer;[303](https://www.openphilanthropy.org/brain-computation-report#footnote303_21l28lg \"Koch (1999): “An important distinction between ionotropic and metabotropic receptors is their time scale. While members of the former class act rapidly, terminating within a very small fraction of a second, the speed of the latter class is limited by diffusion. Biochemical reactions can happen nearly instantaneously at the neuronal time scale. However, if a synaptic input to a metabotropic receptor induces the release of some messenger, such as calcium ions, which have to diffuse to the cell body in order to ‘do their thing,’ the time scale is extended to seconds or longer “ (p. 95). See also Siegelbaum et al. (2013b): “whereas the action of ionotropic receptors is fast and brief, metabotropic receptors produce effects that begin slowly and persist for long periods, ranging from hundreds of milliseconds to many minutes” (p. 236). 
\") [Bhalla (2014)](https://www.sciencedirect.com/science/article/abs/pii/S0959438813002171) characterizes various types of chemical computation within synapses as occurring on timescales of seconds;[304](https://www.openphilanthropy.org/brain-computation-report#footnote304_zqd65gk \"See p. 32. Bhalla (2014) also suggests that chemical computation involves 1e6 “computations per second” per neuron.\") and [Yap and Greenberg (2018)](https://www.cell.com/neuron/pdf/S0896-6273(18)30901-2.pdf) characterize gene transcription taking place over minutes as “rapid.”[305](https://www.openphilanthropy.org/brain-computation-report#footnote305_ik9iwww \"Yap and Greenberg (2018): “Discovered by Greenberg and Ziff in 1984 (Greenberg and Ziff (1984)), the rapid and transient induction of Fos transcription provided the first evidence that mammalian cells could respond to the outside world within minutes by means of rapid gene transcription, in particular through the activation of specific genes (Cochran et al. (1984); Greenberg et al. (1985); Greenberg et al. (1986); Kruijer et al. (1984); Lau and Nathans (1987); Müller et al. (1984))” (p. 331). \") This too might suggest limits on required FLOP/s.\n\n\nI discuss arguments that appeal to timescales in more detail in [Section 2.3](#section_2.3). As I note there, I don’t think these arguments are conceptually airtight, but I find them suggestive nonetheless, and I expect them to apply to many processes involved in learning.\n\n\nThat said, the frequency with which a given change occurs does not necessarily limit the frequency with which biophysical variables involved in the process need to be updated, or decisions made about what changes to implement as a result.[306](https://www.openphilanthropy.org/brain-computation-report#footnote306_6x3exyg \"Indeed, certain models of synaptic plasticity explicitly include variables whose state is not immediately expressed in changes to synaptic efficacy (that is, in the size of the effect that a spike through that synapse has on a downstream neuron). See e.g. three-factor learning rules discussed by Gerstner et al. (2018). From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “Compute increases are more likely to come from synaptic decisions that get computed on something like a per-spike basis. For example, you might need to do a lot of fast computation in order to set the synaptic “flag” variables involved in some neo-Hebbian three-factor learning rules, even if these variables take a long time to have effects” (p. 3).\") What’s more, some forms of synaptic plasticity occur on short timescales, reflecting rapid changes in e.g. calcium or neurotransmitter in a synapse;[307](https://www.openphilanthropy.org/brain-computation-report#footnote307_97l1xxq \"Tsodyks and Wu (2013): “Compared with long-term plasticity (Bi and Poo (2001)), which is hypothesized as the neural substrate for experience-dependent modification of neural circuit, STP has a shorter time scale, typically on the order of hundreds to thousands of milliseconds.” Cheng et al. (2018): “It is well established that both augmentation and potentiation are triggered by a transient rise in calcium concentration within the presynaptic terminal.”\") and [Bhalla (2014)](https://www.sciencedirect.com/science/article/abs/pii/S0959438813002171) notes that spike-timing dependent plasticity “requires sharp temporal discrimination of the order of a few milliseconds” (p. 
32).\n\n\n\n#### 2.2.2 Existing models\n\n\nThere is no consensus model for how the brain learns,[308](https://www.openphilanthropy.org/brain-computation-report#footnote308_u9yjp2f \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Blake Richards: “it is very difficult to say at this point exactly how much compute would be required to model learning in the brain, because there is a lot of disagreement in the field as to how sophisticated the learning algorithms in the brain are. This is partly because we don’t have a good hold on how much human learning is truly general purpose, vs. constrained to particular tasks” (p. 1).\") and the training required to create state of the art AI systems seems in various ways comparatively inefficient.[309](https://www.openphilanthropy.org/brain-computation-report#footnote309_j8qsip2 \"See Yann LeCun’s 2017 talk: “How does the brain learn so much so quickly?”, and Stuart Russell’s comments here: “I think another area where deep learning is clearly not capturing the human capacity for learning, is just in the efficiency of learning. I remember in the mid ’80s going to some classes in psychology at Stanford, and there were people doing machine learning then and they were very proud of their results, and somebody asked Gordon Bower, “how many examples do humans need to learn this kind of thing?” And Gordon said “one [sic] Sometimes two, usually one”, and this is genuinely true, right? If you look for a picture book that has one to two million pictures of giraffes to teach children what a giraffe is, you won’t find one. Picture books that tell children what giraffes are have one picture of a giraffe, one picture of an elephant, and the child gets it immediately, even though it’s a very crude cartoonish drawing, of a giraffe or an elephant, they never have a problem recognizing giraffes and elephants for the rest of their lives. Deep learning systems are needing, even for these relatively simple concepts, thousands, tens of thousands, millions of examples, and the idea within deep learning seems to be that well, the way we’re going to scale up to more complicated things like learning how to write an email to ask for a job, is that we’ll just have billions or trillions of examples, and then we’ll be able to learn really, really complicated concepts. But of course the universe just doesn’t contain enough data for the machine to learn direct mappings from perceptual inputs or really actually perceptual input history. So imagine your entire video record of your life, and that feeds into the decision about what to do next, and you have to learn that mapping as a supervised learning problem. It’s not even funny how unfeasible that is. The longer the deep learning community persists in this, the worse the pain is going to be when their heads bang into the wall.” That said, work on this topic is ongoing, and these comparisons don’t seem straightforward.\") There is debate over comparisons with learning algorithms like backpropagation[310](https://www.openphilanthropy.org/brain-computation-report#footnote310_6bn1yf6 \"SSee e.g., Guerguiev et al. (2017), Bartunov et al. (2018), and Hinton (2011). From Guerguiev et al. (2017): “Backpropagation assigns credit by explicitly using current downstream synaptic connections to calculate synaptic weight updates in earlier layers, commonly termed ‘hidden layers’ (LeCun et al., 2015) (Figure 1B). 
This technique, which is sometimes referred to as ‘weight transport’, involves non-local transmission of synaptic weight information between layers of the network (Lillicrap et al. (2016); Grossberg (1987)). Weight transport is clearly unrealistic from a biological perspective (Bengio et al. (2015); Crick (1989)). It would require early sensory processing areas (e.g. V1, V2, V4) to have precise information about billions of synaptic connections in downstream circuits (MT, IT, M2, EC, etc.). According to our current understanding, there is no physiological mechanism that could communicate this information in the brain. Some deep learning algorithms utilize purely Hebbian rules (Scellier and Bengio, 2016; Hinton et al. (2006)). But, they depend on feedback synapses that are symmetric to feedforward synapses (Scellier and Bengio, 2016; Hinton et al. (2006)), which is essentially a version of weight transport. Altogether, these artificial aspects of current deep learning solutions to credit assignment have rendered many scientists skeptical of the proposal that deep learning occurs in the real brain (Crick, 1989; Grossberg (1987); Harris (2008); Urbanczik and Senn (2009)). Recent findings have shown that these problems may be surmountable, though. Lillicrap et al. (2016), Lee et al. (2015) and Liao et al. (2015) have demonstrated that it is possible to solve the credit assignment problem even while avoiding weight transport or symmetric feedback weights” (p. 3).\") (along with meta-debate about whether this debate is meaningful or worthwhile).[311](https://www.openphilanthropy.org/brain-computation-report#footnote311_l5bk1na \"See e.g. David Pfau via twitter: “In 100 years, we’ll look back on theories of ‘how the brain does backpropagation’ the way we look at the luminiferous aether now.” See also Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas: “Prof. Jonas does not think that there is a clear meaning to the claim that the brain is a deep learning system” (p. 3).\")\n\n\nStill, different models can at least serve as examples of possible FLOP/s costs. Here are a few that came up in my research.\n\n\n\n\n**Figure 11: Some example learning models**| LEARNING MODEL | DESCRIPTION | FLOP/S COSTS | EXPERT OPINION |\n| --- | --- | --- | --- |\n| *Hebbian rules* | Classic set of models. A synapse strengthens or weakens as a function of pre-synaptic spiking and post-synaptic spiking, possibly together with some sort of external modulation/reward.[312](https://www.openphilanthropy.org/brain-computation-report#footnote312_nar90fi \"See e.g. Gerstner et al. (2018) for some descriptions. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “A lot of the learning models discussed in neuroscience are also significantly simpler than backpropagation: e.g., three-factor rules like “if the pre-synaptic neuron was active, and the post-synaptic neuron was active, and you had dopamine in the last ~3 seconds, then strengthen” (p. 6). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador: “We know the general outlines of the rules governing synaptic plasticity. The synapse gets stronger and weaker as a function of pre and post synaptic activity, and external modulation” (p. 3).\") | 3-5 FLOPs per synaptic update?[313](https://www.openphilanthropy.org/brain-computation-report#footnote313_p7n156i \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. 
Chris Eliasmith: “In the large scale brain simulations that Chris Eliasmith builds, he often uses an error-driven Hebbian rule, which computes updates to synaptic weights based on pre-synaptic activity, post-synaptic activity, and an error signal (which, in the brain, could proceed via a mechanism like dopamine modulation). This rule requires on the order of three to five operations per synapse (a couple of products, and then a weight update), though the total burden depends on how often you perform the updates” (p. 4).\") | Prof. Anthony Zador expected the general outlines to be correct.[314](https://www.openphilanthropy.org/brain-computation-report#footnote314_cka8moy \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador: “We know the general outlines of the rules governing synaptic plasticity. The synapse gets stronger and weaker as a function of pre and post synaptic activity, and external modulation. There is a lot of room for discovery there, and it may be difficult to get just right, but conceptually, it’s pretty simple. Prof. Zador expects it to be possible to capture synaptic plasticity with a small number of FLOPs per spike through synapse” (p. 3).\") Prof. Chris Eliasmith uses a variant in his models.[315](https://www.openphilanthropy.org/brain-computation-report#footnote315_ausmoux \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Chris Eliasmith: “In the large scale brain simulations that Chris Eliasmith builds, he often uses an error-driven Hebbian rule, which computes updates to synaptic weights based on pre-synaptic activity, post-synaptic activity, and an error signal (which, in the brain, could proceed via a mechanism like dopamine modulation)” (p. 4).\") |\n| *[Benna and Fusi (2016)](https://www.nature.com/articles/nn.4401)* | Models synapses as a dynamical system of variables interacting on multiple timescales. May help resolve the “stability-plasticity dilemma,” on which overly plastic synapses are too easily overwritten, but overly rigid synapses are unable to learn. May also help with online learning. | ~2-30x the FLOPs to run a model with one parameter per synapse? (very uncertain)[316](https://www.openphilanthropy.org/brain-computation-report#footnote316_f9yt0fr \"Kaplanis et al. (2018) add 30 extra dynamical variables per synapse, but manage to increase runtime by only 1.5-2 times relative to a control model, though I’m not sure about the details here. They note that “the complexity of the algorithm is O(mN), where N is the number of trainable parameters in the network and m is the number of Benna-Fusi variables per parameter.”\") | Some experts argue that shifting to synaptic models of this kind, involving dynamical interactions, is both theoretically necessary and biologically plausible.[317](https://www.openphilanthropy.org/brain-computation-report#footnote317_0k1ohee \"See e.g. Lahiri and Ganguli (2013): “To understand the functional contribution of such molecular complexity to learning and memory, it is essential to expand our theoretical conception of a synapse from a single scalar to an entire dynamical system with many internal molecular functional states” (p. 1). Benna and Fusi (2016): “The molecular machinery responsible for memory consolidation at the level of synaptic connections is believed to employ a complex network of diverse biochemical processes that operate on different timescales. 
Understanding how these processes are orchestrated to preserve memories over a lifetime requires guiding principles to interpret the complex organization of the observed synaptic molecular interactions and explain its computational advantage. Here we present a class of synaptic models that can efficiently harness biological complexity to store and preserve a huge number of memories on long timescales, vastly outperforming all previous synaptic models of memory” (p. 1697). My understanding is that Fusi and Abbott (2007) is a precursor to some of this work.\") |\n| *First order gradient descent methods* | Use slope of the loss function to minimize the loss.[318](https://www.openphilanthropy.org/brain-computation-report#footnote318_b0ndkna \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Blake Richards: “First-order gradient descent methods, like back-propagation, use the slope of the loss function to minimize the loss” (p. 1-2).\") Widespread use in machine learning. Contentious debate about biological plausibility. | ~2× a static network. The learning step is basically a backwards pass through the network, and going forward and backward come at roughly the same cost.[319](https://www.openphilanthropy.org/brain-computation-report#footnote319_x4zynha \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Blake Richards: “[For first-order gradient descent methods], learning is basically a backwards pass through the network, so the compute required scales linearly with the number of neurons and synapses in the network, adding only a small constant factor” (p. 1-2). See also Open Philanthropy's non-verbatim notes from a conversation with Prof. Barak Pearlmutter: “Prof. Pearlmutter’s best-guess estimate was that the learning overhead (that is, the compute increase from moving from a non-adaptive system to an adaptive system) would be a factor of two. It could be more or less, but this is a number we actually understand, because the existing learning algorithms that we know work for large-scale systems, and that we have put effort into optimizing -- for example, backpropagation -- implicate roughly this type of overhead” (p. 3).\") | Prof. Konrad Kording, Prof. Barak Pearlmutter, and Prof. Blake Richards favored estimates based on this anchor/in this range of FLOP/s costs.[320](https://www.openphilanthropy.org/brain-computation-report#footnote320_44y93lr \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Barak Pearlmutter: “Prof. Pearlmutter’s best-guess estimate was that the learning overhead (that is, the compute increase from moving from a non-adaptive system to an adaptive system) would be a factor of two. It could be more or less, but this is a number we actually understand, because the existing learning algorithms that we know work for large-scale systems, and that we have put effort into optimizing -- for example, backpropagation -- implicate roughly this type of overhead” (p. 3). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording: “Prof. Kording thinks that learning in the brain requires the same amount of compute as processing. If you have a compute graph, going forwards and backwards comes at roughly the same cost” (p. 3). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Blake Richards: “Prof. Richards favors the hypothesis that the brain uses a learning method with compute scaling properties similar to backpropagation. 
This is partly because humans are capable of learning so many tasks that were not present in the evolutionary environment (and hence are unlikely to be hardwired into our brains), with comparatively little data (e.g., less than a weight-perturbation algorithm would require)” (p. 2).\") |\n| *Second order gradient descent methods* | Take into account not just the slope of the loss function, but also the curvature. Arguably better than gradient descent methods, but require more compute, so used more rarely.[321](https://www.openphilanthropy.org/brain-computation-report#footnote321_2ra142f \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Blake Richards: “More sophisticated learning algorithms, such as second-order gradient methods, take into account not just the slope of the loss function gradient but also its curvature. These require more compute (the compute per learning step scales as a polynomial with the number of neurons and synapses), which is why people don’t use these techniques, even though they are arguably much better” (p. 2).\") | Large. Compute per learning step scales as a polynomial with the number of neurons and synapses in a network.[322](https://www.openphilanthropy.org/brain-computation-report#footnote322_a6jb6ye \"See previous endnote.\") | Dr. Paul Christiano thought it very implausible that the brain implements a rule of this kind.[323](https://www.openphilanthropy.org/brain-computation-report#footnote323_fcf365p \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “Based on his understanding of the brain’s physiology, Dr. Christiano thinks it extremely implausible that the brain could be implementing second-order optimization methods” (p. 7).\") Dr. Adam Marblestone had not seen any proposals in this vein.[324](https://www.openphilanthropy.org/brain-computation-report#footnote324_jcqc7eb \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “He has not seen proposals for how second-order gradient methods of learning could be implemented in the brain.” (p. 6).\") |\n| *Node-perturbation algorithms* | Involves keeping/consolidating random changes to the network that result in reward, and getting rid of changes that result in punishment. As the size of a network grows, these take longer to converge than first-order gradient methods.[325](https://www.openphilanthropy.org/brain-computation-report#footnote325_yqtp9tg \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Blake Richards: “In the other direction, there are algorithms known as “weight-perturbation” or “node-perturbation” algorithms. These involve keeping/consolidating random changes to the network that result in reward, and getting rid of changes that result in punishment (a process akin to updating parameters based on simple signals of “hotter” and “colder”). These algorithms require less compute than first-order gradient descent methods, but they take longer to converge as the size of the network grows. In this sense, they involve trade-offs between compute and time” (p. 2).\") | <2× a static network (e.g., less than first-order gradient descent methods).[326](https://www.openphilanthropy.org/brain-computation-report#footnote326_2kl2gw4 \"See previous endnote.\") | Prof. 
Blake Richards thought that humans learn with less data than this kind of algorithm would require.[327](https://www.openphilanthropy.org/brain-computation-report#footnote327_cj8hlda \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Blake Richards: “Prof. Richards favors the hypothesis that the brain uses a learning method with compute scaling properties similar to backpropagation. This is partly because humans are capable of learning so many tasks that were not present in the evolutionary environment (and hence are unlikely to be hardwired into our brains), with comparatively little data (e.g., less than a weight-perturbation algorithm would require)” (p. 2).\") |\n\n\nCaveats:\n\n\n* This is far from an exhaustive list.[328](https://www.openphilanthropy.org/brain-computation-report#footnote328_g9ppb7s \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “There are also non-gradient methods of learning. For example, some people are interested in Bayesian belief propagation, though Dr. Marblestone is not aware of efforts to describe how this might be implemented at the level of e.g. dendrites. We shouldn’t assume that the brain is doing some sort of gradient-based learning” (p. 6). See also Gütig and Sompolinsky (2006) (though I’m not sure if this would fall into one of the categories above).\")\n* The brain may be learning in a manner quite dissimilar from any known learning models. After all, it succeeds in learning in ways we can’t replicate with artificial systems.\n* I haven’t investigated these models much: the text and estimates above are based primarily on comments from experts (see endnotes for citations). With more time and expertise, it seems fairly straightforward to generate better FLOP/s estimates.\n* Synaptic weights are often treated as the core learned parameters in the brain,[329](https://www.openphilanthropy.org/brain-computation-report#footnote329_8bturdb \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Kate Storrs: “Dr. Storrs’ sense is that, in the parts of the field she engages with most closely (e.g., systems level modeling, visual/cognitive/perceptual modeling, human behavior), and maybe more broadly, a large majority of people treat synaptic weights as the core learned parameters in the brain. That said, she is not a neurophysiologist, and so isn’t the right person to ask about what sort of biophysical complexities could imply larger numbers of parameters. She is peripherally aware of papers suggesting that glia help store knowledge, and there are additional ideas as well. The truth probably involves mechanisms other than synaptic weights, but she believes that the consensus is that such weights hold most of the knowledge” (p. 2).\") but alternative views are available. For example, Prof. Konrad Kording suggested that the brain could be optimizing ion channels as well (there are considerably more ion channels than synapses).[330](https://www.openphilanthropy.org/brain-computation-report#footnote330_qyhi0sa \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording: “Here is one non-standard argument for this degree of non-linearity in neurons. Adjusting synapses in helpful ways requires computing how that synapse should adjust based on its contribution to whether the neuron fires. 
But this computation applies in basically the same way to individual ion channels in the cell: e.g., if the brain can signal to the synapse how to adjust in order to improve neuron firing, it can do the same for ion channels, at no additional cost. This makes Prof. Kording thinks that the brain is optimizing both. However, current techniques are very bad at measuring ion channel plasticity. Neuroscientists don’t tend to focus on it for this reason. There are considerably more ion channels than synapses, and ion channels change how synapses linearly and nonlinearly interact with one another. This suggests an uglier computational space” (p. 4-5).\") Thus, the factor increase for learning need not be relative to a static model based on synapses.\n* As noted above, some of what we think of as learning and memory may be implemented via standard neuron signaling, rather than via modifications to e.g. synaptic weights/firing decisions.\n\n\nWith that said, a number of these examples seem to suggest relatively small factor increases for learning, relative to some static baseline (though what that baseline should be is a further question). Second-order gradient methods would be more than this, but I have yet to hear anyone argue that the brain uses these, or propose a biological implementation. And node perturbation would be less (though this may require more data than humans use).\n\n\n#### 2.2.3 Energy costs\n\n\nIf we think that FLOP/s costs correlate with energy expenditure in the brain, we might be able to estimate the FLOP/s costs for learning via the energy spent on it. For example, [Lennie (2003)](http://www2.bcs.rochester.edu/sites/plennie/pdfs/Lennie03a.pdf#page=3) estimates that >50% of the total energy in the neocortex goes to processes involved in standard neuron signaling – namely, maintaining resting potentials in neurons (28%), reversing Na+ and K+ fluxes from spikes (13%), and spiking itself (13%).[331](https://www.openphilanthropy.org/brain-computation-report#footnote331_ff4ho2p \"See p. 494.\") That would leave <50% for (a) other learning processes beyond this and (b) everything else (maintaining glial resting potentials is another 10%). Very naively, this might suggest less than a 2× factor for learning, relative to standard neuron signaling.\n\n\nShould we expect FLOP/s costs to correlate with energy expenditure? Generally speaking, larger amounts of information-processing take more energy, so the thought seems at least suggestive (e.g., it’s somewhat surprising if the part of your computer doing 99% of the information-processing is using less than half the energy).[332](https://www.openphilanthropy.org/brain-computation-report#footnote332_16p857n \"Sarpeshkar (2010): “Information is always represented by the states of variables in a physical system, whether that system is a sensing, actuating, communicating, controlling, or computing system or a combination of all types. It costs energy to change or to maintain the states of physical variables. These states can be in the voltage of a piezoelectric sensor, in the mechanical displacement of a robot arm, in the current of an antenna, in the chemical concentration of a regulating enzyme in a cell, or in the voltage on a capacitor in a digital processor. Hence, it costs energy to process information, whether that energy is used by enzymes in biology to copy a strand of DNA or in electronics to filter an input. To save energy, one must then reduce the amount of information that one wants to process. 
The higher the output precision and the higher the temporal bandwidth or speed at which the information needs to be processed, the higher is the rate of energy consumption, i.e., power. To save power, one must then reduce the rate of information processing...The art of low-power design consists of decomposing the task to be solved in an intelligent fashion such that the rate of information processing is reduced as far as is possible without compromising the performance of the system” (p. 9).\") In the context of biophysical modeling, though, it’s less obvious, as depending on the level of detail in question, modeling systems that use very little energy can be very FLOP/s intensive.\n\n\n#### 2.2.4 Expert opinion\n\n\nA number of experts were sympathetic to FLOP/s budgets for learning in the range of 1-100 FLOPs per spike through synapse.[333](https://www.openphilanthropy.org/brain-computation-report#footnote333_64735qn \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Blake Richards (p. 3): Based on Prof. Richard’s best guess, it seems reasonable to him to budget an order of magnitude of compute for learning, on top of a budget of roughly one FLOP (possibly a bit more) per spike through synapse. However, it could also be higher or lower. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador (p. 3): Prof. Zador expects it to be possible to capture synaptic plasticity with a small number of FLOPs per spike through synapse. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Barak Pearlmutter (p. 4): Overall, Prof. Pearlmutter thought that an estimate based on 100 FLOPs per spike through synapse, with a factor of two for learning, sounded fairly reasonable. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone (p. 9): Dr. Marblestone expects that both three-factor rules and backpropagation-type methods would imply compute burdens within an order of magnitude or two of estimates based on 1 FLOP per spike through synapse…Dr. Marblestone is fairly comfortable with one FLOP per spike through synapse as a low-end estimate, and ~100 FLOPs per spike through synapse (roughly comparable to the estimate offered by Prof. Rahul Sarpeshkar) as a high-end estimate. His best guess is 10-100 FLOPs per spike through synapse. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Chris Eliasmith (p. 5): In the large scale brain simulations that Chris Eliasmith builds, he often uses an error-driven Hebbian rule, which computes updates to synaptic weights based on pre-synaptic activity, post-synaptic activity, and an error signal (which, in the brain, could proceed via a mechanism like dopamine modulation). This rule requires on the order of three to five operations per synapse (a couple of products, and then a weight update), though the total burden depends on how often you perform the updates…Prof. 
Eliasmith thinks that neuron models at roughly the level of detail he uses in SPAUN (possibly including some non-linearities in the dendrites), if scaled up to the size of the brain as a whole, would be able not just to replicate cognitive performance, but also to reflect a functional profile similar to biological neurons.\") Some of this sympathy was based on using (a) Hebbian models, or (b) first-order gradient descent models as an anchor.\n\n\n[Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) budgets at least 10 FLOPs per spike through synapse for synaptic learning.[334](https://www.openphilanthropy.org/brain-computation-report#footnote334_muo5ixc \"Sarpeshkar (2010): “If we assume that synaptic multiplication is at least one floating-point operation (FLOP), the 20 ms second-order filter impulse response due to each synapse is 40 FLOPS, and that synaptic learning requires at least 10 FLOPS per spike, a synapse implements at least 50 FLOPS of computation per spike” (p. 748-749).\") Other experts expressed agnosticism and/or openness to much higher numbers;[335](https://www.openphilanthropy.org/brain-computation-report#footnote335_hgdnl8w \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas: “Prof. Jonas is not convinced by any arguments he’s heard that attempt to limit the amount of state you can store in a neuron. Indeed, some recent work explores the possibility that some information is stored using DNA. If there are actually molecular-level storage mechanisms at work in these systems, that would alter compute estimates by multiple orders of magnitude. … Prof. Jonas thinks that estimating the complexity of learning in the brain involves even more uncertainty than estimates based on firing decisions in neurons. Neuroscientists have been studying things like spike timing dependent plasticity and long-term plasticity for decades, and we can elicit versions of them reliably in vitro. But it’s much harder to understand the actual biological processes occurring in vivo in a behaving animal, because we have so much less experimental access. The machine learning community has multiple theories of the computational complexity of learning. However, these don’t seem to capture the interesting properties of natural systems or existing machine learning systems. … He also has a long-term prior that researchers are too quick to believe that the brain is doing whatever is currently popular in machine learning, and he doesn’t think we’ve found the right paradigm yet” (p. 3-4). One other expert I spoke with was also skeptical/agnostic, though I didn’t do notes from this conversation.\") and one (Prof. Konrad Kording) argued for estimates based on ion-channel plasticity, rather than synaptic plasticity.[336](https://www.openphilanthropy.org/brain-computation-report#footnote336_1ybqtqo \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording: “Here is one non-standard argument for this degree of non-linearity in neurons. Adjusting synapses in helpful ways requires computing how that synapse should adjust based on its contribution to whether the neuron fires. But this computation applies in basically the same way to individual ion channels in the cell: e.g., if the brain can signal to the synapse how to adjust in order to improve neuron firing, it can do the same for ion channels, at no additional cost. This makes Prof. 
Kording thinks that the brain is optimizing both. However, current techniques are very bad at measuring ion channel plasticity. Neuroscientists don’t tend to focus on it for this reason. There are considerably more ion channels than synapses, and ion channels change how synapses linearly and nonlinearly interact with one another. This suggests an uglier computational space” (p. 4-5).\")\n\n\n\n#### 2.2.5 Overall FLOP/s for learning\n\n\nOf the many uncertainties afflicting the mechanistic method, the FLOP/s required to capture learning seems to me like one of the largest. Still, based on the timescales, algorithmic anchors, energy costs, and expert opinions just discussed, **my best guess is that learning does not push us outside the range already budgeted for synaptic transmission: e.g., 1-100 FLOPs per spike through synapse**.\n\n\n* Learning might well be in the noise relative to synaptic transmission, due to the timescales involved.\n* 1-10 FLOPs per spike through synapse would cover various estimates for short-term synaptic plasticity and Hebbian plasticity; along with factors of 2× or so (à la first order gradient descent anchors, or the run-time slow-down in [Kaplanis et al. (2018)](https://arxiv.org/pdf/1802.07239.pdf)) on top of lower-end synaptic transmission estimates.\n* 100 FLOPs per spike through synapse would cover the higher-end Benna-Fusi estimate above (though this was very loose), as well as some cushion for other complexities.\n\n\nTo me, the most salient route to higher numbers uses something other than spikes through synapses as a baseline. For example, if we used timesteps per second at synapses instead, and 1 ms timesteps, then X FLOPs per *timestep* per synapse for learning would imply X × 1e17-1e18 FLOP/s (assuming 1e14-15 synapses). Treating learning costs as scaling with ion channel dynamics (à la Prof. Konrad Kording’s suggestion), or as a multiplier on higher-end standard neuron signaling estimates, would also yield higher numbers.\n\n\nI could also imagine being persuaded by arguments of roughly the form: “A, B, and C simple models of learning lead to X theoretical problems (e.g., catastrophic forgetting), which D more complex model solves in a biologically plausible way.” Such an argument motivates the model in [Benna and Fusi (2016)](https://www.nature.com/articles/nn.4401), which boasts some actual usefulness to task-performance to boot (e.g. [Kaplanis et al. (2018)](https://arxiv.org/pdf/1802.07239.pdf)). There may be other models with similar credentials, but higher FLOP/s costs.\n\n\nI don’t, though, see our ignorance about how the brain learns as a strong positive reason, just on its own, to think larger budgets are required. It’s true that we don’t know enough to rule out such requirements. But “we can’t rule out X” does not imply “X should be our best guess.”\n\n\n#### 2.3 Other signaling mechanisms\n\n\nLet’s turn to other signaling mechanisms in the brain. There are a variety. They tend to receive less attention than standard neuron signaling, but some clearly play a role in task-performance, and others might.\n\n\nOur question, though, is not whether these mechanisms matter. Our question is whether they meaningfully increase a FLOP/s budget that already covers standard neuron signaling and learning.[337](https://www.openphilanthropy.org/brain-computation-report#footnote337_nn3igzb \"Dr. Dario Amodei emphasized this distinction.\")\n\n\nAs a preview: my best guess is that they don’t. This is mostly because:\n\n\n1. 
My impression is that most experts who have formed opinions on the topic (as opposed to remaining agnostic) do not expect these mechanisms to account for the bulk of the brain’s information-processing, even if they play an important role.[338](https://www.openphilanthropy.org/brain-computation-report#footnote338_agpn26h \"A number of experts we engaged with indicated that many computational neuroscientists would not emphasize these other mechanisms very much (though their comments in this respect are not publicly documented); and the experts I interviewed didn’t tend to emphasize such mechanisms either.\")\n2. Relative to standard neuron signaling, each of the mechanisms I consider is some combination of (a) slower, (b) less spatially-precise, (c) less common in the brain (or, not substantially more common), or (d) less clearly relevant to task-performance.\n\n\nBut of course, familiar caveats apply: there’s a lot we don’t know, experts might be wrong (and/or may not have given this issue much attention), and the arguments aren’t conclusive.\n\n\nArguments related to (a)-(d) will come up a few times in this section, so it’s worth a few general comments about them up front.\n\n\n*Speed*\n\n\nIf a signaling mechanism X involves slower-moving elements, or processes that take longer to have effects, than another mechanism Y, does this suggest a lower FLOP/s budget for X, relative to Y? Heuristically, and other things equal: yes, at least to my mind. That is, naively, it seems harder to perform lots of complex, useful information-processing per second using slower elements/processes (computers using such elements, for example, are less powerful). And various experts seemed to take considerations in this vein quite seriously.[339](https://www.openphilanthropy.org/brain-computation-report#footnote339_0rkhynu \"For example, Dr. Adam Marblestone noted that his own implicit ontology distinguishes between “fast, real-time computation,” -- the rough equivalent of “standard neuron signaling” on the categorization I’ve been using -- and other processes (see Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone (p. 2)). And Prof. Anthony Zador suggested that processes that proceed on longer timescales won’t add much computational burden (see Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador (p. 4)).\")\n\n\nThat said, other things may not be equal. X signals might be sent more frequently, as a result of more complex decision-making, with more complex effects, etc.[340](https://www.openphilanthropy.org/brain-computation-report#footnote340_0kg10hg \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “It’s also hard to rule out the possibility that even though relevant processes (e.g., neuropeptide signaling) are proceeding on slow timescales, there are so many of them, implicating sufficiently many possible states and sufficiently complex interactions, that a lot of compute is required regardless” (p. 3).\") What’s more, the details of actually measuring and modeling different timescales in the brain may complicate arguments that appeal to them. For example, Prof. 
Eve Marder noted that traditional views about timescales separations in neuroscience emerge in part from experimental and computational constraints: in reality, slow processes and fast processes interact.[341](https://www.openphilanthropy.org/brain-computation-report#footnote341_szu7cnf \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eve Marder: “Both experimentalists and theorists sometimes act as though there’s a mechanistic wall between short-term, middle-term, and long-term changes in neural systems. This is partly because you have to come up with experiments that will occur over a given timeframe (two hours, two days, two weeks). But that doesn’t mean the time constants of these processes are two hours, two days, two weeks, etc.: it’s just that you designed an experimental protocol that allows you to see the difference between these periods of time. Historically, limitations on computational resources have also played a role in popularizing such separations. In the old days, people were limited by how much they could compute by the timesteps and integrators they were using, so there was tremendous pressure to separate timescales: no one wants to integrate over very long times at the rates you’d need to in order to capture fast dynamics. Thus, for example, people will take a model with eight or ten currents, and try to reduce it by separating timescales. If you’re clever, you can retain various essential features, but it’s hard to know if you’ve got them all. Whether or not such separations between timescales are biologically reasonable, though, they were computationally necessary, and they have resulted in ingrained beliefs in the field. In reality, the nervous system has an incredible ability to move seamlessly between timescales ranging from milliseconds to years, and the relevant processes interact. That is, short time-scale processes influence long time-scale processes, and vice versa. And unlike digital computers, the brain integrates over very long timescales at very fast speeds easily and seamlessly” (p. 2-3). In an ordinary differential equation model, variables that update more slowly might impose comparable FLOP/s costs to faster variables. \")\n\n\nIt’s also generally worth distinguishing between different lengths of time that can be relevant to a given signaling process, including:\n\n\n* How long it takes to trigger the sending of a signal X.\n* How long it takes for a signal X to reach its target Y.\n* How long it takes for X’s reaching Y to have effect Z.\n* How frequently signals X are sent.\n* How long effect Z can last.\n* How long effect Z does in fact last *in vivo*.\n\n\nThese can feed into different arguments in different ways. I’ll generally focus on the first three.\n\n\n*Spatial precision*\n\n\nIf a signaling mechanism X is less spatially precise than another mechanism Y (e.g., signals arise from the combined activities of many cells, and/or affect groups of cells, rather than being targeted at individual cells), does this suggest lower FLOP/s budgets for X, relative to Y? Again: heuristically, and other things equal, I think it does. That is, naively, units that can send and receive individualized messages seem to me better equipped to implement more complex information-processing per unit volume. 
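\n\n\nAs a toy illustration of how this heuristic interacts with the speed heuristic above, consider the difference between budgeting FLOP/s per synapse per spike and budgeting FLOP/s per broadcast signal per update. The sketch below reuses the ballpark synapse count and spike rate that appear elsewhere in this report, but the broadcast-signal count, update rate, and per-update cost are purely hypothetical placeholders, chosen only to show how the two budgets scale:\n\n\n```python\n# Toy comparison of the FLOP/s implied by spatially precise vs. broadcast signaling.\n# All numbers are assumptions for illustration, not estimates from this report.\n\nSYNAPSES = 1e14              # ballpark synapse count used elsewhere in the report\nSPIKES_PER_SYNAPSE_HZ = 1.0  # assumed average rate of spikes through each synapse\nFLOPS_PER_SPIKE = 1.0        # assumed FLOPs per spike through synapse\n\nBROADCAST_SIGNALS = 100      # hypothetical number of distinct global/diffuse signals\nUPDATES_PER_SEC = 1.0        # assumed update rate for a slow, brain-wide signal\nFLOPS_PER_UPDATE = 100.0     # generous cost of deciding each signal's new level\n\nprecise_budget = SYNAPSES * SPIKES_PER_SYNAPSE_HZ * FLOPS_PER_SPIKE        # ~1e14 FLOP/s\nbroadcast_budget = BROADCAST_SIGNALS * UPDATES_PER_SEC * FLOPS_PER_UPDATE  # ~1e4 FLOP/s\n\n# Note: this budgets only the broadcast signals themselves; modeling their effects on\n# individual cells may add costs, but those plausibly fall under the per-neuron and\n# per-synapse budgets already discussed.\nprint(f'per-synapse signaling: ~{precise_budget:.0e} FLOP/s')\nprint(f'broadcast signaling:   ~{broadcast_budget:.0e} FLOP/s')\n```\n\n\n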
And various experts took spatial precision as an important indicator of FLOP/s burdens as well.[342](https://www.openphilanthropy.org/brain-computation-report#footnote342_jdhjjxy \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador: “while global signals may be very important to a model’s function, they won’t add much computational burden (the same goes for processes that proceed on longer timescales). It takes fewer bits to specify a global signal, almost by definition” (p. 4). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Barak Pearlmutter: “He also suggested that ephaptic effects would be ‘in the noise’ because they are bulk effects, representation of which would involve one number that covers thousands of synapses” (p. 3).\") Again, though, there is no conceptual necessity here: X might nevertheless be very complex, widespread, etc. relative to Y.\n\n\n*Number/frequency*\n\n\nIf X is less common than Y, or happens less frequently, this seems to me a fairly straightforward pro tanto reason to budget fewer FLOP/s for it. I’ll treat it as such, even though clearly, it’s no guarantee.\n\n\n*Task-relevance*\n\n\nThe central role of standard neuron signaling in task-performance is well established. For many of these alternative signaling mechanisms, though, the case is weaker. Showing that you can make something happen in a petri dish, for example, is different from showing that it happens *in vivo* and matters to task-performance (let alone that it implies a larger FLOP/s budget than standard neuron signaling). Of course, in some cases, if something did happen *in vivo* and matter to task-performance, we couldn’t easily tell. But I won’t, on these grounds, assume that every candidate for such a role plays it.\n\n\nLet’s look at the mechanisms themselves.\n\n\n\n#### 2.3.1 Other chemical signals\n\n\nThe brain employs many chemical signals other than the neurotransmitters involved in standard neuron signaling. For example:\n\n\n* Neurons release larger molecules known as [neuropeptides](https://en.wikipedia.org/wiki/Neuropeptide), which diffuse through the space between cells.[343](https://www.openphilanthropy.org/brain-computation-report#footnote343_as6aafh \"Leng and Ludwig (2008): “Classical neurotransmitters are released from axon terminals by Ca2+-dependent exocytosis (Burgoyne and Morgan (2003)); they are packaged in small synaptic vesicles which are preferentially localized at synapses, although recent evidence indicates that extrasynaptic vesicular release can also occur from the somato/dendritic regions of neurones (Cheramy et al. (1981); Huang and Neher (1996); Zilberter et al. (2005)). Peptides are also released by Ca2+-dependent exocytosis, but they are packaged in large dense-core vesicles which generally are not localized to synapses; some are found at synapses, but these vesicles tend to be distributed in soma, dendrites and in axonal varicosities as well as at nerve endings” (p. 5625). See also Mains and Eipper (1999). Russo (2017): “All neuropeptides act as signal transducers via cell-surface receptors. Nearly all neuropeptides act at G-protein coupled receptors (Figure 2). This is an important distinction from ion channel-coupled receptors, since G-protein coupled signaling is consistent with neuropeptides inducing a slower and modulatory response compared to neurotransmitters. 
In addition, neuropeptide receptors have relatively high ligand affinities (nanomolar Kds), compared to neurotransmitter receptors. This allows a small amount of diffused peptide to still activate receptors. In summary, the combination of these features allows neuropeptides to be active at relatively large distances at relatively low concentrations” (p. 5). My impression is that neuropeptides can also diffuse through the blood (see Mains and Eipper (1999): “Probably the first neuropeptide to be identified was vasopressin, a nine-amino-acid peptide secreted by the nerve endings in the neural lobe of the pituitary. The source of the vasopressin is the magnocellular neurons of the hypothalamus, which send axons to the neurohypophysis, which is the site of release into the blood, in classic neurosecretory fashion”).\")\n* Neurons produce gases like nitric oxide and carbon monoxide, as well as lipids known as [endocannabinoids](https://www.sciencedirect.com/topics/neuroscience/endocannabinoids), both of which can pass directly through the cell membrane.[344](https://www.openphilanthropy.org/brain-computation-report#footnote344_zs3z49o \"See Siegelbaum et al. (2013b), (p. 248), and Alger (2002).\")\n\n\nChemicals that neurons release that regulate the activity of groups of neurons (or other cells) are known as *neuromodulators*.[345](https://www.openphilanthropy.org/brain-computation-report#footnote345_4gwl7hi \"Burrows (1996): “A neuromodulator is a messenger released from a neuron in the central nervous system, or in the periphery, that affects groups of neurons, or effector cells that have the appropriate receptors. It may not be released at synaptic sites, often acts through second messengers and can produce long-lasting effects. The release may be local so that only nearby neurons or effectors are influenced, or may be more widespread, which means that the distinction with a neurohormone can become very blurred. The act of neuromodulation, unlike that of neurotransmission, does not necessarily carry excitation of inhibition from one neuron to another, but instead alters either the cellular or synaptic properties of certain neurons so that neurotransmission between them is changed” (p. 195).\")\n\n\nChemical signals other than classical neurotransmitters are very common in the brain,[346](https://www.openphilanthropy.org/brain-computation-report#footnote346_0uoiyws \"See e.g. Smith et al. (2019): “Our analysis exposes transcriptomic evidence for dozens of molecularly distinct neuropeptidergic modulatory networks that directly interconnect all cortical neurons.”\") and very clearly involved in task performance.[347](https://www.openphilanthropy.org/brain-computation-report#footnote347_9jg93c9 \"Koch (1999): “It is difficult to overemphasize the importance of modulatory effects involving complex intracellular biochemical pathways. The sound of stealthy footsteps at night can set our heart to pound, sweat to be released, and all our senses to be at a maximum level of alertness, all actions that are caused by second messengers. They underlie the difference in sleep-wake wake behavior, in affective moods, and in arousal, and they mediate the induction of long-term term memories” (p. 
95).\") For example, they can alter the input-output function of individual neurons and neural circuits.[348](https://www.openphilanthropy.org/brain-computation-report#footnote348_4hfs29h \"Marder (2012): “Because neuromodulators can transform the intrinsic firing properties of circuit neurons and alter effective synaptic strength, neuromodulatory substances reconfigure neuronal circuits, often massively altering their output... the neuromodulatory environment constructs and specifies the functional circuits that give rise to behavior” (abstract).\")\n\n\nHowever, some considerations suggest limited FLOP/s budgets, relative to standard neuron signaling:\n\n\n* *Speed*: Signals that travel through the extracellular space are limited by the speed of chemical diffusion, and some travel distances much longer than a 20 nm synaptic cleft.[349](https://www.openphilanthropy.org/brain-computation-report#footnote349_u8wndiz \"Smith et al. (2019): “secreted neuropeptides are thought to persist long enough (e.g., minutes) in brain interstitial spaces for diffusion to very-high-affinity NP-GPCRs hundreds of micrometers distant from release sites… Though present information is limited, eventual degradation by interstitial peptidases nonetheless probably restricts diffusion of most neuropeptides to sub-millimeter, local circuit distance scales.”\") What’s more, nearly all neuropeptides act via metabotropic receptors, which take longer to have effects on a cell than the ionotropic receptors involved in standard neuron signaling.[350](https://www.openphilanthropy.org/brain-computation-report#footnote350_ap9aami \"This is a point suggested by Dr. Dario Amodei. See also Siegelbaum et al. (2013b): “whereas the action of ionotropic receptors is fast and brief, metabotropic receptors produce effects that begin slowly and persist for long periods, ranging from hundreds of milliseconds to many minutes” (p. 236). Koch (1999) says something similar, attributing the difference at least in part to the time it takes for a second messenger to diffuse through a cell: “An important distinction between ionotropic and metabotropic receptors is their time scale. While members of the former class act rapidly, terminating within a very small fraction of a second, the speed of the latter class is limited by diffusion. Biochemical reactions can happen nearly instantaneously at the neuronal time scale. However, if a synaptic input to a metabotropic receptor induces the release of some messenger, such as calcium ions, which have to diffuse to the cell body in order to ‘do their thing,’ the time scale is extended to seconds or longer\\\" (p. 95). Russo (2017): “All neuropeptides act as signal transducers via cell-surface receptors. Nearly all neuropeptides act at G-protein coupled receptors (Figure 2). This is an important distinction from ion channel-coupled receptors, since G-protein coupled signaling is consistent with neuropeptides inducing a slower and modulatory response compared to neurotransmitters” (p. 5).\")\n* *Spatial precision*: Some (maybe most?) of these chemical signals act on groups of cells. As [Leng and Ludwig (2008)](https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/18845614/) put it: “peptides are public announcements … they are messages not from one cell to another, but from one population of neurones to another.”[351](https://www.openphilanthropy.org/brain-computation-report#footnote351_z03yi0w \"See the abstract.\")\n* *Frequency*: Neuropeptides are released less frequently than classical neurotransmitters. 
For example, [Leng and Ludwig (2008)](https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/18845614/) suggest that the release of a vesicle containing neuropeptide requires “several hundred spikes,” and that oxytocin is released at a rate of “1 vesicle per cell every few seconds.”[352](https://www.openphilanthropy.org/brain-computation-report#footnote352_bbu6j2n \"Leng and Ludwig (2008): “These arguments suggest that, in the neural lobe, exocytosis of a large dense-core vesicle is a surprisingly rare event; at any given nerve terminal, it may take about 400 spikes to release a single vesicle. As these endings contain far more vesicles than are found at any synapse, synaptic release of peptides generally in the CNS seems likely to occur with a much lower probability of release. Release of oxytocin within the brain from the dendrites of magnocellular neurones is also infrequent, likely to occur at rates of only about 1 vesicle per cell every few seconds. This seems incompatible with the notion of peptides being effective and faithful mediators of information flow at short time scales and with spatial precision...There is clearly a massive qualitative discrepancy between the rates of release of synaptic vesicles and of peptide-containing vesicles … release of a peptide-containing vesicle is a comparatively rare event for any neurone” (p. 5629-5630).\") This may be partly due to resource constraints (neuropeptides, unlike classic neurotransmitters, are not recycled).[353](https://www.openphilanthropy.org/brain-computation-report#footnote353_azwuwh9 \"Leng and Ludwig (2008): “Peptide-containing vesicles may contain more than 10 times as much cargo (in terms of the number of messenger molecules)...There are no known reuptake mechanisms for the peptides and the vesicles cannot be re-used. Thus release of a peptide-containing vesicle is a comparatively rare event for any neurone, but one with potentially widespread and profound consequences (cf. volume transmission Fuxe et al. 2007)” (p. 5630).\")\n* Because neuromodulators play a key role in plasticity, some of their contributions may already fall under the budget for learning.\n\n\nThis is a coarse-grained picture of a very diverse set of chemical signals, some of which may not be so slow, imprecise, or infrequent. Still, a number of experts treat these properties as reasons to think that the FLOP/s for chemical signaling beyond standard neuron signaling would not add much to the budget.[354](https://www.openphilanthropy.org/brain-computation-report#footnote354_1py17w1 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador: “Prof. Zador believes that neuromodulation is the dominant form of global signaling in the brain. However, while global signals may be very important to a model’s function, they won’t add much computational burden (the same goes for processes that proceed on longer timescales). It takes fewer bits to specify a global signal, almost by definition” (p. 4). Dr. Dario Amodei also took the slow timescales of such signals as evidence that they would not introduce substantially additional FLOP/s. See also Moravec (1988), who writes that “broadcast chemical messages are slow and contain only a relatively small amount of information. In a program their effect can probably be mimicked by a modest number of global variables that are referenced by other computations” (p. 163).\")\n\n\n\n#### 2.3.2 Glia\n\n\nNeurons are not the only brain cells. 
Non-neuron cells known as glia have traditionally been thought to mostly act to support brain function, but there is evidence that they can play a role in information-processing as well.[355](https://www.openphilanthropy.org/brain-computation-report#footnote355_800w93j \"Araque and Navarrete (2010): “The nervous system is formed by two major cell types, neurons and glial cells. Glial cells are subdivided into different types with different functions: oligodendroglia, microglia, ependimoglia and astroglia… Glial cells, and particularly astrocytes—the most abundant glial cell type in the central nervous system—were considered to play simple supportive roles for neurons, probably because they lack long processes connecting sensory and effector organs” (p. 2375). Bullock et al. (2005): “Astrocytes are now known to communicate among themselves by means of glial transmitters and neuromodulators as well as by gap junctions (18). Moreover, astrocytes can detect neurotransmitters that are released from neuronal chemical synapses (21). These transmitters are delivered via synaptic vesicles into the synaptic cleft and diffuse to perisynaptic astrocytes. Additionally, neurotransmitters can be released outside the synapse and detected by perisynaptic glia (22, 23). In response, astrocytes can regulate communication between neurons by modifying synaptic transmission through the release of neurotransmitters and neuromodulators (18). Thus, there may be a parallel system of information processing that interacts with neuronal communication but propagates over much slower time scales through a functionally reticular network of non-neuronal cells” (p. 792). Sandberg and Bostrom (2008): “Glia cells have traditionally been regarded as merely supporting actors to the neurons, but recent results suggest that they may play a fairly active role in neural activity” (p. 36).\")\n\n\nThis evidence appears to be strongest with respect to [astrocytes](https://en.wikipedia.org/wiki/Astrocyte), a star-shaped type of glial cell that extends thin arms (“processes”) to enfold blood vessels and synapses.\n\n\n* [Mu et al. (2019)](https://www.cell.com/cell/pdf/S0092-8674(19)30621-X.pdf) suggest that zebrafish astrocytes “perform a computation critical for behavior: they accumulate evidence that current actions are ineffective and consequently drive changes in behavioral states.”[356](https://www.openphilanthropy.org/brain-computation-report#footnote356_h27j4re \"See abstract.\")\n* Astrocytes exhibit a variety of receptors, activation of which leads to increases in the concentration of calcium within the cell and consequently the release of transmitters.[357](https://www.openphilanthropy.org/brain-computation-report#footnote357_cisc9ey \"Min et al. (2012): “astrocytes can sense a wide variety of neurotransmitters and signaling molecules, and respond with increased Ca2+ signaling” (p. 3). More detail: “when stimulated with specific metabotropic receptor agonists, astrocytes display prominent and extremely slow (up to 10 s of seconds) whole-cell Ca2+ responses…. astrocytes can modulate neurons by releasing transmitters themselves. These so-called gliotransmitters are very diverse, including conventional transmitters like GABA and glutamate, as well as signaling molecules like purines, D-serine, taurine, cytokines, peptides, and metabolites like lactate (Volterra and Meldolesi (2005)). Astrocytes can release transmitters through two mechanisms. 
Firstly, they can release transmitter containing vesicles through SNARE mediated exocytosis. Astrocytes contain the necessary proteins for SNARE mediated exocytosis (Araque et al. (2000); Bezzi et al. (2004); Parpura and Zorec (2010); Schubert et al. (2011)), and genetic or pharmacological interference with proteins of the SNARE-complex in astrocytes inhibits numerous forms of astrocyte-neuron signaling (Pascual et al. (2005); Jourdain et al. (2007); Halassa et al. (2009); Henneberger et al. (2010); Min and Nevian (2012)). Secondly, transmitter can be released through reverse transport (Héja et al. (2009)), or through membrane channels (Kozlov et al. (2006); Lee et al. (2010))... (p. 2-3). See Porter and McCarthy (1997) for more discussion of astrocyte receptors.\")\n* Changes in calcium concentrations can propagate across networks of astrocytes (a calcium “wave”), enabling a form of signaling over longer distances.[358](https://www.openphilanthropy.org/brain-computation-report#footnote358_tmxk4kb \"Min et al. (2012): “When stimulated with specific metabotropic receptor agonists, astrocytes display prominent and extremely slow (up to 10 s of seconds) whole-cell Ca2+ responses. This is also true for in vivo experiments, where sensory stimulation reliably induces astroglial slow Ca2+ transients (Wang et al. (2006)) sometimes related to vascular responses (Petzold et al., 2008). The recorded Ca2+ signal can remain restricted to a single or few astrocytes responding to specific sensory stimuli (Wang et al. (2006); Schummers et al. (2008)). Additionally, since astrocytes form complex networks through gap-junctional coupling with neighboring astrocytes (for review see Giaume (2010); Giaume et al. (2010)) Ca2+ signals can spread like a wave through the astrocyte network (Nimmerjahn et al. (2009); Kuga et al. (2011)). Although the mechanisms underlying the propagation of such Ca2+ waves are not fully understood, transport of either IP3 or Ca2+ itself through gap-junctions may play an important role (Venance et al. (1997)). Furthermore, regenerative activity through astrocytic release of signaling molecules like ATP, which in turn activate Ca2+ signals in neighboring astrocytes, can be involved in Ca2+ wave propagation (Guthrie et al. (1999))” (p. 2).\") Sodium dynamics appear to play a signaling role as well.[359](https://www.openphilanthropy.org/brain-computation-report#footnote359_z83w73i \"Kirischuk et al. (2012): “In addition to generally acknowledged Ca2+ excitability of astroglia, recent studies have demonstrated that neuronal activity triggers transient increases in the cytosolic Na+ concentration ([Na+]i) in perisynaptic astrocytes. These [Na+]i transients are controlled by multiple Na+-permeable channels and Na+-dependent transporters; spatiotemporally organized [Na+]i dynamics in turn regulate diverse astroglial homeostatic responses such as metabolic/signaling utilization of lactate and glutamate, transmembrane transport of neurotransmitters and K+ buffering. In particular, near-membrane [Na+]i transients determine the rate and the direction of the transmembrane transport of GABA and Ca2+” (abstract). Bernardinell et al. (2004): “Glutamate-evoked Na+ increase in astrocytes has been identified as a signal coupling synaptic activity to glucose consumption. Astrocytes participate in multicellular signaling by transmitting intercellular Ca2+ waves. 
Here we show that intercellular Na+ waves are also evoked by activation of single cultured cortical mouse astrocytes in parallel with Ca2+ waves; however, there are spatial and temporal differences. Indeed, maneuvers that inhibit Ca2+ waves also inhibit Na+ waves; however, inhibition of the Na+/glutamate cotransporters or enzymatic degradation of extracellular glutamate selectively inhibit the Na+ wave. Thus, glutamate released by a Ca2+ wave-dependent mechanism is taken up by the Na+/glutamate cotransporters, resulting in a regenerative propagation of cytosolic Na+ increases. The Na+ wave gives rise to a spatially correlated increase in glucose uptake, which is prevented by glutamate transporter inhibition. Therefore, astrocytes appear to function as a network for concerted neurometabolic coupling through the generation of intercellular Na+ and metabolic waves” (abstract).\")\n* Astrocytes can also signal to neurons by influencing concentrations of ions or neurotransmitters in space between cells.[360](https://www.openphilanthropy.org/brain-computation-report#footnote360_6tsclpm \"Min et al. (2012): “astrocytes can sense a wide variety of neurotransmitters and signaling molecules, and respond with increased Ca2+ signaling. But how do astrocytes signal back to neurons? Broadly speaking, astrocytes can do this through three separate mechanisms. Firstly, because astrocytes are crucial for ion homeostasis, they can influence neurons by dynamically altering the ionic balance. Secondly, astrocytes can alter neuronal functioning by modulating the uptake of neurotransmitter molecules from the extracellular space (Theodosis et al. (2008)). Thirdly, astrocytes can release transmitters themselves (Araque et al. (2001))” (p. 3).\") They can regulate neuron activity, a variety of mechanisms exist via which they can influence short-term plasticity, and they are involved in both long-term plasticity and in the development of new synapses.[361](https://www.openphilanthropy.org/brain-computation-report#footnote361_f8rhgak \"Min et al. (2012): “Several studies have shown that astrocytes can regulate neuronal excitability. Astrocytes can achieve this through several mechanisms: by regulation of the extracellular ionic composition, by maintaining a tonic extracellular transmitter concentration, by regulation of basal synaptic transmission, and by the induction of phasic events in neighboring neurons” (p. 4). Min et al. (2012): “In addition to modulating neuronal excitability and basal synaptic transmission, astrocytes play a role in the specific strengthening or weakening of synaptic connections, either transiently (short-term plasticity), or long-lasting (long-term plasticity)” (p. 5). See p. 5-9 for more details on astrocyte involvement in short-term and long-term plasticity. Baldwin and Eroglu (2017): “astrocytes are key players in circuit formation, instructing the formation of synapses between distinct classes of neurons” (p. 1).\")\n* Human astrocytes also appear to be larger, and to exhibit more processes, than those of rodents, which has led to speculation that they play a role in explaining the human brain’s processing power.[362](https://www.openphilanthropy.org/brain-computation-report#footnote362_ef0zrw0 \"Oberheim et al. (2006): “Human protoplasmic astrocytes manifest a threefold larger diameter and have tenfold more primary processes than those of rodents” (p. 547). On these grounds, Oberheim et al. 
(2006) propose that the human brain’s astrocytes may play a role in explaining its unique computational power: “By integrating the activity of a larger contiguous set of synapses, the astrocytic domain might extend the processing power of human brain beyond that of other species” (p. 552).\")\n\n\nOther glia may engage in signaling as well. For example:\n\n\n* NG2 protein-expressing oligodendrocyte progenitor cells can receive synaptic input from neurons, form action potentials, and regulate synaptic transmission between neurons.[363](https://www.openphilanthropy.org/brain-computation-report#footnote363_l7aa53x \"Sakry et al. (2014): “Oligodendrocyte precursor cells (OPC) characteristically express the transmembrane proteoglycan nerve-glia antigen 2 (NG2) and are unique glial cells receiving synaptic input from neurons. The development of NG2+ OPC into myelinating oligodendrocytes has been well studied, yet the retention of a large population of synapse-bearing OPC in the adult brain poses the question as to additional functional roles of OPC in the neuronal network. Here we report that activity-dependent processing of NG2 by OPC-expressed secretases functionally regulates the neuronal network” (p. 1). Káradóttir et al. (2008): “We show here that there are two distinct types of morphologically identical oligodendrocyte precursor glial cells (OPCs) in situ in rat CNS white matter. One type expresses voltage-gated sodium and potassium channels, generates action potentials when depolarized and senses its environment by receiving excitatory and inhibitory synaptic input from axons” (p. 1).\")\n* Glial cells involved in the creation of myelin (the insulating sheath that surrounds axons) can detect and respond to axonal activity.[364](https://www.openphilanthropy.org/brain-computation-report#footnote364_1u6wenz \"Bullock et al. (2005): “Myelinating glia do not fire action potentials, but they can detect impulses in axons through membrane receptors that bind signaling molecules. These include ATP (16) and adenosine (17) that are released along the axon and also potassium that is released during intense neural activity” (p. 792). de Faria, Jr. et al. (2019): “Alternatively, active axons can also signal OPCs [oligodendrocyte precursor cells] via non‐synaptic vascular release of growth factors [e.g. platelet‐derived growth factor (PDGF) AA and neurotrophins] and neurotransmitters (e.g. glutamate, GABA or ATP). OPCs express not only ion channels including glutamate‐activated ion channels, the sodium and potassium channels, but also receptors of growth factors. These cellular properties make OPCs equipped to respond to neuronal activity” (p. 450).\")\n\n\nWould FLOP/s for the role of glia in task-performance meaningfully increase our budget? Here are some considerations:\n\n\n* *Speed*: Astrocytes can respond to neuronal events within hundreds of milliseconds,[365](https://www.openphilanthropy.org/brain-computation-report#footnote365_q5bw2s5 \"Stobart et al. (2018b): “We identified calcium responses in both astrocyte processes and endfeet that rapidly followed neuronal events (∼120 ms after). These fast astrocyte responses were largely independent of IP3R2-mediated signaling and known neuromodulator activity (acetylcholine, serotonin, and norepinephrine), suggesting that they are evoked by local synaptic activity. The existence of such rapid signals implies that astrocytes are fast enough to play a role in synaptic modulation and neurovascular coupling” (p. 726). Agarwal et al. (2017); Bindocci et al. 
(2017); Lind et al. (2018); Otsu et al. (2015); Srinivasan et al. (2015); Stobart et al. (2018a) Winship et al. (2007): “These in vivo findings suggest that astrocytes can respond to sensory activity in a selective manner and process information on a subsecond time scale, enabling them to potentially form an active partnership with neurons for rapid regulation of microvascular tone and neuron–astrocyte network properties” (p. 6268). Min et al. (2012): “Two parallel studies have indeed identified small and relatively fast Ca2+ signals that are restricted to the astrocyte process (Di Castro et al. (2011); Panatier et al. (2011)). Two main classes of local calcium events have been identified: focal highly confined transients (about 4μm) and more robust regional events (about 12 μm; Figure 1; Di Castro et al. (2011)). The more local events have been proposed to be generated by spontaneous single vesicle release at individual synapses whereas the expanded events seem to be generated by single action potentials activating several neighboring synapses in the astrocyte domain” (p. 2-3).\") and they can detect individual synaptic events.[366](https://www.openphilanthropy.org/brain-computation-report#footnote366_8tfbze2 \"Panatier et al. (2011): “we show that astrocytes in the hippocampal CA1 region detect synaptic activity induced by single-synaptic stimulation... single pulse stimulation of neuronal presynaptic elements evoked local Ca2+ events in an astrocytic process” (p. 785, p. 787).\") However, the timescales of other astrocyte calcium dynamics are thought to be slower (on the order of seconds or more), and some effects require sustained stimulation.[367](https://www.openphilanthropy.org/brain-computation-report#footnote367_h21sy2t \"Wang et al. (2009): “Astrocytes are electrically non-excitable cells that, on a slow time scale of seconds, integrate synaptic transmission by dynamic increases in cytosolic Ca2+.” Panatier et al. (2011): “the detection and modulation mechanisms in astrocytes are deemed too slow to be involved in local modulation of rapid, basal synaptic transmission. Indeed, although Ca2+ activities have been reported in glial processes (Nett et al. (2002), Perea and Araque (2005), Santello et al. (2011), Wang et al. (2006)), Ca2+ signaling has been generally studied globally in the whole astrocyte, where the slow timescale of Ca2+ changes precludes any spatial and temporal match with fast and localized synaptic transmission. Moreover, trains of sustained stimulation of afferents were necessary to induce this type of glial Ca2+ activity” (p. 785).\")\n* *Spatial resolution*: Previous work assumed that astrocyte calcium signaling could not be spatially localized to e.g. a specific cellular compartment, but this appears to be incorrect.[368](https://www.openphilanthropy.org/brain-computation-report#footnote368_rtgiec5 \"Min et al. (2012): “The temporal characteristics of astrocytic Ca2+ transients have led to the idea that unlike neurons, astrocytes display exclusively particularly slow responses, and that their signals are not suited to be restricted to small cellular compartments, as happens for example, in dendritic spines” (p. 2).\")\n* *Number*: The best counting methods available suggest that the ratio of glia to neurons in the brain is roughly 1:1 (it was previously thought to be 10:1, but this appears to be incorrect).[369](https://www.openphilanthropy.org/brain-computation-report#footnote369_jqs1ktm \"von Bartheld et al. 
(2016): “The recently validated isotropic fractionator demonstrates a glia:neuron ratio of less than 1:1 and a total number of less than 100 billion glial cells in the human brain. A survey of original evidence shows that histological data always supported a 1:1 ratio of glia to neurons in the entire human brain, and a range of 40-130 billion glial cells. We review how the claim of one trillion glial cells originated, was perpetuated, and eventually refuted” (p. 1).\") This ratio varies across regions of the brain (in the cerebral cortex, it’s about 3:1).[370](https://www.openphilanthropy.org/brain-computation-report#footnote370_bdg0xla \"von Bartheld et al. (2016): “All three methods: histology, DNA extraction, and the IF method support numbers of about 10–20 billion neurons and at most a 2-fold larger number of glial cells (20–40 billion) in the human cerebral cortical grey matter, thus supporting an average GNR of approximately 1.5. Inclusion of the white matter (that underlies the grey matter of cerebral cortex) increases the GNR to about 3.0” (p. 11)\") Astrocytes appear to be about 20-40% of glia (though these numbers may be questionable);[371](https://www.openphilanthropy.org/brain-computation-report#footnote371_mq2uhrd \"Verkhratsky and Butt, eds. (2013): “The authors tried to calculate the relative numbers of glial cell types, and they found that astrocytes accounted for ~20 percent, oligodendrocytes for 75 per cent and micro glia for 5 per cent of the total glial cell population. The identifying criteria, however, were rather doubtful, since no specific staining was employed… In the earlier morphological studies, based on 2d counting, the distribution of glial cell types was found to be: astrocytes 40 per cent, oligodendrocytes 50 per cent and microglia 5-10 percent (Blinkow and Glezer (1968)” (p. 95-96).\") and NG2 protein-expressing oligodendrocyte progenitor cells discussed above are only 2-8% of the total cells in the cortex.[372](https://www.openphilanthropy.org/brain-computation-report#footnote372_6iyn45w \"Verkhratsky and Butt, eds. (2013): “NG2-glia constitute 8-9 per cent of total cells in white matter and 2-3 per cent of total cells in the gray matter, with an estimated density of 10-140 mm2 in the adult CNS (Nishyama et al., 2009)” (p. 326).\") If the average FLOP/s cost per glial cell were the same as the average per neuron, this would likely less than double our budget.[373](https://www.openphilanthropy.org/brain-computation-report#footnote373_88pgn08 \"This was a point suggested by Dr. Dario Amodei. See also Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording: “Glial cells would imply a factor of two in required compute, but we are likely to be so many orders of magnitude wrong already that incorporating glia will not make the difference” (p. 3).\") (See the illustrative sketch below, following Figure 12.) That said, astrocytes may have more connections to other cells, on average, than neurons.[374](https://www.openphilanthropy.org/brain-computation-report#footnote374_ysj1xpz \"Oberheim et al. (2006): “Taking into account the increase in size of protoplasmic astrocytes that accompanies this increased synaptic density, we can estimate that each astrocyte supports and modulates the function of roughly two million synapses” (p. 549). Verkhratsky and Butt, eds. (2013): “A single protoplasmic astrocyte in rodent cortex contacts 4-8 neurones, surrounds ~300-600 neuronal dendrites and provides cover for up to 20,000-120,000 synapses residing within its domain (Bushong et al. 
(2002); Halassa et al. (2007b))... Human protoplasmic astrocytes are 2-3 times larger and exceedingly more complex; the processes of a single human protoplasmic astrocyte cover approximately 2 million synapses” (p. 114). Winship et al. (2007): “It is worth noting that astrocyte processes can contact up to 100,000 synapses (Bushong et al. (2002))” (p. 6271).\")\n* *Energy costs*: Neurons consume the majority of the brain’s energy. [Zhu et al. (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3325488/pdf/nihms-357436.pdf) estimate that “a non-neuronal cell only utilizes approximately 3% of that [energy] used by a neuron in the human brain” – a ratio which they take to suggest that neurons account for 96% of the energy expenditure in human cortical grey matter, and 68% in white matter.[375](https://www.openphilanthropy.org/brain-computation-report#footnote375_k31xjrh \"Their methodology assumes that “the same type of neuron or non-neuronal cells is assumed to approximately have a similar energy expenditure no matter where they located (in GM or WM)” (p. 14). Given roughly equal numbers of neurons and non-neuronal cells in the brain as a whole (see Azevedo et al. (2009), (p. 536), this would naively suggest that neurons account for roughly 97% of the brain’s overall energy consumption. However, I’m not sure that such a naive application of their estimate is appropriate.\") [Attwell and Laughlin (2001)](https://journals.sagepub.com/doi/pdf/10.1097/00004647-200110000-00001) also predict a highly lopsided distribution of signaling-related energy consumption between neurons and glia in grey matter – a distribution they suggest is supported by the distribution of mitochondria observed in [Wong-Riley (1989)](https://www.sciencedirect.com/science/article/abs/pii/0166223689901653) (see figure below). If glial cells were doing more information-processing than neurons, they would have to be doing it using much less energy – a situation in which, naively, it would appear metabolically optimal to have *more* glial cells than neurons. To me, the fact that neurons receive so much more of a precious resource suggests that they are the more central signaling element.[376](https://www.openphilanthropy.org/brain-computation-report#footnote376_q0lmi01 \"This is a point made by AI Impacts, who also add that “although we can imagine many possible designs on which glia would perform most of the information transfer in the brain while neurons provided particular kinds of special-purpose communication at great expense, this does not seem likely given our current understanding.”\")\n\n\n \n\n\n\n[![AttwellAndLaughlinMitochondria.png](https://www.openphilanthropy.org/files/Research/Brain_Compute/image8.png)](https://www.openphilanthropy.org/files/Research/Brain_Compute/image8.png)**Figure 12: Comparing neuron and glia energy usage in grey matter**. From Attwell, David and Laughlin, Simon. “[An Energy Budget for Signaling in the Grey Matter of the Brain](https://journals.sagepub.com/doi/pdf/10.1097/00004647-200110000-00001)”, Journal of Cerebral Blood Flow and Metabolism, 21:1133–1145, 2001; FIG. 3B, p. 1140, © 2001 The International Society for Cerebral Blood Flow and Metabolism. Reprinted by Permission of SAGE Publications, Ltd. FIG. 3A in the original text is not shown, original caption in endnote.[***377***](https://www.openphilanthropy.org/brain-computation-report#footnote377_b5l023g)\n\n
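To make the *Number* and *Energy costs* considerations above concrete, here is a minimal sketch of the scaling arithmetic. The cell counts are the rough figures cited above; the per-cell cost is a pure placeholder (only the ratios matter), and the assumption that a glial cell would cost as much to capture as an average neuron is a deliberately pessimistic simplification, not an estimate defended in this report.

```python
# Illustrative scaling arithmetic for the glia considerations above.
# Counts are the rough figures cited in the "Number" bullet; the per-cell
# cost is a placeholder, since only the ratios matter for the conclusion.
neurons = 86e9                # ~86 billion neurons (Azevedo et al. (2009))
glia = 1.0 * neurons          # ~1:1 glia:neuron ratio (von Bartheld et al. (2016))
astrocytes = 0.3 * glia       # astrocytes ~20-40% of glia; 30% taken as a midpoint

flop_s_per_neuron = 1.0       # placeholder per-cell cost; any value gives the same ratios
baseline = neurons * flop_s_per_neuron

# Pessimistic case: every glial cell is as costly to capture as an average neuron.
all_glia = baseline + glia * flop_s_per_neuron
# Narrower case: only astrocytes carry a neuron-like cost.
astro_only = baseline + astrocytes * flop_s_per_neuron

print(f"all glia at neuron cost:        {all_glia / baseline:.1f}x the neuron-only budget")
print(f"only astrocytes at neuron cost: {astro_only / baseline:.1f}x the neuron-only budget")
```

Even under the pessimistic assumption, the ~1:1 glia:neuron ratio caps the increase at roughly 2x a neuron-only budget; if only astrocytes carried neuron-like costs, the increase would be closer to ~1.3x.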
\n\n\nOverall, while some experts are skeptical of the importance of glia to information-processing, the evidence that they play at least some role seems to me fairly strong.[378](https://www.openphilanthropy.org/brain-computation-report#footnote378_e2d4kms \" From Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador: “Glia are very important to understanding disease, but Prof. Zador does not believe that they are important to computing in the brain” (p. 4).\") How central of a role, though, is a further question, and the total number of glial cells, together with their limited energy consumption relative to neurons, does not, to me, initially suggest that capturing this role would require substantially more FLOP/s than capturing standard neuron signaling and learning.\n\n\n\n#### 2.3.3 Electrical synapses\n\n\nIn addition to the chemical synapses involved in standard neuron signaling, neurons (and other cells) also form *electrical synapses* – that is, connections that allow ions and other molecules to flow directly from one cell into another. The channels mediating these connections are known as *[gap junctions](https://en.wikipedia.org/wiki/Gap_junction#:~:text=Gap%20junctions%20are%20a%20specialized,a%20regulated%20gate%20between%20cells.)*.\n\n\nThese have different properties than chemical synapses. In particular:\n\n\n* Electrical synapses are faster, passing signals in a fraction of a millisecond.[379](https://www.openphilanthropy.org/brain-computation-report#footnote379_84hhw86 \"See Siegelbaum and Koester (2013d), (p. 178)\")\n* Electrical synapses can be bi-directional, allowing each cell to influence the other.[380](https://www.openphilanthropy.org/brain-computation-report#footnote380_6odmoo1 \"See Siegelbaum and Koester (2013d), (p. 178)\")\n* Electrical synapses allow graded transmission of sub-threshold electrical signals.[381](https://www.openphilanthropy.org/brain-computation-report#footnote381_xopzi3z \"See Siegelbaum and Koester (2013d), (p. 178)\")\n\n\nMy impression is that electrical synapses receive much less attention in neuroscience than chemical synapses. This may be because they are thought to be some combination of:\n\n\n* Much less common.[382](https://www.openphilanthropy.org/brain-computation-report#footnote382_au6fl09 \"Siegelbaum and Koester (2013d): “Most synapses in the brain are chemical” (p. 177). Lodish et al. (2000): “We also briefly discuss electric synapses, which are much rarer, but simpler in function, than chemical synapses.” Purves et al. (2001): “Although they are a distinct minority, electrical synapses are found in all nervous systems, including the human brain.” Wang et al. (2010) suggest probabilities of 0.5% and 1.4% of coupling between pyramidal cells in different brain regions. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Chris Eliasmith: “Adding gap junctions probably would not substantially increase the overall compute budget, because they are not very common” (p. 4). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Barak Pearlmutter: “Prof. Pearlmutter characterized the comparatively minimal number of gap junction as the “bottom line” with respect to their computational role (p. 
3).\")\n* More limited in the behavior they can produce (chemical synapses, for example, can amplify pre-synaptic signals).[383](https://www.openphilanthropy.org/brain-computation-report#footnote383_xlrufz4 \"Siegelbaum and Koester (2013d): “Electrical synapses are employed primarily to send rapid and stereotyped depolarizing signals. In contrast, chemical synapses are capable of more variable signaling and thus can produce more complex behaviors. They can mediate either excitatory or inhibitory actions in postsynaptic cells and produce electrical changes in the postsynaptic cell that last from milliseconds to many minutes. Chemical synapses also serve to amplify neuronal signals, so even a small presynaptic nerve terminal can alter the response of large postsynaptic cells. Not surprisingly, most synapses in the brain are chemical” (p. 177). Bullock et al. (2005) also suggest that “electrical transmission through gap junctions was initially considered primitive and likely incapable of the subtleties of chemical transmission through axon-dendrite synapses” (p. 792). From Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “From a computational perspective, electrical synapses lack gain -- the ability to amplify signals. Dr. Riedel recalls that gain is a key property of computational units like transistors” (p. 5).\")\n* Involved in synchronization between neurons, or global oscillation, that does not imply complex information-processing.[384](https://www.openphilanthropy.org/brain-computation-report#footnote384_2xywc20 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “Sometimes the coupling between neurons created by gap junctions is so fast that they are treated as one neuron for modeling purposes. Gap junctions are also often thought of as supporting some kind of oscillation or globally coherent behavior that might not require a lot of computation. Whether gap junctions could create more computationally-expensive, non-linear interactions between different parts of neurons is an interesting question” (p. 6). Bennett and Zukin (2004): “Gap junctions can synchronize electrical activity and may subserve metabolic coupling and chemical communication as well. They are thought to play an important role in brain development, morphogenesis, and pattern formation (Bennett et al. (1991), Bruzzone et al. (1996), Dermietzel et al. (1989), Goodenough et al. (1996))” (p. 495).\")\n* Amenable to very simple modeling.[385](https://www.openphilanthropy.org/brain-computation-report#footnote385_qgzeuf5 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Barak Pearlmutter: “[Prof. Pearlmutter] took the fact that gap junctions are roughly linear, and that they don’t involve time delays, as evidence they would be easy to model” (p. 3). Though Bullock et al. (2005) seem to suggest some forms of complex behavior: “an electrical impulse in one cell by no means inevitably propagates to the other cells with which it shares gap junctions. In fact, a channel within a gap junction is not necessarily open, and an entire gap junction may not transmit electrical current until it is appropriately modified in response to transmission from chemical synapses of the same, ‘presynaptic’ neuron” (p. 792).\")\n\n\nStill, electrical synapses can play a role in task-performance,[386](https://www.openphilanthropy.org/brain-computation-report#footnote386_59s5so0 \"Trenholm et al. 
(2013): “We identified a network of electrically coupled motion–coding neurons in mouse retina that act collectively to register the leading edges of moving objects at a nearly constant spatial location, regardless of their velocity” (abstract).\") and one expert suggested that they could create computationally expensive non-linear dynamics.[387](https://www.openphilanthropy.org/brain-computation-report#footnote387_6n9r1pd \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Larson: “Dr. Larson thinks that gap junctions can contribute to non-linear dynamics and near-chaotic dynamics within neural networks. As a rough rule of thumb: the more non-linear a system is, the more computationally expensive it is to simulate” (p. 3).\") What’s more, if they are sufficiently fast, or require sufficiently frequent updates, this could compensate for their low numbers. For example, one expert suggested that you can model gap junctions as synapses that update every timestep.[388](https://www.openphilanthropy.org/brain-computation-report#footnote388_6f7x4tj \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Chris Eliasmith: “You can model a gap junction as a connection that updates every timestep, rather than every time a spike occurs” (p. 4).\") But if chemical synapses receive spikes, and hence update, only ~once per second, while gap junctions update at every 1 ms timestep, then gap junctions would need to be ~1000x less numerous than chemical synapses for their updates not to dominate the budget (as sketched below).\n\n
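Here is a minimal sketch of that bookkeeping. The ~1 Hz spike-driven update rate and the 1 ms timestep come from the sentence above; the synapse count and the per-update FLOP cost are placeholder assumptions, used only to illustrate the ratio.

```python
# Illustrative comparison of update costs: chemical synapses modeled as updating
# only when a spike arrives (~1 Hz), gap junctions modeled as updating on every
# 1 ms timestep. The counts and per-update cost below are placeholders.
TIMESTEP_S = 0.001          # 1 ms simulation timestep
SPIKE_RATE_HZ = 1.0         # ~1 spike-driven update per chemical synapse per second
FLOP_PER_UPDATE = 10.0      # assumed cost of a single connection update

N_CHEMICAL = 1e14           # rough order-of-magnitude synapse count (placeholder)
chemical_flop_s = N_CHEMICAL * SPIKE_RATE_HZ * FLOP_PER_UPDATE

for n_gap in (N_CHEMICAL, N_CHEMICAL / 1000):
    gap_flop_s = n_gap * (1.0 / TIMESTEP_S) * FLOP_PER_UPDATE
    ratio = gap_flop_s / chemical_flop_s
    print(f"{n_gap:.0e} gap junctions -> {ratio:g}x the chemical-synapse update cost")
```

With equal counts, the per-timestep updates would dominate by ~1000x; only with ~1000x fewer gap junctions do the two costs come out comparable.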
\n\nOverall, my best guess is that incorporating electrical synapses would not substantially increase our FLOP/s budget, but this is centrally based on a sense that experts treat their role in information-processing as relatively minor.\n\n\n\n#### 2.3.4 Ephaptic effects\n\n\nNeuron activity creates local electric fields that can have effects on other neurons. These are known as *ephaptic effects*. We know that these effects can occur *in vitro* (see especially [Chiang et al. (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6312416/pdf/TJP-597-249.pdf))[389](https://www.openphilanthropy.org/brain-computation-report#footnote389_6limr9r \"They show that a wave of periodic neural activity can propagate across two physically separated pieces of hippocampal tissue (separation that removes the possibility of chemical or electrical synaptic communication), and that this propagation was blocked by a mechanism that cancels the relevant electrical field -- results that strongly suggest ephaptic effects as a causal mechanism. Chiang et al. (2019): “To confirm the absence of any role of synaptic transmission and to eliminate other forms of communication between neurons except for ephaptic coupling, we next examined the possibility that electric fields generated by pyramidal neurons could propagate through a cut in the tissue by activating other cells across a small gap of the tissue, thereby eliminating chemical, electrical synapses (gap junctions), or axonal transmission. Fig. 4A and B shows the propagation of the slow hippocampal periodic activity before and after the cut in the tissue. To ensure that the slice was completely cut, the two pieces of tissue were separated and then rejoined while a clear gap was observed under the surgical microscope. The slow hippocampal periodic activity could indeed generate an event on the other side of a complete cut through the whole slice (Fig. 4B). However, the slow hippocampal periodic activity failed to trigger the activity across the gap when the distance of the gap increased (Fig. 4C). The expanded window in Fig. 4D shows that the waveforms of the slow hippocampal periodic activity and the delay between two signals measured in recording electrodes 1 and 2 were similar. The speed of the slow hippocampal periodic activity across the tissue was not affected by the presence of the cut in Fig. 4E (t test, n = 36 events in 3 slices). Therefore, this experiment shows that slow hippocampal periodic activity can propagate along a cut tissue by activating cells on the other side without any chemical and electrical synaptic connections at a similar speed to those observed in the intact tissue” (p. 255).\") and entrain action potential firing,[390](https://www.openphilanthropy.org/brain-computation-report#footnote390_cdl8b62 \"Anastassiou et al. (2011): “We found that extracellular fields induced ephaptically mediated changes in the somatic membrane potential that were less than 0.5 mV under subthreshold conditions. Despite their small size, these fields could strongly entrain action potentials, particularly for slow (<8 Hz) fluctuations of the extracellular field” (abstract). Chang (2019): “Ephaptic coupling has been suggested as a mechanism involved in modulating neural activity from different regions of the nervous system (Jefferys (1995); Weiss and Faber (2010); Anastassiou and Koch (2015)) especially in the vertebrate retina (Vroman et al. (2013)) and in the olfactory circuit (Su et al. (2012)). Several studies also indicate that weak electric fields can influence the neural activity at the cortical and hippocampal network level (Francis et al. (2003); Deans et al. (2007); Fröhlich and McCormick (2010)). In hippocampal slices, weak electric fields can affect the excitability of pyramidal cells and the synchronization of the hippocampal network (Francis et al. (2003); Deans et al. (2007)). In the cortex, weak electric fields have also been shown to modulate slow periodic activity in the in vitro preparation (Fröhlich and McCormick (2010)). Although endogenous electric fields are thought to be too weak to excite neurons, two recent studies suggest that weak electric fields are involved in the propagation of epileptiform activity at a specific speed of 0.1 m s−1 (Zhang et al. (2014); Qiu et al. (2015))” (p. 250).\") and [Chiang et al. (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6312416/pdf/TJP-597-249.pdf) suggest that they may explain slow oscillations of neural activity observed *in vivo*.[391](https://www.openphilanthropy.org/brain-computation-report#footnote391_tk2oy18 \"Chiang et al. (2019): “Slow oscillations have been observed to propagate with speeds around 0.1 m s−1 throughout the cerebral cortex in vivo… The mechanism most consistent with the data is ephaptic coupling whereby a group of neurons generates an electric field capable of activating the neighbouring neurons” (p. 
250).\")\n\n\nA recent paper, though, suggests that the question of whether they have any functional relevance *in vivo* remains quite open,[392](https://www.openphilanthropy.org/brain-computation-report#footnote392_ofakk78 \"Anastassiou and Koch (2015): “The biggest question about ephaptic coupling to endogenous fields remains its functional role: does such nonsynaptic, electric communication contribute to neural function and computationsin the healthy brain (e.g., in the absence of the strong fields generated during epileptic seizures or other pathological brain states)? And, if yes, where, how and under which conditions? While characterizing ephaptic effects at the level of synapses, neurons and circuits in slice remains invaluable, ephaptic coupling must ultimately be studied in behaving animals. This is particularly so as such effects are likely to be small (e.g., compared to spike threshold) and spatially diffuse (in the case of LFPs), suggesting a circuit-wide feedback mechanism, that is, at the level where neural processing relevant to behavior occurs [62]” (see “Outlook”).\") and one expert thought them unlikely to be important to task-performance.[393](https://www.openphilanthropy.org/brain-computation-report#footnote393_dqg03pl \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador: “Prof. Zador believes that ephaptic communication is very unlikely to be important to the brain’s information-processing” (p. 4).\")\n\n\nOne reason for doubt is that the effects on neuron membrane potential appear to be fairly small (e.g., <0.5 mV, compared with the ~15 mV gap between resting membrane potential and the threshold for firing),[394](https://www.openphilanthropy.org/brain-computation-report#footnote394_jn5mqgj \"Resting membrane potential is typically around -70 mV, and the threshold for firing is around -55 mV, though these vary somewhat. Anastassiou and Koch (2015): “such effects are likely to be small (e.g., compared to spike threshold)” (see “Outlook”).\") and may be drowned out by noise artificially lacking *in vitro*.[395](https://www.openphilanthropy.org/brain-computation-report#footnote395_scbcm3r \"Anastassiou and Koch (2015): “The usefulness of such studies for understanding ephaptic coupling to endogenous fields is limited–chiefly, the cases emulated in slice oversimplify in vivo activity where neurons are continuously bombarded by hundreds of postsynaptic currents along their intricate morphology in the presence of a spatially inhomogeneous and temporally dynamic electric field (Figure 1c; compare to fields in Figure 1a,b). Such limitations are present both for fields induced across parallel plates positioned millimeters away from each other (e.g., [24, 25, 30]) as well as fields elicited via stimulation pipettes (e.g., [1, 28]). To account for the impact of endogenous fields on single neurons, both the intracellular and extracellular voltage would not only need to be monitored along a single cell but also manipulated, and all this in the behaving animal” (see “Neurons (mesoscale)”).\")\n\n\nEven if they were task-relevant, though, they would be spatially imprecise – arising from, and exerting effects on, the activity of groups of neurons, rather than on individual cells. 
Two experts took this as reason to think their role in task-performance would not be computationally expensive to capture.[396](https://www.openphilanthropy.org/brain-computation-report#footnote396_brppyq2 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador: “Prof. Zador believes that ephaptic communication is very unlikely to be important to the brain’s information-processing. Even if it was important, though, it would be a form of global signaling, and so comparatively inexpensive to model.” (p. 4). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Barak Pearlmutter: “He also suggested that ephaptic effects would be ‘in the noise’ because they are bulk effects, representation of which would involve one number that covers thousands of synapses” (p. 3).\") That said, actually modeling electric fields seems plausibly quite FLOP/s-intensive.[397](https://www.openphilanthropy.org/brain-computation-report#footnote397_0nyd56c \"Sandberg and Bostrom (2008): “If ephaptic effects were important, the emulation would need to take the locally induced electromagnetic fields into account. This would plausibly involve dividing the extracellular space (possibly also the intracellular space) into finite elements where the field can be assumed to be constant, linear or otherwise easily approximable. The cortical extracellular length constant is on order of ≈100 μm (Gardner‐Medwin (1983)), which would necessitate on the order of 1.4∙10^12 such compartments if each compartment is 1/10 of the length constant. 37 Each compartment would need at least two vector state variables and 6 components of a conductivity tensor; assuming one byte for each, the total memory requirements would be on the order of 10 terabytes. Compared to estimates of neural simulation complexity, this is relatively manageable. The processing needed to update these compartments would be on the same order as a detailed compartment model of every neuron and glia cell” (p. 36-7).\")\n\n\n\n#### 2.3.5 Other forms of axon signaling\n\n\nAction potentials are traditionally thought of as binary choices – a neuron fires, or it doesn’t – induced by changes to somatic membrane potential, and synaptic transmission as a product of this binary choice.[398](https://www.openphilanthropy.org/brain-computation-report#footnote398_xlo7zii \"Bullock et al. (2005), describing the history of early neuroscience: “physiological studies established that conduction of electrical activity along the neuronal axon involved brief, all-or-nothing, propagated changes in membrane potential called action potentials. It was thus often assumed that neuronal activity was correspondingly all-or-nothing and that action potentials spread over all parts of a neuron. The neuron was regarded as a single functional unit: It either was active and “firing” or was not” (p. 791).\") But in some contexts, this is too simple. For example:\n\n\n* The waveform of an action potential (that is, its amplitude and duration) can vary in a way that affects neurotransmitter release.[399](https://www.openphilanthropy.org/brain-computation-report#footnote399_gos1qt3 \"Zbili and Debanne (2019): “When it invades the presynaptic terminal, the spike provokes the opening of voltage-gated calcium channels (Cav), leading to an increase of Ca2+ concentration in the bouton and the release of neurotransmitters. 
Due to the power law between intra-terminal Ca2+ concentration and neurotransmitter release, small variations in presynaptic calcium entry, occurring through spike shape modifications, can lead to large changes in synaptic transmission (Sabatini and Regehr (1997); Bollmann et al. (2000); Bischofberger et al. (2002); Fedchyshyn and Wang (2005); Yang and Wang (2006); Bucurenciu et al. (2008); Scott et al. (2008); Neishabouri and Faisal (2014)). In fact, spike broadening during repetitive firing entails synaptic transmission facilitation in the pituitary nerve (Jackson et al. (1991)), dorsal root ganglion (Park and Dunlap (1998)) and mossy fiber bouton (Geiger and Jonas (2000)). Other studies showed that spike amplitude depression during repetitive firing provokes a decrease in synaptic transmission at hippocampal (Brody and Yue (2000); Prakriya and Mennerick (2000); He et al. (2002)) and cerebellar synapses (Kawaguchi and Sakaba (2015))” (p. 2).\")\n* Variations in the membrane potential that occur below the threshold of firing (“subthreshold” variations) can also influence synaptic transmission.[400](https://www.openphilanthropy.org/brain-computation-report#footnote400_5lk652l \"Zbili and Debanne (2019): “the synaptic strength depends on the subthreshold membrane potential of the presynaptic cell, indicating that the presynaptic spike transmits this analog information to the postsynaptic cell. However, the direction of this modulation of synaptic transmission seems to depend on the type of synapse” (p. 5). Zbili and Debanne (2019), reviewing the literature on effects of this broad type, report increases in neurotransmitter release ranging from 10-100%, depending on the study (p. 7). Shu et al. (2006), for example, caused a 29% median enhancement to the impact of a spike through synapse in ferret pyramidal cells by changing the membrane potential in the soma in a manner that stayed below the threshold for an action potential (abstract).\")\n* Certain neurons – for example, neurons in early sensory systems,[401](https://www.openphilanthropy.org/brain-computation-report#footnote401_rhcqw2u \"Juusola et al. (1996): “Many neurons use graded membrane-potential changes, instead of action potentials, to transmit information. Traditional synaptic models feature discontinuous transmitter release by presynaptic action potentials, but this is not true for synapses between graded-potential neurons. In addition to graded and continuous transmitter release, they have multiple active zones, ribbon formations and L-type Ca2+ channels. These differences are probably linked to the high rate of vesicle fusion required for continuous transmitter release. Early stages of sensory systems provide some of the best characterized graded-potential neurons, and recent work on these systems suggests that modification of synaptic transmission by adaptation is a powerful feature of graded synapses” (abstract).\") and neurons in invertebrates[402](https://www.openphilanthropy.org/brain-computation-report#footnote402_pwxc9c0 \"Graubard et al. (1980): “Graded synaptic transmission occurs between spiking neurons of the lobster stomatogastric ganglion. In addition to eliciting spike-evoked inhibitory potentials in postsynaptic cells, these neurons also release functionally significant amounts of transmitter below the threshold for action potentials. The spikeless postsynaptic potentials grade in amplitude with presynaptic voltage and can be maintained for long periods. 
Graded synaptic transmission can be modulated by synaptic input to the presynaptic neuron” (p. 3733).\") – also release neurotransmitter continually, in amounts that depend on non-spike changes to pre-synaptic membrane potential.[403](https://www.openphilanthropy.org/brain-computation-report#footnote403_xki512t \"Graded synaptic transmission is distinct from the spontaneous release of neurotransmitter associated with what are called “miniature postsynaptic currents.” From Faisal et al. (2008): “The classic manifestation of synaptic noise is the spontaneous miniature postsynaptic current (mPSC) that can be recorded in the absence of presynaptic input. Katz and collaborators interpreted mPSCs as being the result of spontaneously released neurotransmitter vesicles, thus establishing the quantal nature of synaptic transmission” (p. 7).\")\n* Some *in vitro* evidence suggests that action potentials can arise in axons without input from the soma or dendrites.[404](https://www.openphilanthropy.org/brain-computation-report#footnote404_hkmbxy8 \"See Dugladze et al. (2012): “We found that during in vitro gamma oscillations, ectopic action potentials are generated at high frequency in the distal axon of pyramidal cells (PCs) but do not invade the soma. At the same time, axo-axonic cells (AACs) discharged at a high rate and tonically inhibited the axon initial segment, which can be instrumental in preventing ectopic action potential back-propagation. We found that activation of a single AAC substantially lowered soma invasion by antidromic action potential in postsynaptic PCs. In contrast, activation of soma-inhibiting basket cells had no significant impact. These results demonstrate that AACs can separate axonal from somatic activity and maintain the functional polarization of cortical PCs during network oscillations” (abstract). See also Sheffield (2011): “In a subset of rodent hippocampal and neocortical interneurons, hundreds of spikes, evoked over minutes, resulted in persistent firing that lasted for a similar duration. Although axonal action potential firing was required to trigger persistent firing, somatic depolarization was not. In paired recordings, persistent firing was not restricted to the stimulated neuron – it could also be produced in the unstimulated cell. Thus, these interneurons can slowly integrate spiking, share the output across a coupled network of axons, and respond with persistent firing even in the absence of input to the soma or dendrites” (abstract).\")\n\n\nDo these imply substantial increases to FLOP/s budgets? Most of the studies I looked at seemed to be more in the vein of “here is an effect that can be created *in vitro*” than “here is a widespread effect relevant to *in vivo* task-performance,” but I only looked into this very briefly, the possible mechanisms/complexities are diverse, and evidence of the latter type is rare regardless.\n\n\nSome effects (though not all)[405](https://www.openphilanthropy.org/brain-computation-report#footnote405_roiaxlr \"Pre-synaptic hyperpolarization (decreasing the membrane potential) can have effects within 15-50 ms. Zbili and Debanne (2019): “ADFs present various time constants which determine their potential roles in network physiology. In fact, in most of the studies, d-ADF needs 100 ms to several seconds of presynaptic depolarization to occur. On the contrary, h-ADF can be produced by fast presynaptic hyperpolarization (15–50 ms; Rama et al. (2015a)). 
This difference is well explained by the underlying mechanism of d-ADF and h-ADF: slow accumulation of basal Ca2+ (Bouhours et al. (2011); Christie et al. (2011)) or slow Kv inactivation for d-ADF (Shu et al. (2006), Shu et al. (2007); Kole et al. (2007); Bialowas et al. (2015)), fast recovery from inactivation of Nav for h-ADF (Rama et al. (2015a); Zbili et al. (2016)). Therefore, d-ADF and h-ADF should have different consequences on information transfer in neuronal networks” (p. 8).\") also required sustained stimulation (e.g., “hundreds of spikes over several minutes,”[406](https://www.openphilanthropy.org/brain-computation-report#footnote406_icczzz2 \"Sheffield (2011): “In a subset of rodent hippocampal and neocortical interneurons, hundreds of spikes, evoked over minutes, resulted in persistent firing that lasted for a similar duration” (abstract).\") or “100 ms to several seconds of somatic depolarization”[407](https://www.openphilanthropy.org/brain-computation-report#footnote407_x22kc0f \"Zbili and Debanne (2019) report that in most studies, it takes “100 ms to several seconds of presynaptic depolarization” (p. 8).\")); and the range of neurons that can support axon signaling via sub-threshold membrane potential fluctuations also appears somewhat unclear, as the impact of such fluctuations is limited by the voltage decay along the axon.[408](https://www.openphilanthropy.org/brain-computation-report#footnote408_421accc \"My understanding is that the applicability of this consideration depends on the “length” or “space” constant associated with different axons in the brain, where the relevant issue is that the influence of pre-synaptic membrane potential changes along the axon decays exponentially in absence of active participation from ion channels. Here’s Backyard Brains on the length/space constant: “let's talk about the length constant (this is sometimes also called the \\\"space constant\\\"). The length constant (λ, or lambda) is a measure of how far the voltage travels down the axon before it decays to zero. If you have a length constant of 1 mm, that means at 1 mm away from the cell body in an axon, 37% of the voltage magnitude remains. At 2 mm away from the cell body in an axon, 14% of the magnitude remains, and at 3 mm away, 5% remains. This is representative of an ‘exponential decay’ function.” Here’s Zbili and Debanne (2019) on how this applies to analog-digital signaling along the axon: “One of the main issues concerning Analog-Digital Facilitations is the spatial extent of these phenomena along the axon. In fact, ADFs are produced by subthreshold modifications of the somatic potential that spreads to the presynaptic terminal and modifies presynaptic spike shape or basal Ca2+ (Debanne et al. (2013); Rama et al. (2015b)). Therefore, the axonal space constant is a major determinant of the spatial extent of ADF. The axonal space constant varies among neuronal types, depending on the axonal diameter, the density of axonal branching and the axonal membrane resistance (Sasaki et al. (2012)). In CA3 hippocampal neurons, the axonal space constant has been evaluated around 200–500 μm (Sasaki et al. (2012); Bialowas et al. (2015); Rama et al. (2015a)). In L5 pyramidal neurons, the value estimated ranges between 500 μm (Shu et al. (2006); Kole et al. (2007)) and 1,000 μm (Christie and Jahr (2009)). In CA1 pyramidal neurons, the axonal space constant was found to be around 700 μm (Kim (2014)). Therefore, ADFs seem to be restricted to local brain circuits. 
For example, d-ADF has been found between CA3 neurons but not at the synapses between CA3 and CA1 neurons (Sasaki et al. (2012)). However, several lines of evidence suggest that ADFs could also occur between more distant neurons...” (p. 160).\")\n\n\nOverall, though, I don’t feel very informed or clear about this one. As with electrical synapses, I think the central consideration for me is that the field doesn’t seem to treat it as central.\n\n\n\n#### 2.3.6 Blood flow\n\n\nBlood flow in the brain correlates with neural activity (this is why [fMRI](https://en.wikipedia.org/wiki/Functional_magnetic_resonance_imaging) works). This is often explained via the blood’s role in maintaining brain function (e.g., supplying energy, removing waste, regulating temperature).[409](https://www.openphilanthropy.org/brain-computation-report#footnote409_120hgih \"Moore and Cao (2008): “The standard modern view of blood flow is that it serves a physiological function unrelated to information processing, such as bringing oxygen to active neurons, eliminating “waste” generated by neural activity, or regulating temperature” (p. 2035).\") [Moore and Cao (2008)](https://journals.physiology.org/doi/pdf/10.1152/jn.01366.2006), though, suggest that blood flow could play an information-processing role as well – for example, by delivering diffusible messengers like nitric oxide, altering the shape of neuron membranes, modulating synaptic transmission by changing brain temperatures, and interacting with neurons indirectly via astrocytes.[410](https://www.openphilanthropy.org/brain-computation-report#footnote410_4u1cx32 \"See Moore and Cao (2008), (p. 2037-2040).\") The timescales of activity-dependent changes in blood flow are on the order of hundreds of milliseconds (the effects of such changes often persist after a stimulus has ended, but Moore and Cao believe this is consistent with their hypothesis).[411](https://www.openphilanthropy.org/brain-computation-report#footnote411_mrs0jjj \"Moore and Cao (2008): “the somatosensory neocortex, blood flow increases measured using laser Doppler have been observed <200 ms after the onset of sensory-evoked neural responses (Matsuura et al. (1999); Norup Nielsen and Lauritzen (2001)). Similarly, optical imaging techniques that integrate over local volumes at somewhat slower temporal resolution typically record a significant increase in flow within ≤500 ms of sensory stimulus presentation (Dunn et al. (2005); Malonek et al. (1997); Martin et al. (2006)). The subsequent duration of these increases is often viewed as “poorly correlated” with neural activity, because functional hyperemia can sustain for seconds after the onset and offset of a stimulus. As discussed in a later section, this sustained temporal pattern may not be a mismatch between activity and flow, but rather may be consistent with the information processing role of blood flow” (p. 2037).\")\n\n\nMy impression, though, is that most experts don’t think that blood flow plays a very direct or central role in information-processing.[412](https://www.openphilanthropy.org/brain-computation-report#footnote412_mipfaff \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Larson: “It’s generally thought that blood flow is more of an epiphenomenon/a sign that other forms of information processing are occurring (akin to the heat generated by a CPU), than a mechanism of information-processing in itself” (p. 
4).\") And the spatial resolution appears fairly coarse regardless: [Moore and Cao (2008)](https://journals.physiology.org/doi/pdf/10.1152/jn.01366.2006) suggest resolution at the level of a cortical column (a group of neurons[413](https://www.openphilanthropy.org/brain-computation-report#footnote413_cih30ha \"The exact number, along with the definition of a column, appears to be the subject of some debate (see Rakic (2008) for complaints). Krueger (2008): “In humans, each column contains 1000 to 10,000 cells.”\")), or an olfactory glomerulus (a cluster of connections between cells).[414](https://www.openphilanthropy.org/brain-computation-report#footnote414_fqi0b9b \"Moore and Cao (2008): “In the somatosensory and visual neocortex, a general consensus exists that the pattern of increased blood flow is similar to that of subthreshold neural activity, with a peak in signal that is localized to a cortical column (400 μm) and an extent spanning several columns (Dunn et al. (2005); Hess et al. (2000); Lauritzen (2001); Sheth et al. (2004); Vanzetta et al. (2004); Yang et al. (1998)) … In other brain areas, evidence for more precise delivery has also been observed, because flow can be localized to a single glomerulus in the olfactory bulb during stimulus presentation (i.e., 100 μm) (Chaigneau et al. (2003); Yang et al. (1998))” (p. 2037).\")\n\n\n\n#### 2.3.7 Overall FLOP/s for other signaling mechanisms\n\n\nHere is a chart summarizing some of the considerations just canvassed (see the actual sections for citations).\n\n\n\n\n\n| MECHANISM | DESCRIPTION | SPEED | SPATIAL PRECISION | NUMBER/FREQUENCY | EVIDENCE FOR TASK-RELEVANCE |\n| --- | --- | --- | --- | --- | --- |\n| **Other chemical signals** | Chemical signals other than classical neurotransmitters. Includes neuropeptides, gases like nitric oxide, endocannabinoids, and others. | Limited by the speed of chemical diffusion, and by the timescales of metabotropic receptors. | Imprecise. Affect groups of cells by diffusing through the extracellular space and/or through cell membranes, rather than via synapses. | Very common. However, some signal broadcasts are fairly rare, and may take ~400 spikes to trigger. | Strong. Can alter circuit dynamics and neuron input-output functions, role in synaptic plasticity. |\n| **Glia** | Non-neuron cells traditionally thought to play a supporting role in the brain, but some of which may be more directly involved in task-performance. | Some local calcium responses within ~100 ms; other calcium signaling on timescales of seconds or longer. | Can respond locally to individual synaptic events. | ~1:1 ratio with neurons (not 10:1). Astrocytes (the most clearly task-relevant type of glial cell) are only 20-40% of glia. | Moderate. Role in zebrafish behavior. Plausible role in plasticity, synaptic transmission, and elsewhere. However, glia have a much smaller energy budget than neurons. |\n| **Electrical synapses** | Connections between cells that allow ions and other molecules to flow directly from one to the other. | Very fast. Can pass signals in a fraction of a millisecond. | Precise. Signals are passed between two specific cells. But may function to synchronize groups of neurons. | Thought to be less common than chemical synapses (but may be passing signals more continuously, and/or require more frequent updates?). | Can play a role, but thought to be less important than chemical synapses? More limited range of signaling behaviors. 
|\n| **Ephaptic effects** | Local electrical fields that can impact neighboring neurons. | ? Some oscillations that ephaptic effects could explain are slow-moving. Unsure about speed of lower-level effects. | Imprecise. Arises from activity of many cells, effects not targeted to specific cells. | ? | Weak? Small effects on membrane potential possibly swamped by noise *in vivo*. |\n| **Other forms of axon signaling** | Processes in a neuron other than a binary firing decision that impact synaptic transmission. | ? Some effects required sustained stimulation (minutes of spiking, 100 ms to seconds of depolarization). Others arose more quickly (15-50 ms of hyperpolarization). | Precise, proceeds via axons/individual synapses. | Unclear what range of neurons can support some of the effects (e.g., sub-threshold influences on synaptic transmission). | Some effects relevant in at least some species/contexts. Other evidence mostly from *in vitro* studies? |\n| **Blood flow** | Some hypothesize that blood flow in the brain is involved in information-processing. | Responses within hundreds of ms, which persist after stimulus has ended. | Imprecise. At the level of a cortical column, or a cluster of connections between cells. | ? | Weak. Widely thought to be epiphenomenal. |\n\n\n\n**Figure 13: Factors relevant to FLOP/s budgets for other signaling mechanisms in the brain.** \n\n\nObviously, my investigations were cursory, and there is a lot of room for uncertainty in each case. What’s more, the list is far from exhaustive,[415](https://www.openphilanthropy.org/brain-computation-report#footnote415_y14aila \"Other possibilities include the perineuronal net (see Tsien (2013) for discussion), and classical dynamics in microtubules (see Cantero et al. (2018)). I leave out the other two mechanisms partly because of time constraints, and partly because my impression is that they do not feature very prominently in the discourse on this topic.\") and other mechanisms may await discovery.[416](https://www.openphilanthropy.org/brain-computation-report#footnote416_suk651a \"Though see the non-verbatim notes from Open Philanthropy's non-verbatim notes from a conversation with Prof. Anthony Zador: “Prof. Zador is skeptical that there are major unknown unknowns in the parts list in the brain, given how much effort has gone into studying nervous systems. Biology is complicated, and there is still more to understand, but Prof. Zador does not think that what we are missing is a breakthrough in biology. Rather, what’s missing is an understanding of the brain’s organizing principles” (p. 4).\")\n\n\nStill, as mentioned earlier, **my best guess is that capturing the role of other signaling mechanisms (known and unknown) in task-performance does not require substantially more FLOP/s than capturing standard neuron signaling and learning**. 
This guess is primarily grounded in a sense that computational neuroscientists generally treat standard neuron signaling (and the plasticity thereof) as the primary vehicle of information-processing in the brain, and other mechanisms as secondary.[417](https://www.openphilanthropy.org/brain-computation-report#footnote417_56g8doz \"A number of experts we engaged with indicated that many computational neuroscientists would not emphasize other mechanisms very much (though their comments in this respect are not publicly documented); and the experts I interviewed didn’t tend to emphasize such mechanisms either.\") An initial look at the speed, spatial precision, prevalence, and task-relevance of the most salient of these mechanisms seems compatible with such a stance, so I’m inclined to defer to it, despite the possibility that it emerges primarily from outdated assumptions and/or experimental limitations, rather than good evidence.\n\n\n### \n\n\n#### 2.4 Overall mechanistic method FLOP/s\n\n\nHere are the main numbers we’ve discussed thus far:\n\n\n\n> **Standard neuron signaling**: ~1e13-1e17 FLOP/s \n> \n> *Synaptic transmission*: 1e13-1e17 FLOP/s \n> \n> Spikes through synapse per second: 1e13-1e15 \n> \n> FLOPs per spike through synapse:\n> \n> \n> Low: 1 (one addition and/or multiply operation, reflecting impact on post-synaptic membrane potential)\n> \n> \n> High: 100 (covers 40 FLOPs for synaptic conductances, plus cushion for other complexities) \n> \n> *Firing decisions*: 1e13-1e17 FLOP/s\n> \n> \n>  \n> \n> \n> Number of neurons: 1e11 \n> \n> FLOP/s per neuron: \n> \n> Low: 100 (ReLU, 10 ms timesteps) \n> \n> Middle: 10,000 (Izhikevich model, 1 ms timesteps) \n> \n> High: 1,000,000 (single compartment Hodgkin-Huxley model, 0.1 ms timesteps) \n> \n> **Learning**: <1e13 – 1e17 FLOP/s \n> \n> Spikes through synapse per second: 1e13-1e15 \n> \n> FLOPs per spike through synapse: \n> \n> Low: <1 (possibly due to slow timescales) \n> \n> Middle: 1-10 (covers various learning models – Hebbian plasticity, first-order gradient methods, possibly [Benna and Fusi (2016)](https://www.nature.com/articles/nn.4401) – and expert estimates, relative to low end baselines) \n> \n> High: 100 (covers those models with more cushion/relative to higher baselines). \n> \n> **Other signaling mechanisms**: do not meaningfully increase the estimates above.\n> \n> \n> **Overall range**: ~1e13-1e17 FLOP/s[418](https://www.openphilanthropy.org/brain-computation-report#footnote418_5af8x4k \"Technically, this would be ~3e13-3e17 FLOP/s, if we were really adding up synaptic transmission, firing decisions, and learning. But these ranges are sufficiently made-up and arbitrary that this sort of calculation seems to me misleadingly precise.\")\n> \n> \n\n\nTo be clear: the choices of “low” and “high” here are neither principled nor fully independent, and I’ve rounded aggressively.[419](https://www.openphilanthropy.org/brain-computation-report#footnote419_d7j3xc4 \"That is, I did not do fully independent analyses of each of these areas and then combine them (this is why the ranges are so similar). 
Rather, I started with a baseline, default model of 1 FLOP per spike through synapse, and then noted that budgeting 10-100x of cushion on top of that would cover various salient complexities and expert estimates across various of these categories.\") Indeed, another, possibly more accurate way to summarize the estimate might be:\n\n\n\n> \n> “There are roughly 1e14-1e15 synapses in the brain, receiving spikes about 0.1-1 times a second. A simple estimate budgets 1 FLOP per spike through synapse, and two extra orders of magnitude would cover some complexities related to synaptic transmission, as well as some models of learning. This suggests something like 1e13-1e17 FLOP/s. You’d also need to cover firing decisions, but various simple neuron models, scaled up by 1e11 neurons, fall into this range as well, and the high end (1e17 FLOP/s) would cover a level of modeling detail that I expect many computational neuroscientists to think unnecessary (single compartment Hodgkin-Huxley). Accounting for the role of other signaling mechanisms probably doesn’t make much of a difference to these numbers.”\n> \n> \n> \n\n\nThat is, this is meant to be a plausible ballpark, covering various types of models that seem plausibly adequate to me.
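\n\n\nTo make the arithmetic behind this ballpark concrete, here is a minimal sketch in Python that simply multiplies out the parameter values quoted above. The variable names are mine, and the numbers are the ones already given in this section; this is an illustration of how the ranges combine, not an independent estimate.\n\n\n```python\n# Illustrative arithmetic for the mechanistic-method summary above.\n# All parameter values are the ones quoted in the text, not new estimates.\n\nsynapses = (1e14, 1e15)          # total synapses in the brain (low, high)\nspikes_per_syn = (0.1, 1.0)      # spikes through each synapse per second\nflops_per_spike = (1, 100)       # synaptic transmission budget per spike through synapse\nlearn_per_spike = (1, 100)       # learning budget per spike through synapse (text's low end is below 1)\nneurons = 1e11                   # total neurons\nflops_per_neuron = (100, 1e6)    # firing decisions: ReLU-style model vs. single-compartment Hodgkin-Huxley\n\nfor i, label in enumerate(['low', 'high']):\n    transmission = synapses[i] * spikes_per_syn[i] * flops_per_spike[i]\n    learning = synapses[i] * spikes_per_syn[i] * learn_per_spike[i]\n    firing = neurons * flops_per_neuron[i]\n    total = transmission + learning + firing\n    print(f'{label}: transmission={transmission:.0e}, learning={learning:.0e}, '\n          f'firing={firing:.0e}, total={total:.0e}')\n\n# low:  each term is ~1e13, so the sum is ~3e13\n# high: each term is ~1e17, so the sum is ~3e17\n# The text rounds this aggressively to an overall ~1e13-1e17 FLOP/s (see footnote 418).\n```\n\n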
#### 2.4.1 Too low?\n\n\nHere are some ways it could be too low:\n\n\n* **The choice to budget FLOP/s for synaptic transmission and learning based on spikes through synapses, rather than *timesteps* at synapses, is doing a lot of work.** If we instead budgeted based on timesteps, and used something like 1 ms resolution, we’d start with 1e17-1e18 FLOP/s as a baseline (1 FLOP per timestep per synapse). Finer temporal resolutions, and larger numbers of FLOPs per time-step, would drive these numbers higher.\n* **Some neural processes are extremely temporally precise.** For example, neurons in the owl auditory system can detect auditory stimulus timing at a precision of less than ten *microseconds*.[420](https://www.openphilanthropy.org/brain-computation-report#footnote420_6o4o0dj \"Funabiki et al. (2011): “In owls, NL neurons change their firing rates with changes in ITD of <10 μs (Carr and Konishi (1990); Peña et al. (1996)), far below the spike duration of the neurons (e.g., ∼1 ms). The data used for modeling these coincidence detection processes have so far come from in vitro studies in the chick's NL (Reyes et al. (1996); Funabiki et al. (1998); Kuba et al. (2005), (2006); Slee et al. (2010)), extracellular studies of the barn owl's NL neurons (Carr and Konishi (1990); Peña et al. (1996); Fischer et al. (2008)), and the owl's behavioral performance (Knudsen et al. (1979)). Specialized cellular mechanisms, including extraordinary fast glutamate receptors (Reyes et al. (1996); Trussell (1999); Kuba et al. (2005)), low threshold-activated potassium conductance (KLVA) (Reyes et al. (1996)), and remote spike initiation (Carr and Boudreau (1993b); Kuba et al. (2006); Ashida et al. (2007)), have been discussed as important elements of this extraordinary precise coincidence detection” (p. 15245).\") These cases may be sufficiently rare, or require combining a sufficient number of less-precise inputs, that they wouldn’t make much of a difference to the overall budget. However, if they are indicative of a need for much finer temporal precision across the board, they could imply large increases.\n* **Dendritic computation might imply much larger FLOP/s budgets than single-compartment Hodgkin-Huxley models.**[421](https://www.openphilanthropy.org/brain-computation-report#footnote421_55kzzqw \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas: “Active dendritic computation could conceivably imply something like 1-5 orders of magnitude more compute than a simple linear summation model of a neuron. And if dendritic morphology is evolving over time, you also need to be thinking about the space of all possible dendrites that could have formed, in addition to the current dendritic tree” (p. 3). He also added, though, “it’s reasonable to think that at the end of the day, simplified dendritic models are available. For example, Prof. Jonas has heard arguments suggesting that post-synapse, there is very little plasticity in dendrites, and that dendritic computation mostly involves applying random features to inputs” (p. 3).\") Results like [Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf) (~1e10 FLOP/s per neuron), discussed above, seem like some initial evidence for this.\n* **Some [CNN](https://en.wikipedia.org/wiki/Convolutional_neural_network)/[RNN](https://en.wikipedia.org/wiki/Recurrent_neural_network) models used to predict the activity of retinal neurons are very FLOP/s intensive as well.** I discuss this in [Section 3.1](#section_3.1).\n* **Complex molecular machinery at synapses or inside neurons might implement learning algorithms that would require more than 100 FLOPs per spike through synapse to replicate.**[422](https://www.openphilanthropy.org/brain-computation-report#footnote422_46l7rbe \"See e.g. Bhalla (2014).\") And I am intrigued by theoretical results showing that various models of synaptic plasticity lead to problems like catastrophic forgetting, and that introducing larger numbers of dynamical variables at synapses might help with online learning.[423](https://www.openphilanthropy.org/brain-computation-report#footnote423_u0my6xq \"Kaplanis et al. (2018): “we show that by equipping tabular and deep reinforcement learning agents with a synaptic model that incorporates this biological complexity (Benna and Fusi (2016)), catastrophic forgetting can be mitigated at multiple timescales. In particular, we find that as well as enabling continual learning across sequential training of two simple tasks, it can also be used to overcome within-task forgetting by reducing the need for an experience replay database” (p. 1). Zenke et al. (2017): “In this study, we introduce intelligent synapses that bring some of this biological complexity into artificial neural networks. Each synapse accumulates task relevant information over time, and exploits this information to rapidly store new memories without forgetting old ones. 
We evaluate our approach on continual learning of classification tasks, and show that it dramatically reduces forgetting while maintaining computational efficiency” (abstract).\")\n* **One or more of the other signaling mechanisms in the brain might introduce substantial additional FLOP/s burdens** (neuromodulation and glia seem like prominent candidates, though I feel most uncertainty about the specific arguments re: gap junctions and alternative forms of axon signaling).\n* **Processes in the brain that take place over longer timescales involve interactions between many biophysical variables that are not normally included in, e.g., simple models of spiking.** The length of these timescales might limit the compute burdens such interactions imply, but if not, updating *all* relevant variables at a frequency similar to the most frequently updated variables could imply much larger compute burdens.[424](https://www.openphilanthropy.org/brain-computation-report#footnote424_r1jned3 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eve Marder: “In reality, the nervous system has an incredible ability to move seamlessly between timescales ranging from milliseconds to years, and the relevant processes interact. That is, short time-scale processes influence long time-scale processes, and vice versa. And unlike digital computers, the brain integrates over very long timescales at very fast speeds easily and seamlessly” (p. 2).\")\n* **Some of the basic parameters I’ve used could be too low.** The average spike rate might be more like 10 Hz than 0.1-1 Hz (I really doubt 100 Hz); synapse count might be >1e15; Hodgkin-Huxley models might require more FLOP/s than [Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf) budgets, etc. Indeed, I’ve been surprised at how uncertain many very basic facts about the brain appear to be, and how wrong previous widely-cited numbers have been (for example, a 10:1 ratio between glia and neurons was widely accepted until it was corrected to roughly 1:1).[425](https://www.openphilanthropy.org/brain-computation-report#footnote425_ehc9bmy \"See von Bartheld et al. (2016): “The recently validated isotropic fractionator demonstrates a glia:neuron ratio of less than 1:1… We review how the claim of one trillion glial cells originated, was perpetuated, and eventually refuted.” (p. 1).\")\n\n\nThere are also broader considerations that could incline us towards higher numbers by default, and/or skepticism of arguments in favor of the adequacy of simple models:\n\n\n* **We might expect evolution to take advantage of every possible mechanism and opportunity available for increasing the speed, efficiency, and sophistication of its information-processing.**[426](https://www.openphilanthropy.org/brain-computation-report#footnote426_24ktzxp \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “The brain was not engineered. Rather, it evolved, and evolution works by adding complexity, rather than by simplification… Indeed, in general, many scientists who approach the brain from an engineering perspective end up on the wrong footing. Engineering is an appropriate paradigm for building AI systems, but if you want to understand the brain, you need to embrace the fact that it works because it is so complicated. Otherwise, it will be impossible to understand the system” (p. 
4).\") Some forms of computation in biological systems, for example, appear to be extremely energy efficient.[427](https://www.openphilanthropy.org/brain-computation-report#footnote427_gyu2y9p \"See e.g. Kempes et al. (2017): “Here we show that the computational efficiency of translation, defined as free energy expended per amino acid operation, outperforms the best supercomputers by several orders of magnitude, and is only about an order of magnitude worse than the Landauer bound” (p. 1). Rahul Sarpeshkar, in a 2018 TED talk, suggests that cells are the most energy efficient computers that we know, and that they are already computing at an efficiency near the fundamental laws of physics (3:30-4:04). See also Laughlin et al. (1998): “Freed from heavy mechanical work, ion channels change conformation in roughly 100 μs32. In principle, therefore, a single protein molecule, switching at the rate of an ion channel with the stoi- chiometry of kinesin, could code at least 103 bit per second at a cost of 1 ATP per bit” (p. 39). See Sarpeshkar (2013) for more on computation in cells, and Sarpeshkar (2010) for more on the energy-efficiency of biological systems more generally: “A single cell in the body performs ~10 million energy-consuming biochemical operations per second on its noisy molecular inputs with ~1 pW of average power. Every cell implements a ~30,000 node gene-protein molecular interaction network within its confines. All the ~100 trillion cells of the human body consume ~80 W of power at rest. The average energy for an elementary energy-consuming operation in a cell is about 20kT, where kT is a unit of thermal energy. In deep submicron processes today, switching energies are nearly 104 – 105kT for just an elementary 0->1 digital switching operation. Even at 10 nm, the likely end of business-as-usual transistor scaling in the future, it is unlikely that we will be able to match such energy efficiency. Unlike traditional digital computation, biological computation is tolerant to error in elementary devices and signals. Nature illustrates that it is significantly more energy efficient to compute with error-prone devices and signals and then correct for these errors through feedback-and-learning architectures than to make every device and every signal in a system robust, as in traditional digital paradigms thus far” (p. 18-19). Bennett (1989) also suggests that “a few thermodynamically efficient data processing systems do exist, notably genetic enzymes such as RNA polymerase, which, under appropriate reactant concentrations, can transcribe information from DNA to RNA at a thermodynamic cost considerably less than kT per step” (p. 766).\") Indeed, I think that further examination of the sophistication of biological computation in other contexts could well shift my default expectations about the brain’s sophistication substantially (though I have tried to incorporate hazy forecasts in this respect into my current overall view).[428](https://www.openphilanthropy.org/brain-computation-report#footnote428_su7blgp \"See e.g. from Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas: “Various discoveries in biology have altered Prof. Jonas’s sense of the complexity of what biological systems can be doing. Examples in this respect include non-coding RNA, the complexity present in the three-dimensional structure of the cell, histone regulatory frameworks, and complex binding events involving different chaperone proteins. The class of computation that Prof. 
Jonas can imagine a single cell doing now seems multiple orders of magnitude more complex than it did 20 years ago” (p. 4).\")\n* **It seems possible that the task-relevant causal-structure of the brain’s biology is just intrinsically ill-suited to replication using digital computer hardware,** even once you allow for whatever computational simplifications are available (though neuromorphic hardware might do better). For example, the brain may draw on analog physical primitives,[429](https://www.openphilanthropy.org/brain-computation-report#footnote429_pkitrks \"Sarpeshkar (1998): “Items 1 through 3 show that analog computation can be far more efficient than digital computation because of analog computation’s repertoire of rich primitives. For example, addition of two parallel 8-bit numbers takes one wire in analog circuits (using Kirchoff’s current law), whereas it takes about 240 transistors in static CMOS digital circuits. The latter number is for a cascade of 8 full adders. Similarly an 8-bit multiplication of two currents in analog computation takes 4 to 8 transistors, whereas a parallel 8-bit multiply in digital computation takes approximately 3000 transistors. Although other digital implementations could make the comparisons seem less stark, the point here is simply that exploiting physics to do computation can be powerful” (p. 1605). See also Daniel et al. (2013): “Because analog computation exploits powerful biochemical mathematical basis functions that are naturally present over the entire continuous range of input operation, they are an advantageous alternative to digital logic when resources of device count, space, time or energy are constrained” (p. 619).\") continuous (or very fine-grained) temporal dynamics,[430](https://www.openphilanthropy.org/brain-computation-report#footnote430_tygkqr6 \"See e.g. Open Philanthropy's non-verbatim notes from a conversation with Prof. Eve Marder: “Unlike digital computers, the brain integrates over very long timescales at very fast speeds easily and seamlessly” (p. 3).\") and/or complex biochemical interactions that are cheap for the brain, but very expensive to simulate.[431](https://www.openphilanthropy.org/brain-computation-report#footnote431_fekm37d \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Rosa Cao: “Digital computers achieve speed and reliability by ignoring many dimensions of what is happening in the system. In such a context, you only care about whether the voltage in the transistors is above or below a certain threshold, and designers try hard to shield this variable from disruptive physical fluctuations. The brain is built on fairly different principles. Its functional processes are not shielded from the dynamics of the brain’s biochemistry. Rather, the brain exploits this biochemistry to perform efficient computation. This makes the brain difficult to simulate. In nature, biochemical processes like protein-protein interactions just happen, so they are “free” for the brain to run. Simulating them, however, can be quite computationally expensive” (p. 1-2).\")\n* **Limitations on tools and available data plausibly do much to explain the concepts and assumptions most prominent in neuroscience.** As these limitations loosen, we may identify much more complex forms of information-processing than the field currently focuses on.[432](https://www.openphilanthropy.org/brain-computation-report#footnote432_p5t0pqb \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. 
Adam Marblestone: “Neuroscience is extremely limited by available tools. For example, we have the concept of a post-synaptic potential because we can patch-clamp the post-synaptic neuron and see a change in voltage. When we become able to see every individual dendritic spine, we might see that each has a different response; or when we become able to see molecules, we might see faster state transitions, more interesting spatial organization, or more complicated logic at the synapses. We don’t really know, because we haven’t been able to measure. It’s also possible that some theories in neuroscience emerge and persist primarily because (a) they are the type of simple ideas that humans are able to come up with, and (b) these theories explain some amount of data (though it’s unclear how much). It’s hard to formulate complicated ideas about how the brain works that can then be made testable. “ (p. 9). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “with improvements in imaging and cell biology techniques, we discover all sorts of new complexities that we didn’t know were there” (p. 1).\") Indeed, it might be possible to extrapolate from trends in this vein, either in neuroscience or across biology more broadly.[433](https://www.openphilanthropy.org/brain-computation-report#footnote433_2xrg822 \"Thanks to Luke Muehlhauser for suggesting this possibility.\")\n* **Various experts mentioned track-records of over-optimism** about the ease of progress in biology, including via computational modeling;[434](https://www.openphilanthropy.org/brain-computation-report#footnote434_pcj01pk \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas: “There is a history of over-optimism about scientific progress in neuroscience and related fields. Prof. Jonas grew up in an era of hype about progress in science (e.g., “all of biology will yield its secrets in the next 20 years”), and has watched the envisioned future fail to arrive. Indeed, many problems have been multiple orders of magnitude more complicated than expected, to such a degree that some people are now arguing that science is slowing down, and must rely increasingly on breadth-first search through possible research paths. In biology, for example, there was a lot of faith that the human genome project would lead to more completeness and understanding than it did” (p. 4-5). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Rosa Cao: “E. Coli, a comparatively simple, one-celled organism, exhibits fairly sophisticated behavior on the basis of carefully-tuned biochemical chains (for example, various rhythms at different timescales that allow the cell to survive in a range of environments). We have not yet been successfully able to capture this behavior in a computational model, despite throwing a lot of effort and computational power at the project. Indeed, there was a lot of excitement about projects like this a few decades ago, but it seems to Prof. Cao that this energy has since died down, partly due to greater appreciation of their difficulty. Similarly, efforts to build an artificial cell have proven very difficult. At some level, cells are simple, and we basically know what the components are. However, all of the biochemical processes are poised in a delicate balance with each other -- a balance that represents a vanishingly smaller percentage of all possible arrangements, and which is correspondingly difficult to replicate. 
Efforts to create functional brain simulations might run into similar problems. For example, it may be that the brain’s function depends on a particular type of relationship to the environment, which allows it to adjust and fine-tune its internal features in the right way” (p. 2).\") overly-aggressive claims in favor of particular neuroscientific research programs;[435](https://www.openphilanthropy.org/brain-computation-report#footnote435_h8fmcq1 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas: “many in the neuroscience community feel that some neuroscientists made overly aggressive claims in the past about what amount of progress in neuroscience to expect (for example, from simulating networks of neurons at a particular level of resolution)” (p. 5).\") and over-eagerness to think of the brain in terms of the currently-most-trendy computational/technological paradigms.[436](https://www.openphilanthropy.org/brain-computation-report#footnote436_5m5kgnk \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas: “[Prof. Jonas] also has a long-term prior that researchers are too quick to believe that the brain is doing whatever is currently popular in machine learning, and he doesn’t think we’ve found the right paradigm yet” (p. 3). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “He is also wary of the history of comparing the brain to the latest engineering technology (e.g., a steam engine, a classical computer, now maybe a quantum computer)” (p. 4).\") To the extent such track records exist, they could inform skepticism about arguments and expert opinions in a similar reference class (though on their own, they seem like only very indirect support for very large FLOP/s requirements, as many other explanations of such track records are available).\n\n\nAnd of course, more basic paradigm mistakes are possible as well.[437](https://www.openphilanthropy.org/brain-computation-report#footnote437_x5mnr6u \"Two experts thought this unlikely. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “Dr. Marblestone thinks that the probability that the field of neuroscience rests on some very fundamental paradigm mistake is very low. We’re missing a unified explanation of behavior and intelligence, but the basic picture of neurons as modular elements with some sort of transfer function and some sort of (possibly complicated) learning rule, without some extreme amount of internal computation taking place inside the cell, seems fairly solid to Dr. Marblestone” (p. 7).\")\n\n\nThis is a long list of routes to higher numbers; perhaps, then, we might expect at least *one* of them to track the truth. However:\n\n\n* Some particular routes are correlated: for example, worlds in which the brain can implement very sophisticated, un-simplifiable computation at synapses seem more likely to be ones in which it can implement such computation within dendrites as well.[438](https://www.openphilanthropy.org/brain-computation-report#footnote438_owl6yu5 \"Thanks to Dr. Dario Amodei and Dr. Owain Evans for suggesting that I consider correlations between different routes to higher numbers.\")\n* My vague impression is that experts tend to be inclined towards either simplification or complexity across the board, rather than in specific patterns that differ widely. 
If this is true, then the reliability of the assumptions and methods these experts employ might be a source of broader correlations.\n* Some of these routes are counterbalanced by corresponding routes to lower numbers (e.g., basic parameters could be too high as well as too low; relevant timescales could be more coarse-grained rather than more fine-grained; etc). And there are more general routes to lower numbers as well, which would apply even if some of the considerations surveyed above are sound (see next section).\n\n\n#### 2.4.2 Too high?\n\n\nHere are a number of ways 1e13-1e17 FLOP/s might be overkill (I’ll focus, here, on ways that are actively suggested by examination of the brain’s mechanisms, rather than on the generic consideration that for any given way of performing a task, there may be a more efficient way).\n\n\n#### 2.4.2.1 Neuron populations and manifolds\n\n\nThe framework above focuses on individual neurons and synapses. But this could be too fine-grained. For example, various popular models in neuroscience involve averaging over groups of neurons, and/or treating them as redundant representations of high-level variables.[439](https://www.openphilanthropy.org/brain-computation-report#footnote439_nbeu0j9 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Markus Meister: “Synapses are noisy, and silicon isn’t; and the brain uses huge numbers of neurons to represent the same variable, probably because a single neuron can’t do it robustly. Prof. Meister expects that human-level AI systems will use methods more naturally suited to silicon devices. This would suggest compute estimates lower than what scaling up from the retina would suggest” (p. 4). See Miller (2018): “The key variables of a firing-rate model are the firing rates, which correspond to the average number of spikes per unit time of a subset of similarly responsive cells. This is in contrast to spiking models in which the key variables are the membrane potentials of individual cells” (p. 211). Eliasmith (2013): “Consequently, we can think of the 2D state space as a standard Cartesian space, where two values (x and y co-ordinates) uniquely specify a single object as compactly as possible. In contrast, the 100D vector specifies the same underlying 2D object, but it takes many more resources (i.e., values) to do so. If there was no uncertainty in any of these 100 values, then this would simply be a waste of resources. However, in the much more realistic situation where there is uncertainty (resulting from noise of receptors, noise in the channels sending the signals, etc.), this redundancy can make specifying an underlying point much more reliable. And, interestingly, it can make the system much more flexible in how well it represents different parts of that space. For example, we could use 10 of those neurons to represent the first dimension, or we could use 50 neurons to do so. The second option would give a much more accurate representation of that dimension than the first. Being able to redistribute these resources to respond to task demands is one of the foundations of learning (see Section 6.4)” (p. 75). From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “One way you might need less than 1 FLOP per spike through synapse is if you don’t need to model all of the neurons in the brain. 
For example, it might be that all of the neurons and synapses in the brain are there in order to make the brain more likely to converge on a solution while learning, but that once learning has taken place, the brain implements a function that can be adequately approximated using much less compute. A large amount of neuroscience treats populations of neurons as redundant representations of high-level variables relevant to information-processing” (p. 7).\")\n\n\nIndeed, *in vivo* recording shows that the dimensionality of the activity of a network of neurons is much smaller than the number of neurons themselves ([Wärnberg and Kumar (2017)](https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1007074&type=printable) suggest a subspace spanned by ~10 variables, for local networks consisting of thousands of neurons).[440](https://www.openphilanthropy.org/brain-computation-report#footnote440_b2z7dsh \"From the author summary: “A network in the brain consists of thousands of neurons. A priori, we expect that the network will have as many degrees of freedom as its number of neurons. Surprisingly, experimental evidence suggests that local brain activity is confined to a subspace spanned by ~10 variables” (p. 1). See also Gallego et al. (2017): “Here we argue that the underlying network connectivity constrains these possible patterns of population activity (Okun et al. (2015), Sadtler et al. (2014), Tsodyks et al. (1999)) and that the possible patterns are confined to a low-dimensional manifold (Stopfer et al. (2003), Yu et al. (2009)) spanned by a few independent patterns that we call ‘neural modes.’ These neural modes capture a significant fraction of population covariance. It is the activation of these neural modes, rather than the activity of single neurons, that provides the basic building blocks of neural dynamics and function (Luczak et al. (2015), Sadtler et al. (2014), Shenoy et al. (2013))” (p. 2).\") This kind of low-dimensional subspace is known as a “neural manifold.”[441](https://www.openphilanthropy.org/brain-computation-report#footnote441_k8s837l \"My thanks to the expert who suggested I consider this.\")\n\n\nSome of this redundancy may be about noise: neurons are unreliable elements, so representing high-level variables using groups of them may be more robust.[442](https://www.openphilanthropy.org/brain-computation-report#footnote442_j27znit \"Faisal et al. (2008): “Averaging is used in many neural systems in which information is encoded as patterns of activity across a population of neurons that all subserve a similar function (for example, see REFS 142,143): these are termed neural population codes. A distributed representation of information of this type is more robust to the effects of noise. Many sensory systems form a spatially-ordered population — that is, a map — in which neighbouring neurons encode stimuli that share closely related features. Such spatially ordered populations support two basic goals of neural computation: first, a transformation between different maps (such as the direction of sounds into neck rotation) and, second, the combination of information from multiple sources (such as visual- and auditory-cue combination)144. The information capacity of a population of neurons is greatest when the noise sources across the population are not correlated. 
Noise correlations, which are often observed in populations of higher-order neurons, limit information capacity and have led to the development of population-coding strategies that account for the effects of correlations” (p. 10).\") Digital computers, though, are noise-free.\n\n\nIn general, the possibility of averaging over or summarizing groups of neurons suggests smaller budgets than the estimates above – possibly much smaller. If I had more time for this project, this would be on the top of my list for further investigation.\n\n\n#### 2.4.2.2 Transistors and emulation costs\n\n\nIf we imagine applying the mechanistic method to a digital computer we don’t understand, we plausibly end up estimating the FLOP/s required to model the activity of very low-level components: e.g. transistors, logic gates, etc. (or worse, to simulate low-level physical processes within transistors). This is much more than the FLOP/s the computer can actually perform.\n\n\nFor example: a V100 has about 2e10 transistors, and a clock speed of ~1e9 Hz.[443](https://www.openphilanthropy.org/brain-computation-report#footnote443_b5op7sb \"See p. 10 here.\") A naive mechanistic method estimate for a V100, then, might budget 1 FLOP per clock-tick per transistor: 2e19 FLOP/s. But the chip’s actual computational capacity is ~1e14 FLOP/s – a factor of 2e5 less (a short sketch of this arithmetic appears at the end of this section).\n\n\nThe costs of emulating different computer systems at different levels of detail may also be instructive here. For example, one attempt to simulate a 6502 microprocessor (original clock speed of [~1 MHz](https://en.wikipedia.org/wiki/MOS_Technology_6502)) at the transistor level managed to run the simulated chip at 1 kHz using a computer running at ~1 GHz, suggesting a factor of ~1e6 slow-down.[444](https://www.openphilanthropy.org/brain-computation-report#footnote444_74a8tz5 \"From here: “Michael Steil and some collaborators had ported the code to C and were able to run at about 1kHz… This was only a thousand times slower than the original, running on a computer that was perhaps two million times faster.” Other emulations may be more efficient.\")\n\n\nOf course, there is no easy mapping between computer components and brain components; and there are components in the brain at lower levels than neurons (e.g., ion channels, proteins, etc.). Still, applying the mechanistic method to digital computers suggests that when we don’t know how the system works, there is no guarantee that we land on the right level of abstraction, and hence that estimates based on counting synapses, spikes, etc. could easily be overkill relative to the FLOP/s requirements of the tasks the brain can actually perform (I discuss this issue more in the appendix).\n\n\nHow much overkill is harder to say, at least using the mechanistic method alone: absent knowledge of how a [V100](https://www.nvidia.com/en-us/data-center/v100/) processes information, it’s not clear to me how to modify the mechanistic method to arrive at 1e14 FLOP/s rather than 2e19. Other methods might do better.\n\n\nNote, though, that applying the mechanistic method without a clear understanding of whether models at the relevant level of abstraction could replicate task-performance at all could easily be “underkill” as well.
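\n\n\nTo make the V100 comparison above concrete, here is a minimal sketch in Python of the arithmetic just described; the transistor count, clock speed, and peak FLOP/s are simply the figures quoted in this section, and the variable names are mine.\n\n\n```python\n# Naive 'mechanistic method' estimate for a V100, versus its actual capacity.\n# All numbers are the ones quoted in the text above.\n\ntransistors = 2e10      # transistors on a V100\nclock_hz = 1e9          # ~1 GHz clock speed\nnaive_estimate = transistors * clock_hz * 1   # 1 FLOP per transistor per clock-tick\nactual_capacity = 1e14                        # ~1e14 FLOP/s the chip can actually perform\n\nprint(f'naive estimate:      {naive_estimate:.0e} FLOP/s')              # 2e+19\nprint(f'actual capacity:     {actual_capacity:.0e} FLOP/s')             # 1e+14\nprint(f'overestimate factor: {naive_estimate / actual_capacity:.0e}')   # 2e+05\n```\n\n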
#### 2.4.2.3 Do we need the whole brain?\n\n\nDo we need the whole brain? For some tasks, no. People with parts of their brains missing/removed can still do various things.\n\n\nA dramatic example is the cerebellum, which contains ~69 billion neurons – ~80% of the neurons in the brain as a whole.[445](https://www.openphilanthropy.org/brain-computation-report#footnote445_yic9gw7 \"Dr. Dario Amodei suggests considering whether we can leave out the cerebellum for certain types of tasks.\") Some people (a very small number) [don’t have cerebellums](https://en.wikipedia.org/wiki/Cerebellar_agenesis). Yet there are reports that in some cases, their intelligence is affected only mildly, if at all (though motor control can also be damaged, and some cognitive impairment can be severe).[446](https://www.openphilanthropy.org/brain-computation-report#footnote446_9rhhie7 \"From the National Organization for Rare Disorders: “Additional reports have noted individuals with cerebellar agenesis whose mental capacities were unaffected and who did not exhibit any symptoms of cerebellar agenesis (asymptomatic cases). However, other researchers have disputed these claims, stating that in virtually all of cases of cerebellar agenesis there have been observable symptoms including profound abnormalities in motor skills…. Intelligence may be unaffected. However, some affected individuals may display mild to moderate cognitive impairment. Some individuals with cerebellar agenesis have exhibited intellectual disability, but normal or near-normal motor skills. In addition to affecting motor skills, damage to the cerebellum has also been associated with abnormalities of non-motor functions. Cerebellar dysfunction may also be associated with abnormalities of visuospatial abilities, expressive language, working memory and affective behavior.” Cases of cerebellar agenesis are described in a popular article by Hamilton (2015) and in Gelal et al. (2016). The case described in Hamilton (2015) seems to involve at least mild cognitive impairment: the subject described has trouble coordinating different sources of information, and he “needed to be taught a lot of things that people with a cerebellum learn automatically, Sarah [his sister] says: how to speak clearly, how to behave in social situations and how to show emotion.” The cases in Gelal et al. (2016) also appear to involve substantive cognitive impairment: “The 61-year-old man had ataxia, dysarthria, abnormalities in cerebellar tests, severe cognitive impairment, and moderate mental retardation. The 26-year-old woman had dysmetria, dysdiadochokinesia, and dysarthria as well as mild cognitive impairment and mild mental retardation” (abstract)).\")\n\n\nDoes this mean we can reduce our FLOP/s budget by 80%? I’m skeptical. For one thing, while the cerebellum accounts for a large percentage of the brain’s neurons, it appears to account for a much smaller percentage of other things, including volume (~10%),[447](https://www.openphilanthropy.org/brain-computation-report#footnote447_wnqyh6e \"Swanson (1995) (p. 473).\") mass (~10%),[448](https://www.openphilanthropy.org/brain-computation-report#footnote448_eofahg9 \"Azevedo et al. (2009) (p. 
536), suggests that the cerebellum weights ~154.02 g (10.3% of the brain’s mass), whereas the cerebral cortex weighs 1232.93 g (81.8% of the brain’s mass).\") energy consumption (<10%),[449](https://www.openphilanthropy.org/brain-computation-report#footnote449_374qe6o \"I’m basing this on the fact that the cerebellum is ~10% of the brain’s weight, relative to ~80% for the cortex, and Howarth et al’s (2012) suggestion that energy consumption per gram is higher in the cerebral cortex than in the cerebellar cortex: “Including this range of values would result in a range of estimates for total energy use for the cerebral cortex of 27.2 to 40.7 μmol ATP/g/min, compared with the measured total energy use of 33 to 50 μmol ATP/g/min in different cortical regions (Sokoloff et al. (1977)), and for the cerebellar cortex of 17.1 to 25.6 μmol ATP/g/min, compared with the measured value of 20.5 μmol ATP/g/min (Sokoloff et al. (1977)). Further work is needed to accurately define these parameters” (p. 1232). Sarpeshkar (1997): “Most of the power in the brain is consumed in the cortex” (p. 204). Thanks to Carl Shulman for suggesting that I consider cerebellar energy consumption, and for pointing me to references.\") and maybe synapses (and synaptic activity dominates many versions of the estimates above).[450](https://www.openphilanthropy.org/brain-computation-report#footnote450_kqli0tq \"Most of the neurons in the cerebellum (specifically, about 50 billion, at least according to Llinás et al. (2004) (p. 277)) are cerebellar granule cells, which appear to have a comparatively small number of synapses each: “[Granule] cells are the most numerous in the CNS; there are about 5 × 1010 cerebellar granule cells in the human brain. Each cell has four or five short dendrites (each less than 30 μm long) that end in an expansion called a dendritic claw (Fig. 7.4C)” (Llinás et al. (2004) (p. 277). Wikipedia cites Llinás et al. (2004)) as grounds for attributing 80-100 synaptic connections to granule cells, but I haven’t been able to find the relevant number. The cerebellum also contains Purkinje cells (up to 1.5e7, according to Llinás et al. (2004), p. 276), which can have over 100,000 synapses each, though I’m not sure about the average number (see Napper and Harvey (1988): “We conclude that there are some 175,000 parallel fiber synapses on an individual Purkinje cell dendritic tree in the cerebellar cortex of the rat” (abstract), though this is an old estimate). I have not attempted to estimate the synapses in the cerebellum in particular, and I am not sure the extent to which synapse counts for granule cells and Purkinje cells overlap (a possibility that could lead to double counting). Energy use in the cerebellum appears to be dominated by granule cells: “This work predicts that the principal neurons in the cerebellum, the Purkinje cells, use only a small fraction of the energy consumed by the cerebellar cortex, while the granule cells dominate the signaling energy use” (Howarth et al. (2012), p. 1230-1231). Many estimates for total synapses in the brain focus on the cerebral cortex, and in particular the neocortex (see citations in section Section 2.1.1.1), and AI Impacts reports the impression, which I share, that neocortical synapses are often treated as representing the bulk of the synapses in the brain. Indeed, Kandel et al. (2013) suggests that “1014  to 1015 synaptic connections are formed in the brain” (p. 175) -- a number comparable to the neocortical estimates from Tang et al. 
(2001) (“The average total number of synapses in the neocortex of five young male brains was 164 × 1012 (CV = 0.17)” (p. 258)) and Pakkenberg et al. (2003) (“The total number of synapses in the human neocortex is approximately 0.15 × 1015 (0.15 quadrillion)” (p. 95)).\")\n\n\nMore importantly, though, we’re looking for FLOP/s estimates that apply to the full range of tasks that the brain can perform, and it seems very plausible to me that *some* of these tasks (neurosurgery? calligraphy?) will rely crucially on the cerebellum. Indeed, the various impairments generally suffered by patients without cerebellums seem suggestive of this.\n\n\nThis last consideration applies across the board, including to other cases in which various types of cognitive function persist in the face of missing parts of the brain,[451](https://www.openphilanthropy.org/brain-computation-report#footnote451_hzn5c5s \"For example, Pulsifer et al. (2004) report that in a study of 71 patients who underwent hemispherectomy for severe and intractable seizures, “Cognitive measures typically changed little between surgery and follow-up, with IQ change <15 points for 34 of 53 patients” (abstract) (though absolute levels of cognitive ability may still have been low), and Pavone et al. (2013) suggest that “The results obtained from the literature show that relative preservation of cognitive performance suggests that a single cerebral cortical hemisphere connected to an apparently intact brainstem is sufficient for the development of higher cognitive function” (p. 2). See also this article in the New Scientist, which reports that “a teenager who was born without the entire left hemisphere of her brain has above-average reading skills – despite missing the part of the brain that is typically specialised for language...The 18-year-old also has an average-to-high IQ and plans to go to university.”\") neuron/synapse loss,[452](https://www.openphilanthropy.org/brain-computation-report#footnote452_kzxod4p \"Glancing at one study, asymptomatic Alzehimer’s disease does not appear to be associated with neuron loss. See Andrade-Moraes et al. (2013): “We found a great reduction of neuronal numbers in the hippocampus and cerebral cortex of demented patients with Alzheimer’s disease, but not in asymptomatic subjects with Alzheimer’s disease” (abstract).\") etc. That is, while I expect it to be true of many tasks (perhaps even tasks important to AI developers, like natural language processing, scientific reasoning, social modeling, etc.) that you don’t need the whole brain to do them, I also expect us to be able to construct tasks that do require most of the brain. It also seems very surprising, from an evolutionary perspective, if large, resource-intensive chunks of the brain are strictly unnecessary. And the reductions at stake seem unlikely to make an order-of-magnitude difference anyway.\n\n\n\n#### 2.4.2.4 Constraints faced by evolution\n\n\nIn designing the brain, evolution faced many constraints less applicable to human designers.[453](https://www.openphilanthropy.org/brain-computation-report#footnote453_l0ne7gu \"Dr. Dario Amodei suggested considering these constraints. 
See also the citations throughout the rest of the section.\") For example, constraints on:\n\n\n* The brain’s volume.\n* The brain’s energy consumption.\n* The growth and maintenance it has to perform.[454](https://www.openphilanthropy.org/brain-computation-report#footnote454_bcalslu \"Sandberg (2016): “Biology has many advantages in robustness and versatility, not to mention energy efficiency. Nevertheless, it is also fundamentally limited by what can be built out of cells with a particular kind of metabolism, the fact that organisms need to build themselves from the inside, and the need of solving problems that exist in a particular biospheric environment” (p. 7).\")\n* The size of the genome it has to be encoded in.[455](https://www.openphilanthropy.org/brain-computation-report#footnote455_r6c1tpu \"See Moravec (1988): “There is insufficient information in the 1010 bits of the human genome to custom-wire many of the 1014 synapses in the brain” (p. 166). See also Zador (2019): “ The human genome has about 3 × 109 nucleotides, so it can encode no more than about 1 GB of information—an hour or so of streaming video32. But the human brain has about 1011 neurons, and more than 103 synapses per neuron. Since specifying a connection target requires about log21011 = 37 bits/synapse, it would take about 3.7 × 1015 bits to specify all 1014 connections. (This may represent an underestimate because it considers only the presence or absence of a connection; a few extra bits/synapse would be required to specify graded synaptic strengths. But because of synaptic noise and for other reasons, synaptic strength may not be specified very precisely. So, in large and sparsely connected brains, most of the information is probably needed to specify the locations [of] the nonzero elements of the connection matrix rather than their precise value.). Thus, even if every nucleotide of the human genome were devoted to efficiently specifying brain connections, the information capacity would still be at least six orders of magnitude too small” (p. 5).\")\n* The comparatively slow and unreliable elements it has to work with.[456](https://www.openphilanthropy.org/brain-computation-report#footnote456_xje802m \"Moravec (1988): “The slow switching speed and limited signaling accuracy of neurons rules out certain solutions for neural circuitry that are easy for computers” (p. 165). Dmitri Strukov’s comments here: “we should also keep in mind that over millions of years the evolution of biological brains has been constrained to biomaterials optimized for specific tasks, while we have a much wider range of material choices now in the context of neuromorphic engineering. Therefore, there could exist profound differences in designing rules. For example, the brains have to rely on poor conductors offered by biomaterials, which have presumably affected the principles of brain structure and operation in some ways that are not necessarily to be applicable to neuromorphic computing based on high conducting materials.”\")\n* Ability to redesign the system from scratch.[457](https://www.openphilanthropy.org/brain-computation-report#footnote457_cu3oufs \"Moravec (1988): “The neuron’s basic information-passing mechanism -- the release of chemicals that affect the outer membranes of other cells -- seems to be a very primitive one that can be observed in even the simplest free-swimming bacteria. Animals seem to be stuck with this arrangement because of limitations in their design process. 
Darwinian evolution is a relentless optimizer of a given design, nudging the parameters this way and that, adding a step here, removing one there, in a plodding, tinkering, way. It’s not much of a redesigner, however. Fundamental changes at the foundation of its creations are out of reach, because too many things would have to change correctly all at once” (p. 168).\")\n\n\nIt may be that these constraints explain the brain’s functional organization at sufficiently high levels that if we understood the overarching principles at work, we would see that much of what the brain does (even internally) is comparatively easy to do with human computers, which can be faster, bigger, more reliable, more energy-intensive, re-designed from scratch, and built using external machines on the basis of designs stored using much larger amounts of memory.[458](https://www.openphilanthropy.org/brain-computation-report#footnote458_qmeroe5 \" Here, the distinction between “finding ways to do it the way the brain does it, but with a high level of simplification/increased efficiency” and “doing it some other way entirely” is blurry. I have the former vaguely in mind, but see the appendix for more detailed discussion. See also Sandberg (2016) for more discussion of possible constraints: “While we have reason to admire brains, they are also unable to perform certain very useful computations. In artificial neural networks we often employ non-local matrix operations like inversion to calculate optimal weights (Toutounian and Ataei (2009)): these computations are not possible to perform locally in a distributed manner. Gradient descent algorithms such as backpropagation are unrealistic in a biological sense, but clearly very successful in deep learning. There is no shortage of papers describing various clever approximations that would allow a more biologically realistic system to perform similar operations — in fact, the brains may well be doing it — but artificial systems can perform them directly, and by using low-level hardware intended for it, very efficiently” (p. 7).\") This, too, suggests smaller budgets.\n\n\n\n#### 2.4.3 Beyond the mechanistic method\n\n\nOverall, I find the considerations pointing to the adequacy of smaller budgets more compelling than the considerations pointing to the necessity of larger ones (though it also seems, in general, easier to show that X is enough, than that X is strictly required – an asymmetry present throughout the report). But the uncertainties in either direction rightly prompt dissatisfaction with the mechanistic method’s robustness. Is there a better approach?\n\n\n \n\n\n3 The functional method\n-----------------------\n\n\nLet’s turn to the functional method, which attempts to identify a portion of the brain whose function we can already approximate with artificial systems, together with the computational costs of doing so, and then to scale up to an estimate for the brain as a whole.\n\n\nVarious attempts at this method have been made. To limit the scope of the section, I’m going to focus on two categories: estimates based on the retina, and estimates based on the visual cortex.
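Before getting into the details, it may help to make the functional method’s core arithmetic explicit: take a FLOP/s estimate for the portion in question, and multiply by the ratio of some brain-wide metric (mass, volume, neurons, synapses, energy use) to that portion’s share of it. The minimal Python sketch below is mine, not part of the original analysis; the illustrative numbers are Moravec’s ~1e9 calculations/s retina estimate and his roughly 100,000× brain-to-retina mass ratio, both of which come up later in this section.

```python
def functional_method_estimate(portion_flops_per_s, brain_metric, portion_metric):
    """Scale a FLOP/s estimate for one brain portion up to the whole brain,
    assuming FLOP/s scale roughly linearly with the chosen metric
    (mass, volume, neuron count, synapse count, energy use, etc.)."""
    return portion_flops_per_s * (brain_metric / portion_metric)


# Illustration with Moravec's numbers (see Section 3.1): ~1e9 calculations/s
# for the retina, and a brain roughly 100,000 times heavier than the retina.
print(functional_method_estimate(1e9, brain_metric=1e5, portion_metric=1))
# -> 1e14, matching the whole-brain figure Moravec derives from this scaling.
```

Everything that follows is, in effect, an argument about which portion of the brain to use, how many FLOP/s its function requires, and which scaling metric (if any) is appropriate.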
But I expect many problems to generalize.\n\n\nAs a preview of my conclusion: I give less weight to these estimates than to the mechanistic method, primarily due to uncertainties about (a) what the relevant portion of the brain is doing (in the case of the visual cortex), (b) differences between that portion and the rest of the brain (in the case of the retina), and (c) the FLOP/s required to fully replicate the functions in question. However, I take visual cortex estimates as some weak evidence that the mechanistic method range above (1e13-1e17 FLOP/s) isn’t much too low. Some estimates based on recent deep neural network models of retinal neurons point to higher numbers. I take these on their own as even weaker evidence, but I think they’re worth understanding better.\n\n\n#### \n\n\n#### 3.1 The retina\n\n\nAs I discussed in [Section 2.1.2.1.2](#section_2.1.2.1.2), the retina is one of the best-understood neural circuits.[459](https://www.openphilanthropy.org/brain-computation-report#footnote459_9ppplrg \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Markus Meister: “The computations performed in the retina are fairly well-understood. There is more to learn, of course, but the core framework is in place. We have a standard model of the retina that can account for a lot of retinal processing, as well as predict new observations… The retina is probably the best understood part of the brain” (p. 1-2).\") Could it serve as a basis for a functional method estimate?\n\n\n#### \n\n\n#### 3.1.1 Retina FLOP/s\n\n\nWe don’t yet have very good artificial retinas (though development efforts are ongoing).[460](https://www.openphilanthropy.org/brain-computation-report#footnote460_683rh7p \"See Yue et al. (2016) for a review of progress in retinal implant development as of 2016. From the Stanford Artificial Retina Project: “The current state of the art of retinal prostheses can be summed up as such: no blind patient today would trade their cane or guide dog for a retinal implant.” From Open Philanthropy's non-verbatim notes from a conversation with Prof. Markus Meister: “Despite 30 years of effort, attempts to create functional artificial retinas have met with very little success. Recent performance tests show that people implanted with the devices are functionally blind -- e.g., they cannot read, and they cannot distinguish between letters unless the letters occupy the entire visual field” (p. 3). Nirenberg and Pandarinath (2012) say: “Current devices still provide only very limited vision. For example, they allow patients to see spots of light and high-contrast edges, which provide some ability for navigation and gross feature detection, but they are far from providing patients with normal representations of faces, landscapes, etc. (4–6). [With respect to navigation, the devices enable the detection of light sources, such as doorways and lamps, and, with respect to feature detection, they allow discrimination of objects or letters if they span ∼7° of visual angle (5); this corresponds to about 20/1,400 vision; for comparison, 20/200 is the acuity-based legal definition of blindness in the United States (7)]” (p. 15012), though their paper aims to improve the situation.\") However, this has a lot to do with engineering challenges – e.g., building devices that interface with the optic nerve in the right way.[461](https://www.openphilanthropy.org/brain-computation-report#footnote461_gy3jiuz \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. 
Markus Meister: “However, this lack of success is not about computation. People in the field generally agree that if you could make the right kind of one-to-one connection to the optic nerve fibers, you could compute spike trains that would allow the brain to see. The obstacle is actually making the interface between an electrical device and the retina. Electrodes on top of the retina stimulate many nerve fibers at once; you don’t know ahead of time which fiber you’ll be stimulating or what type of retinal ganglion cell you’re connected to, and you can’t get data into the eye at the right rate” (p. 3).\") Even absent fully functional artificial retinas, we may be able to estimate the FLOP/s required to replicate retinal computation.\n\n\nMoravec ([1988](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2), [1998](https://jetpress.org/volume1/moravec.pdf), and [2008](https://www.scientificamerican.com/article/rise-of-the-robots-2008-02/)) offers some estimates in this vein.[462](https://www.openphilanthropy.org/brain-computation-report#footnote462_52ikdm4 \"See Moravec (1988), Chapter 2 (p. 51-74). See also Moravec (1988) and Moravec (2008). Merkle (1989) uses a broadly similar methodology.\") He treats the retina as performing two types of operations – a “center surround” operation, akin to detecting an edge, and a “motion detection” operation – and reports that in his experience with robot vision, such operations take around 100 calculations to perform.[463](https://www.openphilanthropy.org/brain-computation-report#footnote463_rqxq8c8 \"See Moravec (1988) (p. 57-60). For discussion of what a center-surround and a motion-detection operation in the retina consists in, see Meister et al. (2013): “A typical ganglion cell is sensitive to light in a compact region of the retina near the cell body, called the cell’s receptive field. Within that area one can often distinguish a center region and surround region in which light produces opposite responses. An ON cell, for example, fires faster when a bright spot shines on the receptive field’s center but decreases its firing when the spot shines on the surround. If light covers both the center and the surround, the response is much weaker than for center-only illumination. A bright spot on the center combined with a dark annulus on the surround elicits very strong firing. For an OFF cell these relationships are reversed; the cell is strongly excited by a dark spot in a bright annulus (Figure 26-10). The output produced by a population of retinal ganglion cells thus enhances regions of spatial contrast in the input, such as an edge between two different areas of different intensity, and gives less emphasis to regions of homogeneous illumination” (p. 587). See Meister et al. (2013) (p. 588-589), and this graphic, for visual depictions of center-surround type responses. With respect to retinal representation of moving objects, Meister et al. (2013) write: “When an effective light stimulus appears, a ganglion cell’s firing typically increases sharply from the resting level to a peak and then relaxes to an intermediate rate. When the stimulus turns off, the firing rate drops sharply then gradually recovers to the resting level… a moving object elicits strong firing in the ganglion cell population near the edges of the object’s image because these are the only regions of spatial contrast and the only regions where the light intensity changes over time” (p. 
587, see p. 588-589 for more on motion-detection).\") He then divides the visual field into patches, processing of which gets sent to a corresponding fiber of the optic nerve, and budgets ten edge/motion detection operations per patch per second (ten frames per second is roughly the frequency at which individual images become indistinguishable for humans).[464](https://www.openphilanthropy.org/brain-computation-report#footnote464_iudes7x \"See Moravec (1988) (p. 58-59). That said, he also acknowledges that “though separate frames cannot be distinguished faster than 10 per second, if the light flickers at the frame rate, the flicker itself is detectable until it reaches a frequency of about 50 flashes per second” (p. 59).\") This yields an overall estimate of:\n\n\n\n> 1e6 ganglion cells × 100 calculations per edge/motion detection × 10 edge/motion detections per second = **1e9 calculations/sec for the whole retina**\n> \n> \n\n\nIs this right? At the least, it’s incomplete: neuroscientists have catalogued a wide variety of computations that occur in the retina, other than edge and motion detection (I’m not sure how many were known at the time). For example: the retina can anticipate motion,[465](https://www.openphilanthropy.org/brain-computation-report#footnote465_23c9p6h \"See Gollisch and Meister (2010): “When the image of an object moves on the retina, it creates a wave of neural activity among the ganglion cells. One should expect that this wave lags behind the object image because of the delay in phototransduction. Instead, experiments show that the activity in the ganglion cell layer moves at the true location of the object or even along its leading edge (Berry et al. (1999)). Effectively, the retinal network computes the anticipated object location and thereby cancels the phototransduction delay” (p. 7-8).\") it can signal that a predicted stimulus is absent,[466](https://www.openphilanthropy.org/brain-computation-report#footnote466_zp8shf4 \"See Gollisch and Meister (2010): “A somewhat different form of anticipation can be observed when the visual system is exposed to a periodic stimulus, such as a regular series of flashes. The activated visual neurons typically become entrained into a periodic response. If the stimulus sequence is interrupted, for example by omitting just one of the flashes, some neurons generate a pulse of activity at the time corresponding to the missing stimulus (Bullock et al. (1990); Bullock et al. (1994)). This phenomenon, termed the “omitted stimulus response”, is quite widespread, and has been noted in the brains of many species, including humans (McAnany and Alexander (2009)). Qualitatively it suggests the build-up of an anticipation for the next stimulus, and the large response reflects surprise at the missing element in the sequence” (p. 7-8).\") it can adapt to different lighting conditions,[467](https://www.openphilanthropy.org/brain-computation-report#footnote467_krw40jj \"Gollisch and Meister (2010): “Because the ambient light level varies over ~9 orders of magnitude in the course of a day, while spiking neurons have a dynamic range of only ~2 log units, the early visual system must adjust its sensitivity to the prevailing intensities. This adaptation to light level is accomplished by the retina, beginning already in the photoreceptors, and the process is complete before spiking neurons get involved. Over a wide range of intensities, the sensitivity of the retina declines inversely with the average light level. 
As a result, the ganglion cell signals are more or less independent of the illuminating intensity, but encode the reflectances of objects within the scene, which are the ethologically important variables. The perceptual effects of light adaptation and its basis in the circuitry and cellular mechanisms of the retina have been studied extensively and covered in several excellent reviews (Shapley and Enroth-Cugell (1984); Hood (1998); Fain et al. (2001); Rieke and Rudd (2009))” (p. 11). \") and it can suppress vision during saccades.[468](https://www.openphilanthropy.org/brain-computation-report#footnote468_7mgf8kj \"Gollisch and Meister (2010): “During a saccade, the image sweeps across the retina violently for tens of milliseconds, precluding any useful visual processing. In humans, visual perception is largely suppressed during this period (Volkmann (1986); Burr et al. (1994); Castet and Masson (2000)). The circuits of the retina are at least partly responsible for this suppression: Many types of retinal ganglion cell are strongly inhibited during sweeps of the visual image (Roska and Werblin (2003)). This effect is mediated by spiking, inhibitory amacrine cells, which are themselves excited by the global motion signal. Conceivably, the underlying circuitry resembles the one identified for OMS ganglion cells (Figure 2C). In fact, the OMS cells may be distinct simply by an enhanced sensitivity to the global inhibition, so they are suppressed even by the much smaller eye movements during a fixation” (p. 9).\") And further computations may await discovery.[469](https://www.openphilanthropy.org/brain-computation-report#footnote469_qg8h90f \"Gollisch and Meister (2010): “The anatomical diversity suggests that there is much function left to be discovered and that we probably still have a good distance to go before understanding all the computations performed by the retina” (p. 14).\")\n\n\nBut since Moravec’s estimates, we’ve also made progress in modeling retinal computation. Can recent models provide better estimates?\n\n\nSome of these models were included in [Figure 7](#figure_7). Of these, it seems best to focus on models trained on naturalistic stimuli, retinal responses to which have proven more difficult to capture than responses to more artificial stimuli.[470](https://www.openphilanthropy.org/brain-computation-report#footnote470_o6ndlkx \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Markus Meister: “It has taken more effort to simulate retinal responses to natural scenes than to artificial stimuli used in labs (e.g. spots, flashes, moving bars)” (p. 1). Heitman et al. (2016): “This paper tests how accurately one pseudo-linear model, the generalized linear model (GLM), explains the responses of primate RGCs to naturalistic visual stimuli … The GLM accurately reproduced RGC responses to white noise stimuli, as observed previously, but did not generalize to predict RGC responses to naturalistic stimuli. It also failed to capture RGC responses when fitted and tested with naturalistic stimuli alone. Fitted scalar nonlinearities before and after the linear filtering stage were insufficient to correct the failures. These findings suggest that retinal signaling under natural conditions cannot be captured by models that begin with linear filtering, and emphasize the importance of additional spatial nonlinearities, gain control, and/or peripheral effects in the first stage of visual processing” (p. 
1).\") RNN/CNN neural network models appear to have more success at this than some other variants,[471](https://www.openphilanthropy.org/brain-computation-report#footnote471_spq49dx \"See Figure 1C in Maheswaranathan et al. (2019), and Batty et al. (2017): “RNNs of varying architectures consistently outperformed LNs and GLMs in predicting neural spiking responses to a novel natural scene movie for both OFF and ON parasol retinal ganglion cells in both experiments (Figure 2)” (p. 6).\") so I’ll focus on two of these:\n\n\n1. [Maheswaranathan et al. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf), who train a three-layer CNN to predict the outputs of ganglion cells in response to naturalistic stimuli, and achieve a correlation coefficient greater than 0.7 (retinal reliability is 0.8).\n2. [Batty et al. (2017)](https://openreview.net/pdf?id=HkEI22jeg), use a shared, two-layer RNN on a similar task, and capture around ~80% of explainable variance across experiments and cell types.\n\n\nThese models are not full replications of human retinal computation. Gaps include:\n\n\n* Their accuracy can still be improved, and what’s missing might matter.[472](https://www.openphilanthropy.org/brain-computation-report#footnote472_8m75zda \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. E.J. Chichilnisky: “It’s hard to know when to stop fine-tuning the details of your model. A given model may be inaccurate to some extent, but we don’t know whether a given inaccuracy matters, or whether a human wouldn’t be able to tell the difference (though focusing on creating usable retinal prostheses can help with this)” (p. 3).\")\n* The models have only been trained on a very narrow class of stimuli.[473](https://www.openphilanthropy.org/brain-computation-report#footnote473_xokduhk \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Baccus: “The visual system works under a wide range of conditions -- for example, varying light levels and varying contrast levels. Experiments focused on a set of natural scenes only cover some subset of these conditions. For example, Prof. Baccus’s lab has not really tested dim light, or rapid transitions between bright and dim light” (p. 2). From Open Philanthropy's non-verbatim notes from a conversation with Prof. E.J. Chichilnisky: “One of the biggest challenges is the world of possible stimuli. It would take lifetimes to present all possible stimuli, so we don’t know if we’re missing something. Prof. Chichilnsky’s lab has the biggest trove of data in the world from retinal ganglion cells. They’ve recorded from something like 500,000 retinal ganglion cells (roughly half the retina), and they have about 50 billion spikes. But even this may not be enough data” (p. 3).\")\n* Inputs are small (50 × 50 pixels or less) and black-and-white (though I think they only need to be as large as the relevant ganglion cell’s receptive field).\n* These models don’t include adaptation, either (though one expert did not expect adaptation to make much of a difference to overall computational costs).[474](https://www.openphilanthropy.org/brain-computation-report#footnote474_tgq0c5d \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Markus Meister: “The biochemistry involved in retinal light adaptation is well-understood, and it can be captured using a simplified computational model. Specifically, you can write down a three-variable dynamical model that gets it about 80% correct. 
The compute required to run a functional model of the retina would probably be dominated by the feedforward processing in the circuit, rather than by capturing adaptation” (p. 2).\")\n* We probably need to capture correlations across cells, in addition to individual cell responses.[475](https://www.openphilanthropy.org/brain-computation-report#footnote475_ne17ug1 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Baccus: “These models focus on replicating the response of an individual retinal ganglion cell to a stimulus. However, it may also be necessary to replicate correlations between the responses of different cells in the retina, as these may carry important information. Some people think that replicating the firing patterns of individual cells is enough, but most people think that correlations are important. Prof. Baccus’s lab has not yet assessed their model’s accuracy with respect to these between-cell correlations, though it is on their agenda” (p. 2).\")\n* [Maheswaranathan et al. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf) use salamander retinal ganglion cells, results from which may not generalize well to humans ([Batty et al. (2017)](https://openreview.net/pdf?id=HkEI22jeg) use primate cells, which seem better).[476](https://www.openphilanthropy.org/brain-computation-report#footnote476_sjmqwjl \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. E.J. Chichilnisky: “There is variability in retinal function both across species and between individuals of the same species. Mouse retinas are very different from human retinas (a difference that is often ignored), and there is variability amongst monkey retinas as well” (p. 3).\")\n* There are a number of other possible gaps (see endnote).[477](https://www.openphilanthropy.org/brain-computation-report#footnote477_lnl03xc \"For example, there are about 20 different types of retinal ganglion cells in humans (see Open Philanthropy's non-verbatim notes from a conversation with Prof. E.J. Chichilnisky (p. 3)), which could vary in complexity. However, Prof. Stephen Baccus seemed to think that the data gathered for Maheswaranathan et al. (2019) captures this complication. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Baccus: “There is no special selection involved in choosing which cells to test, and Prof. Baccus would expect similar success with arbitrary sets of retinal ganglion cells, though one cannot account for every cell under every condition without testing it” (p. 1). Another possibility is that these CNNs/RNNs might be vulnerable to adversarial examples, in a manner analogous to the vulnerabilities exhibited by image recognition systems (see discussion in Section 3.2). And the results were obtained using isolated retinas (I believe this means that the animal’s eyes were removed from the body), which could introduce differences as well.\")\n\n\nWhat sort of FLOP/s budgets would the above models imply, if they were adequate?\n\n\n* The CNN in [Maheswaranathan et al. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf) requires about 2e10 FLOPs to predict the output of one ganglion cell over one second.[478](https://www.openphilanthropy.org/brain-computation-report#footnote478_m5n8b57 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Baccus: “Prof. 
Baccus and his colleagues have calculated that their CNN requires ~20 billion floating point operations to predict the output of one ganglion cell over one second (these numbers treat multiply and addition as separate operations - if we instead counted multiply-add operations (MACCs), the numbers would drop by a factor of roughly 2). The input size is 50 × 50 (pixels) × 40 time points (10 ms bins). Layer 1 has 8 channels and 36 × 36 units with 15 × 15 filters each. Layer 2 has 8 channels and 26 × 26 units with 11 × 11 filters each. Layer 3 (to the ganglion cell) is a dense layer with a 8 × 26 × 26 filter from layer 2. This leads to the following calculation for one ganglion cell: Layer 1: (40 × 15 × 15 × 2 + 1 (for the ReLU)) × 36 × 36 units × 8 channels = 1.87e8 Layer 2: (8 × 11 × 11 × 2 + 1) × 26 × 26 units × 8 channels = 1.05e7 Layer 3: 8 × 26 × 26 × 2 = 10,816. Total: 1.97e8 FLOP per 10 ms bin. Multiplied by 100, this equals 1.97e10 FLOP/s” (p. 6). \") However, adding more ganglion cells only increases the costs in the last layer of the network. A typical experiment involves 5-15 cells, suggesting ~2e9 FLOP/s per cell, and one of the co-authors on the paper (Prof. Baccus) could easily imagine scaling up to 676 cells (the size of the last layer), which would cost ~20.4 billion FLOP/s (3e7 per cell); or 2500 cells (the size of the input), which would cost 22.4 billion FLOP/s (~1e7 per cell).[479](https://www.openphilanthropy.org/brain-computation-report#footnote479_dy0q4zn \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Baccus: “Simulating more ganglion cells simultaneously only alters the last layer of the network, and so results in only a relatively small increase in computation. A typical experiment involves around 5-15 cells, but Prof. Baccus can easily imagine scaling up to 676 cells (26 × 26 — the size of the last layer), or to 2500 (50x50 — the size of the input). 676 cells would require 20.4 billion FLOPs per second. 2500 would require 22.4 billion.” (p. 6). 22.4 billion/2500 is ~9e6, which I've rounded to 1e7.\") I’ll use this last number, which suggests **~1e7 FLOP/s per retinal ganglion cell**. However, I don’t feel that I have a clear grip on how to pick an appropriate number of cells.\n* I estimate that the RNN in [Batty et al. (2017)](https://openreview.net/pdf?id=HkEI22jeg) requires around 1e5 FLOP for one 0.83 ms bin.[480](https://www.openphilanthropy.org/brain-computation-report#footnote480_n6d8fpo \"My estimate is as follows. 1st layer: (31 × 31 (image patch) + 50 (inputs from previous time-step)) × 50 = 48,050 MACCs. Second layer: (50 feedforward inputs from layer 1 + 50 inputs from previous time-step) × 50 = 5,000 MACCs. Total MACCs per timestep: ~ 53,000. Multiplied by two for FLOPs vs. MACCs (see “It’s dot products all the way down” here) = 106,000 FLOPs per time-step. Timesteps per second: 1200 (0.83 ms time bins). Total FLOPs per cell per second: ~1.2e8 FLOP/s. 
I have discussed this estimate with two people with ML expertise, but it has not been confirmed by any of the paper’s authors.\") I’m less clear on how this scales per ganglion cell, so I’ll assume one cell for the whole network: e.g., **~1e8 FLOP/s per retinal ganglion cell**.\n\n\nThese are much higher than Moravec’s estimate of 1000 calculations/s per ganglion cell, and they result in much higher estimates for the whole retina: 1e13 FLOP/s and 1e14 FLOP/s, respectively (assuming 1e6 ganglion cells).[481](https://www.openphilanthropy.org/brain-computation-report#footnote481_311ddcd \"Sarpeshkar (2010) estimates at least 1e10 FLOP/s for the retina, based on budgeting at least one floating-point multiplication operation per synapse, and a 12 Hz rate of computation (p. 749). However, he doesn’t (at least in that paragraph) say much to justify this assumption; and estimates that assume 1 FLOP per event at synapses have been covered, to some extent, under the mechanistic method section already. So I’ll focus elsewhere. For what it’s worth, though, Sarpeshkar’s (2010) estimate would imply at least ~1e13-1e16 FLOP/s for the brain as a whole, using the scaling factors discussed below.\") But it’s also a somewhat different task: that is, predicting retinal spike trains, as opposed to motion/edge detection more broadly.\n\n\nNote, also, that in both cases, the FLOPs costs are dominated by the first layer of the network, which processes the input, so costs would scale with the size of the input (though the input size relevant to an individual ganglion cell will presumably be limited by the spatial extent of its receptive field).[482](https://www.openphilanthropy.org/brain-computation-report#footnote482_6zfittf \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Baccus: “The largest amount of computation takes place in the first layer of the network. If the input size was larger, these numbers would scale up” (p. 6).\") And in general, the scale-up to the whole retina here is very uncertain, as I feel very uninformed about what it would actually look like to run versions of these models on such a scale (how much of the network could be reused for different cells, what size of receptive field each cell would need, etc.).\n\n\n#### 3.1.2 From retina to brain\n\n\nWhat does it look like to scale up from these estimates to the brain as a whole? Here are a few ways of doing so, and the results:\n\n\n \n\n\n\n\n\n| BASIS FOR SCALING | ROUGH SCALING FACTOR | APPLIED TO: MORAVEC ESTIMATE (1E9 CALCS/S) | APPLIED TO: [MAHESWARANATHAN ET AL. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf) ESTIMATE (1E13 FLOP/S) | APPLIED TO: BATTY ET AL. (2017) ESTIMATE (1E14 FLOP/S) |\n| --- | --- | --- | --- | --- |\n| *Mass* | 4e3-1e5[483](https://www.openphilanthropy.org/brain-computation-report#footnote483_uobzdaf \"Moravec (2008) reports that the brain is about 75,000 times heavier than the retina, which he cites as weighing 0.02 g (though Sarpeshkar (2010) estimates 0.4 g, substantially more). Moravec rounds this factor to 100,000, which in combination with his 1e9 calculations per second estimate for replicating the retina, yields a whole brain estimate of 1e14 calculations per second (this would be ~4e12 if we used Sarpeshkar’s weight estimate). See Moravec (2008), “Nervous Tissue and Computation.” Azevedo et al. (2009) (p. 536), report that the whole brain is ~1508.91 g, which is in line with what Moravec’s estimate implies (1500 g). However, Sarpeshkar (2010) (p.
748), estimates retinal weight at 0.4 g, which would result in a weight-based scale-up of 3750 -- considerably less than Moravec’s rounded 100,000.\") | 4e12-1e14 | 4e16-1e18 | 4e17-1e19 |\n| *Volume* | 4e3-1e5[484](https://www.openphilanthropy.org/brain-computation-report#footnote484_71tkwxi \"Moravec (1988): “The 1,500 cubic centimeter human brain is about 100,000 times as large as the retina” (p. 2). Sarpeshkar (2010) (p. 748), reports that the area of the human retina is 2500 mm^2, and the average thickness is 160 µm, for a total of 400 mm^3 (0.4 cm^3). The brain appears to be around 1400 cm^3, which suggests a scale-up, on Sarpeshkar’s numbers, of ~3500.\") | 4e12-1e14 | 4e16-1e18 | 4e17-1e19 |\n| *Neurons* | 1e3-1e4[485](https://www.openphilanthropy.org/brain-computation-report#footnote485_2dxs4mx \"The retina has about 1e8 signaling cells if you include all the photoreceptors (though Stephen Baccus indicated that for bright light, it might make more sense to focus on the roughly 5e6 cones), and tens of millions of other non-photoreceptor neurons. These numbers are roughly a factor of 1000 and 10,000 less, respectively, than the brain’s neuron count (1e11). From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Baccus: “We can think of the retina as receiving a 100 megapixel input and outputting a 1 megapixel output (though in bright light, it’s more like 5 million inputs, because there are 5 million cones and 95 million rods). And there are something like 10 million other cells in the retina” (p. 3).\") | 1e12-1e13 | 1e16-1e17 | 1e17-1e18 |\n| *Synapses* | 1e5-1e6[486](https://www.openphilanthropy.org/brain-computation-report#footnote486_8p13f7a \"Sarpeshkar (2010) (p. 698), lists ~1 billion synapses in the retina, though I’m not sure where he got this number. I am assuming the synapse estimates of 1e14-1e15, discussed in Section 2.1.1.1.\") | 1e14-1e15 | 1e18-1e19 | 1e19-1e20 |\n| *Energy use* | 4e3[487](https://www.openphilanthropy.org/brain-computation-report#footnote487_zpfk6h0 \"See Sarpeshkar (2010): “The weight of the human retina is 2500 mm^2 (area) × 160 µm (avg. thickness) × 1000 kg/m^3 (density in SI units) = 0.4 grams. Thus, the power consumption of human rods in the dark may be estimated to be 0.2 grams × 13 µmol ATP/g/min × 20 kT/ATP = 2.1 mW. If we assume that outer retina power consumption is dominated by the rods, and that the inner and outer retina consume at the same rate in humans, then the total power consumption of the retina in the dark may be estimated to be 2.1 mW × 2 = 4.2 mW. We list the average of (2.6 + 4.2)/2 = 3.4 mW as our estimate for the total power consumption of the retina in Table 23.2. We thank Simon Laughlin for his generous assistance in helping us estimate the number of synapses in the retina and the power consumption of the eye” (p. 748). Following Sarpeshkar, I am here using Aiello’s (1997) estimate of 14.6 W for the brain as a whole.\") | 4e12 | 4e16 | 4e17 |\n| *Overall range* | 1e3-1e6 | 1e12-1e15 | 1e16-1e19 | 1e17-1e20 |\n**Figure 14. Estimates of the FLOP/s to replicate retinal computation, scaled up to the whole brain based on various factors.**\n\nThe full range here runs from **1e12 calc/s** (low-end Moravec) to **1e20 FLOP/s** (high-end Batty et al. (2017)).
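As a rough cross-check on Figure 14, the table’s entries can be reproduced by multiplying each retina-level estimate by the low and high ends of each scaling factor. Here is a minimal sketch of that calculation; the dictionary layout and names are mine, while the numbers are the ones given above.

```python
# Retina-level estimates discussed above (calc/s for Moravec; FLOP/s otherwise).
retina_estimates = {
    "Moravec (1988)": 1e9,
    "Maheswaranathan et al. (2019)": 1e13,
    "Batty et al. (2017)": 1e14,
}

# Rough retina-to-brain scaling factors from Figure 14, as (low, high) ranges.
scaling_factors = {
    "mass": (4e3, 1e5),
    "volume": (4e3, 1e5),
    "neurons": (1e3, 1e4),
    "synapses": (1e5, 1e6),
    "energy use": (4e3, 4e3),
}

for estimate_name, estimate in retina_estimates.items():
    for basis, (low, high) in scaling_factors.items():
        print(f"{estimate_name} scaled by {basis}: "
              f"{estimate * low:.0e} to {estimate * high:.0e}")
```

Run as written, this reproduces the rows of the table (for example, scaling Moravec’s 1e9 calc/s by the 4e3-1e5 mass factor gives 4e12-1e14).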
Moravec argues for scaling based on a combination of mass and volume, rather than neuron count, on the grounds that the retina’s neurons are unusually small and closely packed, and that the brain can shrink neurons while keeping overall costs in energy and materials constant.[488](https://www.openphilanthropy.org/brain-computation-report#footnote488_9ie69yf \"Moravec (1988): “The retina’s evolutionarily pressed neurons are smaller and more tightly packed than average” (p. 59). See also Moravec’s (3/18/98) replies to Anders Sandberg’s comment in the Journal of Evolution and Technology: “Evolution can just as easily choose two small neurons as one twice as large. The cost in metabolism and materials is the same. So I would expect brain structures to maximize for effective computation per volume, not per neuron. After all, one neuron with ten thousand synapses might be the computational match of 50 neurons with 50 synapses each.”\") Anders Sandberg objects to volume, due to differences in “tissue structure and constraints.”[489](https://www.openphilanthropy.org/brain-computation-report#footnote489_tlnkoa5 \"See his reply to Moravec here: “volume cannot be compared due to the differences in tissue structure and constraints.”\") He prefers neuron count.[490](https://www.openphilanthropy.org/brain-computation-report#footnote490_inqf5g9 \"See his reply to Moravec here. Though his high-end estimate of whole brain neuron count (1e12) is, I think, too large.\")\n\n\nRegardless of how we scale, though, the retina remains different from the rest of the brain in many ways. Here are a few:\n\n\n* The retina is probably less plastic.[491](https://www.openphilanthropy.org/brain-computation-report#footnote491_0c0zse0 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. E.J. Chichilnisky: “The brain is probably a lot more plastic than the retina, though this is likely a quantitative rather than a qualitative difference” (p. 4).\")\n* The retina is highly specialized for performing one particular set of tasks.[492](https://www.openphilanthropy.org/brain-computation-report#footnote492_nm9mudh \"See Anders Sandberg’s 1998 comments on Moravec: “The retina is a highly optimized and fairly stereotypal neural structure, this can introduce a significant bias.”\")\n* The retina is subject to unique physical constraints.[493](https://www.openphilanthropy.org/brain-computation-report#footnote493_t3ndd2o \"For example, it needs to be packed into the eye, and to be transparent enough for light signals to pass through layers of cells to reach the photoreceptors. Anders Sandberg, in his 1998 comments on Moravec, also suggests that it needs to be two dimensional, which might preclude more interesting and complex computational possibilities implicated by 3D structures. I have not investigated this.\")\n* Retinal circuitry has lower connectivity, and exhibits less recurrence.[494](https://www.openphilanthropy.org/brain-computation-report#footnote494_s3i2g18 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Baccus: “There is higher connectivity in the cortex than in the retina… Recurrence might be the trickiest difference. The retina can be largely approximated as a feedforward structure (there is some feedback, but a feedforward model does pretty well), but in the cortex there is a lot of feedback between different brain regions. 
This might introduce oscillations and feedback signals that make precise details about spike timings (e.g., at a 1 ms level of precision) more important, and therefore make firing rate models, which blur over 10 ms, inadequate” (p. 5).\")\n* We are further from having catalogued all the cell types in the brain than in the retina.[495](https://www.openphilanthropy.org/brain-computation-report#footnote495_a76a05z \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. E.J. Chichilnisky: “We are much further along in mapping all of the cell types in the retina than we are in the brain as a whole. Differences between cell types matter a lot in the retina. We don’t know how much these differences matter in the rest of the brain. Some people think that they don’t matter very much, but Prof. Chichilnisky disagrees, and certainly the field has been moving in the direction of emphasizing the cell type differences in the brain. However, there’s no reason to think that some neuron types in the brain/retina will be radically simple and some will be radically complicated. There will be some variations, but perhaps not a big gulf” (p. 4).\")\n* Some of the possible complications discussed in the mechanistic method section (for example, some forms of dendritic computation, and some alternative signaling mechanisms like ephaptic effects) may not be present in the retina in the same way.[496](https://www.openphilanthropy.org/brain-computation-report#footnote496_s0z8lxj \"The retina engages in certain forms of dendritic computation (see e.g. Taylor et al. (2000) and Hanson et al. (2019)), but various dendritic computation results focus on cortical pyramidal cells, and in particular on the apical dendrite of such cells (see London and Häusser (2005) for examples). Glia, electrical synapses, and neuropeptide signaling are all present in the retina; I’m less sure about ephaptic effects (to the extent that they’re present/task-relevant anywhere).\")\n\n\nNot all of these, though, seem to clearly imply *higher* FLOP/s burdens per unit something (cell, synapse, volume, etc.) in the brain than in the retina (they just suggest possible *differences*). Indeed, Moravec argues that given the importance of vision, the retina may be “evolutionarily more perfected, i.e. computationally dense, than the average neural structure.”[497](https://www.openphilanthropy.org/brain-computation-report#footnote497_81rqr6u \"See his reply to Anders Sandberg here. Drexler (2019) assumes something similar: “In the brain, however, typical INA [immediate neural activity] per unit volume is presumably less than that of activated retina” (p. 188). \") And various retina experts were fairly sympathetic to scaling up from the retina to the whole brain.[498](https://www.openphilanthropy.org/brain-computation-report#footnote498_wx0eoi7 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Markus Meister (p. 4): There is nothing particularly simplistic about the retina, relative to other neural circuits. It probably has a hundred different cell types, it probably uses almost every neurotransmitter we know of, and it has very intricate microcircuitry. Prof. Meister would be sympathetic to scaling up from the retina as a way of putting an upper limit on the difficulty of simulating the brain as a whole. Prof. 
Meister has not actually done this back-of-the-envelope calculation, but budgeting based on the rate at which action potentials arrive at synapses, multiplied by the number of synapses, seems like roughly the right approach. Though see later in that section for some small increases (2×) for dendritic computation. From Open Philanthropy's non-verbatim notes from a conversation with Prof. E.J. Chichilnisky (p. 4): The level of modeling detail necessary in the retina provides a good test of the level of modeling detail necessary in the brain as a whole. However, the data on the retina aren’t in, and they won’t be in for a while. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Baccus (p. 5): Prof. Baccus thinks the answer is ‘maybe’ to the question of whether the compute necessary to model neurons in the retina will be similar to the compute necessary to model neurons in the cortex. You might expect a volume by volume comparison to work as a method of scaling up from the retina to the cortex.\")\n\n\nWhere does this leave us? Overall, I think that the estimates based on the RNN/CNN models discussed above (1e16-1e20 FLOP/s) are some weak evidence for FLOP/s requirements higher than the mechanistic method range discussed above (1e13-1e17 FLOP/s). And these could yet be under-estimates, either because more FLOP/s are required to replicate retinal ganglion cell outputs with adequate accuracy across all stimuli; or because neural computation in the brain is more complicated, per relevant unit (volume, neuron, watt, etc.), than in the retina (the low plasticity of the retina seems to me like an especially salient difference).\n\n\nWhy only weak evidence? Partly because I’m very uncertain about what it would actually look like to scale these models up to the retina as a whole. And as I discussed in [Section 2.1.2.2](#section_2.1.2.2), I’m wary of updating too much based on a few studies I haven’t investigated in depth. What’s more, it seems plausible to me that these models, while better than current simpler models at fitting retinal spike trains, use more FLOP/s (possibly much more) than are required to do what the retina does. Reasons include:\n\n\n* The FLOP/s budgets for these RNN/CNN retina models depend on specific implementation choices (for example, input size and architecture) that don’t seem to reflect model complexity that has yet been shown to be necessary. Bigger models will generally allow better predictions, but our efforts to predict retinal spikes using deep neural networks seem to be in early stages, and it doesn’t seem like we yet have enough data to ground strong claims about the network size required for a given level of accuracy (and we don’t know what level of accuracy is necessary, either).\n* I’m struck by how much smaller Moravec’s estimate is. It’s true that this estimate is incomplete in its coverage of retinal computation – yet it would surprise me somewhat if (a) his estimates for edge and motion detection are correct (Prof. Barak Pearlmutter expected Moravec’s robotic vision estimates to be accurate),[499](https://www.openphilanthropy.org/brain-computation-report#footnote499_w94x6yx \"See Open Philanthropy's non-verbatim notes from a conversation with Prof. Barak Pearlmutter: \\\"Prof. Hans Moravec attempted to derive estimates of the computational capacity of the brain from examination of the retina. Prof.
Pearlmutter thought that Moravec’s estimates for the computational costs of robotic vision were likely accurate, given Moravec’s expertise in vision\\\" (p. 3).\") but (b) the other functions he leaves out result in an increase of 4-5 orders of magnitude. Part of the difference here might come from his focus on high-level tasks, rather than replicating spike trains.\n* The CNN in [Maheswaranathan et al. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf) would require ~2e10 FLOP/s to predict the outputs of 2500 cells in response to a 50 × 50 input. But various vision models discussed in the next section take in larger inputs (224 × 224 × 3),[500](https://www.openphilanthropy.org/brain-computation-report#footnote500_6smnrbl \"See here: “Let’s say the input shape for a convolutional layer is 224×224×3, a typical size for an image classifier.” Other input sizes listed here.\") and run on comparable FLOP/s (~1e10 FLOP/s for an [EfficientNet-B2](https://arxiv.org/pdf/1905.11946.pdf) run at 10 Hz). It seems plausible to me these vision models cover some non-trivial fraction of what the retina does (e.g., edge detection), along with much that it doesn’t do.\n\n\nThat said, these CNN/RNN results, together with the [Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf) results discussed in [Section 2.1.2.2](#section_2.1.2.2), suggest a possible larger pattern: recent DNN models used to predict the outputs of neurons and detailed neuron models appear to be quite FLOP/s intensive. It’s possible these DNNs are overkill. But they could also indicate complexity that simpler models don’t capture. Further experiments in this vein (especially ones emphasizing model efficiency) would shed helpful light.\n\n\n#### 3.2 Visual cortex\n\n\nLet’s turn to a different application of the functional method, which treats deep neural networks (DNNs) trained on vision tasks as automating some portion of the visual cortex.[501](https://www.openphilanthropy.org/brain-computation-report#footnote501_5zu2pdi \"This section is inspired by some arguments suggested by Dr. Dario Amodei, to the effect that ML vision models might be put into productive comparison with parts of the visual cortex (and in particular, conservatively, V1). See also Drexler (2019), who inspired some of Dr. Amodei’s analysis.\")\n\n\nSuch networks can classify full-color images into 1000 different categories[502](https://www.openphilanthropy.org/brain-computation-report#footnote502_894tu4y \"Some datasets have larger numbers of categories. For example, the full ImageNet dataset has 21k classes, and JFT-300M has 18,291 classes. However, many results focus on the benchmark set by the ILSVRC competition, which uses 1000 classes. I’ll focus there as well.\") with something like human-level accuracy.[503](https://www.openphilanthropy.org/brain-computation-report#footnote503_actwami \"When asked to provide five labels for a given image, at least one human has managed to include the true label 94.9% of the time, Russakovsky et al. (2014): “Annotator A1 evaluated a total of 1500 test set images. The GoogLeNet classification error on this sample was estimated to be 6.8% (recall that the error on full test set of 100,000 images is 6.7%, as shown in Table 7). The human error was estimated to be 5.1%.” You can try out the task for yourself here. Karpathy (2014b), who appears to have served as Annotator A1 for Russakovsky et al. 
(2014), writes in a blog post: “There have now been several reported results that surpass my 5.1% error on ImageNet. I’m astonished to see such rapid progress. At the same time, I think we should keep in mind the following: Human accuracy is not a point. It lives on a tradeoff curve. We trade off human effort and expertise with the error rate: I am one point on that curve with 5.1%. My labmates with almost no training and less patience are another point, with even up to 15% error. And based on some calculations that consider my exact error types and hypothesizing which ones may be easier to fix than others, it’s not unreasonable to suggest that an ensemble of very dedicated expert human labelers might push this down to 3%, with about 2% being an optimistic error rate lower bound.” DNNs are worse on top 1 labeling, but my understanding is that this is partly because images contain multiple possible labels (see Kostyaev (2016)).\") They can also localize/assign pixels to multiple identified objects, identify points of interest in an image, and generate captions, but I’ll focus here on image classification (I’m less confident about the comparisons with humans in the other cases).[504](https://www.openphilanthropy.org/brain-computation-report#footnote504_7b72tjf \"See Brownlee (2019b) for a breakdown of different types of object-recognition tasks, and here for example models. Hossain et al. (2018) review different image captioning models.\")\n\n\nWhat’s more, the representations learned by deep neural networks trained on vision tasks turn out to be state-of-the-art predictors of neural activity in the visual cortex (though the state of the art is not obviously impressive in an absolute sense[505](https://www.openphilanthropy.org/brain-computation-report#footnote505_clr7ady \"Cadena et al. (2019): “Despite great efforts over several decades, our best models of primary visual cortex (V1) still predict spiking activity quite poorly when probed with natural stimuli, highlighting our limited understanding of the nonlinear computations in V1” (abstract). See also Zhang et al. (2019): “While CNN models, especially those goal-driven ones pre-trained on computer vision tasks, performed very well in our study and some other studies (Cadena et al. (2017)) for V1 neuron modeling, we should point out that even the best-performing CNN in our study only explained about 50% of the explainable variance in our neural data, consistent with Cadena et al. (2017). The failure of CNN models for explaining the other half of the variance in V1 data can be due to a number of reasons. First, V1 neurons are subject to network interaction and their neural responses are known to be mediated by strong long-range contextual modulation. Second, it is possible that there are some basic structural components missing in the current deep CNN methodology for fully capturing V1 neural code” (p. 51-52 in the published version).\")).[506](https://www.openphilanthropy.org/brain-computation-report#footnote506_pg7ne18 \"See Zhang et al. (2019)Kiregeskorte (2015), Yamins and DiCarlo (2016) and Lindsay (2020) for reviews.\") Example results include:\n\n\n* [Cadena et al. 
(2019)](https://journals.plos.org/ploscompbiol/article?rev=2&id=10.1371/journal.pcbi.1006897#pcbi-1006897-g008): a model based on representations learned by a DNN trained on image classification can explain 51.6% of explainable variance of spiking activity in monkey [primary visual cortex](http://www.scholarpedia.org/article/Area_V1) (V1, an area involved in early visual processing) in response to natural images. A three-layer DNN trained to predict neural data explains 49.8%. The authors report that these models both outperform the previous state of the art.[507](https://www.openphilanthropy.org/brain-computation-report#footnote507_ztwzwgl \"Cadena et al. (2019): “We both trained CNNs directly to fit the data, and used CNNs trained to solve a high-level task (object categorization). With these approaches, we are able to outperform previous models and improve the state of the art in predicting the responses of early visual neurons to natural images” (see “Author summary”) ... “We compared the models for a number of cells selected randomly (Fig 8A). There was a diversity of cells, both in terms of how much variance could be explained in principle (dark gray bars) and how well the individual models performed (colored bars). Overall, the deep learning models consistently outperformed the two simpler models of V1. This trend was consistent across the entire dataset (Fig 8B and 8D). The LNP model achieved 16.3% FEV [Fraction of explainable variance explained], the GFB model 45.6% FEV. The performance of the CNN trained directly on the data was comparable to that of the VGG-based model (Fig 8C and 8D); they predicted 49.8% and 51.6% FEV, respectively, on average” (p. 11). See also Zhang et al. (2019) for comparable results, and Klindt et al. (2017) and Antolík et al. (2016) for earlier results. Kindel et al. (2019) report that “ we trained deep convolutional neural networks to predict the firing rates of V1 neurons in response to natural image stimuli, and we find that the predicted firing rates are highly correlated (CC norm = 0.556 ± 0.01) with the neurons' actual firing rates over a population of 355 neurons. This performance value is quoted for all neurons, with no selection filter. Performance is better for more active neurons: When evaluated only on neurons with mean firing rates above 5 Hz, our predictors achieve correlations of CCnorm = 0.69 ± 0.01 with the neurons' true firing rates” (see abstract). I’m not sure how this fits with the characterization of the state of the art in Cadena et al. (2019).\")\n* [Yamins et al. (2014)](https://www.pnas.org/content/111/23/8619) show that layers of a DNN trained on object categorization can be used to achieve what was then state of the art prediction of spiking activity in the monkey [Inferior Temporal cortex](http://www.scholarpedia.org/article/Inferior_temporal_cortex) (IT, an area thought to be involved in a late stage of hierarchical visual processing) – ~50% of explainable variance explained (though I think the best models can now do better).[508](https://www.openphilanthropy.org/brain-computation-report#footnote508_wstsuqk \"Yamins et al. (2014): “We found that the top layer of the high-performing HMO model achieves high predictivity for individual IT neural sites, predicting 48.5±1.3% of the explainable IT neuronal variance (Fig. 3 B and C). 
This represents a nearly 100% improvement over the best comparison models and is comparable to the prediction accuracy of state-of-the-art models of lower-level ventral areas such as V1 on complex stimuli (10). In comparison, although the HMAX model was better at predicting IT responses than baseline V1 or SIFT, it was not significantly different from the V2-like model” …. Schrimpf et al. (2018): “The models from this early work outlined above outperformed all other neuroscience models at the time and yielded reasonable scores on predicting response patterns from both single unit activity and fMRI.” And Yamins and DiCarlo (2016): “It turned out that the top hidden layers of these models were the first quantitatively accurate image-computable model of spiking responses in IT cortex, the highest-level area in the ventral hierarchy (Fig. 2b,c). Similar models have also been shown to predict population aggregate responses in functional MRI data from human IT (Fig. 2d)” (p. 359). Yamins and DiCarlo (2016) also note that “These results are not trivially explained merely by any signal reflecting object category identity being able to predict IT responses. In fact, at the single neuron level, IT neural responses are largely not categorical, and ideal-observer models with perfect access to category and iden- tity information are far less accurate IT models than goal-driven HCNNs (Fig. 2a,c). Being a true image-computable neural network model appears critical for obtaining high levels of neural predictivity. In other words: combining two general biological constraints—the behavioral constraint of the object recognition task and the architec- tural constraint imposed by the HCNN model class—leads to greatly improved models of multiple layers of the visual sensory cascade” (p. 359). Schrimpf et al. (2018): “Current models still fall short of reaching benchmark ceilings: The best ANN model V4 predictivity score is 0.663, which is below the internal consistency ceiling of these V4 data (0.892). The best ANN model IT predictivity score is 0.604, which is below the internal consistency ceiling of these IT data (0.817). And the best ANN model behavioral predictivity score is 0.378, which is below the internal consistency ceiling of these behavioral data (0.497)” (p. 7). That said, I am not sure exactly what the relevant benchmark is in the context of this paper. See here for ongoing evaluation of the “brain-score” of different models -- evaluation which incorporates the degree to which they predict neuron responses in IT. \") Similar models can also be used to predict spiking activity in area [V4](https://en.wikipedia.org/wiki/Visual_cortex#V4) (another area involved in later-stage visual processing),[509](https://www.openphilanthropy.org/brain-computation-report#footnote509_e1qgb9d \"Yamins et al. (2014): “We found that the HMO model’s penultimate layer is highly predictive of V4 neural responses (51.7±2.3% explained V4 variance), providing a significantly better match to V4 than either the model’s top or bottom layers. These results are strong evidence for the hypothesis that V4 corresponds to an intermediate layer in a hierarchical model whose top layer is an effective model of IT” (p. 8623). See also Bashivan et al. (2019): “We found that the neural predictor models correctly predicted 89% of the explainable (i.e., image-driven) variance in the V4 neural responses” (p. 
1).\") as well as fMRI activity in IT.[510](https://www.openphilanthropy.org/brain-computation-report#footnote510_cb2shy8 \"Khaligh-Razavi and Kiregeskorte (2014): “The models include well-known neuroscientific object-recognition models (e.g. HMAX, VisNet) along with several models from computer vision (e.g. SIFT, GIST, self-similarity features, and a deep convolutional neural network). We compared the representational dissimilarity matrices (RDMs) of the model representations with the RDMs obtained from human IT (measured with fMRI) and monkey IT (measured with cell recording) for the same set of stimuli (not used in training the models). Better performing models were more similar to IT in that they showed greater clustering of representational patterns by category. In addition, better performing models also more strongly resembled IT in terms of their within-category representational dissimilarities” (abstract). Yamins and DiCarlo (2016): “... Similar models have also been shown to predict population aggregate responses in functional MRI data from human IT (Fig. 2d)” (p. 359). See also Storrs et al. (2020).\") The accuracy of the predictions appears to correlate with the network’s performance on image classification (though the correlation weakens for some of the models best at the task).[511](https://www.openphilanthropy.org/brain-computation-report#footnote511_lo5b9q2 \"See Yamins and DiCarlo (2016): “HCNN models that are better optimized to solve object categorization produce hidden layer representations that are better able to predict IT neural response variance” (Figure 2a, p. 360); and Schrimpf et al. (2018): “Extending prior work, we found that gains in ANN ImageNet performance led to gains on Brain-Score. However, correlation weakened at ≥ 70% top-1 ImageNet performance, suggesting that additional guidance from neuroscience is needed to make further advances in capturing brain mechanisms” (p. 1). See also http://www.brain-score.org/ for more data.\")\n\n\nWe can also look more directly at the features that units in an image classifier detect. Here, too, we see interesting neuroscientific parallels. For example:\n\n\n* Neurons in V1 are sensitive to various low-level features of visual input, such as lines and edges oriented in different ways. Some units in the early layers of image classifiers appear to detect similar features. For example, [Gabor filters](https://en.wikipedia.org/wiki/Gabor_filter#:~:text=In%20image%20processing%2C%20a%20Gabor,point%20or%20region%20of%20analysis.), often used to model V1, are found in such early layers.[512](https://www.openphilanthropy.org/brain-computation-report#footnote512_txcggx1 \"Yamins et al. (2014): “For example, neurons in the lowest area, V1, are well described by Gabor-like edge detectors that extract rough object outlines.” Olah et al. (2020b): \\\"Gabor filters are a simple edge detector, highly sensitive to the alignment of the edge. They’re almost universally found in the fist [sic] layer of vision models.” They report that 44% of the units in the first conv layer of InceptionV1 are gabor filters, and that 14% of the units in conv2d1 are “complex gabor filters, which are “like Gabor Filters, but fairly invariant to the exact position, formed by adding together multiple Gabor detectors in the same orientation but different phases. 
We call these ‘Complex’ after complex cells in neuroscience” (see section “conv2d1”).\")\n* V4 has traditionally been thought to detect features like colors and curves.[513](https://www.openphilanthropy.org/brain-computation-report#footnote513_2ea75qd \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording: “There is a traditional view in systems neuroscience that each brain area does something pre-assigned and simple. E.g., V1 detects edges, V4 pulls out colors and curvature, etc. But this view is dying at the moment” (p. 3). See also Roe et al. (2020): “One advanced shape property represented in V4 is curvature. Curvature, which can be considered an integration of oriented line segments, is a prominent feature of object boundaries. V4 cells (receptive fields typically 2–10 deg in size) can be strongly selective for curvature of contours (Pasupathy and Connor (1999), 2001) as well as curved (i.e., non-Cartesian) gratings (Gallant et al. (1993), 1996).” (abstract); and Walsh (1999) for more on color in the visual cortex\") These, too, are detected by units in image classifiers.[514](https://www.openphilanthropy.org/brain-computation-report#footnote514_7og0p9r \"See Olah et al. (2020a): “Curve detecting neurons can be found in every non-trivial vision model we’ve carefully examined” (see Example 1: Curve Detectors). See also the corners in conv2d2 described in Olah et al. (2020b), and the color detectors described in conv2d0-2.\") What’s more, such networks can be used to create images that can predictably drive firing rates of V4 neurons beyond naturally occurring levels.[515](https://www.openphilanthropy.org/brain-computation-report#footnote515_ywo0bjb \"Bashivan et al. (2019): “Using an ANN-driven image synthesis method, we found that luminous power patterns (i.e., images) can be applied to primate retinae to predictably push the spiking activity of targeted V4 neural sites beyond naturally occurring levels. This method, although not yet perfect, achieves unprecedented independent control of the activity state of entire populations of V4 neural sites, even those with overlapping receptive fields. These results show how the knowledge embedded in today’s ANN models might be used to noninvasively set desired internal brain states at neuron-level resolution, and suggest that more accurate ANN models would produce even more accurate control” (p. 1).\")\n\n\nExactly what to take away from these results isn’t clear to me. One hypothesis, offered by [Yamins and DiCarlo (2016)](https://www.nature.com/articles/nn.4244), is that hierarchically organized neural networks (a class that includes both the human visual system, and these DNNs) converge on a relatively small set of efficiently-learnable solutions to object categorization tasks.[516](https://www.openphilanthropy.org/brain-computation-report#footnote516_014tn72 \"Yamins and DiCarlo (2016): “within the class of HCNNs [e.g., Hierarchical Convolutional Neural Networks], there appear to be comparatively few qualitatively distinct, efficiently learnable solutions to high-variation object categorization tasks, and perhaps the brain is forced over evolutionary and developmental timescales to pick such a solution” (p. 356).\") But other, more trivial explanations may be available as well,[517](https://www.openphilanthropy.org/brain-computation-report#footnote517_ttqlyqh \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. 
Konrad Kording: “It’s true that simple models of V1 can describe 30 percent of the variance in V1’s activity. But you can describe half of the variance in the activity of your transistors just by realizing that your computer is turned off at night” (p. 3).\") and superficial comparisons between human and machine perception can be misleading.[518](https://www.openphilanthropy.org/brain-computation-report#footnote518_s9komqo \"See Funke et al. (2020) for some discussion.\")\n\n\nStill, it seems plausible that at the very least, there are interesting similarities between information-processing occurring in (a) the visual cortex and (b) DNNs trained on vision tasks. Can we turn this into a functional method estimate?\n\n\nHere are a few of the uncertainties involved.\n\n\n \n\n\n#### 3.2.1 What’s happening in the visual cortex?\n\n\nOne central problem is that there’s clearly a lot happening in the visual cortex other than image classification of the kind these models perform.\n\n\nIn general, functional method estimates fit best with a traditional view in systems neuroscience, according to which chunks of the brain are highly specialized for particular tasks. But a number of experts I spoke to thought this view inaccurate.[519](https://www.openphilanthropy.org/brain-computation-report#footnote519_4o7qblq \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording: “There is a traditional view in systems neuroscience that each brain area does something pre-assigned and simple. E.g., V1 detects edges, V4 pulls out colors and curvature, etc. But this view is dying at the moment. It was always suspicious on theoretical grounds. The fact that you know so much, about so many types of things, is in conflict with the view that each specific brain area is simple, as this view does not explain where all of the information available to you comes from. But it’s also empirically wrong. If you look at the literature, when you take a type of signal that matters to animals and looks for it in the brain, you find it everywhere. For example, you can find movement signals and expectations in the primary visual cortex, and rewards explain more of the variance in the primary motor cortex (the “movement area”) than movement. Basically, it’s all a complete mess. … Of course, there’s some specialization. Sound explains more of the variance in auditory cortex than in visual cortex. But the specialization isn’t simple. It’s just easier to publish papers saying e.g. ‘X is the brain area for romantic love,’ than e.g. ‘here are another ten variables X region is tuned to.’” (p. 3). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Markus Meister: “There is a long history, in neuroscience, of attempting to assign understandable computational roles to little chunks of brain matter (e.g., “the anterior cingulate cortex is for X”). Prof. Meister believes that this program is not going to be very successful, because these regions are massively interconnected, and we now know that if you inject signals into one part of the brain, you find them in many other parts of the brain” (p. 3).\") In reality, cortical regions are highly interconnected, and different types of signals show up all over the place. Motor behavior in mice, for example, predicts activity in V1 (indeed, such behaviors are represented using the same neurons that represent visual stimuli);[520](https://www.openphilanthropy.org/brain-computation-report#footnote520_cwscllx \"Stringer et al. 
(2018) showed mice pictures from Imagenet (“stimuli”) while the mice also engaged in spontaneous motor behavior (“behavior”): “Stimuli and behavior were represented together in V1 as a mixed representation: there were not separate sets of neurons encoding stimuli and behavioral variables, but each neuron multiplexed a unique combination of sensory and behavioral information” (p. 11).\") and V1 responses to identical visual stimuli alter based on a mouse’s estimate of its position in a virtual-reality maze.[521](https://www.openphilanthropy.org/brain-computation-report#footnote521_1hnj25u \"Saleem et al. (2017): “To establish the nature of these signals we recorded in primary visual cortex (V1) and in the CA1 region of the hippocampus while mice traversed a corridor in virtual reality. The corridor contained identical visual landmarks in two positions, so that a purely visual neuron would respond similarly in those positions. Most V1 neurons, however, responded solely or more strongly to the landmarks in one position…. The presence of such navigational signals as early as in a primary sensory area suggests that these signals permeate sensory processing in the cortex” (p. 1).\") Indeed, [Cadena et al. (2019)](https://journals.plos.org/ploscompbiol/article?rev=2&id=10.1371/journal.pcbi.1006897#pcbi-1006897-g008) recorded from 307 monkey V1 neurons, and found that only in about half of them could more than 15% of the variance in their spiking be explained by the visual stimulus (the average, in those neurons, was ~28%).[522](https://www.openphilanthropy.org/brain-computation-report#footnote522_pl49jdn \"See Cadena et al. (2019), “Dataset and inclusion criteria”: “We recorded a total of 307 neurons in 23 recording sessions...We discarded neurons with a ratio of explainable-to-total variance (see Eq 3) smaller than 0.15, yielding 166 isolated neurons (monkey A: 51, monkey B: 115) recorded in 17 sessions with an average explainable variance of 0.285.”\")\n\n\nVarious forms of prediction are also reflected in the visual system, even in very early layers. For example, V1 can fill in missing representations in a gappy motion stimulus.[523](https://www.openphilanthropy.org/brain-computation-report#footnote523_wt3p0z8 \"Chong et al. (2016): “Using fMRI and encoding methods, we found that the ‘intermediate’ orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM [apparent motion], is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path” (p. 1453). See also Open Philanthropy's non-verbatim notes from a conversation with Prof. Won Mok Shim: “There is a traditional view of V1, on which it is the front end of a hierarchical information-processing pipeline, and is responsible for processing simple, low-level features of bottom-up visual input from the retina/LGN. However, many feedback processes and connections have been discovered in V1 over the last decade, and most vision scientists would agree that V1’s information-processing cannot be entirely explained using bottom-up inputs....The anatomy of the visual system also suggests an important role for feedback. For example, there are more feedback connections from V1 to the LGN, than there are feedforward connections from the LGN to V1. V1 receives a large number of connections from other brain areas, like V2, and there are also many lateral connections between cells within V1. 
The direction of these connections can be identified using neuroanatomical trace studies, mostly from monkeys or cats… On an alternative to the traditional view, V1 is receiving top-down, high-level predictions, which it then compares with the bottom-up input. The difference between the two is an error signal, which is then conveyed from the low-level areas to the high-level areas. The origins of this idea are in computational theory (predictive coding). There is some empirical support as well, but the evidence is not completely clear.” (p. 1-2).\") Simple image classifiers don’t do this. Neurons in the visual cortex also learn over time, whereas the weights in a typical image classifier are static.[524](https://www.openphilanthropy.org/brain-computation-report#footnote524_baodw8g \"See e.g. Schecter et al. (2017), Cooke and Bear (2014), and Cooke et al. (2015).\") And there are various other differences besides.[525](https://www.openphilanthropy.org/brain-computation-report#footnote525_9j4zrg0 \"For example, in addition to detecting features of a visual stimulus like the orientation of lines and the spatial frequency of different patterns (features at least somewhat akin to the features detected by the early layers of a ImageNet model), neurons in V1 can also detect the direction that a stimulus is moving, as well as other features of how a stimulus changes over time (see Carandini (2012): “Cells in area V1 are commonly selective for direction of stimulus motion” and “The slant of receptive fields in space-time confers V1 neurons with some selectivity for stimulus speed, but this selectivity depends on the spatial pattern of a stimulus (Movshon et al. (1978a)). Rather than speed, V1 neurons are typically thought to be selective for temporal frequency, which is the inverse of the period between temporal oscillations between dark and light” (in the “Stimulus selectivity” section)). Indeed, visual processing requires a changing stimulus (see Gilbert (2013): “Visual perception requires eye movement. Visual cortex neurons do not respond to an image that is stabilized on the retina because they require moving or flashing stimuli to be activated: they fire in response to transient stimulation” (p. 606)). The images processed by e.g. a ResNet-101, by contrast, are static (though there are computer-vision systems that operate in dynamic environments as well). V1 is also involved in integrating the different visual inputs from different eyes (see Carandini (2012): “The signals from corresponding regions in the two eyes are kept separate in the LGN, and are combined in V1” (in the “Stimulus selectivity” section)), whereas a ResNet receives only one image.\")\n\n\nMore generally, as elsewhere in the brain, there’s a lot we don’t know about what the visual cortex is doing.[526](https://www.openphilanthropy.org/brain-computation-report#footnote526_jbuzmnx \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Adam Marblestone: “Dr. Marblestone does not think it obvious that the visual cortex should be thought of as doing something like object-detection. It could be, for example, making a more complicated transition model based on all of its multi-modal inputs, predicting future inputs and rewards, or doing some kind of iterative inference procedure. We just don’t know quite how high-dimensional or complicated the task the visual system performs is. So any compute estimates based on comparisons between the visual system and current deep neural networks are highly uncertain” (p. 
8).\") And “vision” as a whole, while hard to define clearly, intuitively involves much more than classifying images into categories (for example, visual representations seem closely tied to behavioral affordances, 3D models of a spatial environment, predictions, high-level meanings and associations, etc.).[527](https://www.openphilanthropy.org/brain-computation-report#footnote527_wqaib4y \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Kate Storrs: “Returning the name of the main object in an image is a tiny portion of what the visual system can do. Core vision involves understanding the visual world as a navigable 3D space of objects, equipped with orientations, materials, depth, properties, and behavioral affordances. Dr. Storrs would guess that object-recognition only occurs on top of that kind of description of the world. Models analogous to the visual system would need to perform a wider range of the tasks that the visual system performs, which suggests that they would need to be more powerful” (p. 2). From the non-verbatim notes from my conversations with Prof. Konrad Kording: “‘What things are’ isn’t the only question at stake in vision. You want answers to questions like “can I grasp this water bottle? Can I hold it there?”. Indeed, there are a vast number of questions that we want to be able to ask and answer with vision systems, and the “solution” to vision will depend on the exact thing that other parts of the brain need from the visual system. It’s not an easily definable space, and the only way to figure it out is to build a system that fully learns all of the relevant pieces” (p. 4). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas: “Prof. Jonas is fairly confident that the visual system is not classifying objects into one of k categories” (p. 1).\")\n\n\n \n\n\n#### 3.2.2 What’s human level?\n\n\nEven if we could estimate what percentage of the visual cortex is devoted to image recognition of the type these models perform, it’s also unclear how much such models match human-level performance on that task. For example:\n\n\n* DNNs are notoriously vulnerable to [adversarial examples](https://arxiv.org/pdf/1312.6199.pdf),[528](https://www.openphilanthropy.org/brain-computation-report#footnote528_arwcsaf \"See Serre (2019), section 5.2, for a review.\") some of which are naturally occurring.[529](https://www.openphilanthropy.org/brain-computation-report#footnote529_f9mehf4 \"Hendricks et al. (2020): “We introduce natural adversarial examples–real-world, unmodified, and naturally occurring examples that cause machine learning model performance to substantially degrade. We introduce two new datasets of natural adversarial examples. The first dataset contains 7,500 natural adversarial examples for ImageNet classifiers and serves as a hard ImageNet classier test set called IMAGENET-A. We also curate an adversarial out-of-distribution detection dataset called IMAGENET-O, which to our knowledge is the first out-of-distribution detection dataset created for ImageNet models. These two datasets provide new ways to measure model robustness and uncertainty. Like lp adversarial examples, our natural adversarial examples transfer to unseen black-box models. For example, on IMAGENET-A a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%, and its out-of-distribution detection performance on IMAGENET-O is near random chance levels. 
Popular training techniques for improving robustness have little effect, but some architectural changes provide mild improvements. Future research is required to enable generalization to natural adversarial examples” (p. 1).\") The extent to which humans are analogously vulnerable remains an open question.[530](https://www.openphilanthropy.org/brain-computation-report#footnote530_1m30nmy \"Elsayed et al. (2018): “Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. Here, we address this question by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by matching the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers” (p. 1). A full test of whether humans are comparably vulnerable to adversarial examples, though, might require the ability to access and manipulate the parameters of the human brain in the same manner that one can with an artificial neural network.\")\n* DNN image classifiers can generalize poorly to data sets they weren’t trained on. [Barbu et al. (2019)](https://objectnet.dev/objectnet-a-large-scale-bias-controlled-dataset-for-pushing-the-limits-of-object-recognition-models.pdf), for example, report a 40-45% drop in performance on the ObjectNet test set, constructed from real-world examples (though [Kolesnikov et al. (2020)](https://arxiv.org/pdf/1912.11370.pdf) recently improved the ObjectNet state of the art by 25%, reaching 80% top-five accuracy).[531](https://www.openphilanthropy.org/brain-computation-report#footnote531_11klhlt \"Barbu et al. (2019): “When tested on ObjectNet, object detectors show a 40-45% drop in performance, with respect to their performance on other benchmarks, due to the controls for biases. Controls make ObjectNet robust to fine-tuning showing only small performance increases” (p. 1).\") See figure below, and endnote, for some other examples.[532](https://www.openphilanthropy.org/brain-computation-report#footnote532_ieijx5l \"Geirhos et al. (2020) discusses a number of examples. Serre (2019), section 5.2, discusses various generalization failures. See also Recht et al. (2019): “We build new test sets for the CIFAR-10 and ImageNet datasets. Both benchmarks have been the focus of intense research for almost a decade, raising the danger of overfitting to excessively reused test sets. By closely following the original dataset creation processes, we test to what extent current classification models generalize to new data. We evaluate a broad range of models and find accuracy drops of 3% - 15% on CIFAR-10 and 11% - 14% on ImageNet. However, accuracy gains on the original test sets translate to larger gains on the new test sets. Our results suggest that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly \\\"harder\\\" images than those found in the original test sets” (p. 1); Lamb et al. 
(2019): “humans are able to watch cartoons, which are missing many visual details, without being explicitly trained to do so...We propose a dataset that will make it easier to study the detail-invariance problem concretely. We produce a concrete task for this: SketchTransfer, and we show that state-of-the-art domain transfer algorithms still struggle with this task. The state-of-the-art technique which achieves over 95% on MNIST −→ SVHN transfer only achieves 59% accuracy on the SketchTransfer task, which is much better than random (11% accuracy) but falls short of the 87% accuracy of a classifier trained directly on labeled sketches. This indicates that this task is approachable with today’s best methods but has substantial room for improvement” (p. 1); and Rosenfeld et al. (2018): “We showcase a family of common failures of state-of-the art object detectors. These are obtained by replacing image sub-regions by another sub-image that contains a trained object. We call this ‘object transplanting’. Modifying an image in this manner is shown to have a non-local impact on object detection. Slight changes in object position can affect its identity according to an object detector as well as that of other objects in the image. We provide some analysis and suggest possible reasons for the reported phenomena” (p. 1).\") \n\n\n[![GeirhosMachineVision.png](https://www.openphilanthropy.org/files/Research/Brain_Compute/image9.png)](https://www.openphilanthropy.org/files/Research/Brain_Compute/image9.png)**Figure 15: Examples of generalization failures**. From [Geirhos et al. (2020)](https://arxiv.org/pdf/2004.07780.pdf), Figure 3, p. 8, reprinted with permission, and unaltered. Original caption: “Both human and machine vision generalise, but they generalise very differently. Left: image pairs that belong to the same category for humans, but not for DNNs. Right: image pairs assigned to the same category by a variety of DNNs, but not by humans.”\n* The common [ILSVRC](https://www.kaggle.com/getting-started/149448) benchmark involves classifying images from 1000 categories. But humans can plausibly classify objects from more (much more?) than 10,000 categories, including very particular categories like “that one mug” or “the chair from the living room.”[533](https://www.openphilanthropy.org/brain-computation-report#footnote533_f6rk56p \"Jenkins et al. (2018) for example, found that “people know about 5000 faces on average” (p. 1) and Biederman (1987) estimates that people know 30,000 distinguishable object categories, though he treats this as “liberal” (e.g., on the high end). I have not attempted to evaluate his methodology, but at a glance it looks both loose and based on fairly substantive assumptions. Here is a relevant quote: “How many readily distinguishable objects do people know? How might one arrive at a liberal estimate for this value? One estimate can be obtained from the lexicon. There are less than 1,500 relatively common basic-level object categories, such as chairs and elephants. If we assume that this estimate is too small by a factor of 2, allowing for idiosyncratic categories and errors in the estimate, then we can assume potential classification into approximately 3,000 basic-level categories. RBC assumes that perception is based on a particular componential configuration rather than the basic-level category, so we need to estimate the mean number of readily distinguishable componential configurations per basic-level category. 
Almost all natural categories, such as elephants or giraffes, have one or only a few instances with differing componential descriptions. Dogs represent a rare exception for natural categories in that they have been bred to have considerable variation in their descriptions. Categories created by people vary in the number of allowable types, but this number often tends to be greater than the natural categories. Cups, typewriters, and lamps have just a few (in the case of cups) to perhaps 15 or more (in the case of lamps) readily discernible exemplars. Let us assume (liberally) that the mean number of types is 10. This would yield an estimate of 30,000 readily discriminable objects (3,000 categories × 10 types/category)” (p. 127). See also Open Philanthropy's non-verbatim notes from a conversation with Dr. Kate Storrs: “The question of how many categories humans can recognize is sort of impossible, because the concept of a category is fairly fuzzy, and it isn't rich enough to capture what human visual recognition involves. For example, you’ve probably seen tens of thousands of chairs over the course of your life. You were able to immediately recognize them as chairs, but you were also able to immediately see a large number of individuating properties. Indeed, one of the great powers of the visual system is that it arrives at a description that is flexible enough that you can then carve it up in whatever ways are behaviorally relevant. Looking at common nouns, and budgeting a certain number of instances of each (maybe 100 or 1000) as individually recognizable, might be one way to put a very rough number on the categories that humans can recognize.\\\" (p. 4).\") Indeed, it’s unclear to me, conceptually, how to draw the line between classifying an object (“house,” “dog,” “child”) and thinking/feeling/predicting (“house I’d like to live in,” “dog that I love,” “child in danger”).[534](https://www.openphilanthropy.org/brain-computation-report#footnote534_cai3nb2 \"Another example might be an image-classification task that involves classifying images into “funny” and “not funny” -- a task hardly limited in difficulty by the number of basic objects humans can identify. See Karpathy (2012) for discussion of all of the complex understanding that goes into appreciating a humorous picture: “the point here is that you’ve used a HUGE amount of information in that half second when you look at the picture and laugh. Information about the 3D structure of the scene, confounding visual elements like mirrors, identities of people, affordances and how people interact with objects, physics (how a particular instrument works, leaning and what that does), people, their tendency to be insecure about weight, you’ve reasoned about the situation from the point of view of the person on the scale, what he is aware of, what his intents are and what information is available to him, and you’ve reasoned about people reasoning about people. You’ve also thought about the dynamics of the scene and made guesses about how the situation will unfold in the next few seconds visually, how it will unfold in the thoughts of people involved, and you reasoned about how likely or unlikely it is for people of particular identity/status to carry out some action. 
Somehow all these things come together to ‘make sense’ of the scene.”\") That said, it’s possible that all of these categories draw on similar low-level visual features detected in early stages of processing.\n* The resolution of the human visual system may be finer than the resolution of typical ImageNet images. The optic nerve has roughly 1 million retinal ganglion cells that carry input from the retina, and the retina as a whole has about 100 million photoreceptor cells.[535](https://www.openphilanthropy.org/brain-computation-report#footnote535_2oprf14 \"Dr. Dario Amodei suggested this consideration. Sarpeshkar (2010) treats the retina as receiving 36Gb/s, and outputing 20 Mb/s (p. 749, he cites Koch et al. (2004)).\") A typical input to an image classifier is 224 × 224 × 3: ~150,000 input values (though some inputs are larger).[536](https://www.openphilanthropy.org/brain-computation-report#footnote536_mcnwhx7 \"See here: “224×224×3, a typical size for an image classifier.” See here for some example input sizes.\")\n\n\nThat said, DNNs may also be superior to the human visual system in ways. For example, [Geirhos et al. (2018)](https://arxiv.org/pdf/1706.06969.pdf) compared DNN and human performance at identifying objects presented for 200 ms, and found that DNNs outperformed humans by >5% classification accuracy on images from the training distribution (humans generally did better overall when the images were altered).[537](https://www.openphilanthropy.org/brain-computation-report#footnote537_5ri0n2k \"Geirhos et al. (2018): “Here we proposed a fair and psychophysically accurate way of comparing network and human performance on a number of object recognition tasks: measuring categorization accuracy for single-fixation, briefly presented (200 ms) and backward-masked images as a function of colour, contrast, uniform noise, and eidolon-type distortions. We find that DNNs outperform human observers by a significant margin for non-distorted, coloured images—the images the DNNs were specifically trained on… In comparison to human observers, we find the classification performance of three currently well-known DNNs trained on ImageNet—AlexNet, GoogLeNet and VGG-16—to decline rapidly with decreasing signal-to-noise ratio under image degradations like additive noise or eidolon-type distortions” (p. 14-17). See also Figures 2 and 3.\") And human vision is subject to its own illusions, blind spots, shortcuts, etc.[538](https://www.openphilanthropy.org/brain-computation-report#footnote538_hw1jw0b \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Kate Storrs: “On the other hand, a lot of our impression of the richness of human vision is illusory. For example, we don’t see crisply, or in color, in the periphery of our visual field. So perhaps biological vision uses its own shortcuts” (p. 2).\") And I certainly don’t know that many species of dog. Overall, though, the human advantages here seem more impressive to me.\n\n\nNote, also, that the question here is not whether DNNs are processing visual information exactly like humans do. For example, in order to qualify as human-level, the models don’t need to make the same sorts of mistakes humans do. What matters is high-level task performance.\n\n\n \n\n\n#### 3.2.3 Making up some numbers\n\n\nSuppose we forge ahead with a very loose functional method estimate, despite these uncertainties. 
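In outline, the arithmetic is simple: take the FLOP/s of a model that performs some vision task at roughly human level, multiply by whatever factor seems needed to close the remaining performance gap, and divide by the fraction of the brain’s information-processing capacity plausibly devoted to that task. Here is a minimal sketch of that calculation in Python (the function and argument names are mine; the example numbers are the “middle” estimates developed over the next few paragraphs and collected in Figure 16 below):

```python
def whole_brain_flop_s(model_flop_s, scale_up, brain_fraction):
    # model_flop_s: FLOP/s of an AI system performing the task at roughly
    #   human level (e.g., an EfficientNet-B2 run ten times per second).
    # scale_up: assumed factor increase needed to reach fully human-level
    #   performance on the task (number (2) below).
    # brain_fraction: assumed fraction of the brain's information-processing
    #   capacity devoted to the task (number (1) below, converted from a
    #   fraction of the visual cortex via neuron counts).
    return model_flop_s * scale_up / brain_fraction

# 'Middle' estimate from Figure 16: ~1e10 FLOP/s for the model, a 100x
# scale-up, and ~0.01%-0.1% of the brain's neurons.
print(whole_brain_flop_s(1e10, 100, 1e-3))  # ~1e15 FLOP/s
print(whole_brain_flop_s(1e10, 100, 1e-4))  # ~1e16 FLOP/s
```

The rest of this section is about choosing defensible values for these three inputs.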
What results?\n\n\nAn EfficientNet-B2, capable of a roughly human-level 95% top-five accuracy on ImageNet classification, takes 1e9 FLOPs for a forward pass – though note that if we assume sparse FLOPs (e.g., no costs for multiplying by or adding 0), as we did for the mechanistic method, this number would be lower;[539](https://www.openphilanthropy.org/brain-computation-report#footnote539_ab42pii \"This is a point suggested by Dr. Dario Amodei. The Cerebras whitepaper suggests that “50 to 98% of your multiplications are wasted” on non-sparse hardware (p. 5).\") and it might be possible to prune/compress the model further (though EfficientNet-B2 is already optimized to minimize FLOPs).[540](https://www.openphilanthropy.org/brain-computation-report#footnote540_w0c0bqu \"Ravi (2018): “For example, on ImageNet task, Learn2Compress achieves a model 22× smaller than Inception v3 baseline and 4× smaller than MobileNet v1 baseline with just 4.6-7% drop in accuracy. On CIFAR-10, jointly training multiple Learn2Compress models with shared parameters, takes only 10% more time than training a single Learn2Compress large model, but yields 3 compressed models that are upto 94× smaller in size and upto 27× faster with up to 36× lower cost and good prediction quality (90-95% top-1 accuracy).” See also Frankle and Carbin (2018): “Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy” (p. 1); and Lillicrap and Kording (2019): “From distillation techniques we know that networks trained on ImageNet, a popular 2012 machine learning benchmark that requires the classification of natural images, cannot readily be compressed to fewer than about 100k free parameters [13, 20, 32] (though see [35])” (p. 3). Note also that other models are less efficient than EfficientNet-B2. For example, a ResNet-101 requires ~1e10 FLOPs, and models that both identify and localize objects, that assign the pixels in each image to different objects, or that identify points of interest in a scene, can require more than 1e11 FLOPs per forward pass. See here for examples..\")\n\n\nHumans can recognize ~ten images per second (though the actual process of assigning labels to ImageNet images takes much longer).[541](https://www.openphilanthropy.org/brain-computation-report#footnote541_idrdw1c \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Won Mok Shim: “There is a fair amount of consensus in the field that the human visual system can recognize about ten images per second (e.g., one image per 100 ms). However, this doesn’t mean that it takes 100 ms to recognize an image. For example, you might be able to recognize an image shown very briefly (e.g., for less than 100 ms), but without sequences of other images before and afterwards” (p. 3). Trafton’s (2014) MIT news article suggests that 10 images per second has been suggested by previous studies. Potter et al. (2013), however, suggests that humans can at least do better than chance at images presented for only 13 ms: “The results of both experiments show that conceptual understanding can be achieved when a novel picture is presented as briefly as 13 ms and masked by other pictures” (p. 275, see also further discussion on p. 276); and Keysers et al. 
(2001) report that “macaque monkeys were presented with continuous rapid serial visual presentation (RSVP) sequences of unrelated naturalistic images at rates of 14--222 msec/image, while neurons that responded selectively to complex patterns (e.g., faces) were recorded in temporal cortex. Stimulus selectivity was preserved for 65% of these neurons even at surprisingly fast presentation rates (14 msec/image or 72 images/sec). Five human subjects were asked to detect or remember images under equivalent conditions. Their performance in both tasks was above chance at all rates (14--111 msec/image)”. That said, “better than chance” is too low a standard. Potter et al. (2013) also report that “a picture as brief as 20 ms is easy to see if it is followed by a blank visual field (e.g., Thorpe, Fize, and Marlot (1996))” (p. 270).\") If we ran EfficientNet-B2 ten times per second, this would require **~1e10 FLOP/s**.\n\n\nOn one estimate from 1995, V1 in humans has about 3e8 neurons.[542](https://www.openphilanthropy.org/brain-computation-report#footnote542_0qngcep \"Carandini (2012): “Thanks to high neuronal density and large area, V1 contains a vast number of neurons. In humans, it contains about 140 million neurons per hemisphere (Wandell, 1995), i.e. about 40 V1 neurons per LGN neuron” (from the introduction).\") However, based on more recent estimates in chimpanzees, I think this estimate might be low, possibly by an order of magnitude (see endnote for explanation).[543](https://www.openphilanthropy.org/brain-computation-report#footnote543_zdqzfkq \"For example, one recent estimate by Miller et al. (2014), using better methods, finds 675 million neurons for chimpanzee V1 as a whole. Another -- Collins et al. (2016) -- finds 737 million neurons in just one chimpanzee V1 hemisphere, suggesting ~1.4 billion in V1 as a whole. The human cortex has ~2× the neurons of the chimpanzee cortex, suggesting something like 1-3 billion for human V1. Mora-Bermúdez et al. (2016): “The human brain is about three times as big as the brain of our closest living relative, the chimpanzee. Moreover, a part of the brain called the cerebral cortex – which plays a key role in memory, attention, awareness and thought – contains twice as many cells in humans as the same region in chimpanzees.”\") I’ll use 3e8-3e9 – e.g., ~0.3%-3% of the brain’s neurons.\n\n\nOn an initial search, I haven’t been able to find good sources for neuron count in the visual cortex as a whole, which includes areas V2-V5.[544](https://www.openphilanthropy.org/brain-computation-report#footnote544_wszhr3g \"Though Collins et al. (2016) find ~400 million in one hemisphere of chimpanzee V2, suggesting 800 million for chimp V2 as a whole, and 1.6 billion for human V2, if we assume similar ratios in the cortex.\") I’ll use 1e9-1e10 neurons – e.g., ~1-10% of the brain’s neurons as a whole – but this is just a ballpark.[545](https://www.openphilanthropy.org/brain-computation-report#footnote545_ti34lk7 \"The high-end here is more than half of the neurons in the cortex as a whole (~16 billion neurons, according to Azevedo et al. (2016), p. 536), which seems high to me, based on eyeballing pictures of the visual cortex. That said, neuron density in primate visual cortex appears to be unusually high (see Collins et al. 
(2016): “the packing densities of neurons in V1 were 1.2, 2.1, 3.3, and 3.5 times greater than neuron densities in secondary visual cortex (V2) and somatosensory, motor, and premotor cortices, respectively” (“Visual areas of the cortex”), numbers in this range do seem to fall out of extrapolation from the chimpanzee data, and ~50% of the cortex is compatible with comments from Prof. Konrad Kording to the effect that ~half of the brain’s hardware is involved in processing vision in some way. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording: “The human brain dedicates roughly half of its hardware to processing vision (this can be seen by looking at diagrams created by David Van Essen). And we can solve a lot of the vision problem (e.g., detecting objects, segmenting scenes, storing information) using very modest compute” (p. 1).\")\n\n\nIf we focused on percentage of volume, weight, energy consumption, and synapses, the relevant percentages might be larger (since the cortex accounts for a larger percentage of these than of the brain’s neurons).[546](https://www.openphilanthropy.org/brain-computation-report#footnote546_8a08rmd \"See my discussion of the cerebellum in Section 2.4.2.3. Though note that neuron densities in V1 are especially high. See Collins et al. (2016): “the packing densities of neurons in V1 were 1.2, 2.1, 3.3, and 3.5 times greater than neuron densities in secondary visual cortex (V2) and somatosensory, motor, and premotor cortices, respectively” (“Visual areas of the cortex”).\")\n\n\nWe can distill the other uncertainties from 3.2.1 and 3.2.2 into two numbers:\n\n\n1. The percentage of its information-processing capacity that the visual cortex devotes to tasks analogous to image classification, when it performs them.\n2. The factor increase in FLOP/s required to reach human-level performance on this task (if any), relative to the FLOP/s costs of an EfficientNet-B2 run 10 times per second.\n\n\nAbsent a specific chunk of the visual cortex devoted exclusively to this task, the percentage in (1) does not have an obvious physiological interpretation in terms of e.g. volume or number of neurons.[547](https://www.openphilanthropy.org/brain-computation-report#footnote547_urskua0 \"One could also ask questions like: “how many fewer neurons could this region have/how much less energy could it use, if evolution got to rebuild it from scratch, without needing to do task X, but still needing to do everything else it does?” But these are hard to answer.\") Still, something like percentage of spikes or of signaling-based energy consumption driven by performing the task might be a loose guide.[548](https://www.openphilanthropy.org/brain-computation-report#footnote548_2kqm5fu \"Drexler (2019) appears to have something like this in mind: “A key concept in the following will be “immediate neural activity” (INA), an informal measure of potentially task-applicable brain activity. As a measure of current neural activity potentially applicable to task performance, INA is to be interpreted in an abstract, information-processing sense that conceptually excludes the formation of long-term memories (as discussed below, human and machine learning are currently organized in fundamentally different ways)” (p. 183-184)\")\n\n\nOf course, the resources that a brain uses in performing a task are not always indicative of the FLOP/s the task requires. 
Multiplying two 32-bit numbers in your head, for example, uses lots of spikes, energy, etc., but requires only one FLOP. And naively, it seems unlikely that the neural resources used in playing e.g. Tic-Tac-Toe, Checkers, Chess, and Go will be a simple function of the FLOP/s that have thus far been found necessary to match human-level performance. However, the brain was not optimized to multiply large numbers or play board games. Identifying visual objects (e.g. predators, food) seems like a better test of its computational potential.[549](https://www.openphilanthropy.org/brain-computation-report#footnote549_m937gu1 \"My thanks to Dr. Eric Drexler for discussion.\")\n\n\nCan we say anything about (1)? Obviously, it’s difficult. The variance in the activity in the visual cortex explained by DNN image classifiers might provide some quantitative anchor (this appears to be at least 7% in V1, and possibly much higher in other regions), but I haven’t explored this much.[550](https://www.openphilanthropy.org/brain-computation-report#footnote550_s6wq7wn \"Here’s one loose attempt to estimate (1). Following the data in Cadena et al. (2019), suppose that for half of the neurons in V1, ~28% of the variance is explained by the visual stimulus, and ~50% of that can be explained by networks trained on object recognition. To be conservative, let’s assume that none of the variance in the activity of the other half of V1 neurons is explained by visual stimuli at all. This would suggest that at least 7% of variance in V1 neural activity overall can be explained by such models (here I’m following a version of the methodology in Olshausen and Field (2005), who suggest that “If we consider that roughly 40% of the population of neurons in V1 has actually been recorded from and characterized, together with our conjecture that 30% to 40% of the response variance of these neurons can be explained under natural conditions using the currently established models, then we are left to conclude that we can currently account for 12% to 16% of V1 function. Thus, approximately 85% of V1 function has yet to be explained” (p.  Higher estimates could incorporate all the data listed on http://www.brain-score.org/, which I haven’t tried to interpret, but which appears to suggest a substantial amount of variance explained. From Schrimpf et al. (2018): “The best ANN model V4 predictivity score is 0.663, which is below the internal consistency ceiling of these V4 data (0.892). The best ANN model IT predictivity score is 0.604, which is below the internal consistency ceiling of these IT data (0.817). And the best ANN model behavioral predictivity score is 0.378, which is below the internal consistency ceiling of these behavioral data (0.497)” (p. 7). See also Storrs et al. (2020): “We find that trained models significantly outperform untrained models (accounting for 57% more of the explainable variance), suggesting that features representing natural images are important for explaining hIT. 
Model fitting further improves the alignment of DNN and hIT representations (by 124%), suggesting that the relative prevalence of different features in hIT does not readily emerge from the particular ImageNet object-recognition task used to train the networks” (abstract).\") Still, to the extent (1) makes sense at all, it should be macroscopic enough to explain the results discussed at the beginning of this section (e.g., it should make interesting parallels between the feature detection in DNNs and the visual cortex noticeable using tools like fMRI and spike recordings), along with other modeling successes in visual neuroscience I haven’t explored.[551](https://www.openphilanthropy.org/brain-computation-report#footnote551_n9q9pai \"See e.g. Open Philanthropy's non-verbatim notes from a conversation with Dr. Kate Storrs: “In Dr. Storrs’ area of neuroscience, there can be a narrative to the effect that: “the early visual system is basically done. We understand the canonical computations: e.g., edge, orientation and color selection. You link them up with local exhibition and inhibition, and you have feedback that probably has some kind of predictive function (e.g., you get less and less response from V1 neurons to a predictable stimulus, suggesting that feedback is creating some kind of short-term memory). Once you’ve got all of this, you can explain most of V1 activity.” (This is not necessarily Dr. Storrs’ view; it’s just a summary of a common narrative.)” (p. 3).\") **I’ll use 1% of V1 as a low end,[552](https://www.openphilanthropy.org/brain-computation-report#footnote552_bbi4mzf \"Open Philanthropy’s technical advisor, Dr. Dario Amodei, suggests that V1 might be a helpful point of focus (ImageNet models plausibly cover functions in other parts of the visual cortex, but he suggests that basing estimates on V1 is conservative).\") and 10% of the visual cortex as a whole as a high end, with 1% of the visual cortex as a rough middle**.\n\n\nMy biggest hesitation about these numbers comes from the conceptual ambiguities involved in estimating this type of parameter at all. Consider: “what fraction of a horse’s legs does a wheelbarrow automate?”[553](https://www.openphilanthropy.org/brain-computation-report#footnote553_g62s4sp \"This is a variant on an analogy suggested by Nick Beckstead.\") It’s not clear that “of course it’s hard to say precisely, but surely at least a millionth, right?” is a sensible answer – and the problem isn’t that the true answer is a billionth instead. It seems possible that comparisons between DNNs and the visual cortex are similar.\n\n\nWe also need to scale up the size of the DNN in question by (2), to reflect the FLOPs costs of fully human-level image classification. What is (2)? I haven’t looked into it much, and I feel very uncertain. Some of the differences discussed in 3.2.2 – for example, differences in input size, or in number of categories (assuming we can settle on a meaningful estimate for the number of categories humans can recognize) – might be relatively easy to adjust for.[554](https://www.openphilanthropy.org/brain-computation-report#footnote554_adfs930 \"For example, FLOPs scaling for bigger inputs appears to be roughly linear: see e.g. here. Dr. 
Dario Amodei also suggested linear scaling for bigger inputs as a conservative adjustment.\") But others, such as the FLOPs required to run models that are only as vulnerable to adversarial examples as humans are, or that can generalize as well as humans can, might involve much more involved and difficult extrapolations.\n\n\nI’m not going to explore these adjustments in detail here. Here are a few possible factors:\n\n\n* **10x** (150k input values vs. ~1 million retinal ganglion cells)\n* **100x** (~factor increase in EfficientNet-B2 FLOPs required to run a [BiT-L model](https://arxiv.org/pdf/1912.11370.pdf), which exhibits better, though still imperfect, generalization to real-world datasets like ObjectNet).[555](https://www.openphilanthropy.org/brain-computation-report#footnote555_5nnim6g \"Kolesnikov et al. (2020): “All of our BiT models use a vanilla ResNet-v2 architecture [16], except that we replace all Batch Normalization [21] layers with Group Normalization [60] and use Weight Standardization [43] in all convolutional layers. See Section 4.3 for analysis. We train ResNet-152 architectures in all datasets, with every hidden layer widened by a factor of four (ResNet152×4).” A ResNet-152 is 1e10 FLOPs for a forward pass, and my understanding is widening every hidden layer by a factor of four results in a ~16× increase in overall FLOPs, suggesting ~2e11 FLOPs.\")\n* **1000x** (10x on top of a Bit-L model, for additional improvements. I basically just pulled this number out of thin air, and it’s by no means an upper bound).\n\n\nPutting these estimates for (1) and (2) together:\n\n\n\n\n\n| **ESTIMATE TYPE** | **ASSUMED PERCENTAGE OF VISUAL CORTEX INFORMATION-PROCESSING CAPACITY USED FOR TASKS ANALOGOUS TO IMAGE CLASSIFICATION, WHEN PERFORMED** | **IMPLIED PERCENTAGE OF THE WHOLE BRAIN’S CAPACITY (BASED ON NEURON COUNT)** | **ASSUMED FACTOR INCREASE IN 10 HZ EFFICIENTNET-B2 FLOP/S (1E10) REQUIRED TO REACH FULLY HUMAN-LEVEL IMAGE CLASSIFICATION** | **WHOLE BRAIN FLOP/S ESTIMATE RESULTING FROM THESE ASSUMPTIONS** |\n| --- | --- | --- | --- | --- |\n| Low-end | 10% | 0.1%-1% | 10x | 1e13-1e14 |\n| Middle | 1% | 0.01%-0.1% | 100x | 1e15-1e16 |\n| High-end | 0.3% (1% of V1) | 0.003%-0.03% | 1000x | 3e16-3e17 |\n**Figure 16: Functional method estimates based on the visual cortex.**\n\nObviously, the numbers for (1) and (2) here are very made-up. The question of how high (2) could go, for example, seems very salient. And the conceptual ambiguities involved in comparing what the human visual system is doing when it classifies an image, vs. what a DNN is doing, caution against relying on what might appear to be conservative bounds.\n\n\nWhat’s more, glancing at different models, image classification (that is, assigning labels to whole images) appears to require fewer FLOPs than other vision tasks in deep learning, such as object detection (that is, identifying and localizing multiple objects in an image). For example: an [EfficientDet-D7](https://arxiv.org/pdf/1911.09070v6.pdf), a [close to state of the art object-detection](https://paperswithcode.com/sota/object-detection-on-coco) model optimized for efficiency, uses 3e11 FLOPs per forward pass – 300x more than an EfficientNet-B2.[556](https://www.openphilanthropy.org/brain-computation-report#footnote556_46p67zy \"Tan et al. 
(2020): “In particular, with single-model and single test-time scale, our EfficientDet-D7 achieves state-of-the-art 53.7 AP with 52M parameters and 325B FLOPs, outperforming previous best detector [44] with 1.5 AP while being 4× smaller and using 13× fewer FLOPs” (p. 2).\") So using this sort of model as a baseline instead could add a few orders of magnitude. And such a choice would raise its own questions about what human-level performance on the relevant task looks like.\n\n\nOverall, I hold functional method estimates based on current DNN vision models very lightly – even more lightly, for example, than the mechanistic method estimates above. Still, I don’t think them entirely uninformative. For example, it is at least interesting to me that you need to treat an EfficientNet-B2 as running on e.g. ~0.1% of the FLOPs of a model that would cover ~1% of V1, in order to get whole brain estimates substantially above 1e17 FLOP/s – the top end of the mechanistic method range I discussed above. This weakly suggests to me that such a range is not way too low.\n\n\n#### 3.3 Other functional method estimates\n\n\nThere are various other functional method estimates in the literature. Here are three:[557](https://www.openphilanthropy.org/brain-computation-report#footnote557_in5pju0 \"Others not included in the chart include Kurzweil’s (2012) estimate for “pattern recognition”: “emulating one cycle in a single pattern recognizer in the biological brain’s neocortex would require about 3,000 calculations. Most simulations run at a fraction of this estimate. With the brain running at about 10^2 (100) cycles per second, that comes to 3 × 10^5 (300,000) calculations per second per pattern recognizer. Using my estimate of 3 × 10^8 (300 million) pattern recognizers, we get about 10^14 (100 trillion) calculations per second” (p. 195). Kurzweil (2005) also suggests that “Yet another estimate comes from a simulation at the University of Texas that represents the functionality of a cerebellum region containing 10^4 neurons; this required about 10^8 cps, or about 10^4 cps per neuron. Extrapolating this over an estimated 10^11 neurons results in a figure of about 10^15 cps for the entire brain” (p. 123).\")\n\n\n\n\n| **SOURCE** | **TASK** | **ARTIFICIAL SYSTEM** | **COSTS OF HUMAN-LEVEL PERFORMANCE** | **ESTIMATED PORTION OF BRAIN** | **RESULTING ESTIMATE FOR WHOLE BRAIN** |\n| --- | --- | --- | --- | --- | --- |\n| [Drexler (2019)](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf)[558](https://www.openphilanthropy.org/brain-computation-report#footnote558_ddodguw \"Drexler (2019): “Baidu’s Deep Speech 2 system can approach or exceed human accuracy in recognizing and transcribing spoken English and Mandarin, and would require approximately 1 GFLOP/s per real-time speech stream (Amodei et al. 2015). For this roughly human-level throughput, f_PFLOP = 10^−6 [f_PFLOP is the fraction of a petaFLOP that a given number of FLOPs represents]. Turning to neural function again, consider that task-relevant auditory/semantic cortex probably comprises >1% of the human brain. If the equivalent of the Deep Speech 2 speech-recognition task were to require 10% of that cortex, then f_INA = 10^−3, and R_PFLOP = 1000 [R_PFLOP is the ratio of the fraction of the brain’s activity that a task represents, to the fraction of a petaFLOP that the compute to perform that task represents]” (p. 187). Dr. 
Dario Amodei also suggested an estimate in this vein.\") | Speech recognition | DeepSpeech2 | 1e9 FLOP/s | >0.1% | 1e12 FLOP/s |\n| [Drexler (2019)](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf)[559](https://www.openphilanthropy.org/brain-computation-report#footnote559_fmwa1ai \"Drexler (2019): “Google’s neural machine translation (NMT) systems have reportedly approached human quality (Wu et al. 2016). A multi-lingual version of the Google NMT model (which operates with the same resources) bridges language pairs through a seemingly language-independent representation of sentence meaning (Johnson et al. 2016), suggesting substantial (though unquantifiable) semantic depth in the intermediate processing. Performing translation at a human-like rate of one sentence per second would require approximately 100 GFLOP/s, and fPFLOP = 10−4. It is plausible that (to the extent that such things can be distinguished) human beings mobilize as much as 1% of global INA at an “NMT task-level”— involving vocabulary, syntax, and idiom, but not broader understanding— when performing language translation. If so, then for “NMT-equivalent translation,” we can propose fINA = 10−2, implying RPFLOP = 100” (p. 187-188).\") | Translation | Google Neural Machine Translation | 1e11 FLOP/s (1 sentence per second) | 1% | 1e13 FLOP/s |\n| [Kurzweil (2005)](https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook/dp/B000QCSA7C)[560](https://www.openphilanthropy.org/brain-computation-report#footnote560_d6p7272 \"Kurzweil (2005): “Another estimate comes from the work of Lloyd Watts and his colleagues on creating functional simulations of regions of the human auditory system, which I discuss further in chapter 4. One of the functions of the software Watts has developed is a task called “stream separation,” which is used in teleconferencing and other applications to achieve telepresence (the localization of each participant in a remote audio teleconference). To accomplish this, Watts explains, means ‘precisely measuring the time delay between sound sensors that are separated in space and that both receive the sound.’ The process involves pitch analysis, spatial position, and speech cues, including language-specific cues. ‘One of the important cues used by humans for localizing the position of a sound source is the Interaural Time Difference (ITD), that is, the difference in time of arrival of sounds at the two ears.’ Watts’s own group has created functionally equivalent re-creations of these brain regions derived from reverse engineering. He estimates that 1011 cps are required to achieve human-level localization of sounds. The auditory cortex regions responsible for this processing comprise at least 0.1 percent of the brain’s neurons. So we again arrive at a ballpark estimate of around 1014 cps (1011 cps × 103)” (p. 123).\") | Sound localization | Work by Lloyd Watts | 1e11 calculations/s | 0.1% | 1e14 calculations/s |\n**Figure 17: Other functional method estimates in the literature.**\n\nI haven’t attempted to vet these estimates. And we can imagine others. Possibly instructive recent work includes:\n\n\n* [Kell et al. (2018)](http://mcdermottlab.mit.edu/papers/Kell_etal_2018_DNN_auditory_cortex.pdf), who suggest that ANNs trained to recognize sounds can predict neural activity in the cortex.[561](https://www.openphilanthropy.org/brain-computation-report#footnote561_yuo70im \"Kell et al. 
(2018): “...we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy—primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems” (p. 630).\")\n* [Banino et al. (2018)](https://www.nature.com/articles/s41586-018-0102-6) and [Cueva and Wei (2018)](https://arxiv.org/pdf/1803.07770.pdf), who suggest that ANNs trained on navigation tasks develop grid-like representations, akin to [grid cells](https://en.wikipedia.org/wiki/Grid_cell#:~:text=A%20grid%20cell%20is%20a,location%2C%20distance%2C%20and%20direction.) in biological circuits.[562](https://www.openphilanthropy.org/brain-computation-report#footnote562_i5lm81p \"Banino et al. (2018): “Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space7,8 and is critical for integrating self-motion (path integration)6,7,9 and planning direct trajectories to goals (vector-based navigation)7,10,11. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities… Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation 7,10,11, demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments” (abstract). Cueva and Wei (2018): “we trained recurrent neural networks (RNNs) to perform navigation tasks in 2D arenas based on velocity inputs. Surprisingly, we find that grid-like spatial response patterns emerge in trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells. All these different functional types of neurons have been observed experimentally. The order of the emergence of grid-like and border cells is also consistent with observations from developmental studies. Together, our results suggest that grid cells, border cells and others as observed in EC may be a natural solution for representing space efficiently given the predominant recurrent connections in the neural circuits” (p. 1).\")\n* [Merel et al. (2020)](https://openreview.net/forum?id=SyxrxR4KPS), who develop a virtual rodent, which might allow productive comparison with the capabilities of a real rodent.[563](https://www.openphilanthropy.org/brain-computation-report#footnote563_elckyzk \"Merel et al. (2020): “In this work we develop a virtual rodent that learns to flexibly apply a broad motor repertoire, including righting, running, leaping and rearing, to solve multiple tasks in a simulated world. 
We analyze the artificial neural mechanisms underlying the virtual rodent’s motor capabilities using a neuroethological approach, where we characterize neural activity patterns relative to the rodent’s behavior and goals. We show that the rodent solves tasks by using a shared set of force patterns that are orchestrated into task-specific behaviors over longer timescales. Through methods familiar to neuroscientists, including representational similarity analysis, dimensionality reduction techniques, and targeted perturbations, we show that the networks produce these behaviors using at least two classes of behavioral representations, one that explicitly encodes behavioral kinematics in a task-invariant manner, and a second that encodes task-specific behavioral strategies. Overall, the virtual rat promises to facilitate grounded collaborations between deep reinforcement learning and motor neuroscience” (p. 1).\")\n\n\nThat said, I expect other functional method estimates to encounter difficulties analogous to those discussed in section 3.2: e.g., difficulties identifying (a) the percentage of the brain’s capacity devoted to a given task, (b) what human-level performance looks like, and (c) the FLOP/s sufficient to match this level.\n\n\n\n4 The limit method\n------------------\n\n\nLet’s turn to a third method, which attempts to upper bound required FLOP/s by appealing to physical limits.\n\n\nSome such bounds are too high to be helpful. [Lloyd (2000)](https://arxiv.org/pdf/quant-ph/9908043.pdf), for example, calculates that a 1 kg, 1 liter laptop (the brain is roughly [1.5 kg and 1.5 liters](https://en.wikipedia.org/wiki/Brain_size)) can perform a maximum of 5e50 operations per second, and store a maximum of 1e31 bits. Its memory, though, “looks like a thermonuclear explosion.”[564](https://www.openphilanthropy.org/brain-computation-report#footnote564_ozsru9o \"Lloyd (2000): “The amount of information that can be stored by the ultimate laptop, ≈ 1031 bits, is much higher than the ≈ 1010 bits stored on current laptops. This is because conventional laptops use many degrees of freedom to store a bit where the ultimate laptop uses just one. There are considerable advantages to using many degrees of freedom to store information, stability and controllability being perhaps the most important. Indeed, as the above calculation indicates, in order to take full advantage of the memory space available, the ultimate laptop must turn all its matter into energy. A typical state of the ultimate laptop’s memory looks like a plasma at a billion degrees Kelvin: the laptop’s memory looks like a thermonuclear explosion or a little piece of the Big Bang! Clearly, packaging issues alone make it unlikely that this limit can be obtained, even setting aside the difficulties of stability and control” (p. 11).\") For present purposes, such idealizations aren’t informative.\n\n\nOther physical limits, though, might be more so. I’ll focus on “[Landauer’s principle](https://en.wikipedia.org/wiki/Landauer%27s_principle),” which specifies the minimum energy costs of erasing bits (more description below). 
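As a rough preview of the arithmetic developed in Sections 4.1.2 and 4.2 below, here is a minimal back-of-the-envelope sketch (the ~20W power budget and ~310 K temperature are the figures discussed there; the sketch is illustrative only, not a rigorous derivation):

```python
import math

# Minimal sketch: Landauer's principle says that erasing one bit transfers
# at least k*T*ln(2) joules of energy to the environment.
k = 1.380649e-23   # Boltzmann's constant, J/K
T = 310.0          # approximate brain/body temperature, K (see Section 4.1.2)
power = 20.0       # high-end estimate of the brain's power budget, W (J/s)

energy_per_bit = k * T * math.log(2)          # ~3e-21 J per bit erased
bit_erasures_per_s = power / energy_per_bit   # ~7e21 bit-erasures per second

print(f"Minimum energy per bit-erasure at 310 K: {energy_per_bit:.1e} J")
print(f"Implied cap on bit-erasures/s at 20 W:   {bit_erasures_per_s:.1e}")
```

Running this gives roughly 3e-21 J per bit erased and ~7e21 bit-erasures per second, the figures used in what follows.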
Standard FLOPs (that is, the FLOPs performed by human-engineered computers) erase bits, which means that an idealized computer running on the brain’s energy budget (~20W) can only perform so many standard FLOP/s: specifically, ~7e21 (~1e21 if we assume 8-bit FLOPs, and ~1e19 if we assume current digital multiplier implementations).[565](https://www.openphilanthropy.org/brain-computation-report#footnote565_l5f0pu7 \"See calculations in Section 4.2.\")\n\n\nDoes this upper bound the FLOP/s required to match the brain’s task-performance? In principle, no. The brain need not be performing operations that resemble standard FLOPs, and more generally, bit-erasures are not a universal currency of computational complexity.[566](https://www.openphilanthropy.org/brain-computation-report#footnote566_o5iyyua \"My thanks to Prof. David Wallace for discussion.\") In theory, for example, factorizing a semiprime requires no bit-erasures, since the mapping from inputs to outputs is 1-1.[567](https://www.openphilanthropy.org/brain-computation-report#footnote567_wlq2xsb \"My thanks to Prof. David Wallace for suggesting this example.\") But we’d need many FLOPs to do it. Indeed, in principle, it appears possible to perform arbitrarily complicated computations with very few bit erasures, with manageable algorithmic overheads (though there is at least some ongoing controversy about this).[568](https://www.openphilanthropy.org/brain-computation-report#footnote568_lil196i \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “The algorithmic overhead involved in reversible computing (specifically, the overhead involved in un-computing what you have already computed) is not that bad. Most of the difficulty lies in designing such efficient hardware. Partly for this reason, Dr. Christiano does not think that you can get an upper bound on the FLOP/s required to do what the brain does, purely by appealing to the energy required to erase bits. We believe that you can perform extremely complex computations with almost no bit erasures using good enough hardware” (p. 4). For discussion of some ongoing controversy related to the bit-erasures involved in reading/writing inputs and outputs repeatedly, see Wolpert (2019), Open Philanthropy's non-verbatim notes from a conversation with Prof. David Wolpert (p. 2), and Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel (p. 5).\")\n\n\nAbsent a simple upper bound, then, the question is what we can say about the following quantity:\n\n\n\n> FLOP/s required to match the brain’s task performance ÷ bit-erasures/s in the brain\n> \n> \n\n\nVarious experts I spoke to about the limit method (though not all[569](https://www.openphilanthropy.org/brain-computation-report#footnote569_s5x6y45 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Michael Frank (p. 2): Dr. Frank thinks that it is possible that there are processes in the brain that are close to thermodynamically reversible, and that play a role in computation. We don’t know enough about the brain to answer confidently either way...We don’t have positive evidence that such reversible effects exist and are important to cognition, but we also don’t have positive evidence that rules this out. However, Dr. Frank thinks that it’s a reasonable first-order assumption to assume that those effects, if they exist, would only have a small, second-order effect on the amount of computational work required to simulate the system. 
If these effects are there, they may be fairly subtle and gradual, acting in a long-term way on the brain, in a manner we are not close to understanding…Overall, Dr. Frank would lean weakly towards the view that you could make a digital model of cognition without including any subtle reversible processes, but because he is not an expert on the neural computation, he would not bet confidently one way or another. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Stephen Larson (p. 4): Dr. Larson is not persuaded that Landauer’s limit can be used to upper-bound the FLOP/s necessary to replicate the brain’s task-performance, as it seems possible to him that there could be computational processes occurring in the brain that do not require bit-erasures. Prof. David Wallace was also skeptical that Landauer's principle could be used to generate an informative upper bound on required FLOP/s.\")) thought it likely that this quantity is less than 1 – indeed, multiple orders of magnitude less.[570](https://www.openphilanthropy.org/brain-computation-report#footnote570_eo2siag \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Jared Kaplan (p. 2): Mr. Carlsmith asked Prof. Kaplan’s opinion of the following type of upper bound on the compute required to replicate the brain’s task-performance. According to Landauer’s principle, the brain, given its energy budget (~20 W) can be performing no more than ~1e22 bit-erasures per second. And if the brain is performing less than 1e22 bit-erasures per second, the number of FLOP/s required to replicate its task-performance is unlikely to exceed 1e22. Prof. Kaplan thinks that this type of calculation provides a very reasonable loose upper bound on the computation performed by the brain, and that the actual amount of computation performed by the brain is almost certainly many orders of magnitude below this bound. Indeed, he thinks the true number is so obviously much lower than this that Landauer’s principle does not initially seem particularly germane to questions about brain computation. One analogy might be attempting to upper bound the number of fraudulent votes in a US presidential election via the total population of the world. However, he thinks that upper bounds based on Landauer’s principle are a helpful counter to views on which ‘we really just don’t know’ how much computation the brain performs, or on which doing what the brain does requires the type of compute that would be implicated by very detailed biophysical simulations. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel (p. 2-3): Dr. Riedel is very convinced by the claim that because of Landauer’s principle, the brain can be implementing no more than ~1e22 bit-erasures per second. And he also thinks it very reasonable to infer from this that the brain’s task performance can be replicated using less than 1e22 FLOP/s, conditional on the assumption that the brain’s computation is well-characterized as digital and/or analog computation that can be simulated on a digital computer with modest overhead (he assigns some small probability to this assumption being false, though he would find its falsehood fairly shocking). Indeed, Dr. Riedel expects the amount of computation performed by the brain to be much lower than the upper bound implied by Landauer’s principle. 
This is partly because, from a basic physics perspective, the vast majority of what’s going on in the brain (e.g., cell maintenance, other thermodynamic processes inside cells) generates entropy but has nothing to do with the computations that are happening. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano (p. 5): Dr. Christiano expects that experts in physics, chemistry, and computer engineering would generally think it extremely unlikely that the brain is erasing less than one bit per computationally useful FLOP it performs. If the brain were doing this, Dr. Christiano believes that this would mean that the brain is qualitatively much more impressive than any other biological machinery we are aware of…Dr. Christiano would be extremely surprised if the brain got more computational mileage out of a single ATP than human engineers can get out of a FLOP, and he would be very willing to bet that it takes at least 10 ATPs to get the equivalent of a FLOP. Mr. Carlsmith estimates that the brain can be using no more than ~1e20 ATPs/second. If this estimate is right, then Dr. Christiano is very confident that you do not need more than 1e20 FLOP/s to replicate the brain’s task-performance. See also Open Philanthropy's non-verbatim notes from a conversation with Prof. David Wolpert (p. 3-4) for more discussion, though with less of an obvious upshot: Mr. Carlsmith asked Prof. Wolpert whether one can use Landauer’s principle to upper bound the FLOP/s required to replicate the human brain’s task-performance… In Prof. Wolpert’s view, it is a subtle and interesting question how to do this type of calculation correctly. A rigorous version would require a large research project… Prof. Wolpert thinks that this calculation is legitimate as a first-pass, back-of-the-envelope upper bound on the bit-erasures that the brain could be implementing. It couldn’t get published in a physics journal, but it might get published in a popular science journal, and it helps get the conversation started. At a minimum, it’s a strong concern that advocates of extreme amounts of computational complexity in the brain (for example, advocates of the view that you need much more than 1e22 FLOP/s to replicate the brain’s computation) would need to address.\") They gave various arguments, which I’ll roughly group into (a) algorithmic arguments ([Section 4.2.1](#section_4.2.1)), and (b) hardware arguments ([Section 4.2.2](#section_4.2.2)). Of these, the hardware arguments seem to me stronger, but they also don’t seem to me to rely very directly on Landauer’s principle in particular.\n\n\nWhether the bound in question emerges primarily from Landauer’s principle or not, though, I’m inclined to defer to the judgment of these experts overall.[571](https://www.openphilanthropy.org/brain-computation-report#footnote571_8uht8yk \"This deference is not merely the result of tallying up the amount of expert support for different perspectives: it incorporates many more subjective factors involved in my evaluation of the overall evidence provided by the expert opinions I was exposed to.\") And even if their arguments do not treat the brain entirely as a black box, a number of the considerations these arguments appeal to seem to apply in scenarios where more specific assumptions employed by other methods are incorrect. This makes them an independent source of evidence.\n\n\nNote, as well, that e.g. 1e21 FLOP/s isn’t too far from some of the numbers that have come up in previous sections.
And some experts either take numbers in this range or higher seriously, or are agnostic about them.[572](https://www.openphilanthropy.org/brain-computation-report#footnote572_thl6qyw \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Konrad Kording: “Examination of neurons reveals that they are actually very non-linear, and the computations involved in plasticity probably include a large number of factors distributed across the cell. In this sense, a neuron might be equivalent to a three-layer neural network, internally trained using backpropagation. In that case, you’d need to add another factor of roughly 10^5 to your compute estimate, for a total of 10^20 multiplications per second. This would be much less manageable. … The difference between the estimates generated by these different approaches is very large -- something like ten orders of magnitude. It’s unclear where the brain is on that spectrum” (p. 2). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eric Jonas: “Attempting to estimate the compute sufficient to replicate the brain’s task performance is an extremely challenging project. It’s worthwhile (indeed, it’s a common thought experiment amongst neuroscientists), but the error bars will be huge (e.g., something like ten orders of magnitude) ... Active dendritic computation could conceivably imply something like 1-5 orders of magnitude more compute than a simple linear summation model of a neuron” (p. 3). If a simple linear summation model implies ~1e13-1e15 FLOP/s -- e.g., ~1 FLOP per spike through synapse -- this would suggest a range of 1e13-1e20 FLOP/s. From Open Philanthropy's non-verbatim notes from a conversation with Prof. Erik De Schutter: “Prof. De Schutter thinks that at this point, we simply are not in a position to place any limits on the level of biological detail that might be relevant to replicating the brain’s task-performance” (p. 1). Sandberg and Bostrom (2008) (p. 13) report that in an informal poll of attendees at a conference about the required level of resolution for whole brain emulation, the consensus appeared to be one of the following three levels: “Spiking neural network,” which Sandberg and Bostrom estimate would require 1e18 FLOP/s; “Electrophysiology,” which Sandberg and Bostrom estimate would require 1e22 FLOP/s; and “Metabolome,” which Sandberg and Bostrom estimate would require 1e25 FLOP/s; Henry Markram, in a 2018 video (18:28), estimates the FLOP/s burdens of running a “real-time molecular simulation of the human-brain” at 4e29 FLOP/s (and see here for some arguments in which he seems to suggest that levels of detail in this vein are central to counting as a simulation of the brain); and Bell (1999) appears to suggest that we cannot be confident that even a molecular level simulation of the brain would be adequate (p.
2018).\") In this sense, the bound in question, if sound, would provide an informative constraint.\n\n\n \n\n\n#### 4.1 Bit-erasures in the brain\n\n\n#### 4.1.1 Landauer’s principle\n\n\nLandauer’s principle says that implementing a computation that erases information requires transferring energy to the environment – in particular, *k* × T × ln2 per bit erased, where *k* is [Boltzmann’s constant](https://en.wikipedia.org/wiki/Boltzmann_constant), and T is the absolute temperature of the environment.[573](https://www.openphilanthropy.org/brain-computation-report#footnote573_os59mr5 \"I’ve mostly relied on Frank (2018), Sagawa (2014), Wolpert (2019), and Wolpert (2019a) for my understanding of the principle, together (centrally) with discussion with experts. Feyman (1996), Chapter 5, also contains a fairly accessible introduction. See Landauer (1961) for the original statement of the argument: “It is argued that computing machines inevitably involve devices which perform logical functions that do not have a single-valued inverse. This logical irreversibility is associated with physical irreversibility and requires a minimal heat generation, per machine cycle, typically of the order of kT for each irreversible function. This dissipation serves the purpose of standardizing signals and making them independent of their exact logical history” (p. 183).\")\n\n\nI’ll define a computation, here, as a mapping from input logical states to probability distributions over output logical states, where logical states are sets of physical [microstates](https://en.wikipedia.org/wiki/Microstate_(statistical_mechanics)) treated as equivalent for computational purposes;[574](https://www.openphilanthropy.org/brain-computation-report#footnote574_blzf1pz \"Here I am following Frank (2018): “Let there be a countable (usually finite) set C = {ci} of distinct entities ci called computational states. Then a general definition of a (possibly stochastic) (computational) operation O is a function O : C → P(C), where P(C) denotes the set of probability distributions over C. That is, O(ci) for any given ci ∈ C is some corresponding probability distribution Pi : C → [0, 1]. The intent of this definition is that, when applied to an initial computational state ci, the computational operation transforms it into a final computational state ci, but in general, this process could be stochastic, meaning that, for whatever reason, having complete knowledge of the initial state does not imply having complete knowledge of the final state” (p. 11). See Maroney (2005) for more discussion of stochastic computation in the context of Landauer’s principle.\") and I’ll use “operation” to refer to a comparatively basic computation implemented as part of implementing another computation. Landauer’s principle emerges from the close relationship between changes in logical entropy (understood as the [Shannon entropy](https://en.wikipedia.org/wiki/Entropy_(information_theory)) of the probability distribution over logical states) and thermodynamic entropy (understood as the natural logarithm of the number of possible microstates, multiplied by [Boltzmann’s constant](https://en.wikipedia.org/wiki/Boltzmann_constant)).[575](https://www.openphilanthropy.org/brain-computation-report#footnote575_0q78bdq \"Schroeder (2000): “Entropy is just the logarithm of the number of ways of arranging things in the system (times Boltzmann’s constant)” (p. 75). See also Wikipedia on Boltzmann’s principle. 
From Open Philanthropy's non-verbatim notes from a conversation with Prof. Jared Kaplan: “Landauer’s principle states that erasing a bit of information requires a minimum energy expenditure -- specifically, kT ln2, where k is Boltzmann’s constant, and T is the absolute temperature. This principle is grounded in the relationship between entropy and energy -- the same relationship that grounds the fact that heat doesn’t flow from cold things to hot things, and the fact that you can’t create a perpetual motion machine or an arbitrarily efficient engine. For physicists, entropy is the logarithm of the number of accessible states. When a system changes, either this entropy stays the same, or it increases...” (p. 1).\")\n\n\nIn particular, if (given an initial probability distribution over inputs) a computation involves decreasing logical entropy (call a one bit decrease a “logical bit-erasure”),[576](https://www.openphilanthropy.org/brain-computation-report#footnote576_032sado \"I am using the term “logical bit-erasures” to quantify logical entropy drops of the kind to which Landauer’s principle, as I understand it, is relevant, even in a stochastic context. Discussions of Landauer’s principle sometimes assume a deterministic context, in which the relationship between decreases in logical entropy and logical irreversibility (e.g., the inability to reconstruct inputs on the basis of outputs) is more straightforward (e.g., logically irreversible operations necessarily decrease logical entropy). Stochastic contexts introduce more complexities (see e.g. Frank (2018) and Maroney (2018) for some discussion), but as I understand it, the basic fact that decreasing logical entropy implicates Landauer costs remains unaltered. See also Kempes et al. (2017), who use a similar way of measuring Landauer costs in articulating what they call the “generalized Landauer bound” (p. 7), e.g.: “to focus on the specifically computation-based thermodynamic cost of a process, suppose that at any given time t all states x have the same energy. It is now known that in this situation the minimal work required to transform a distribution P0(x) at time 0 to a distribution P1(x) at time 1 is exactly kT[S(P0) − S(P1)] where S(.) is Shannon entropy and x lives in a countable space X” (p. 6). See also Open Philanthropy's non-verbatim notes from a conversation with Prof. David Wolpert: “The generalized Landauer bound tells you the energy costs of performing a computation in a thermodynamically reversible way -- energy that you could in principle get back. In particular: if you’re connected to a single heat bath, then regardless of whether your computation is deterministic or noisy, the generalized Landauer’s bound says that the minimum free energy you need to expend (assuming you perform the computation in a thermodynamically reversible way) is kT multiplied by the drop in the entropy. The total energy costs of a computation will then be the Landauer cost, plus the extra energy dissipated via the thermodynamically irreversible aspects of the physical process. This extra energy cannot be recovered” (p. 
2).\") then implementing this computation repeatedly using a finite physical system (e.g., a computer) eventually requires increasing the thermodynamic entropy of the computer’s environment – otherwise, the total thermodynamic entropy of the computer and the environment in combination will decrease, in violation of the second law of thermodynamics.[577](https://www.openphilanthropy.org/brain-computation-report#footnote577_r3a8l67 \"My (non-expert) understanding is that one way to loosely and informally express the basic idea here (without attempting to actually justify it technically) is that because the computer and the environment areassumed to be independent (at least with respect to the types of correlations we will realistically be able to keep track of), total entropy (call this Stot) is simply the entropy of the computer (Scomp) plus the entropy of the environment (Senv). And because the logical states are simply sets of computer microstates, the overall entropy of the computer (call this Scomp) is just the logical entropy (Slog), plus the entropy of the computer conditioned on the logical state (call this, Scomp | log). So Stot = Slog + Scomp | log + Senv. This means that according to the second law, if Slog goes down, then Scomp | log and/or Senv have to go up by an amount sufficient to render the total change in entropy non-negative (see Sagawa (2014) (p. 15-17), for a more formal description of this basic framework. See also Frank (2018), section 3.2, and especially p. 19; as well as his verbal description in this lecture (21:44)). And because the brain is a finite system with a finite capacity to absorb entropy, increasing Scomp | log can only go so far if your computer is continuously processing. Eventually, if Slog goes down, Senv must go up by a corresponding amount (see Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “A system like a brain or a computer contains non-information-bearing degrees of freedom that can absorb a finite amount of entropy. However, because the brain/computer is continuously processing and using energy, you can’t keep dumping entropy into those degrees of freedom indefinitely. Eventually, you need to start pushing entropy into the environment. If we assume that the states of the computer and the environment are not correlated (or at least, not in a way that we can realistically keep track of), then the total entropy will be the entropy of the computer plus the entropy of the environment. If the entropy of the computer goes down, the entropy of the environment must go up” (p. 2)).\")\n\n\nLandauer’s principle quantifies the energy costs of this increase.[578](https://www.openphilanthropy.org/brain-computation-report#footnote578_zn11zo7 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “In certain rare environments, you can decrease entropy by paying costs in conserved quantities other than energy (for example, you can pay costs in angular momentum). But this is not relevant in the context of the brain.” See Vaccaro and Barnett (2011) for more discussion.\") These costs arise from the relationship between the energy and the thermodynamic entropy of a system: broadly, if a system’s energy increases, it can be in more microstates, and hence its entropy increases.[579](https://www.openphilanthropy.org/brain-computation-report#footnote579_p6uolql \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. 
Jess Riedel: “Landauer’s principle follows almost trivially from basic principles of thermodynamics. Indeed, it can be understood simply as a rewriting of the definition of temperature. At a fundamental level, temperature is defined via the change in energy per unit change in entropy (up to a proportionality constant, Boltzmann’s constant). The practical and folk definitions of temperature, which focus on the amount of energy in a system (e.g., the kinetic energy of vibrating atoms), can be recovered from this more fundamental definition in all but a small number of exceptional cases. As the energy in a non-exceptional system increases, the number of states it can be in (and hence its maximum possible entropy) increases as well. If you have a system with a certain amount of energy, and you want to decrease its entropy, you need to put that entropy somewhere else, because total entropy is non-decreasing. Temperature gives us the exchange rate between energy and entropy. If you want to put some unit of entropy into a heat bath, you have to pay an energy cost, and the temperature of the bath is that cost” (p. 2). See also Open Philanthropy's non-verbatim notes from a conversation with Prof. Jared Kaplan: “Almost all fixed systems have more accessible states as the energy goes up. Temperature just is how the energy changes as the entropy changes (textbooks will often state this as: the reciprocal of the temperature is the derivative of the entropy with respect to the energy). As an intuitive example: if your system (e.g., a set of gas molecules) has no energy at all, then all your molecules are just lying on the floor. As you add energy, they can bounce around, and there many more configurations they can be in. The energy of a single moving particle is another example. It’s kinetic energy is ½mass velocity2. The velocity is a vector, which in a three dimensional space will live on some sphere. As you make the energy bigger, the surface area of this sphere increases. This corresponds to a larger number of accessible states (at the quantum mechanical level, these states are discrete, so you can literally count them)” (p. 1-2).\") Temperature, fundamentally, is defined by this exchange rate.[580](https://www.openphilanthropy.org/brain-computation-report#footnote580_nu4mnu7 \"Schroeder (2000): “The temperature of a system is the reciprocal of the slope of its entropy vs. energy graph. The partial derivative is to be taken with the system’s volume and number of particles held fixed; more explicitly: 1/T = (dS/dUf)N,V (3.5). From now on I will take equation 3.5 to be the definition of temperature. You may be wondering why I do not turn the derivative upside down, and write equation 3.5 as T = (dU/dS)N,V (3.6). The answer is that there is nothing wrong with this, but it’s less convenient in practice, because rarely do you ever have a formula for energy in terms of entropy” (p. 88). See also Jared Kaplan’s notes on Statistical Mechanics & Thermodynamics, p. 24; Wikipedia, “Definition of thermodynamic temperature”); and the quotes in the previous endnote.\")\n\n\nThere has been some controversy over Landauer’s principle,[581](https://www.openphilanthropy.org/brain-computation-report#footnote581_oyk7ql4 \"See Bennett (2003), section 2 (“Objections to Landauer’s principle”), for a description of the various objections, together with his replies (p. 502-508). Some aspects of the controversy, such as whether Landauer’s principle can exorcise Maxwell’s Demon without first assuming the second law (see e.g. 
Earman and Norton (1998) and Norton (2004)) are not relevant for our purposes, as assuming the truth of second law is not a dialectical problem in this context. The objection that logical irreversibility does not imply thermodynamic irreversibility (see e.g. Maroney (2018)) might seem to have more force, as Landauer’s principle is indeed often understood as claiming or implying the contrary (see Maroney (2018) for description of these interpretations; see also Bub (2002) (p. 10):a logically irreversible operation must be implemented by a physically irreversible device, which dissipates heat into the environment. My own impression, from Open Philanthropy's non-verbatim notes from a conversation with Prof. David Wolpert and from Sagawa (2014), is that this objection, applied to interpretations of Landauer’s principle inconsistent with it, is in fact correct, but that it does not alter the fact that bit-erasure requires transferring energy to the environment -- it merely notes that such a transfer can, in principle, be performed in a thermodynamically reversible way. See e.g. Kempes et al. (2017) (p. 6-7); Wolpert’s (2019a) (p. 3); Sagawa (2014) (p. 12): The logically irreversible erasure can be performed in a thermodynamically reversible manner in the quasi-static limit. See also Open Philanthropy's non-verbatim notes from a conversation with Prof. David Wolpert (p. 2). Maroney (2018), after arguing that “logical reversibility neither implies, nor is implied by, thermodynamic reversibility” (p. 1), nevertheless acknowledges on page 14 that: This does not contradict Landauer (1961) in the least. All that Landauer can be said to have shown was that a resetting operation required a generation of heat in the environment. However, a confusion then appears to arise through the incorrect use of the term ‘dissipation’. In Landauer (1961) and in much of the surrounding literature ‘dissipation’ is used more or less interchangeably with ‘heat generation’. Strictly, dissipation should be used only when the conversion of work to heat arises through dissipative forces (such as those involving friction) which are thermodynamically irreversible. Forces which are thermodynamically reversible are non-dissipative. That said, I have not attempted to evaluate this debate in detail, and I try, in the section, to remain neutral about it where possible (for example, I try to avoid the suggestion that bit erasure requires dissipating energy, as opposed to simply transferring it, though I don’t think I will have entirely avoided controversy: see e.g. Frank (2018) (p. 1), who argues that:Landauer’s Principle is not about general entropy transfers; rather, it more specifically concerns the ejection of (all or part of) some correlated information from a controlled, digital form (e.g., a computed bit) to an uncontrolled, non-computational form, i.e., as part of a thermal environment. I’m aware of at least one empirical result that presents itself as in tension with some versions of Landauer’s principle: López-Suárex et al. (2016) (though Kish (2016) (p. 1) suggests that their argument:neglects the dominant source of energy dissipation, namely, the charging energy of the capacitance of the input electrode, which totally dissipates during the full (0-1-0) cycle of logic values. López-Suárex et al. (2016) (p.3) also note that:We stress here that our experiment does not question the so-called Landauer-reset interpretation, where a net decrease of physical entropy requires a minimum energy expenditure. 
What we have here is a logically irreversible computation, that is a generic process where a decrease in the amount of information between the output and the input is realized with an arbitrarily small energy dissipation; this shows that logical reversibility and physical reversibility have to be treated on independent bases. Frank (2018) (p. 36-37) claims that: the only experiments that have claimed to demonstrate violations of Landauer’s limit have been ones in which the experimenters misunderstood some basic aspect of the Principle, such as the need to properly generalize the definition of logical reversibility, which was the subject of [11, 12, 13], or the role of correlations that we explained in §3.3 above. However, he does not give more details, in his 2018 paper, as to the experiments he has in mind or the misunderstandings he takes to be involved.\") and some of the relevant physics has been worked out more rigorously since Landauer’s original paper.[582](https://www.openphilanthropy.org/brain-computation-report#footnote582_3gzquoh \"Wolpert (2019a): “This early work [by Landauer and Bennett] was grounded in the tools of equilibrium statistical physics. However, computers are highly nonequilbrium systems. As a result, this early work was necessarily semiformal, and there were many questions it could not address. On the other hand, in the last few decades there have been major breakthroughs in non-equilibrium statistical physics. Some of the most important of these breakthroughs now allow us to analyze the thermodynamic behavior of any system that can be modeled with a time-inhomogeneous continuous-time Markov chain (CTMC), even if it is open, arbitrarily far from equilibrium, and undergoing arbitrary external driving. In particular, we can now decompose the time-derivative of the (Shannon) entropy of such a system into an ‘entropy production rate’, quantifying the rate of change of the total entropy of the system and its environment, minus a ‘entropy flow rate’, quantifying the rate of entropy exiting the system into its environment. Crucially, the entropy production rate is non-negative, regardless of the CTMC. So if it ever adds a nonzero amount to system entropy, its subsequent evolution cannot undo that increase in entropy. (For this reason it is sometimes referred to as irreversible entropy production.) This is the modern understanding of the second law of thermodynamics, for systems undergoing Markovian dynamics. In contrast to entropy production, entropy flow can be negative or positive. So even if entropy flow increases system entropy during one time interval (i.e. entropy flows into the system), often its subsequent evolution can undo that increase” (see p. 2-3).\") But the basic thrust emerges from very fundamental physics, and my understanding is that it’s widely accepted by experts.[583](https://www.openphilanthropy.org/brain-computation-report#footnote583_j5d1zze \"Prof. David Wallace indicated that most physicists accept Landauer's principle. Though see Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “Landauer’s principle follows almost trivially from basic principles of thermodynamics… There is some dispute over Landauer’s limit in the literature. Whether the basic assumptions it follows from apply in the real world is somewhat subtle” (p. 
2).\") A number of recent results also purport to have validated Landauer’s principle empirically.[584](https://www.openphilanthropy.org/brain-computation-report#footnote584_bgqo2j3 \"See the review in Frank (2018): “In 2012, Berut et al. tested Landauer’s Principle in the context of a colloidal particle trapped in a modulated double-well potential, an experimental setup designed to mimic the conceptual picture that we reviewed in Fig. 12. Their experimental results showed that the heat dissipated in the erasure operation indeed approached the Landauer value of kT ln 2 in the adiabatic limit. Also in 2012, Orlov et al. tested Landauer’s Principle in the context of an adiabatic charge transfer across a resistor, and verified that, in cases where the charge transfer is carried out in a way that does not erase known computational information, the energy dissipated can be much less than kT ln 2, which validates the theoretical rationale for doing reversible computing. In 2014, Jun et al. [7] carried an even more high-precision version of the Berut experiment, verifying again the Landauer limit, and that similar, logically-reversible operations can, in contrast, be done in a way that approaches thermodynamic reversibility. Finally, in 2018, Yan et al. [8] carried out a quantum-mechanical experiment demonstrating that Landauer’s Principle holds at the single-atom level” (p. 36-37).\")\n\n\n#### \n\n\n#### 4.1.2 Overall bit-erasures\n\n\nLet’s assume that Landauer’s principle caps the bit-erasures the brain can implement. What bit-erasure budget does this imply?\n\n\nMost estimates I’ve seen of the brain’s energy budget vary between ~10-20W (Joules/second).[585](https://www.openphilanthropy.org/brain-computation-report#footnote585_f5yts0q \"Aiello (1997): “On the basis of in vivo determinations, the mass-specific metabolic rate of the brain is approximately 11.2 W/kg (watts per kilogram). This is over 22 times the mass-specific metabolic rate of skeletal muscle (0.4 W/kg) (Aschoff et al. (1971)). A large brain would, therefore, be a considerable energetic investment. For example, an average human has a brain that is about 1 kg larger than would be expected for an average mammal of our body size (65 kg) and the metabolic cost of this brain would be just under 5 times that of the brain of the average mammal (humans = 14.6 watts, average mammal = 3.0 watts) (Aiello and Wheeler (1995))” (see the section “The expensive brain”). Aiello and Wheeler (1995) contains the same estimate, citing Aschoff et al. (1971), which I have not attempted to access (and which appears to be in German). Sarpeshkar (1997): “The global power consumption of the brain has been measured numerous times by the Kety-Schmidt technique, and the measurements have generally been fairly consistent, even over 40 years. A recent measurement [38] yielded an oxygen uptake of 144 umol.100g-1.min-1. The glucose reaction yields, in in-vitro reactions, about 60 kJ/mol × 38 ATP/6 = 380 kJ/mol of oxygen consumed. The 60 kJ/mol. Value was obtained from [29]. The weight of the brain is about 1.3 kg [10]. Thus, the power consumption in watts is computed to 11.8W, a value that we shall round of 12 W” (p. 204, though in Sarpeshkar (2010) (p. 748), he uses the Aiello (1997) estimate above). Jabr (2012a), writing for Scientific American, estimates 12.6W. Merkle (1989) cites Kandel et al. (1985) (though without a page number) for a 25W estimate, though he assumes that only 10W is actually used for computation. Watts et al. 
(2018) write that “While making up only a small fraction of our total body mass, the brain represents the largest source of energy consumption—accounting for over 20% of total oxygen metabolism,” which would suggest ~16W if we used the ~80W estimate for the whole body cited in Aiello (1997). Various citations listed here say that 20% of body energy consumption goes to the brain, which the website’s author uses to generate an estimate of 20W for the brain, based on 100W consumption by the human body as a whole. My impression is that the 20% number is used in numerous other contexts (see e.g. Engl and Attwell (2015), who cite Kety (1957); Sokoloff (1960), and Rolfe and Brown (1997) -- though I haven’t followed up on these citations).\") But not all of this energy goes to computation:\n\n\n* Loose estimates suggest that 40% of energy use in the brain,[586](https://www.openphilanthropy.org/brain-computation-report#footnote586_0kj3r0j \"Engl and Attwell (2015): “Current theoretical estimates and experimental data assessing the contribution of each ‘housekeeping’ process to the brain’s total energy budget are inconclusive for many processes, varying widely in some cases. Further research is needed to fill these gaps, and the 40% value shown (right), for the whole brain according to Astrup et al. (1981a), as opposed to the 25% assumed for grey matter in Fig. 1, is quite uncertain” (p. 3424, Figure 5).\") and 25% in cortical gray matter,[587](https://www.openphilanthropy.org/brain-computation-report#footnote587_coqmiuc \"See Howarth et al. (2012): “As panel A, but including non-signaling energy use, assumed to be 6.81 × 1022 ATP/s/m3, that is, 1/3 of the neuronal signaling energy, so that housekeeping tasks are assumed to account for 25% of the total energy use. On this basis, resting potentials use 15%, action potentials 16%, and synaptic processes 44% of the total energy use” (p. 1224, Figure 1).\") goes towards non-signaling tasks.[588](https://www.openphilanthropy.org/brain-computation-report#footnote588_9jh0qzo \"See Engl and Attwell (2015) for some description of these tasks: “Perhaps surprisingly, a significant fraction of brain energy use (25–50%) in previous energy budgets has been assigned to non-signalling (so-called ‘housekeeping’) tasks, which include protein and lipid synthesis, proton leak across the mitochondrial membrane, and cytoskeletal rearrangements, the rate of ATP consumption on all of which is poorly understood” (p. 3418), though the Engl and Attwell emphasize that the methodology used to generate these estimates is quite uncertain.\")\n* Some signaling energy is plausibly used for moving information from one place to another, rather than computing with it. [Harris and Attwell (2012)](https://www.jneurosci.org/content/32/1/356), for example, estimate that action potentials use 17% of the energy in grey matter (though much less in white matter).[589](https://www.openphilanthropy.org/brain-computation-report#footnote589_i9hn469 \"See Figure 1.\")\n\n\nThat said, these don’t initially appear to be order-of-magnitude level adjustments. I’ll use 20W as a high end.\n\n\nThe brain operates at roughly 310 Kelvin, as does the body.[590](https://www.openphilanthropy.org/brain-computation-report#footnote590_l5olfze \"Wang et al. (2014): “On average, deep brain temperature is less than 1°C higher than body temperature in humans, unless cerebral injury is severe enough to significantly disrupt the brain-body temperature regulation (Soukup et al., 2002)” (p. 6). 
Thanks to Asya Bergal for this citation. See also Nelson and Nunneley (1998): “Cerebral temperatures were generally insensitive to surface conditions (air temperature and evaporation rate), which affected only the most superficial level of the cerebrum” (abstract). Human body temperature is about 37 oC, 310 Kelvin.\") Even if the air surrounding the body is colder, Dr. Jess Riedel suggested that it’s the temperature of the skull and blood that’s relevant, as the brain has to push entropy into the environment via these conduits.[591](https://www.openphilanthropy.org/brain-computation-report#footnote591_xynzc5c \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “The temperature relevant to applying Landauer’s limit to the brain is essentially that of the skull and blood. Even if the temperature outside the body is at a lower temperature, the brain will have to push entropy into its environment via those conduits. If there were some other cold reservoir inside the brain absorbing entropy (there isn’t), it would quickly be expended” (p. 3). Sandberg (2016), in his attempt to apply Landauer’s limit to the brain, uses body temperature as well (see p. 5).\")\n\n\nAt 310 K, *k* × T × ln2 Joules results in a minimum energy emission of 3e-21 Joules per bit erasure.[592](https://www.openphilanthropy.org/brain-computation-report#footnote592_lpfk1mk \"See calculation here.\") With a 20W budget, this allows **no more than 7e21 bit erasures per second in the brain overall**.[593](https://www.openphilanthropy.org/brain-computation-report#footnote593_gb1oa83 \"See calculation here. Sandberg's (2016) estimate is slightly higher: “20 W divided by 1.3 × 10-21 J (the Landauer limit at body temperature) suggests a limit of no more than 1.6 × 1022 irreversible operations per second” (p. 5). This is because his estimate of the Landauer limit at body temperature differs from mine by about a factor of two -- I’m not sure why.\") This simple estimate passes over some complexities (see endnote), but I’ll use it as a first pass.[594](https://www.openphilanthropy.org/brain-computation-report#footnote594_431fxfq \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. David Wolpert: “In Prof. Wolpert’s view, it is a subtle and interesting question how to do this type of calculation correctly. A rigorous version would require a large research project. One complexity is that the brain is an open system, in what would be formally called a non-equilibrium steady state, which continually receives new inputs and performs many computations at the same time, even though its entropy does not change that much over time. Landauer’s principle, though, applies to drops in entropy that occur in each step of a calculation. Various other caveats would also be necessary. For example, there are long-range correlations between bits, and there are multiple heat baths in the brain. As a simplified toy model, however, we can imagine that the brain computes in a serial fashion. It gets new inputs for each computation (thereby reinflating the entropy), and each computation causes a drop in entropy. In this case, the upper bound on bit-erasures suggested by Mr. Carlsmith would apply. Prof. Wolpert’s thinks that this calculation is legitimate as a first-pass, back-of-the-envelope upper bound on the bit-erasures that the brain could be implementing. 
It couldn’t get published in a physics journal, but it might get published in a popular science journal, and it helps get the conversation started” (p. 3). I expect that further investigation would reveal other complexities as well.\")\n\n\n\n#### 4.2 From bit-erasures to FLOP/s\n\n\nCan we get from this to a bound on required FLOP/s?\n\n\nIf the brain were performing standard FLOPs, it would be easy. A standard FLOP takes two n-bit numbers, and produces another n-bit number. So absent active steps to save the inputs, you’ve erased at least *n* bits.[595](https://www.openphilanthropy.org/brain-computation-report#footnote595_35wpr0f \"Jared Kaplan’s notes on Statistical Mechanics & Thermodynamics: “Say we add two numbers, eg 58 + 23 = 81. We started out with information representing both 58 and 23. Typically this would be stored as an integer, and for example a 16 bit integer has information, or entropy, 16 log 2. But at the end of the computation, we don’t remember what we started with, rather we just know the answer. Thus we have created an entropy S = 2 × (16 log 2) − (16 log 2) = 16 log 2 through the process of erasure!” (p. 59). See also Hänninen and Takala (2010): “The binary addition operation performs an unbalanced compression between the input and output state spaces, since the mapping between the values is not bijective. Medium-sized result values can originate from the largest set of possible input operand pairs. The addition of two n-bit binary operands results in at most an (n + 1)-bit result, and the result value 2^n − 1 compresses the largest group of input pairs, 2^n distinct cases, into the single output. Thus, the logical reversal of the addition requires the result word and n extra bits, which could be chosen simply to represent one of the input operands. The number of bits required to reverse the binary addition, as one indivisible logical operation, can be interpreted as the minimum amount of information lost in any irreversible adder structure at best. This loss determines the minimum achievable energy cost per operation” (p. 224). See also Hänninen and Takala (2010) (p. 2370), for comparable discussion re: multiplication. Hänninen et al. (2011) discuss the possibility of less-than-n bit erasures for word-length n operations in the context of “non-trivial multiplication,” which, at a glance, seems to involve excluding multiplications that take zero as an operand (see p. 2371).\") 7e21 bit-erasures/s, then, would imply a maximum of e.g.
~2e21 4-bit FLOP/s, 9e20 8-bit FLOP/s, and so forth, for a computer running on 20W at 310 Kelvin.\n\n\nAnd the intermediate steps involved in transforming inputs into outputs erase bits as well. For example, [Hänninen et al. (2011)](https://www3.nd.edu/~lent/pdf/nd/IrreversibleBitErasuresHanninenLent2011.pdf) suggest that on current digital multiplier implementations, the most efficient form of n-bit multiplication requires 8 × n² bit-erasures – e.g., 128 for a 4-bit multiplication, and 512 for an 8-bit multiplication.[596](https://www.openphilanthropy.org/brain-computation-report#footnote596_u5h8jmh \"Hänninen et al. (2011) estimate the bit-erasures implicated by various proposed multiplier implementations. The array multiplier is the most efficient, at 8n² for n-bit words (see Table II, p. 2372). 8 × 4² = 128; 8³ = 512.\") This would suggest a maximum of ~5e19 4-bit digital multiplications, and ~1e19 8-bit multiplications (though analog implementations may be much more efficient).[597](https://www.openphilanthropy.org/brain-computation-report#footnote597_01sf3it \"Sarpeshkar (1998) discusses more efficient, analog implementations: “Items 1 through 3 show that analog computation can be far more efficient than digital computation because of analog computation’s repertoire of rich primitives. For example, addition of two parallel 8-bit numbers takes one wire in analog circuits (using Kirchoff’s current law), whereas it takes about 240 transistors in static CMOS digital circuits. The latter number is for a cascade of 8 full adders. Similarly an 8-bit multiplication of two currents in analog computation takes 4 to 8 transistors, whereas a parallel 8-bit multiply in digital computation takes approximately 3000 transistors” (p. 1605).\")\n\n\nAnd FLOPs in actual digital computers appear to erase even more bits than this – ~1 bit-erasure per transistor switch involved in the operation.[598](https://www.openphilanthropy.org/brain-computation-report#footnote598_3fmu9d2 \"See also Hänninen et al. (2011): “Present CMOS effectively performs an erasure every time a transistor switches states—generating hugely unnecessary levels of heat” (p. 2370).\") [Sarpeshkar (1998)](https://ieeexplore.ieee.org/document/6790538) suggests 3000 transistors for an 8-bit digital multiply (though only 4-8 for analog implementations);[599](https://www.openphilanthropy.org/brain-computation-report#footnote599_uiinc54 \"Sarpeshkar (1998): “an 8-bit multiplication of two currents in analog computation takes 4 to 8 transistors, whereas a parallel 8-bit multiply in digital computation takes approximately 3000 transistors” (p. 1605).\") [Asadi and Navi (2007)](https://www.idosi.org/wasj/wasj2(4)/12.pdf) suggest >20,000 for a 32-bit multiply.[600](https://www.openphilanthropy.org/brain-computation-report#footnote600_u40fxgd \"Asadi and Navi (2007): “Table 3: comparison between 32 × 32 bit multipliers … Transistor counts: 21579.00, 25258.00, 32369.00” (Table 3, p. 346).\")\n\n\nPerhaps, for those comfortable assuming that the brain’s operations are relevantly like standard FLOPs, this is enough. But a robust upper bound should not assume this. The brain implements some causal structure that allows it to perform tasks, which can in principle be replicated using FLOP/s, but which itself could in principle take a wide variety of unfamiliar forms.
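(For reference, here is a minimal sketch, in Python, of the arithmetic behind the figures above. Nothing in it goes beyond the assumptions already stated: a ~20W budget at 310 K, at least *n* bits erased per *n*-bit FLOP, and ~8 × n² bit-erasures per *n*-bit digital multiply, following Hänninen et al. (2011). The small differences from the figures quoted in the text come from the text rounding the erasure budget to 7e21 before dividing.)

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 310.0           # rough body/brain temperature, K
power = 20.0        # rough brain power budget, W

# Landauer limit: minimum energy dissipated per bit erasure (~3e-21 J at 310 K).
landauer_j_per_bit = k_B * T * math.log(2)

# Maximum bit erasures per second on a ~20 W budget (~7e21/s).
max_erasures_per_s = power / landauer_j_per_bit
print(f"bit-erasure ceiling: {max_erasures_per_s:.1e}/s")

for n in (4, 8):
    # If an n-bit FLOP erases at least n bits, and an n-bit digital multiply
    # erases ~8 * n**2 bits, the corresponding ceilings follow by division.
    print(f"{n}-bit FLOP/s ceiling:       {max_erasures_per_s / n:.1e}")
    print(f"{n}-bit multiplies/s ceiling: {max_erasures_per_s / (8 * n**2):.1e}")
```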
Landauer’s principle tells us that this causal structure, represented as a set of (possibly stochastic) transitions between logical states, cannot involve erasing more than 7e21 bits/second.[601](https://www.openphilanthropy.org/brain-computation-report#footnote601_8tk7j4c \"Given the probability distribution over inputs to which the brain is in fact exposed, that is.\") It doesn’t tell us anything, directly, about the FLOP/s required to replicate the relevant transitions, and/or perform the relevant tasks.[602](https://www.openphilanthropy.org/brain-computation-report#footnote602_cbw5h9p \"My thanks to Prof. David Wallace for discussion.\")\n\n\nHere’s an analogy. Suppose that you’re wondering how many bricks you need to build a bridge across the local river, and you know that a single brick always requires a pound of mortar. You learn that the “old bridge” across the river was built using no more than 100,000 pounds of mortar. If the old bridge is made of bricks, then you can infer that 100,000 bricks is enough. If the old bridge is made of steel, though, you can’t: even assuming that a brick can do anything *y* units of steel can do, y units of steel might require less (maybe much less) than a pound of mortar, so the old bridge could still be built with more than 100,000×*y* units of steel.\n\n\nObviously, the connection between FLOPs, bit-erasures, and the brain’s operations may be tighter than that between bricks, mortar, and steel. But conceptually, the point stands: unless we assume that the brain performs standard FLOPs, moving from bit-erasures to FLOPs requires further arguments. I’ll consider two types.\n\n\n \n\n\n#### 4.2.1 Algorithmic arguments\n\n\nWe might think that any algorithm useful for information-processing, whether implemented using standard FLOPs or no, will require erasing lots of logical bits.\n\n\nIn theory, this appears to be false (though there is at least some ongoing controversy, related to the bit-erasures implied by repeatedly reading/writing inputs and outputs).[603](https://www.openphilanthropy.org/brain-computation-report#footnote603_fbg2jof \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “There is a simple algorithm for converting a computation that uses logically irreversible operations into an equivalent computation that uses logically reversible operations. This allows you to avoid almost all of the relevant logical bit-erasures” (p. 4). And from Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “We believe that you can perform extremely complex computations with almost no bit erasures using good enough hardware” (p. 4). See also Bennett (1989): “Reversible computers of various kinds (Turing machines, cellular automata, combinational logic) have been considered [1], [11], [12], [13], [6], [2], [14] especially in connection with the physical question of the thermodynamic cost of computation; and it has been known for some time that they can simulate the corresponding species of irreversible computers in linear time [1] (or linear circuit complexity 13]), provided they are allowed to leave behind at the end of the computation a copy of the input (thereby rendering the mapping between initial and final states 1:1 even though the input-output mapping may be many-to-one)” (p. 766). See also Sagawa (2014), p. 8 in the arxiv version), and Bennett (1973). 
For disagreement/controversy, see Wolpert (2019a): “Summarizing, it is not clear that there is a way to implement a logically irreversible function with an extended circuit built out of logically reversible gates that reduces the Landauer cost below the Landauer cost of an equivalent AO [“all at once”] device. The effect on the mismatch cost of using such a circuit rather than an AO device is more nuanced, varying with the priors, the actual distribution, etc.” (p. 33 of the arxiv paper). My understanding is that the crux of this objection hinges on the fact that the reversible circuit will need to be reused, which means that its inputs and outputs will need to be reinitialized: “In general, the Landauer cost and mismatch cost of answer-reinitialization of an extended circuit will be greater than the corresponding answer-reinitialization costs of an equivalent AO device. This is for the simple reason that the answer-reinitialization of the extended circuit must reinitialize the bits containing copies of x and m, which do not even exist in the AO device” (p. 30 of the arxiv paper). See also Open Philanthropy's non-verbatim notes from a conversation with Prof. David Wolpert (p. 2). Dr. Jess Riedel was skeptical of this sort of objection. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “Dr. Riedel is skeptical of objections to the viability of reversible computing that appeal to the bit-erasures involved in receiving new inputs and writing new final outputs. It’s true that reversible computing paradigms require bit-erasures for this, but for most interesting computations, the intermediate memory usage is much (often exponentially) larger than the input and output data” (p. 5). I have not attempted to evaluate this debate in detail. If Prof. Wolpert is correct, then algorithmic arguments look stronger.\") Any computation can be performed using logically reversible operations (that is, operations that allow you to reconstruct the input on the basis of the output), which do not erase bits.[604](https://www.openphilanthropy.org/brain-computation-report#footnote604_mj08c9d \"Sagawa (2014): “A computational process C is logically reversible if and only if it is an injection. In other words, C is logically reversible if and only if, for any output logical state, there is a unique input logical state. Otherwise, C is logically irreversible” (p. 7 in the arxiv version).\") For example, in theory, you can make multiplication reversible just by saving one of the inputs.[605](https://www.openphilanthropy.org/brain-computation-report#footnote605_1egxzqt \"Hänninen and Takala (2010): “the logical reversal of the addition requires the result word and n extra bits, which could be chosen simply to represent one of the input operands” (p. 224). And see also Jared Kaplan’s notes on Statistical Mechanics & Thermodynamics: “In principle we can do even better through reversible computation. After all, there’s no reason to make erasures. For example, when adding we could perform an operation mapping (x, y) → (x, x + y), for example (58, 23) → (58, 81), so that no information is erased. In this case, we could in principle perform any computation we like without producing any waste heat at all. But we need to keep all of the input information around to avoid creating entropy and using up energy” (p. 
60).\") And my understanding is that the algorithmic overheads involved in using logically reversible operations, instead of logically irreversible ones – e.g., additional memory to save intermediate results, additional processing time to “rewind” computations[606](https://www.openphilanthropy.org/brain-computation-report#footnote606_0kwbmh6 \"Johnson (1999): “Efficient as such a system would be, there would still be drawbacks. In a complex calculation, the extra memory needed to save all the intermediary ''garbage bits'' can grow wildly. As a compromise, Dr. Bennett devised a memory-saving method in which a computer would carry out a few steps of the calculation, copy the result and rewind. Then, starting with the copied result, it would take a few more steps. He likened the method to crossing a river using just a few stepping stones: one must backtrack to pick up the stones left behind, placing them in the path ahead. While the procedure would consume less memory, it would require more computational steps, slowing down the calculation. To computer scientists, this was a classic tradeoff: pay the computational cost with either memory space or processing time.” Wolpert (2019b): “One of the properties of logically reversible gates that initially caused problems in designing circuits out of them is that running those gates typically produces “garbage” bits, to go with the bits that provide the output of the conventional gate that they emulate. The problem is that these garbage bits need to be reinitialized after the gate is used, so that the gate can be used again. Recognizing this problem, [50] shows how to avoid the costs of reinitializing any garbage bits produced by using a reversible gate in a reversible circuit C ′ , by extending C ′ with yet more reversible gates (e.g., Fredkin gates). The result is an extended circuit that takes as input a binary string of input data x, along with a binary string of “control signals” m ∈ M, whose role is to control the operation of the reversible gates in the circuit. The output of the extended circuit is a binary string of the desired output for input xIN , xOUT = f(x N), together with a copy of m, and a copy of xIN, which I will write as xINcopy. So in particular, none of the output garbage bits produced by the individual gates in the original, unextended circuit of reversible gates still exists by the time we get to the output bits of the extended circuit. While it removes the problem of erasing the garbage bits, this extension of the original circuit with more gates does not come for free. In general it requires doubling the total number of gates (i.e., the circuit’s size), doubling the running time of the circuit (i.e., the circuit’s depth), and increasing the number of edges coming out of each gate, by up to a factor of 3. (In special cases though, these extra cost can be reduced, sometimes substantially.)” (p. 28). See also Michael Frank’s comments here: “It is probably the case that general reversible computations do require some amount of overhead in either space or time complexity; indeed, Ammer and I proved rigorously that this is true in a certain limited technical context. 
But, the overheads of reversible algorithms can theoretically be overwhelmed by their energy-efficiency benefits, to improve overall cost-performance for large-scale computations.”\") – are fairly manageable, something like a small multiplicative factor in running time and circuit size.[607](https://www.openphilanthropy.org/brain-computation-report#footnote607_ugl1isr \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “For large computations, this conversion adds only a modest overhead in required time and memory. For example, the algorithm described in Charles Bennett’s 1989 paper ‘Time/Space Trade-Offs for Reversible Computation’ involves slow-downs of at worst a multiplicative factor, around 2-3× as slow” (p. 4). See also Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “The algorithmic overhead involved in reversible computing (specifically, the overhead involved in un-computing what you have already computed) is not that bad. Most of the difficulty lies in designing such efficient hardware” (p. 4). Bennett (1989): “Using a pebbling argument, this paper shows that, for any e > 0, ordinary multitape Turing machines using time T and space S can be simulated by reversible ones using time O(T1 + F) and space O(S log T) or in linear time and space O(STe)... The time/space cost of computing a 1:1 function on such a machine is equal within a small polynomial to the cost of computing the function and its inverse on an ordinary Turing machine” (p. 766). See also Wolpert's (2019a) overhead estimates, e.g.: “In general it requires doubling the total number of gates (i.e., the circuit’s size), doubling the running time of the circuit (i.e., the circuit’s depth), and increasing the number of edges coming out of each gate, by up to a factor of 3” (p. 28). \")\n\n\nIn practice, however, two experts I spoke with expected the brain’s information-processing to involve lots of logical bit-erasures. Reasons included:\n\n\n* When humans write software to perform tasks, it erases lots of bits.[608](https://www.openphilanthropy.org/brain-computation-report#footnote608_s8x6hw4 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “When humans write software to accomplish human objectives, they use a lot of irreversible steps (though there are some non-atomic reversible intermediate computations, like Fourier transforms)” (p. 4).\")\n* Dr. Jess Riedel suggested that processing sensory data requires extracting answers to high-level questions (e.g., “should I dodge this flying rock to the left or the right?”) from very complex intermediate systems (e.g., trillions of photons hitting the eye), which involves throwing out a lot of information.[609](https://www.openphilanthropy.org/brain-computation-report#footnote609_jt7azcy \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “When the world has some simple feature (e.g., the position and velocity of a rock heading towards your head), this feature is encoded in very complicated intermediate systems (e.g., the trillions of photons scattering from the rock and heading towards your eye). The brain has to distill an answer to a high-level question (e.g., “do I dodge left or right?”) from the complicated intermediate system, and this involves throwing out a lot of entropy” (p. 4).\")\n* Prof. Jared Kaplan noted that FLOPs erase bits, and in general, he expects order one bit-erasures per operation in computational systems. 
You generally don’t do a lot of complicated things with a single bit before erasing it (though there are some exceptions to this). His intuition about this was informed by his understanding of simple operations you can do with small amounts of information.[610](https://www.openphilanthropy.org/brain-computation-report#footnote610_nthn64o \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Jared Kaplan: “FLOPs in actual computers erase bits, and Prof. Kaplan expects that you generally have order one bit-erasures per operation in computational systems. That is, you don’t do a lot of complicated things with a bit, and then erase it, and then do another set of very complicated things with another bit, and then erase it, etc. Prof. Kaplan’s intuition in this respect comes from his understanding of certain basic operations you can do with small amounts of information. In principle you can perform a very complicated set of transformations on a piece of information, like an image, without erasing bits. Prof. Kaplan can imagine some kind of order one factor increase in required compute from this type of thing” (p. 4).\")\n\n\nIf one imagines erasing lots of bits as the “default,” then you can also argue that the brain would need to be unrealistically energy-efficient (see next section) in order to justify any overheads incurred by transitioning to more reversible forms of computation.[611](https://www.openphilanthropy.org/brain-computation-report#footnote611_hbk4ixs \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “if (as in current conventional computers) you’re dissipating thousands of kT per operation, it isn’t worth transitioning to logically reversible operations, because other forms of energy dissipation dominate the Landauer-mandated energy costs of logical irreversibility” (p. 4).\") Dr. Paul Christiano noted, though, that if evolution had access to computational mechanisms capable of implementing useful, logically-reversible operations, brains may have evolved a reliance on them from the start.[612](https://www.openphilanthropy.org/brain-computation-report#footnote612_rfaady7 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “Dr. Christiano does not think that logically irreversible operations are a more natural or default computational unit than reversible ones. And once we’re engaging with models of brain computation that invoke computations performed by low-level, reversible elements, then we are assuming that the brain is able to make use of such elements, in which case it may well have evolved a reliance on them from the start. For example, if it were possible to use proteins to directly perform large tunable matrix multiplications, Landauer's principle implies that those matrix multiplications would necessarily be invertible or even unitary. But unitary matrix multiplications are just as useful for deep learning as general matrix multiplications, so Landauer's principle per se doesn't tell us anything about the feasibility of the scenario. Instead the focus should be on other arguments (e.g. regarding consistency and flexibility)” (p. 4).\")\n\n\nWe can also look at models of neural computation to see what bit-erasures they imply. 
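To give a sense of how such a check might go, here is a toy sketch; the specific numbers (count of units, reset rate, bits of task-relevant information per reset) are illustrative placeholders rather than estimates from this report, and the function name is ad hoc:

```python
LANDAUER_CEILING = 7e21  # max bit erasures per second for ~20 W at 310 K (from above)

def implied_bit_erasures_per_s(units: float, resets_per_s: float, bits_per_reset: float) -> float:
    """Bit erasures per second implied by a model on which each of `units` elements
    irreversibly resets a state carrying `bits_per_reset` bits of task-relevant
    information, `resets_per_s` times per second."""
    return units * resets_per_s * bits_per_reset

# Illustrative placeholders only: ~1e11 neurons, each resetting a membrane-potential
# state carrying ~10 bits of task-relevant information ~100 times per second.
erasures = implied_bit_erasures_per_s(units=1e11, resets_per_s=1e2, bits_per_reset=10)
verdict = "within" if erasures <= LANDAUER_CEILING else "incompatible with"
print(f"{erasures:.0e} bit erasures/s, {verdict} the ~7e21/s ceiling")
```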
There is some risk, in appealing to such models, of rendering the limit method uninformative (e.g., if you’ve already decided how the brain computes, you can just estimate required FLOP/s directly).[613](https://www.openphilanthropy.org/brain-computation-report#footnote613_kj1u5wu \"My thanks to Prof. David Wallace for discussion.\") But it could still be helpful. For example:\n\n\n* Some kinds of logical irreversibility may apply to large swaths of hypotheses about how the brain computes (e.g., hypotheses on which the membrane potential, which is routinely reset, carries task-relevant information).\n* Some specific hypotheses (e.g., each neuron is equivalent to X-type of very large neural network) might imply bit-erasures incompatible with Landauer’s bound.\n* If the brain is erasing lots of bits in one context, this might indicate that it does so elsewhere too, or everywhere.\n\n\nOf course, it’s a further step from “the brain is probably erasing lots of logical bits” to “FLOP/s required to replicate the brain’s task-performance ÷ bit-erasures per second in the brain ≤1,” just as it’s a further step from “the old bridge was probably built using lots of mortar” to “bricks I’ll need ÷ pounds of mortar used for the old bridge ≤1.” One needs claims like:\n\n\n\n> \n> 1. A minimal, computationally useful operation in the brain probably erases at least one logical bit, on average.\n> 2. One FLOP is probably enough to capture what matters about such an operation, on average.\n> \n> \n> \n\n\nProf. Kaplan and Dr. Riedel both seemed to expect something like (1) and (2) to be true, and they seem fairly plausible to me as well. But the positive algorithmic arguments just listed don’t themselves seem to me obviously decisive.\n\n\n#### 4.2.2 Hardware arguments\n\n\nAnother class of arguments appeals to the energy dissipated by the brain’s computational mechanisms. After all, each logical bit-erasure must dissipate at least ~0.69*k*T, so for the FLOP/s required to match the brain’s task-performance to exceed the ~7e21 figure above, the brain would need to be getting more than one FLOP’s worth of useful computation out of each ~0.69*k*T it dissipates.\n\n\nFor example, in combination with (2) above, we might argue instead for:\n\n\n\n> 1\\*. A minimal, computationally useful operation in the brain probably dissipates at least 0.69*k*T, on average.\n> \n> \n\n\nOne possibly instructive comparison is with the field of reversible computing, which aspires to build computers that dissipate arbitrarily small amounts of energy per operation.[614](https://www.openphilanthropy.org/brain-computation-report#footnote614_szi8ipl \"Michael Frank gives a summary of the development of the literature on reversible computing here (see paragraphs starting with “I’ll summarize a few of the major historical developments…”).\") This requires logically reversible algorithms (since otherwise, Landauer’s principle will set a minimum energy cost per operation), but it also requires extremely non-dissipative hardware – indeed, hardware that is close to thermodynamically reversible (e.g., its operation creates negligible amounts of overall thermodynamic entropy).\n\n\nUseful, scalable hardware of this kind would need to be *really* fancy. As Dr.
Michael Frank puts it, it would require “a level of device engineering that’s so precise and sophisticated that it will make today’s top-of-the-line device technologies seem as crude in comparison, to future eyes, as the practice of chiseling stone tablets looks to us today.”[615](https://www.openphilanthropy.org/brain-computation-report#footnote615_dj1llbe \"See this 2014 interview with the Machine Intelligence Research Institute.\") According to Dr. Frank, the biggest current challenge centers on the trade-off between energy dissipation and processing speed.[616](https://www.openphilanthropy.org/brain-computation-report#footnote616_tzyc5zc \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Michael Frank: “The biggest challenge is figuring out the fundamental physics involved in improving the trade-offs between energy dissipation and speed in reversible processes. We don’t know of any fundamental limits in this respect at the moment, but there may be some, and we need to understand them if so. One question is whether exploiting quantum phenomena can help. Dr. Frank is working on this at the moment. There are also practical issues involved in improving the degree of reversibility of mechanisms that we know how to design in principle, but which require a lot of advanced, high-precision engineering to get the level of efficiency we want. And there is a lot of engineering and design work to do at the level of circuits, architectures, design tools, and hardware description languages” (p. 2). See also page 1: “A lot of advanced physics and engineering is necessary for figuring out how to do reversible computing well. The goal is to create very fast, very energy-efficient systems. Currently, the closest examples are fairly rudimentary systems like simple oscillators. The transition to reversible computing won’t happen overnight, and it may take decades, even once fundamental problems are solved.”\") Dr. Christiano also mentioned challenges imposed by an inability to expend energy in order to actively set relevant physical variables into particular states: the computation needs to work for whatever state different physical variables happen to end up in.[617](https://www.openphilanthropy.org/brain-computation-report#footnote617_8m0hxz4 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “In irreversible computers, you do not need to keep track of and take into account what happens to each degree of freedom, because you are able to expend energy to reset the system to a state it needs to be in for your computation to proceed successfully. With reversible computers, however, you aren’t able to expend such energy, so what happens to any degree of freedom that could influence your computation starts to matter a lot; you can’t simply force the relevant physical variables into a particular state, so your computation needs to work for the particular state that those variables happen to be in. Given the reversibility of physics, this is a very difficult engineering challenge” (p. 
5).\")\n\n\nFor context, the energy dissipation per logical bit-erasure in current digital computers appears to be ~1e5-1e6 worse than Landauer’s limit, and progress is expected to asymptote between 1e3 and 1e5.[618](https://www.openphilanthropy.org/brain-computation-report#footnote618_rlnaax9 \"This is based primarily on eyeballing the chart presented at 4:17 in Michael Frank’s 2017 YouTube talk (Frank cites the International Roadmap of Semiconductors 2015, though I’m not sure where the specific information he’s pointing to comes from). According to Frank’s description of this chart, if you include various overhead factors that Frank suggests are extremely difficult to eliminate, we are currently dissipating around 10,000-50,000 kT per grounding of a circuit node at T=300K. The minimum energy used to switch the state of a minimum-sized transistor is smaller, between 100-1000 kT, but Frank suggests that using minimum-sized transistors is not always optimal for performance, and other overheads are in play as well. See also Frank (2018): “As the end of the semiconductor roadmap approaches, there is today a growing realization among industry leaders, researchers, funding agencies and investors that a transition to novel computing paradigms will be required in order for engineers to continue improving the energy efficiency (and thus, cost efficiency) of computing technology beyond the expected final CMOS node, when minimal transistor gate energies are expected to plateau at around the 40-80 kT level (∼ 1-2 eV at room temperature), with typical total CV2 node energies plateauing at a much higher level of around 1-2 keV” (p. 2). Hänninen et al. (2011) also note that the Landauer limit is “nearly three orders of magnitude lower than end-of-the-roadmap CMOS transistors,” (p. 2370) which is roughly where Frank’s chart forecasts the asymptote for minimum-size transistors (if we include circuit-level overhead factors, it’s another couple orders of magnitude). Jess Riedel notes that humans can, if necessary, create very special-purpose computational devices that get much closer to Landauer’s limit (this, he suggests, is what the “experimental tests” of Landauer’s limit attempt to do), but that these aren’t useful for practical, large-scale computing (see Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel, p. 3). See also this conversation with Erik DeBenedictis, who predicts 2000 kT/logic op by 2030, including interconnect wire.\") A [V100](https://www.nvidia.com/en-us/data-center/v100/) GPU, at 1e14 FLOP/s and 300W, requires ~1e9 0.69*k*T per FLOP (assuming room temperature).[619](https://www.openphilanthropy.org/brain-computation-report#footnote619_idjmmaz \"See calculation here.\") So in order to perform the logically-reversible equivalent of a FLOP for less than 0.69*k*T, you’d need a roughly billion-fold increase in energy efficiency.\n\n\nOf course, biological systems have strong incentives to reduce energy costs.[620](https://www.openphilanthropy.org/brain-computation-report#footnote620_zc5j2rl \"See Aiello’s (1997) for some discussion. From Open Philanthropy's non-verbatim notes from a conversation with Prof. David Wolpert: “Metabolic constraints are extremely important in evolutionary biology. But the field of evolutionary biology has not adequately incorporated discoveries about the energy costs of the computation. The massive energy costs of the brain ground a presumption that it has been highly optimized for thermodynamic efficiencies. 
Understanding better how the brain’s architecture balances energy costs with computational performance may lead to important breakthroughs. However, at this point we are basically clueless about how the brain’s computation works, so we can’t even state this problem precisely” (p. 3).\") And some computational processes in biology are extremely efficient.[621](https://www.openphilanthropy.org/brain-computation-report#footnote621_whosun2 \"See e.g. Kempes et al. (2017): “Here we show that the computational efficiency of translation, defined as free energy expended per amino acid operation, outperforms the best supercomputers by several orders of magnitude, and is only about an order of magnitude worse than the Landauer bound” (p. 1). Rahul Sarpeshkar, in a 2018 TED talk, suggests that cells are the most energy efficient computers that we know, and that they are already computing at an efficiency near the fundamental laws of physics (3:30-4:04). See also Laughlin et al. (1998): “Freed from heavy mechanical work, ion channels change conformation in roughly 100 μs32. In principle, therefore, a single protein molecule, switching at the rate of an ion channel with the stoichiometry of kinesin, could code at least 103 bit per second at a cost of 1 ATP per bit” (p. 39). See Sarpeshkar (2013) for more on computation in cells, and Sarpeshkar (2010) for more on the energy-efficiency of biological systems more generally: “A single cell in the body performs ~10 million energy-consuming biochemical operations per second on its noisy molecular inputs with ~1 pW of average power. Every cell implements a ~30,000 node gene-protein molecular interaction network within its confines. All the ~100 trillion cells of the human body consume ~80 W of power at rest. The average energy for an elementary energy-consuming operation in a cell is about 20kT, where kT is a unit of thermal energy. In deep submicron processes today, switching energies are nearly 104 – 105kT for just an elementary 0->1 digital switching operation. Even at 10 nm, the likely end of business-as-usual transistor scaling in the future, it is unlikely that we will be able to match such energy efficiency. Unlike traditional digital computation, biological computation is tolerant to error in elementary devices and signals. Nature illustrates that it is significantly more energy efficient to compute with error-prone devices and signals and then correct for these errors through feedback-and-learning architectures than to make every device and every signal in a system robust, as in traditional digital paradigms thus far” (p. 18-19). Bennett (1989) also suggests that “a few thermodynamically efficient data processing systems do exist, notably genetic enzymes such as RNA polymerase, which, under appropriate reactant concentrations, can transcribe information from DNA to RNA at a thermodynamic cost considerably less than kT per step” (p. 766); see also Bennett (1973): “Tape copying is a logically reversible operation, and RNA polymerase is both thermodynamically and logically reversible” (p. 532). See also Ouldridge and ten Wolde (2017), Ouldridge (2017), Sartori et al. (2014), Mehta and Schwab (2012), and Mehta et al. (2016). Though see also Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “Biology may be very energy efficient in certain cases, but Dr. Riedel still thinks it very unlikely that the efficiency of the brain’s computation is anywhere near Landauer’s limit. 
There are also likely to be other examples in which biology is extremely inefficient relative to Landauer’s principle, due to other constraints (for example, cases in which biological systems use chemical gradients involving billions of molecules to communicate ~5 bits of information). Humans can, if necessary, create very special-purpose computational devices that get close to Landauer’s limit (this is what “experimental tests” of Landauer’s limit attempt to do), and our power plants, considered as thermodynamic heat engines, are very efficient (e.g., nearing thermodynamic bounds). However, our useful, scalable computers are not remotely close to the minimal energy dissipation required by Landauer’s principle. This appears to be an extraordinarily hard engineering problem, and it’s reasonable to guess that brains haven’t solved it, even if they are very energy efficient elsewhere. ” (p. 3).\") But relative to a standard of 0.69*k*T per operation, the brain’s mechanisms generally appear highly dissipative.[622](https://www.openphilanthropy.org/brain-computation-report#footnote622_idt1o91 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Michael Frank: “In general, Dr. Frank does not see evidence that biology is attempting to do anything like what human engineers working on reversible computing are trying to do. Reversible computing is an extremely advanced tier of high-precision engineering, which we’re still struggling to figure out. Biology, by contrast, seems perfectly happy with what it can do with simple, irreversible mechanisms. … In general, most signaling mechanisms in biology are highly dissipative. For example, the biophysical processes involved in neural firing (e.g., vesicle release, action potential propagation, ion channels driving the ion concentrations to new states) dissipate lots of energy. Indeed, most of life seems to be based on strongly driven (e.g., irreversible) processes” (p. 4). From Open Philanthropy's non-verbatim notes from a conversation with Prof. David Wolpert: “Prof. Wolpert also expects that using Landauer’s principle to estimate the amount of computation performed by the brain will result in substantial overestimates. A single neuron uses very complicated physical machinery to propagate a single bit along an axon. Prof. Wolpert expects this to be very far away from theoretical limits of efficiency. That said, some computational processes in biology are very energy efficient. For example, Prof. Wolpert recently co-authored a paper on protein synthesis in ribosomes, showing that the energy efficiency of the computation is only around two orders of magnitude worse than Landauer’s bound. Prof. Wolpert expects neurons to be much less efficient than this, but he doesn’t know” (p. 4).\") For example:\n\n\n* [Laughlin et al. (1998)](https://pubmed.ncbi.nlm.nih.gov/10195106/) suggest that synapses and cells use ~1e5-1e8*k*T per bit “observed” (though I don’t have a clear sense of what the relevant notion of observation implies).[623](https://www.openphilanthropy.org/brain-computation-report#footnote623_ouzyy0b \"See Laughlin et al. (1998): “Synapses and cells are using 105 to 108 times more energy than the thermodynamic minimum. Thermal noise sets a lower limit of k · T Joules for observing a bit of information (k, Boltzmann's constant; T, absolute temperature, 290K) and the hydrolysis of one ATP molecule to ADP releases about 25 kT” (p. 39). 
Laughlin et al. (1998) also note that “At least two biophysical constraints will contribute to these systems’ costs. First, there is the uncertainty associated with molecular interactions. The stochastic nature of receptor activation (photon absorption), of molecular collision, of diffusion, and of vesicle release, degrades information by introducing noise (eqns. 1 and 7), thereby substantially increasing costs. Secondly, energy is required to distribute signals over relatively large distances. We suggest, therefore, that the high metabolic cost of information in systems is dictated by basic molecular and cellular constraints to cell signaling, as independently proposed by Sarpeshkar (see also Sarpeshkar (1997))” (p. 37).\")\n* A typical cortical spike dissipates around 1e10-1e11*k*T.[624](https://www.openphilanthropy.org/brain-computation-report#footnote624_ba11cu0 \"Lennie (2003) writes that “The aggregate cost of a spike is 2.4 × 109 ATP molecules” (p. 493), and with Laughlin et al. (1998), who write that “the hydrolysis of one ATP molecule to ADP releases about 25 kT” (p. 39) (see also discussion here). 2.4e9 × 25 = 6e10. See also Bennett (1981): “Macroscopic size also explains the poor efficiency of neurons, which dissipate about 1011 kT per discharge” (p. 907).\") Prof. David Wolpert noted that this process involves very complicated physical machinery, which he expects to be very far from theoretical limits of efficiency, being used to propagate a single bit.[625](https://www.openphilanthropy.org/brain-computation-report#footnote625_5ckpkz0 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. David Wolpert: “Prof. Wolpert also expects that using Landauer’s principle to estimate the amount of computation performed by the brain will result in substantial overestimates. A single neuron uses very complicated physical machinery to propagate a single bit along an axon. Prof. Wolpert expects this to be very far away from theoretical limits of efficiency” (p. 4).\")\n* Dr. Riedel mentioned that the nerves conveying a signal to kick your leg burn much more than 0.69*k*T per bit required to say how much to move the muscle.[626](https://www.openphilanthropy.org/brain-computation-report#footnote626_6d1dgk4 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “Presumably, we think we basically understand cases where the brain is sending very simple signals, like the signal to kick your leg. We know that the nerves involved in conveying these signals are operating in an irreversible way, and burning way more energy than the Landauer limit would say is necessary to communicate the number of bits needed to say e.g. how much to move the muscle. It seems this energy is required partly because the nerve is a big and complicated system, with many moving parts, so redundancy is necessary” (p. 3). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Jared Kaplan: “For example, a lot of synapses, not too dissimilar from synapses in the brain, are used to send information to e.g. a muscle. Those synapses are using a lot of energy, and the brain is clearly going through a lot of effort to convey the relevant information confidently” (p.
3).\")\n* A single molecule of [ATP](https://en.wikipedia.org/wiki/Adenosine_triphosphate) (the brain’s main energy currency) releases ~25*k*T,[627](https://www.openphilanthropy.org/brain-computation-report#footnote627_witb5jz \"Laughlin et al. (1998) write that “the hydrolysis of one ATP molecule to ADP releases about 25 kT” (p. 39) (see also discussion here). Sarpeshkar (2014) also mentions “20 kT per molecular operation (1 ATP molecule hydrolysed)” (section 1). Swaminathan (2008) characterize ATP as \\\"the primary source of cellular energy\\\" in rat brains; and studies of brain metabolism like Lennie (2003) use ATPs as the central basis for measuring the brain's energy budget \") and Dr. Christiano was very confident that the brain would need at least 10 ATPs to get computational mileage equivalent to a FLOP.[628](https://www.openphilanthropy.org/brain-computation-report#footnote628_ri2sirj \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “Dr. Christiano would be extremely surprised if the brain got more computational mileage out of a single ATP than human engineers can get out of a FLOP, and he would be very willing to bet that it takes at least 10 ATPs to get the equivalent of a FLOP. Mr. Carlsmith estimates that the brain can be using no more than ~1e20 ATPs/second. If this estimate is right, then Dr. Christiano is very confident that you do not need more than 1e20 FLOP/s to replicate the brain’s task-performance” (p. 5).\") At a rough maximum of ~2e20 ATPs per second,[629](https://www.openphilanthropy.org/brain-computation-report#footnote629_lciffsw \"Calculation here. This link also lists 1e-19 J per molecule, and 30-60 kJ per mole. Lennie (2003) estimates a “gross consumption of 3.4 × 1021 molecules of ATP per minute” in the cortex, and that “in the normal awake state, cortex accounts for 44% of whole brain energy consumption,” suggesting ~6e19 ATPs/s in the cortex, and ~1e20 for the brain overall.\") this would suggest <2e19 FLOP/s.\n\n\nOf course, the relevant highly-non-dissipative information-processing could be hiding somewhere we can’t see, and/or occurring in a way we don’t understand. But various experts also mentioned more general features of the brain that make it poorly suited to this, including:\n\n\n* The size of its components.[630](https://www.openphilanthropy.org/brain-computation-report#footnote630_piulllp \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Jared Kaplan: “In general, Prof. Kaplan thinks it unlikely that big, warm things are performing thermodynamically reversible computations” (p. 3). From Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “... It seems this energy is required partly because the nerve is a big and complicated system, with many moving parts, so redundancy is necessary” (p. 3). See also Bennett (1981): “Macroscopic size also explains the poor efficiency of neurons, which dissipate about 1011 kT per discharge” (p. 907).\")\n* Its warm temperature.[631](https://www.openphilanthropy.org/brain-computation-report#footnote631_sxie74g \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Jared Kaplan: “In general, Prof. Kaplan thinks it unlikely that big, warm things are performing thermodynamically reversible computations” (p. 
3).\")\n* The need to boost signals in order to contend with classical noise.[632](https://www.openphilanthropy.org/brain-computation-report#footnote632_1c7ilx2 \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Jared Kaplan: “If you’re in a regime where there is some signal to noise ratio, and you make your signal big to avoid noise, you can’t be doing something thermodynamically reversible: the noise is creating waste heat, and you’re extending your signal to get above that. Prof. Kaplan would have thought that basically all of the processes in the brain have this flavor” (p. 3). Laughlin et al. (1998) also note that “At least two biophysical constraints will contribute to these systems’ costs. First, there is the uncertainty associated with molecular interactions. The stochastic nature of receptor activation (photon absorption), of molecular collision, of diffusion, and of vesicle release, degrades information by introducing noise (eqns. 1 and 7), thereby substantially increasing costs. Secondly, energy is required to distribute signals over relatively large distances. We suggest, therefore, that the high metabolic cost of information in systems is dictated by basic molecular and cellular constraints to cell signaling, as independently proposed by Sarpeshkar (see also Sarpeshkar (1997))” (p. 37).\")\n* Its reliance on diffusion to propagate information.[633](https://www.openphilanthropy.org/brain-computation-report#footnote633_uea7gjc \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Jared Kaplan: “Processes that involve diffusion also cannot be thermodynamically reversible. Diffusion increases entropy. For example, if you take two substances and mix them together, you have increased the entropy of that system” (p. 3). From Open Philanthropy's non-verbatim notes from a conversation with Dr. Michael Frank: “One example difference is that reversible computing engineers can use inertia to propagate signals at the speed of light, with very little energy dissipation. They can also achieve similarly efficient, high-speed results by sending magnetic flux quanta through superconducting circuits. The brain, however, relies on diffusion, which cannot take advantage of such inertia” (p. 4).\")\n* The extreme difficulty of building reversible computers in general.[634](https://www.openphilanthropy.org/brain-computation-report#footnote634_l4y3bjd \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Jared Kaplan (p. 3): In general, it’s extremely difficult to build reversible computers. For example, all of the quantum computers we have are very rudimentary (quantum computers are a type of reversible computer), and it’s hard to keep them running for very long without destroying information. In order to be performing thermodynamically reversible computations, each neuron would have to have some sort of very specialized component, operating in a specialized environment crafted in order to perform the computation in a thermodynamically reversible way. It would be hard to keep this running for very long, and Prof. Kaplan doesn’t think this is happening. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel (p. 
3): Humans can, if necessary, create very special-purpose computational devices that get close to Landauer’s limit (this is what ‘experimental tests’ of Landauer’s limit attempt to do), and our power plants, considered as thermodynamic heat engines, are very efficient (e.g., nearing thermodynamic bounds). However, our useful, scalable computers are not remotely close to the minimal energy dissipation required by Landauer’s principle. This appears to be an extraordinarily hard engineering problem, and it’s reasonable to guess that brains haven’t solved it, even if they are very energy efficient elsewhere. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Michael Frank (p. 3-4): In general, Dr. Frank does not see evidence that biology is attempting to do anything like what human engineers working on reversible computing are trying to do. Reversible computing is an extremely advanced tier of high-precision engineering, which we’re still struggling to figure out. Biology, by contrast, seems perfectly happy with what it can do with simple, irreversible mechanisms. From the non-verbatim notes from my conversation with Dr. Paul Christiano (p. 5): Dr. Christiano expects that experts in physics, chemistry, and computer engineering would generally think it extremely unlikely that the brain is erasing less than one bit per computationally useful FLOP it performs. If the brain were doing this, Dr. Christiano believes that this would mean that the brain is qualitatively much more impressive than any other other biological machinery we are aware of.\")\n\n\nAll of this seems to me like fairly strong evidence for something like 1\\*.\n\n\nNote, though, that Landauer’s principle isn’t playing a very direct role here. We had intended to proceed from an estimate of the brain’s energy budget, to an upper bound on its logical bit-erasures (via Landauer’s principle), to an upper bound on the FLOP/s required to match its task performance. But hardware arguments skip the middle step, and just argue directly that you don’t need more than one FLOP per 0.69*k*T used by the brain. I think that this is probably true, but absent this middle step, 0.69*k*T doesn’t seem like a clearly privileged number to focus on.\n\n\n#### 4.3 Overall weight for the limit method\n\n\nOverall, it seems very unlikely to me that more than ~7e21 FLOP/s is required to match the brain’s task-performance. This is centrally because various experts I spoke to seemed confident about claims in the vicinity of (1), (1\\*), and (2) above; partly because those claims seem plausible to me as well; and partly because other methods generally seem to point to lower numbers.[635](https://www.openphilanthropy.org/brain-computation-report#footnote635_nus8h3c \"The FLOP/s costs of the models in Beniaguev et al. (2020), Maheswaranathan et al. (2019), and Batty et al. (2017) are the most salient exception.\")\n\n\nIndeed, lower numbers (e.g., 1e21 – ~ the maximum 8-bit irreversible FLOP/s a computer running on 20W at 310 Kelvin could perform, and 1e20 – the maximum number of required FLOP/s, assuming at least one ATP per required FLOP) seem likely to me to be overkill as well.[636](https://www.openphilanthropy.org/brain-computation-report#footnote636_eui5yec \"I don’t give much weight to the energy costs of current digital multiplier implementations, given that analog implementations may be much more efficient (see Sarpeshkar (1998) (p. 
1605)).\")\n\n\nThat said, this doesn’t seem like a case of a hard physical limit imposing a clean upper bound. Even equipped with an application of the relevant limit to the brain (various aspects of this still confuse me – see endnote), further argument is required.[637](https://www.openphilanthropy.org/brain-computation-report#footnote637_a35u76r \"A number of my confusions center on theoretical issues related to identifying the set of the computations that a physical system can be said to implement (see Piccinini (2017) for an introduction). For example, a simulation of a physical system at any level of detail is interpretable as a set of (possibly stochastic) transitions between logical states, and hence as a computation implemented by this system. In this sense, any physical system, dissipating a given amount of energy (a box of gas, a hurricane, etc.), implements an extremely complex computation that describes exactly what it in fact does or would do given different inputs. What’s more, there are broader questions about whether a given physical system can be understood as implementing any computation, given a sufficiently unnatural carving of logical states (see e.g. Aaronson (2011) (p. 23); Drescher (2006), Chapter 2, and Hemmo and Shenker (2019)). I feel very unclear about how both of these theoretical issues interact with constraints imposed by Landauer’s principle, and with estimates of the FLOP/s required to re-implement the computations in question. Indeed, note if it were possible to move easily from bit-erasures to FLOP/s, then naively applied, the Landauer argument discussed here seems to suggest that you can cap the FLOP/s required to simulate a physical system via the energy that system dissipates -- a conclusion which fits poorly with the extreme computational costs of simulating low-level physical systems like interacting molecules or proteins in lots of detail. Tom Davidson also suggested that this understanding of Landauer’s principle has the somewhat strange implication that a system that gives the same output regardless of the input would have the highest Landauer energy costs, which seems somewhat strange to me (especially if we’re allowed to interpret any set of microstates as an output state). Prof. David Wolpert suggested a number of other possible complexities in our conversation (see Open Philanthropy's non-verbatim notes from a conversation with Prof. David Wolpert (p. 3)) that I haven’t engaged with, and I expect that further investigation would uncover more.\") And indeed, the arguments that seem most persuasive to me (e.g., hardware arguments) don’t seem to rely very directly on the limit itself. Still, we should take whatever evidence we can get.\n\n\n \n\n\n5 The communication method\n--------------------------\n\n\nLet’s briefly discuss a final method (the “communication method”), which attempts to use the communication bandwidth in the brain as evidence about its computational capacity. 
I haven’t explored this much, but I think it might well be worth exploring.\n\n\nCommunication bandwidth, here, refers to the speed with which a computational system can send different amounts of information different distances.[638](https://www.openphilanthropy.org/brain-computation-report#footnote638_deto8ig \"In the context of human hardware, I’ll use the term to cover both on-chip memory bandwidth and bandwidth between chips, since brain-equivalent systems can use multiple chips; in some contexts, like a TPU, we might also include very short-distance communication taking place between ALUs. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “Across many different models of computation (e.g. Turing Machines, RAM machines, circuits, etc.), computational resources tend to fall into a number of broad categories, including: Memory (e.g., data the computer can store), Communication (roughly, the amount of information the computer can send from one part to another), Compute/number of operations. The exact meaning of these concepts varies across models, but they are often useful to work with” (p. 1). \") This is distinct from the operations per second that a system can perform (computation), but it’s just as hard a constraint on what the system can do.\n\n\nEstimating the communication bandwidth in the brain is a worthy project in its own right. But it also might help with computation estimates. This is partly because the marginal value of additional computation and communication are related (e.g., too little communication and your computational units sit idle; too few computational units and it becomes less useful to move information around).\n\n\nCan we turn this into a FLOP/s estimate? The basic form of the argument would be roughly:\n\n\n1. The profile of communication bandwidth in the brain is X.\n2. If the profile of the communication bandwidth in the brain is X, then Y FLOP/s is probably enough to match its task performance.\n\n\nI’ll discuss each premise in turn.\n\n\n \n\n\n#### 5.1 Communication in the brain\n\n\nOne approach to estimating communication in the brain would be to identify all of the mechanisms involved in it, together with the rates at which they can send different amounts of information different distances.\n\n\n* Axons are clearly a central mechanism here, and one in which a sizeable portion of the brain’s energy and volume have been invested.[639](https://www.openphilanthropy.org/brain-computation-report#footnote639_0rkmd4k \"Howarth et al. (2012), Figure 1, estimate that maintaining resting potentials uses 15% of the total energy in the cortex (20% of signaling energy in the cortex), and action potentials use 16% (21% of signaling energy). Synaptic processes account for an additional 44% (see p. 1224). Schlaepfer et al. (2006), Table 1, suggests that white matter, which largely consists of myelinated axons, is about 30% of brain volume (p. 150). See Diamond (1996) for discussion of evolutionary pressures on metabolism and brain volume (p. 757).\") There is a large literature on estimating the information communicated by action potentials.[640](https://www.openphilanthropy.org/brain-computation-report#footnote640_lumpguk \"See Dayan and Abbott (2001), Chapter 4 (p. 123-150); Zador (1998); Tsubo et al. (2012), Fuhrmann et al. (2001), Mainen and Sejnowski (1995), van Steveninck et al. 
(1997).\")\n* Dendrites also seem important, though generally at shorter distances (and at sufficiently short distances, distinctions between communication and computation may blur).[641](https://www.openphilanthropy.org/brain-computation-report#footnote641_51m5rgp \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “One can also distinguish between the bandwidth available at different distances. Axons vary in length, shorter-distance communication in neurons occurs via dendrites, and at sufficiently short distances, the distinction between communication and computation becomes blurry. For example, a multiply is in some sense mostly communication, and one can think of different processes taking place within neurons as communication as well. For longer-distance communication, though, axons seems like the brain’s primary mechanism” (p. 2).\")\n* Other mechanisms (e.g. glia, neuromodulation, ephaptic effects, blood flow – I’m less sure about gap junctions) are plausibly low-bandwidth relative to axons and dendrites.[642](https://www.openphilanthropy.org/brain-computation-report#footnote642_oc8rgq8 \"See discussion in Section 2.3. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “There are other communication mechanisms in the brain (e.g., glia, neuromodulation, ephaptic effects), but Dr. Christiano expects that these will be lower-bandwidth than axon communication” (p. 2). This point is fairly similar to ones made in Section 2.3, but the idea here is that speed limits the information these mechanism can send different distances, rather than the amount of processing of information they can perform\") If so, this would simplify the estimate. And the resources invested in axons and dendrites would make it seem somewhat strange if the brain has other, superior forms of communication available.[643](https://www.openphilanthropy.org/brain-computation-report#footnote643_d1ddinp \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “the brain invests a sizeable portion of its energy and volume into communication via axons, which would be a strange investment if it had some other, superior communication mechanism available” (p. 2).\")\n\n\nDr. Paul Christiano suggests a rough estimate of ~10 bits per spike for axon communication, and uses this to generate the bounds of ~1e9 bytes/s of long-distance communication across the brain, 1e11 bytes/s of short-distance communication (where each neuron could access ~1e7 nearby neurons), and larger amounts of very short-distance communication.[644](https://www.openphilanthropy.org/brain-computation-report#footnote644_mh2z3de \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “You can roughly estimate the bandwidth of axon communication by dividing the firing rate by the temporal resolution of spiking. Thus, for example, if the temporal precision is 1 ms, and neurons are spiking at roughly 1 Hz, then each spike would communicate ~10 bits of information (e.g., log2(1000)). If you increase the temporal precision to every microsecond, that’s only a factor of two difference (e.g., log2(1,000,000) = ~20 bits)... Roughly 1e8 axons cross the corpus callosum, and these account for a significant fraction of the length of all axons (AI Impacts has some estimates in this regard). Based on estimates Dr. 
Christiano has seen for the total length of all axons and dendrites, and the estimate that 1 spike/second = 10 bits/second across each, he thinks the following bounds are likely: 1e9 bytes/s of long-distance communication (across the brain), 1e11 bytes/s of short-distance communication (where each neuron could access about 10 million nearby neurons), and larger amounts of very-short distance communication.” (p. 2-3). See also Zhou et al. (2013): “The largest commissural tract in the human brain is the corpus callosum (CC), with more than 200 million axons connecting the two cerebral hemispheres” (p. E2714).\")\n\n\nAnother approach would be to draw analogies with metrics used to assess the communication capabilities of human computers. [AI Impacts](https://aiimpacts.org/brain-performance-in-teps/), for example, recommends the traversed edges per second (TEPS) metric, which measures the time required to perform a certain kind of search through a random graph.[645](https://www.openphilanthropy.org/brain-computation-report#footnote645_9zp4o4y \"AI Impacts: “Traversed edges per second (TEPS) is a metric that was recently developed to measure communication costs, which were seen as neglected in high performance computing.8 The TEPS benchmark measures the time required to perform a breadth-first search on a large random graph, requiring propagating information across every edge of the graph (either by accessing memory locations associated with different nodes, or communicating between different processors associated with different nodes). You can read about the benchmark in more detail at the Graph 500 site.”\") They treat neurons as vertices on the graph, synapses as edges, and spikes through synapses as traversals of edges, yielding an overall estimate of ~2e13-6e14 TEPS (the same as their estimate of the number of spikes through synapses).[646](https://www.openphilanthropy.org/brain-computation-report#footnote646_80dlca5 \"Their estimate makes a number of assumptions, including that (1) most relevant communication is between neurons (as opposed to e.g. internal to neurons); (2) that traversing an edge is relevantly similar to spiking; (3) that the distribution of edges traversed doesn’t make a material difference, and (4) that the graph characteristics are relevantly similar. I can imagine objections to (1) that focus on the possibility that important communication is taking place within dendrites (though tree structure arguments might limit the difference this makes); and objections, more generally, that focus on alternative conceptions of how many relevant “vertices” there are in the brain. \")\n\n\nI haven’t investigated either of these estimates in detail. But they’re instructive examples.\n\n\n \n\n\n#### 5.2 From communication to FLOP/s\n\n\nHow do we move from a communication profile for the brain, to an estimate of the FLOP/s sufficient to match its task performance? There are a number of possibilities.\n\n\nOne simple argument runs as follows: if you have two computers comparable on one dimension important to performance (e.g., communication), but you can’t measure how they compare on some other dimension (e.g., computation), then other things equal, your median guess should be that they are comparable on this other dimension as well.[647](https://www.openphilanthropy.org/brain-computation-report#footnote647_ir2mh5p \"Here I describe a specific version of a general type of argument suggested by Dr. Paul Christiano. 
From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “Dr. Christiano puts some weight on the following type of a priori argument: if you have two computers that are comparable on one dimension (e.g., communication), but you can’t measure how they compare along any other dimensions, then a priori your median guess should be that they are comparable on these other dimensions as well (e.g., it would be strange to have a strong view about which is better)” (p. 2). The argument described above also incorporates the constraint that the dimension in question be important to task-performance, and appeals to the skill of the engineers in question.\") Here, the assumption would be that the known dimension reflects the overall skill of the engineer, which was presumably applied to the unknown dimension as well.[648](https://www.openphilanthropy.org/brain-computation-report#footnote648_pi74pxr \"The argument appears in a different light if all you know is that e.g. both computers are green (though even there, it would seem strange to think that e.g. the one on the left is probably better than the one on the right, if you have no information to distinguish them). My thanks to Paul Christiano for discussion.\") As an analogy: if all we know is that Bob’s cheesecake crusts are about as good as Maria’s, the best median guess is that they’re comparable cheesecake chefs, and hence that his cheesecake filling is about as good as hers as well.\n\n\nOf course, we know much about brains and computers unrelated to how their communication compares. But for those drawn to simple *a priori* arguments, perhaps this sort of approach can be useful.\n\n\nUsing Dr. Christiano’s estimates, discussed above, one can imagine comparing a V100 GPU to the brain as follows:[649](https://www.openphilanthropy.org/brain-computation-report#footnote649_i1xhpcb \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “A V100 GPU has about 1e12 bytes/s of memory bandwidth on the chip (~10x the brain’s 1e11 bytes of short-distance communication, estimated above), and 3e11 bytes/s of off-chip bandwidth (~300x the brain’s 1e9 bytes/s of long-distance communication, estimated above). Dr. Christiano thinks that these memory access numbers are comparable, based on matching up the memory of a V100 (respectively, cluster of V100s) to the amount of information stored in synapses accessible by the \\\"short-distance\\\" (respectively, \\\"long-distance\\\") connections described above” (p. 4).\")\n\n\n\n\n| *METRIC* | V100 | HUMAN BRAIN |\n| --- | --- | --- |\n| Short-distance communication | 1e12 bytes/s of memory bandwidth | 1e11 bytes/s to nearby neurons? (not vetted)[650](https://www.openphilanthropy.org/brain-computation-report#footnote650_66z363e \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano (p. 2-3).\") |\n| Long-distance communication | 3e11 bytes/s of off-chip bandwidth | 1e9 bytes/s across the brain? (not vetted)[651](https://www.openphilanthropy.org/brain-computation-report#footnote651_839fkzs \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano (p. 2-3).\") |\n| Computation | 1e14 FLOP/s | ? |\n**Figure 18: Comparing the brain to a V100.**\n\nOn these estimates, the V100’s communication is at least comparable to the brain’s (indeed, it’s superior by between 10 and 300x).
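To make the arithmetic behind this comparison explicit, here is a minimal back-of-the-envelope sketch in Python. It simply restates the unvetted figures from Figure 18 (and the per-spike illustration from the footnoted notes); the variable names and the script itself are mine, not the report's or Dr. Christiano's.

```python
import math

# Per-spike information illustration (from the footnoted notes): at ~1 Hz firing
# and ~1 ms timing precision, each spike conveys very roughly log2(1000) ~= 10 bits.
firing_rate_hz = 1.0
temporal_resolution_s = 1e-3
bits_per_spike = math.log2(1 / (firing_rate_hz * temporal_resolution_s))

# Communication figures used in Figure 18 (unvetted).
brain_short_distance_bytes_s = 1e11    # to nearby neurons
brain_long_distance_bytes_s = 1e9      # across the brain
v100_memory_bandwidth_bytes_s = 1e12   # on-chip
v100_offchip_bandwidth_bytes_s = 3e11  # off-chip

print(f"bits per spike: ~{bits_per_spike:.0f}")
print(f"short-distance ratio (V100 / brain): ~{v100_memory_bandwidth_bytes_s / brain_short_distance_bytes_s:.0f}x")
print(f"long-distance ratio (V100 / brain): ~{v100_offchip_bandwidth_bytes_s / brain_long_distance_bytes_s:.0f}x")
```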
Naively, then, perhaps its computation is comparable (indeed, superior) as well.[652](https://www.openphilanthropy.org/brain-computation-report#footnote652_rt99452 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “If we knew nothing else about the brain, then, this might suggest that the brain’s computational capacity will be less than, or at least comparable to, a V100’s computational capacity (~1e14 FLOP/s) as well. And even if our compute estimates for the brain are higher, communication estimates are plausibly more robust, and they provide a different indication of how powerful the brain is relative to our computers” (p. 4).\") This would suggest **1e14 FLOP/s or less for the brain**.\n\n\nThat said, it seems like a full version of this argument would include other available modes of comparison as well (continuing the analogy above: if you also know that Maria’s jelly cheesecake toppings are much worse than Bob’s, you should take this into account too). For example, if we assume that synapse weights are the central means of storing memory in the brain,[653](https://www.openphilanthropy.org/brain-computation-report#footnote653_jrisi61 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Kate Storrs: “Dr. Storrs’ sense is that, in the parts of the field she engages with most closely (e.g., systems level modeling, visual/cognitive/perceptual modeling, human behavior), and maybe more broadly, a large majority of people treat synaptic weights as the core learned parameters in the brain. That said, she is not a neurophysiologist, and so isn’t the right person to ask about what sort of biophysical complexities could imply larger numbers of parameters. She is peripherally aware of papers suggesting that glia help store knowledge, and there are additional ideas as well. The truth probably involves mechanisms other than synaptic weights, but she believes that the consensus is that such weights hold most of the knowledge” (p. 2). Though see Trettenbrein (2016) and Langille and Brown (2018) for some complications. And see here for a long list of quotes attesting to the role of synapses in memory.\") we might get:\n\n\n\n\n| *METRIC* | V100 | HUMAN BRAIN |\n| --- | --- | --- |\n| Memory | 3e10 bytes on chip | 1e14-1e15 synapses,[654](https://www.openphilanthropy.org/brain-computation-report#footnote654_y82s4pl \"See Section 2.1.1.\") each storing >5 bits?[655](https://www.openphilanthropy.org/brain-computation-report#footnote655_yipubo2 \"Bartol et al. (2015) suggest a minimum of “4.7 bits of information at each synapse” (they don’t estimate a maximum).\") |\n| Power consumption | 300W | 20W[656](https://www.openphilanthropy.org/brain-computation-report#footnote656_lej5yi3 \"See Section 4.1.2.\") |\n**Figure 19: Comparing the brain to a V100, continued.**\n\nSo the overall comparison here becomes more complicated.
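Similarly, here is a minimal sketch of the memory and power arithmetic behind Figure 19, assuming ~1 byte per synapse (as in the accompanying footnote); the cluster-size range it produces is the one quoted in the next paragraph. The variable names are mine.

```python
# Naive brain memory estimate: 1e14-1e15 synapses at ~1 byte each.
synapse_count_low, synapse_count_high = 1e14, 1e15
bytes_per_synapse = 1

# V100 figures from Figure 19.
v100_memory_bytes = 3e10   # on-chip memory
v100_power_w, brain_power_w = 300, 20

brain_memory_low = synapse_count_low * bytes_per_synapse
brain_memory_high = synapse_count_high * bytes_per_synapse

# V100s needed just to match the brain's (naively estimated) memory.
cluster_low = brain_memory_low / v100_memory_bytes    # ~3,300
cluster_high = brain_memory_high / v100_memory_bytes  # ~33,000

print(f"V100s needed for comparable memory: ~{cluster_low:,.0f} to ~{cluster_high:,.0f}")
print(f"power ratio (V100 / brain): {v100_power_w / brain_power_w:.0f}x")
```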
The V100’s power consumption is >10x worse, and matching the brain’s memory, on this naive memory estimate, would require a cluster of ~3000-30,000 V100s, suggesting a corresponding increase in the FLOP/s attributed to the brain (memory access across the cluster would become more complex as well, and overall energy costs would increase).[657](https://www.openphilanthropy.org/brain-computation-report#footnote657_qct2u3j \"Here I’m treating a synapse weight as ~1 byte.\")\n\n\nA related approach involves attempting to identify a systematic relationship between communication and computation in human computers – a relationship that might reflect trade-offs and constraints applicable to the brain as well.[658](https://www.openphilanthropy.org/brain-computation-report#footnote658_4y0qw2a \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “In designing brains, evolution had to make trade-offs in allocating resources (e.g., energy consumption, space) to additional communication mechanisms, vs. additional mechanisms used for computation. Human engineers designing chips also have to make trade-offs in budgeting resources (energy, chip real-estate) to computation vs. communication. Equipped with an estimate of the communication profile of the brain, then, we might be able to use our knowledge of how to balance communication and computation in human computers to estimate what it would take to match the compute power of the brain, or to match its overall performance” (p. 2).\") Thus, for example, AI Impacts examines the ratio of TEPS to FLOP/s in eight top supercomputers, and finds a fairly consistent ~500-600 FLOP/s per TEPS.[659](https://www.openphilanthropy.org/brain-computation-report#footnote659_ohmcp3y \"See here: “The [eight] supercomputers measured here consistently achieve around 1-2 GTEPS per scaled TFLOPS (see Figure 3). The median ratio is 1.9 GTEPS/TFLOPS, the mean is 1.7 GTEPS/TFLOP, and the variance 0.14 GTEPS/TFLOP.\\\" However, AI Impacts notes that they only looked at data about the relationship between TEPS and FLOP/s in a small number of computers, and they have not investigated whether it makes sense to extrapolate from this data to the brain.\") Scaling up from their TEPS estimate for the brain, they get **~1e16-3e17 FLOP/s**.[660](https://www.openphilanthropy.org/brain-computation-report#footnote660_rhp4lzx \"See here: “Among a small number of computers we compared, FLOPS and TEPS seem to vary proportionally, at a rate of around 1.7 GTEPS/TFLOP. We also estimate that the human brain performs around 0.18 – 6.4 × 10^14 TEPS. Thus if the FLOPS:TEPS ratio in brains is similar to that in computers, a brain would perform around 0.9 – 33.7 × 10^16 FLOPS. We have not investigated how similar this ratio is likely to be.” 1e12/1.7e9=~600.\")\n\n\nA more sophisticated version of this approach would involve specifying a production function governing the returns on investment in marginal communication vs. computation.[661](https://www.openphilanthropy.org/brain-computation-report#footnote661_08pqa0i \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “Dr. Christiano’s approach requires some sort of production function relating the returns from investment in communication to investment in compute. Dr.
Christiano’s starting point would be something like logarithmic returns (though there aren’t really two buckets, so a more accurate model would be much messier), and he thinks that when you have two complementary quantities (say, X and Y), a 50/50 resource split between them is reasonable across a wide range of production functions. After all, a 50% allocation to X will likely give you at least 50% of the maximal value that X can provide, and halving your allocation to X will only allow you to increase your allocation to Y by 50%” (p. 3).\") This function might allow evaluation of different hypothesized combinations of communication and computation in the brain. Thus, for example, the hypothesis that the brain performs the equivalent of 1e20 FLOP/s, but has the communication profile listed in the table above, might face the objection that it assigns apparently sub-optimal design choices to evolution: e.g., in such a world, the brain would have been better served re-allocating resources invested in computation (energy, volume, etc.) to communication instead.\n\n\nAnd even if the brain were performing the equivalent of 1e20 FLOP/s (perhaps because it has access to some very efficient means of computing), such a production function might also indicate a lower FLOP/s budget sufficient, in combination with more communication than the brain can mobilize, to match the brain’s task performance overall (since there may be diminishing returns to more computation, given a fixed amount of communication).[662](https://www.openphilanthropy.org/brain-computation-report#footnote662_u3roqr6 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “Such a production function would also allow you to estimate what it would take to match the overall performance of the brain, even without matching its compute capacity. Thus, for example, it’s theoretically possible that biological systems have access to large amounts of very efficient computation. If we assume that the value of additional computation diminishes if communication is held fixed, though, then even if the brain has substantially more computation than human computers can mobilize, we might be able to match its overall performance regardless, by exceeding its communication capacity (and hence increasing the value of our marginal compute to overall performance)” (p. 3).\")\n\n\nThese are all just initial gestures at possible approaches, and efforts in this vein face a number of issues and objections, including:\n\n\n* Variation in optimal trade-offs between communication and computation across tasks.\n* Changes over time to the ratio of communication to computation in human-engineered computers.[663](https://www.openphilanthropy.org/brain-computation-report#footnote663_ci7qpit \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “One complication here is that the communication to computation ratio in human computers has changed over time. For example, traditional CPUs had less computation per unit communication than the current hardware used for AI applications, like GPUs (Dr. Christiano says that this is partly because it is easier to write software if you can operate on anything in memory rather than needing to worry about communication and parallelization). If we applied CPU-like ratios to the brain, we would get very low compute estimates. Current supercomputers, though, spend more comparable amounts of energy on communication (including within chips) and compute” (p.
3).\")\n* Differences in the constraints and trade-offs faced by human designers and evolution.\n\n\nI haven’t investigated the estimates above very much, so I don’t put much weight on them. But I think approaches in this vicinity may well be helpful.\n\n\n6 Conclusion\n------------\n\n\nI’ve discussed four different methods of generating FLOP/s budgets big enough to perform tasks as well as the human brain. Here’s a summary of the main estimates, along with the evidence/evaluation discussed:\n\n\n\n\n| **ESTIMATE** | **DESCRIPTION** | **~FLOP/S** | **SUMMARY OF EVIDENCE/EVALUATION** |\n| --- | --- | --- | --- |\n| Mechanistic method low | ~1 FLOP per spike through synapse; neuron models with costs ≤ Izhikevich spiking models run with 1 ms time-steps. | 1e13-1e15 | Simple model, and the default in the literature; some arguments suggest that models in this vein could be made adequate for task-performance without major increases in FLOP/s; these arguments are far from conclusive, but they seem **plausible** to me, and to some experts (others are more skeptical). |\n| Mechanistic method high | ~100 FLOPs per spike through synapse; neuron models with costs greater than Izhikevich models run with 1 ms time-steps, but less than single-compartment Hodgkin-Huxley run with 0.1 ms timesteps. | 1e15-1e17 | It also seems **plausible** to me that FLOP/s budgets for a fairly brain-like task-functional model would need to push into this range in order to cover e.g. learning, synaptic conductances, and dendritic computation (learning seems like an especially salient candidate here). |\n| Mechanistic method very high | Budgets suggested by more complex models – e.g., detailed biophysical models, large DNN neuron models, very FLOPs-intensive learning rules. | >1e17 | **I don’t see much strong positive evidence that you need this much**, even for fairly brain-like models, but it’s possible, and might be suggested by higher temporal resolutions, FLOP/s intensive DNN models of neuron behavior, estimates based on time-steps per variable, greater biophysical detail, larger FLOPs budgets for processes like dendritic computation/learning, and/or higher estimates of parameters like firing rate or synapse count. |\n| Scaling up the DNN from [Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf) | Example of an estimate >1e17 FLOP/s. Uses the FLOP/s for a DNN-reduction of a detailed biophysical model of a cortical neuron, scaled up by 1e11 neurons. | 1e21 | I think that this is an **interesting example** of positive evidence for very high mechanistic method estimates, as [Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf) found it necessary to use a very large model in order to get a good fit. But **I don’t give this result on its own a lot of weight**, partly because their model focuses on predicting membrane potential and individual spikes very precisely, and smaller models may prove adequate on further investigation. |\n| Mechanistic method very low | Models that don’t attempt to model every individual neuron/synapse. | <1e13 | It seems **plausible** to me that something in this range is enough, even for fairly brain-like models. Neurons display noise, redundancy, and low-dimensional behavior that suggest that modeling individual neurons/synapses might be overkill; mechanistic method estimates based on low-level components (e.g. 
transistors) substantially overestimate FLOP/s capacity in computers we actually understand; emulation imposes overheads; and the brain’s design reflects evolutionary constraints that could allow further simplification. |\n| Functional method estimate based on Moravec’s retina estimate, scaled up to whole brain | Assumes 1e9 calculations per second for the retina (100 calculations per edge/motion detection, 10 edge/motion detections per second per cell, 1e6 cells); scaled up by 1e3-1e6 (the range suggested by portion of mass, volume, neurons, synapses, and energy). | 1e12-1e15 (assuming 1 calculation ~= 1 FLOP) | The retina does a lot of things other than edge and motion detection (e.g., it anticipates motion, it can signal that a predicted stimulus is absent, it can adapt to different lighting conditions, it can suppress vision during saccades); and there are lots of differences between the retina and the brain as a whole. But the estimate, while **incomplete in its coverage of retinal function, might be instructive regardless**, as a ballpark for some central retinal operations (I haven’t vetted the numbers Moravec uses for edge/motion detection, but Prof. Barak Pearlmutter expected them to be accurate).[664](https://www.openphilanthropy.org/brain-computation-report#footnote664_7biniyh \"See Open Philanthropy's non-verbatim notes from a conversation with Prof. Barak Pearlmutter: \\\"Prof. Hans Moravec attempted to derive estimates of the computational capacity of the brain from examination of the retina. Prof. Pearlmutter thought that Moravec’s estimates for the computational costs of robotic vision were likely accurate, given Moravec’s expertise in vision\\\" (p. 3).\") |\n| Functional method estimate based on DNN models of the retina, scaled up to the whole brain | Estimates of retina FLOP/s implied by the models in [Batty et al. (2017)](https://openreview.net/pdf?id=HkEI22jeg) (1e14 FLOP/s) and [Maheswaranathan et al. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf) (1e13 FLOP/s), scaled up to the brain as a whole using the same 1e3-1e6 range above. | 1e16-1e20 | I think this is some **weak evidence for numbers higher than 1e17**, and the models themselves are still far from full replications of retinal computation. However, I’m very uncertain about what it looks like to scale these models up to the retina as a whole. And it also seems plausible to me that these models use many more FLOP/s than required to do what the retina does. For example, their costs reflect implementation choices and model sizes that haven’t yet been shown necessary, and Moravec’s estimate (even if incomplete) is much lower. |\n| Low end functional method estimate based on the visual cortex | Treats a 10 Hz EfficientNet-B2 image classifier, scaled up by 10x, as equivalent to 10% of the visual cortex’s information-processing capacity, then scales up to the whole brain based on portion of neurons (portion of synapses, volume, mass, and energy consumption might be larger, if the majority of these are in the cortex).
| 1e13-1e14 | In general, I **hold these estimates lightly**, as I feel very uncertain about what the visual cortex is doing overall and how to compare it to DNN image classifiers, as well as about the scale-up in model size that will be required to reach image classification performance as generalizable across data sets and robust to adversarial examples as human performance is (the high-end correction for this used here – 1000x – is basically just pulled out of thin air, and could be too low). That said, I do think that, to the extent it makes sense at all to estimate the % of the visual cortex’s information-processing capacity mobilized in performing a task analogous to image classification, the number should be macroscopic enough to explain the interesting parallels between the feature detection in image classifiers and in the visual cortex (see [Section 3.2](#section_3.2) for discussion). 1% of V1 seems to me reasonably conservative in this regard, especially given that CNNs trained on image classification end up as state of the art predictors of neural activity in V1 (as well as elsewhere in the visual cortex). So I take these estimates as **some weak evidence** that the mechanistic method estimates I take most seriously (e.g., 1e13-1e17) aren’t way too low. |\n| Middle-range functional method estimate based on visual cortex | Same as previous, but scales up 10 Hz EfficientNet-B2 by 100x, and treats it as equivalent to 1% of the visual cortex’s information-processing capacity. | 1e15-1e16 |\n| High end functional method estimate based on visual cortex | Same as previous, but scales up 10 Hz EfficientNet-B2 by 1000x instead, and treats it as equivalent to 1% of V1’s information-processing capacity. | 3e16-3e17 |\n| Limit method low end | Maximum 8-bit, irreversible FLOP/s that a computer running on 20W at body temperature can perform, assuming current digital multiplier implementations (~500 bit-erasures per 8-bit multiply). | 1e19 | I don’t think that a robust version of the limit method should assume that the brain’s operations are analogous to standard, irreversible FLOP/s (and especially not FLOP/s in digital computers, given that there may be more energy-efficient analog implementations available – see [Sarpeshkar (1998)](https://ieeexplore.ieee.org/document/6790538)). But it does seem broadly plausible to me that a minimal, computationally useful operation in the brain erases at least one logical bit, and very plausible that it dissipates at least 0.69*k*T (indeed, my best guess would be that it dissipates much more than that, given that cortical spikes dissipate 1e10-1e11*k*T; a single ATP releases ~25*k*T; the brain is noisy, warm, and reliant on comparatively large components, etc.). And it seems plausible, as well, that a FLOP is enough to replicate the equivalent of a minimal, computationally useful operation in the brain. Various experts (though not all) also seemed quite confident about claims in this vicinity. So overall, I do think it **very unlikely that required FLOP/s exceeds e.g. 1e21. However, I don’t think this is a case of a physical limit imposing a clean upper bound**. Rather, it seems like one set of arguments amongst others. Indeed, the arguments that seem strongest to me (e.g., arguments that appeal to the energy dissipated by the brain’s mechanisms) don’t seem to rely directly on Landauer’s principle at all. 
|\n| Limit method middle | Maximum 8-bit, irreversible FLOP/s that a computer running on 20W at body temperature can perform, assuming no intermediate bit-erasures (just a transformation from two n-bit inputs to one n-bit output). | 1e21 |\n| Limit method high | Maximum FLOP/s, assuming at least one logical bit-erasure, or at least 0.69*k*T dissipation, per required FLOP. | 7e21 |\n| ATPs | Maximum FLOP/s, assuming at least one ATP used per required FLOP. | 1e20 |\n| Communication method estimate based on comparison with V100 | Estimates brain communication capacity, compares it to a V100, and infers, from the comparability/inferiority of the brain’s communication to a V100’s, that perhaps its computational capacity is comparable/inferior as well. | ≤1e14 | **I haven’t vetted these estimates much and so don’t put much weight on them**. The main general question is whether the relationship between communication and computation in human-engineered computers provides much evidence about what to expect that relationship to be in the brain. Initial objections to comparisons to a V100, even granting the communication estimates for the brain that it’s based on, might center on complications introduced by also including memory and energy consumption in the comparison. Initial objections to relying on TEPS-FLOP/s ratios might involve the possibility that there are meaningfully more relevant “edges” in the brain than synapses, and/or “vertices” than neurons. Still, I think that **approaches in this broad vicinity may well prove helpful on further investigation**. |\n| Communication method estimate based on TEPS to FLOP/s extrapolation | Estimates brain TEPS via an analogy between spikes through synapses and traversals of an edge in a graph; then extrapolates to FLOP/s based on observed relationship between TEPS and FLOP/s in a small number of human-engineered computers. | 1e16-3e17 FLOP/s |\n**Figure 20: Summary and description of the main estimates discussed in the report.**\n\nHere are the main numbers plotted together:\n\n\n \n\n\n\n[![FLOPsBudgets5.png](https://www.openphilanthropy.org/files/Blog/FLOPsBudgets5.png)](https://www.openphilanthropy.org/files/Blog/FLOPsBudgets5.png)**Figure 1, repeated. The report’s main estimates.**\n\n\n \n\n\nNone of these numbers are direct estimates of the *minimum* possible FLOP/s budget. Rather, they are different attempts to use the brain – the only physical system we know of that performs these tasks, but far from the only possible such system – to generate some kind of adequately (but not arbitrarily) large budget. If a given method is successful, it shows that a given number of FLOP/s is *enough*, and hence, that the minimum is less than that. But it doesn’t, on its own, indicate how much less.\n\n\nCan we do anything to estimate the minimum directly, perhaps by including some sort of adjustment to one or more of these numbers? Maybe, but it’s a can of worms that I don’t want to open here, as addressing the question of where we should expect the theoretical limits of algorithmic efficiency to lie relative to these numbers (or, put another way, how many FLOP/s we should expect superintelligent aliens to use, if they were charged with replicating human-level task-performance using FLOPs) seems like a further, difficult investigation (though Dr.
Paul Christiano expected the brain to be performing at least some tasks in close to maximally efficient ways, using a substantial portion of its resources – see endnote).[665](https://www.openphilanthropy.org/brain-computation-report#footnote665_tn6tkkb \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “If you include a sufficiently broad range of tasks that the human brain can perform, and require similarly useful task-performance across the full range of inputs to which the brain could be exposed, it is likely that for at least one of the tasks in the relevant profile, for some set of inputs, the brain’s method will (a) be close to maximally algorithmically efficient (e.g., within an order of magnitude or two), and (b) use a substantial portion of the computational resources that the brain has available. For example, if you take a computer from the 60s, and you look at all of the tasks it could perform, Dr. Christiano expects that many of the algorithms it was running (for example: sorting), were close to optimally efficient. As another example, there is a very inefficient algorithm for SAT solving, which takes 2^n time. For many inputs, we can improve on this algorithm by a huge amount, but we can’t for every input: indeed, there is a rough consensus amongst computer scientists that the very inefficient algorithm is close to the best one can do. Indeed, Dr. Christiano expects that for most algorithms, there will be some family of instances on which it does reasonably well. And given how large the space of possible tasks the brain performs is (we can imagine a very wide set of evaluation metrics and input regimes), the density of roughly-optimal-on-some-inputs algorithms doesn’t need to be that high for them to appear in the brain” (p. 7).\")\n\n\n**Overall, I think it more likely than not that 1e15 FLOP/s is enough to perform tasks as well as the human brain (given the right software, which may be very hard to create). And I think it unlikely (<10%) that more than 1e21 FLOP/s is required.** That said, as emphasized above:\n\n\n* The numbers above are just very loose, back-of-the-envelope estimates.\n* I am not a neuroscientist, and there is no consensus on this topic in neuroscience (or elsewhere).\n* Basically all of my best-guesses are based on a mix of (a) shallow investigation of messy, unsettled science, and (b) a limited, non-representative sampling of expert opinion.\n\n\nMore specific probabilities require answering questions about the theoretical limits of algorithmic efficiency – questions that I haven’t investigated and that I don’t want to let overshadow the evidence actually surveyed in the report. In the [appendix](#section_7), I discuss a few narrower conceptions of the brain’s FLOP/s capacity, and offer a few more specific probabilities there, keyed to one particular type of brain model.
My current best-guess median for the FLOP/s required to run *that particular type* of model is around 1e15 (recall that none of these numbers are estimates of the FLOP/s uniquely “equivalent” to the brain).\n\n\nAs can be seen from the figure above, the FLOP/s capacities of current computers (e.g., a [V100](https://www.nvidia.com/en-us/data-center/v100/) at ~1e14 FLOP/s for ~$10,000, the [Fugaku supercomputer](https://en.wikipedia.org/wiki/Fugaku_(supercomputer)) at ~4e17 FLOP/s for ~$1 billion) cover the estimates I find most plausible.[666](https://www.openphilanthropy.org/brain-computation-report#footnote666_lkj7d2k \"See here for V100 prices (currently ~$8799); and here for the $1 billion Fugaku price tag: “The six-year budget for the system and related technology development totaled about $1 billion, compared with the $600 million price tags for the biggest planned U.S. systems.” Fugaku FLOP/s performance is listed here, at around 4e17-5e17 FLOP/s. Google’s TPU supercomputer, which recently broke records in training ML systems, can also do ~4e17 FLOP/s, though I’m not sure of the costs. See Kumar (2020): “In total, this system delivers over 430 PFLOPs of peak performance.” The A100, for ~$200,000, can do 5e15 FLOP/s -- see Mehar (2020). NVIDIA's newest SuperPOD can deliver ~7e17 FLOP/s of AI performance -- see Paikeday (2020).\") However:\n\n\n* Task-performance requires resources other than FLOP/s (for example, memory and memory bandwidth).\n* Performing tasks on a particular machine can introduce further overheads and complications.\n* Most importantly, matching the human brain’s task-performance requires *actually creating* sufficiently capable and computationally efficient AI systems, and this could be extremely (even prohibitively) difficult in practice even with computers that could run such systems in theory. Indeed, as noted above, the FLOP/s required to run a system that does X can be available even while the resources (including data) required to *train* it remain substantially out of reach. And what sorts of task-performance will result from what sorts of training is itself a further, knotty question.[667](https://www.openphilanthropy.org/brain-computation-report#footnote667_tjgxrde \"See my colleague Ajeya Cotra’s investigation, which focuses on these issues. \")\n\n\nSo even if my best-guesses are correct, this does not imply that we’ll see AI systems as capable as the human brain anytime soon.\n\n\n \n\n\n#### 6.1 Possible further investigations\n\n\nHere are a few projects that others interested in this topic might pursue (this list also doubles as a catalogue of some of my central ongoing uncertainties).\n\n\n*Mechanistic method*\n\n\n* Investigate the literature on population-level modeling and/or neural manifolds, and evaluate what sorts of FLOP/s estimates it might imply.\n* Investigate the best-understood neural circuits (for example, Prof. Eve Marder mentioned some circuits in leeches, *C. elegans*, flies, and electric fish), and what evidence they provide about the computational models adequate for task-performance.[668](https://www.openphilanthropy.org/brain-computation-report#footnote668_9ly5n8i \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Eve Marder: “There are also some circuits in leeches, C. elegans, flies, and electric fish that are relatively well-characterized” (p. 4).\")\n* Follow up on the work in [Beniaguev et al.
(2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf), testing different hypotheses about the size of deep neural networks required to fit neuron behavior with different levels of accuracy.\n* Investigate the computational requirements and biological plausibility of different proposed learning rules in the brain in more depth.\n* Investigate more deeply different possible hypotheses about molecular-level intracellular signaling processes taking place in the brain, and the FLOP/s they might imply.\n* Investigate the FLOP/s implications of non-binary forms of axon signaling in more detail.\n\n\n*Functional method*\n\n\n* Following up on work by e.g. [Batty et al. (2017)](https://openreview.net/pdf?id=HkEI22jeg) and [Maheswaranathan et al. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf), try to gather more data about the minimal artificial neural network models adequate to predict retinal spike trains across trials at different degrees of accuracy (including higher degrees of accuracy than these models currently achieve).\n* Create a version of Moravec’s retina estimate that covers a wider range of computations that the retina performs, but which still focuses on high-level tasks rather than spike trains.\n* Investigate the literature on comparisons between the feature detection in DNNs and in the visual cortex, and try to generate better quantitative estimates of the overlap and the functional method FLOP/s it would imply.\n* Based on existing image classification results, try to extrapolate to the model size required to achieve human-level robustness to adversarial examples and/or generalization across image classification data sets.\n* Investigate various other types of possible functional methods (for example, estimates based on ML systems performing speech recognition).\n\n\n*Limit method*\n\n\n* Investigate and evaluate more fleshed-out versions of algorithmic arguments.\n* Look for and evaluate examples in biology where the limit method might give the wrong answer: e.g., where a biological system is performing some sort of useful computation that would require more than a FLOP to replicate, but which dissipates less than 0.69*k*T.\n\n\n*Communication method*\n\n\n* Estimate the communication bandwidth available in the brain at different distances.\n* Investigate the trade-offs and constraints governing the relationship between communication and computation in human-engineered computers across different tasks, and evaluate the extent to which these would generalize to the brain.\n\n\n*General*\n\n\n* Gather more standardized, representative data about expert opinion on this topic.\n* Investigate what evidence work on brain-computer interfaces might provide.\n* Investigate and evaluate different methods of estimating the memory and/or number of parameters in the brain – especially ones that go beyond just counting synapses. What would e.g., neural manifolds, different models of state retention in neurons, models of biological neurons as multi-layer neural networks, dynamical models of synapses, etc., imply about memory/parameters?\n* (Ambitious) Simulate a simple organism like *C. 
elegans* at a level of detail adequate to replicate behavioral responses and internal circuit dynamics across a wide range of contexts, then see how much the simulation can be simplified.\n\n\n \n\n\n7 Appendix: Concepts of brain FLOP/s\n------------------------------------\n\n\nIt is reasonably common for people to talk about the brain’s computation/task-performance in terms of metrics like FLOP/s. It is much less common for them to say what they mean.\n\n\nWhen I first started this project, I thought that there might be some sort of clear and consensus way of understanding this kind of talk that I just hadn’t been exposed to. I now think this much less likely. Rather, I think that there are a variety of importantly different concepts in this vicinity, each implying different types of conceptual ambiguity, empirical uncertainty, and relevant evidence. These concepts are sufficiently inter-related that it can be easy to slip back and forth between them, or to treat them as equivalent. But if offering estimates, or making arguments about e.g. AI timelines using such estimates, it matters which you have in mind.\n\n\nI’ll group these concepts into four categories:\n\n\n1. [FLOP/s required for task-performance](#section_7.1), with no further constraints.\n2. [FLOP/s required for task-performance + brain-like-ness constraints](#section_7.2) (e.g., constraints on the similarity between the task-functional model and the brain’s internal dynamics).\n3. [FLOP/s required for task-performance + findability constraints](#section_7.3) (e.g., constraints on what sorts of processes would be able to create/identify the task-functional model in question).\n4. [Other analogies with human-engineered computers](#section_7.4).\n\n\nI find it useful, in thinking about these concepts, to keep the following questions in mind:\n\n\n* *Single answer*: Does this concept identify a single, well-defined number of FLOP/s?\n* *Non-arbitrariness*. Does it involve a highly arbitrary point of focus?\n* *One-FLOP-per-FLOP*: To the extent that this concept purports to represent the brain’s FLOP/s capacity, does an analogous concept, applied to a human-engineered computer, identify the number of FLOP/s that computer actually performs? E.g., applied to a [V100](https://www.thinkmate.com/product/nvidia/900-2g500-0010-000), does it pick out 1e14 FLOP/s?[669](https://www.openphilanthropy.org/brain-computation-report#footnote669_wupg9k2 \"This is a criterion suggested by Dr. Paul Christiano. From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “In thinking about conceptual standards to use in generating estimates for the FLOP/s necessary to run a task-functional model of a computational system that exhibits some degree of similarity to that system, one constraint is that when you apply your standard to digital systems that actually perform FLOPs, it ought to yield an answer of one FLOP per FLOP (e.g., your estimate for a V100, which performs ~1e14 FLOP/s, should be 1e14 FLOP/s). That is, it shouldn’t yield an estimate of the FLOPs necessary to e.g. model every transistor, or to model lower-level physical processes in transistors leading to e.g. specific patterns of mistaken bit-flips” (p. 7-8).\")\n* *Relationship to the literature*: To what extent do estimates offered in the literature on this topic (mechanistic method, functional method, etc.) 
bear on the FLOP/s this concept refers to?\n* *Relevance to AI timelines*: How relevant is this number of FLOP/s to when we should expect humans to develop AI systems that match human-level performance?\n\n\nThis appendix briefly discusses some of the pros and cons of these concepts in light of such questions, and it offers some probabilities keyed to one in particular.\n\n\n#### 7.1 No constraints\n\n\nThis report has focused on the evidence the brain provides about the FLOP/s sufficient for task-performance, with no further constraints on the models/algorithms employed in performing the tasks. I chose this point of focus centrally because:\n\n\n* Its breadth makes room for a wide variety of brain-related sources of evidence to be relevant.\n* It avoids the disadvantages and controversies implied by further constraints (see below).\n* It makes the discussion in the report more likely to be helpful to people with different assumptions and reasons for interest in the topic.\n\n\nHowever, it has two main disadvantages:\n\n\n* As noted in the report, evidence that X FLOP/s is sufficient is only indirect evidence about the minimum FLOP/s required; and the overall probability that X is sufficient depends, not just on evidence from the brain/current AI systems, but on further questions about where the theoretical limits of algorithmic efficiency are likely to lie. That said, as noted earlier, Dr. Paul Christiano expected there to be at least some tasks such that (a) the brain’s methods of performing them are close to maximally efficient, and (b) these methods use most of the brain’s resources.[670](https://www.openphilanthropy.org/brain-computation-report#footnote670_jw4u8cr \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “If you include a sufficiently broad range of tasks that the human brain can perform, and require similarly useful task-performance across the full range of inputs to which the brain could be exposed, it is likely that for at least one of the tasks in the relevant profile, for some set of inputs, the brain’s method will (a) be close to maximally algorithmically efficient (e.g., within an order of magnitude or two), and (b) use a substantial portion of the computational resources that the brain has available. For example, if you take a computer from the 60s, and you look at all of the tasks it could perform, Dr. Christiano expects that many of the algorithms it was running (for example: sorting), were close to optimally efficient. As another example, there is a very inefficient algorithm for SAT solving, which takes 2^n time. For many inputs, we can improve on this algorithm by a huge amount, but we can’t for every input: indeed, there is a rough consensus amongst computer scientists that the very inefficient algorithm is close to the best one can do. Indeed, Dr. Christiano expects that for most algorithms, there will be some family of instances on which it does reasonably well. And given how large the space of possible tasks the brain performs is (we can imagine a very wide set of evaluation metrics and input regimes), the density of roughly-optimal-on-some-inputs algorithms doesn’t need to be that high for them to appear in the brain” (p. 7).\") I haven’t investigated this, but if true, it would reduce the force of this disadvantage.\n* The relevance of *in principle* FLOP/s requirements to AI timelines is fairly indirect.
If you know that Y type of task-performance is impossible without X FLOP/s, then you know that you won’t see Y until X FLOP/s are available. But once X FLOP/s are available (as I think they probably are), the question of when you’ll see Y is still wide open. You know that superintelligent aliens could do it in theory, if forced to use only the FLOP/s your computers make available. But on its own, this gives you very little indication of when humans will do it in practice.\n\n\nIn light of these disadvantages, let’s consider a few narrower points of focus.\n\n\n\n#### 7.2 Brain-like-ness\n\n\nOne option is to require that models/algorithms employed in matching the brain’s task-performance exhibit some kind of resemblance to its internal dynamics as well. Call such requirements “brain-like-ness constraints.”\n\n\nSuch constraints restrict the set of task-functional models under consideration, and hence, to some extent, the relevance of questions about the theoretical limits of algorithmic efficiency. And they may suggest a certain type of “findability,” without building it into the definition of the models/algorithms under consideration. The brain, after all, is the product of evolution – a search and selection process whose power may be amenable to informative comparison with what we should expect the human research community to achieve.\n\n\nBut brain-likeness constraints also have disadvantages. Notably:\n\n\n* From the perspective of AI timelines, it doesn’t matter whether the AI systems in question are brain-like.\n* Functional method estimates are based on human-engineered systems that aren’t designed to meet any particular brain-like-ness constraints.\n* It’s difficult to define brain-like-ness constraints in a manner that picks out a single, privileged number of FLOP/s, without making seemingly-arbitrary choices about the type of brain-like-ness in question and/or losing the *One-FLOP-per-FLOP* criterion above.\n\n\nThis last problem seems especially salient to me. Here are some examples where it comes up.\n\n\n*Brain simulations*\n\n\nConsider the question: what’s the minimum number of FLOP/s sufficient to simulate the brain? At a minimum, it depends on what you want the simulation to do (e.g., serve as a model for drug development? teach us how the brain works? perform a given type of task?). But even if we focus on replicating task-performance, there still isn’t a single answer, because we have not specified the level of brain-like-ness required to count as a simulation of the brain, assuming task-performance stays fixed.[671](https://www.openphilanthropy.org/brain-computation-report#footnote671_b54wdai \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Rosa Cao: “Prof. Cao does not believe that there is a privileged description of the computations that the brain is performing. We can imagine many different possible computational models of the brain, which will replicate different types of behavior, to within a given error-tolerance, in a given circumstance. In order to determine which biophysical processes are important, and what level of precision and detail you need in a model, you first need to specify the particular type of input-output relationship that you care about, and how the relevant outputs need to be produced. More generally, Prof. Cao thinks that the computational paradigm in neuroscience is conceptually underspecified. 
That is, the field is insufficiently clear about what it means to talk about the computations that the brain is performing” (p. 1).\") Simulating individual molecules is presumably not required. Is replicating the division of work between hemispheres, but doing everything within the hemispheres in a maximally efficient but completely non-brain-like way, sufficient?[672](https://www.openphilanthropy.org/brain-computation-report#footnote672_wdmtpbo \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “In the case of the brain, for example, a high-level description might be something like ‘it divides the work between these two hemispheres in the following way.’ Thus, to meet the relevant standard, ‘brain-like’ computational models will only need to replicate that hemispheric division. Beyond that, they can just employ the maximally efficient way of performing the task” (p. 8).\") If so, we bring back many of the questions about the theoretical limits of algorithmic efficiency we were aiming to avoid. If not, where’s the line in between? We haven’t said.\n\n\n*“Reasonably brain-like” models*\n\n\nA similar problem arises if we employ a vaguer standard – requiring, for example, that the algorithm in question be “reasonably brain-like.” What counts? Are birds reasonably plane-like? Are the units of a DNN reasonably neuron-like? Some vagueness is inevitable, but this is, perhaps, too much.\n\n\n*Just picking a constraint*\n\n\nOne way to avoid this would be to just pick a precisely-specified type of brain-likeness to require. For example, we might require that the simulation feature neuron-like units (defined with suitable precision), a brain-like connectome, communication via binary spikes, and brain-like average firing rates, but not e.g. individual ion channels, protein dynamics, membrane potential fluctuations, etc. But why these and not others? Absent a principled answer, the choice seems arbitrary.\n\n\n*The brain’s algorithm*\n\n\nPerhaps we might appeal to the FLOP/s required to reimplement what I will call “the brain’s algorithm.” The idea, here, would be to assume that there is a single, privileged description of *how* the brain performs the tasks that it performs – a description that allows us to pick out a single, privileged number of FLOP/s required to perform those tasks *in that* way.\n\n\nWe can imagine appealing, here, to influential work by David Marr, who distinguished between three different levels of understanding applicable to an information-processing system:\n\n\n1. *The computational level*: the overall task that the system in question is trying to solve, together with the reason it is trying to solve this task.\n2. *The algorithmic level*: how the task-relevant inputs and outputs are represented in the system, together with the intermediate steps of the input-output transformation.\n3. *The implementation level*: how these representations and this algorithm are physically implemented.[673](https://www.openphilanthropy.org/brain-computation-report#footnote673_49xptcx \"See Marr (1982) (p. 25).\")\n\n\nThe report focused on level 1. But suppose we ask, instead: how many FLOP/s are required to replicate level 2? Again, the same problem arises: which departures from brain-like-ness are compatible with reimplementing the brain’s algorithm, and which are not (assuming high-level task performance remains unaffected regardless)?
I have yet to hear a criterion that seems to me an adequate answer.[674](https://www.openphilanthropy.org/brain-computation-report#footnote674_rno2ahm \"From Open Philanthropy's non-verbatim notes from a conversation with Prof. Chris Eliasmith: “There is no privileged model of the brain which can claim to be the model of how the brain performs tasks. You can’t answer someone’s question about how the brain works without knowing exactly what the question is. Nor is there a privileged level of biological detail that a model needs to include in order count as a brain model, as all models are wrong to some extent. You can, though, specify a particular set of functions that a model needs to reproduce, with a particular degree of similarity to human behavior and anatomical and physiological data. Prof. Eliasmith’s work is basically oriented towards building a brain model that satisfies constraints of this type” (p. 4). From Open Philanthropy's non-verbatim notes from a conversation with Prof. Rosa Cao: “Prof. Cao does not believe that there is a privileged description of the computations that the brain is performing. We can imagine many different possible computational models of the brain, which will replicate different types of behavior, to within a given error-tolerance, in a given circumstance. In order to determine which biophysical processes are important, and what level of precision and detail you need in a model, you first need to specify the particular type of input-output relationship that you care about, and how the relevant outputs need to be produced. More generally, Prof. Cao thinks that the computational paradigm in neuroscience is conceptually underspecified. That is, the field is insufficiently clear about what it means to talk about the computations that the brain is performing” (p. 1).\")\n\n\nNote that this problem arises even if we assume clean separations between implementation and algorithmic levels in the brain – a substantive assumption, and one that may be more applicable in the context of human-engineered computers than biological systems.[675](https://www.openphilanthropy.org/brain-computation-report#footnote675_9k50exu \"See Bell (1999), Hanson (2011), and Lee (2011) for some discussion.\") For even in human-engineered computers, there are multiple algorithmic levels. Consider someone playing Donkey Kong on an [MOS 6502](https://en.wikipedia.org/wiki/MOS_Technology_6502). How many FLOP/s do you need to reimplement the “algorithmic level” of the MOS 6502, or to play Donkey Kong “the way the MOS 6502 does it”? I don’t think there’s a single answer. Do we need to emulate individual transistors, or are logic gates enough? Can we implement the adders, or the ALU, or the high-level architecture, in a different way? A full description of how the system performs the task involves all these levels of abstraction simultaneously. 
Given a description of an algorithm (e.g., a set of states and rules for transitioning between them), we can talk about the operations required to implement it.[676](https://www.openphilanthropy.org/brain-computation-report#footnote676_iy552go \"E.g., we can talk about how many FLOP/s it takes to run an EfficientNet-B2 at 10 Hz, given a description of the model.\") But given an actual physical system operating on multiple levels of abstraction, it’s much less clear what talk about *the algorithm* it’s implementing refers to.[677](https://www.openphilanthropy.org/brain-computation-report#footnote677_hi4agpq \"See Piccinini (2017) for discussion of related issues.\")\n\n\n \n\n\n[![Jonas_and_Kording_image.png](https://www.openphilanthropy.org/files/Research/Brain_Compute/Jonas_and_Kording_image.png)](https://www.openphilanthropy.org/files/Research/Brain_Compute/Jonas_and_Kording_image.png)**Figure 21:** Levels of abstraction in a microprocessor. From [Jonas and Kording (2016)](https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1005268&type=printable), p. 5, Figure 1, unaltered, licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). Original caption: “A microprocessor is understood at all levels. **(A)** The instruction fetcher obtains the next instruction from memory. This then gets converted into electrical signals by the instruction decoder, and these signals enable and disable various internal parts of the processor, such as registers and the arithmetic logic unit (ALU). The ALU performs mathematical operations such as addition and subtraction. The results of these computations can then be written back to the registers or memory. **(B)** Within the ALU there are well-known circuits, such as this one-bit adder, which sums two one-bit signals and computes the result and a carry signal. **(C)** Each logic gate in **(B)** has a known truth table and is implemented by a small number of transistors. **(D)** A single NAND gate is comprised of transistors, each transistor having three terminals **(E)**. We know **(F)** the precise silicon layout of each transistor.”\n \n\n\n*The lowest algorithmic level*\n\n\nPerhaps we could focus on the *lowest* algorithmic level, assuming this is well-defined (or, put another way, on replicating all the algorithmic levels, assuming that the lowest structures all the rest)? One problem with this is that even if we knew that a given type of brain simulation – for example, a connectome-like network of Izhikevich spiking neurons – could be made task-functional, we wouldn’t yet know whether it captured the level in question. Are ion channels above or below the lowest algorithmic level? To many brain modelers, these questions don’t matter: if you can leave something out without affecting the behavior you care about, all the better. But focusing on the lowest-possible algorithmic level brings to the fore abstract questions about where this level lies. 
And it’s not clear, at least to me, how to answer them.[678](https://www.openphilanthropy.org/brain-computation-report#footnote678_2sei8sa \"For an example of the types of debates in this vein that do not seem to me particularly relevant or productive in this context, see here.\")\n\n\nAnother problem with focusing on the lowest algorithmic level is that, to the extent that we want a FLOP/s estimate that would be to the brain what 1e14 FLOP/s is to a V100, we’ll do poorly on the *One-FLOP-per-FLOP* criterion above: e.g., if we assume that the lowest algorithmic level in a V100 is at the level of transistors, we’ll end up budgeting many more FLOP/s for a transistor-level simulation than the 1e14 FLOP/s the V100 actually performs.[679](https://www.openphilanthropy.org/brain-computation-report#footnote679_81n6ah5 \"From Open Philanthropy's non-verbatim notes from a conversation with Dr. Paul Christiano: “Attempting to use some standard like “the description of the system you would give if you really understood how the system worked” might well result in over-estimates, since it would plausibly result in descriptions at lower levels, like transistors or NAND gates” (p. 8).\")\n\n\n*The highest algorithmic level*\n\n\nWhat about the *highest* algorithmic level? As with the lowest algorithmic level, it’s unclear where this highest level lies, and very high-level descriptions of the brain’s dynamics (analogous, e.g., to the “processor architecture” portion of the diagram above) may leave a lot of room for intuitively non-brain-like forms of efficiency (recall the “simulation” of the brain’s hemispheres discussed above). And it’s not clear that this standard passes the “one-FLOP-per-FLOP” test either: if a V100 performing some task is inefficient at some lower level of algorithmic description, then the maximally efficient way of performing that task in a manner that satisfies some higher level of description may use fewer FLOP/s than the V100 performs.\n\n\n*Nothing that doesn’t map to the brain*\n\n\nNick Beckstead suggests a brain-like-ness constraint on which the algorithm used to match the brain’s task performance must be such that (a) all of its algorithmic states map onto brain states, and (b) the transitions between these algorithmic states mirror the transitions between the corresponding brain states.[680](https://www.openphilanthropy.org/brain-computation-report#footnote680_f6pnsji \"This definition is based on the definition of when one computational method represents another offered by Knuth (1997), p. 467, problem 9. See also Sandberg and Bostrom (2008): “A strict definition of simulation might be that a system S consists of a state x(t) evolving by a particular dynamics f, influenced by inputs and producing outputs: x(t+1) = f(I,x(t)), O(t)=g(x(t)). Another system T simulates S if it produces the same output (within a tolerance) for the same input time series starting with a given state (within a tolerance): X(t+1)=F(I, X(t)), O(t)=G(X(t)) where |x(t)‐X(t)|< ε1 and X(0)=x(0)+ ε2. 
The simulation is an emulation if F=f (up to a bijective transformation of X(t)), that is, the internal dynamics is identical and similar outputs are not due to the form of G(X(t)).\\\"\") Such a constraint rules out replicating the division of work between hemispheres, but doing everything else in a maximally efficient way, because the maximally efficient way will presumably involve algorithmic states that don’t map onto brain states.\n\n\nThis constraint requires specifying the necessary accuracy of the mapping from algorithmic states to brain states (though note that defining task-performance at all requires something like this).[681](https://www.openphilanthropy.org/brain-computation-report#footnote681_kqxrzps \"See e.g. Sandberg and Bostrom (2008), who note that the brain is not strictly simulable on their definition, due to chaotic dynamics, but that “there exists a significant amount of noise in the brain that does not prevent meaningful brain states from evolving despite the indeterminacy of their dynamics. A “softer” form of emulation may be possible to define that has a model or parameter error smaller than the noise level and is hence practically indistinguishable from a possible evolution of the original system” (p. 7).\") I also worry that whether a given algorithm satisfies this constraint or not will end up depending on which operations are treated as basic (and hence immune from the requirement that the state-transitions involved in implementing them map onto the brain’s).[682](https://www.openphilanthropy.org/brain-computation-report#footnote682_770xpd7 \"E.g., whether a given method of transitioning between states in a way that doesn't map to the brain is OK or not will depend on whether this is construed as part of the “algorithm” or part of its “implementation.” But implementation itself takes place at many levels of abstraction, which can themselves be described in algorithmic terms.\") And it’s not clear to me that this definition will capture One-FLOP-per-FLOP, since it seems to require a very high degree of emulation accuracy. That said, I think something in this vicinity might turn out to work.\n\n\nMore generally, though, brain-like-ness seems only indirectly relevant to what we ultimately care about, which is task-performance itself. Can findability constraints do better?\n\n\n\n#### 7.3 Findability\n\n\nFindability constraints restrict attention to the FLOP/s required to run task-functional systems that could be identified or created via a specific type of process. Examples include task-functional systems that:\n\n\n1. humans will in fact create in the future (or, perhaps, the *first* such systems);\n2. humans would/could create, given access to a specific set of resources and/or data;\n3. would/could be identified via a specific type of training procedure – for example, a procedure akin to those used in machine learning today;\n4. could/would be found via a specified type of evolution-like search process, akin to the one that “found” the biological brain;\n5. could be created by an engineer “as good as evolution” at engineering.[683](https://www.openphilanthropy.org/brain-computation-report#footnote683_mw8jmkf \"See this post by AI impacts for a framework somewhat reminiscent of this conception, which plots indifference curves for different combinations of hardware and software sophistication. 
The post treats the brain as the point that combines “human-level hardware” and “evolution level software engineering.” But we can also imagine defining human-level hardware as the amount of hardware that someone with “evolution level software engineering skill” would need in order to create a computational system that matches human-level task performance. My thanks to Paul Christiano, Katja Grace, and Ajeya Cotra for discussion of this approach.\")\n\n\nThe central benefit of all such constraints is that they are keyed directly to what it takes to actually create a task-functional system, rather than what systems could exist in principle. This makes them more informative for the purposes of thinking about when such systems might in fact be created by humans.\n\n\nBut this is also a disadvantage, as estimates involving findability constraints require answering many additional, knotty questions about what types of systems are what kinds of findable (e.g., what sorts of research programs or training methods could result in what sorts of task performance; what types of resources and data these programs/methods would require; what would in fact result from various types of counterfactual “evolution-like” search processes, etc.).\n\n\nFindability constraints related to evolution-like search processes/engineering efforts (e.g., (4) and (5) above) are also difficult to define precisely, and they are quite alien to mainstream neuroscientific discourse. This makes them difficult to solicit expert opinion about, and harder to evaluate using evidence of the type surveyed in the report.\n\n\nMy favorite of these constraints is probably the FLOP/s that will be used by the first human-built systems to perform these tasks, since this is the most directly relevant to AI timelines. I see functional method estimates as especially relevant here, and mechanistic/limit method estimates as less so.\n\n\n\n#### 7.4 Other computer analogies\n\n\nThere are a few other options as well, which appeal to various other analogies with human-engineered computers.\n\n\n*Operations per second*\n\n\nFor example, we can imagine asking: how many operations per second does the brain perform? One problem here is that “operations” does not have a generic meaning. An operation is just an input-output relationship, implemented as part of a larger computation, and treated as basic for the purpose of a certain kind of analysis.[684](https://www.openphilanthropy.org/brain-computation-report#footnote684_msasop7 \"See discussion Schneider and Gersting (2018) (p. 96-100): “To measure time efficiency, we identify the fundamental unit (or units) of work of an algorithm and count how many times the work unit is executed” (p. 96). From Open Philanthropy's non-verbatim notes from a conversation with Dr. Jess Riedel: “In the context of a computational system, you can think of an ‘operation’ as a small computation that can be treated as atomic, at least with respect to a particular architecture” (p. 5).\") The brain implements many different such relationships at different levels of abstraction: for example, it implements many more “ion-channel opening/closing” operations per second than it does “spikes through synapses” operations.[685](https://www.openphilanthropy.org/brain-computation-report#footnote685_8zfif31 \"See e.g. Thagard (2002), who chooses to count proteins instead of neurons.\") Estimates that focus on the latter, then, need to say why they do so. You can’t just pick a thing to count, and count it.
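As a rough, purely illustrative sketch of how much this choice matters (the code and the particular parameter values below are mine, chosen as ballpark figures of the kind used later in this section, not numbers the report endorses):

```python
# Toy illustration: "operations per second in the brain" under two different
# choices of what counts as the basic operation. Ballpark, illustrative
# figures only; the point is the spread, not the particular numbers.

NUM_SYNAPSES = 1e14        # illustrative; ~1e14-1e15 synapses
AVG_FIRING_RATE_HZ = 1.0   # illustrative; average spike rates of ~0.1-1 Hz
TIMESTEP_S = 1e-3          # illustrative; 1 ms timesteps, as in many simple neuron models

# Choice 1: treat "a spike arriving at a synapse" as the basic operation.
spikes_through_synapses_per_s = NUM_SYNAPSES * AVG_FIRING_RATE_HZ   # ~1e14

# Choice 2: treat "a synapse updated once per timestep" as the basic operation.
synapse_timesteps_per_s = NUM_SYNAPSES / TIMESTEP_S                 # ~1e17

print(f"ops/s, counting spikes through synapses:      {spikes_through_synapses_per_s:.0e}")
print(f"ops/s, counting per-timestep synapse updates: {synapse_timesteps_per_s:.0e}")
# The two "operation counts" differ by roughly three orders of magnitude,
# before we even consider ion channels, dendritic computation, or learning.
```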
More importantly, our ultimate interest is in systems that run on FLOP/s and that perform tasks at human levels. To be relevant to this, then, we also need to know how many FLOP/s are sufficient to replicate one of the operations in question; and we need some reason to think that, so replicated, the resulting FLOP/s budget overall would be enough for task-performance. This amounts to something closely akin to the mechanistic method, and the same questions about the required degree of brain-like-ness apply.\n\n\n*FLOP/s it performs*\n\n\nWhat if we just asked directly: how many FLOP/s does the brain perform? Again, we need to know what is meant.\n\n\n* One possibility is that we have in mind one of the other questions above: e.g., how many FLOP/s do you need to perform some set of tasks that the brain performs, perhaps with some kind of implicit brain-like-ness constraint. This raises the problems discussed in 7.1 and 7.2 above.\n* Another possibility is that we are asking more literally: how many times per second does the brain’s biophysics implement e.g. an addition, subtraction, multiplication, or division operation of a given level of precision? In some places, we may be able to identify such implementation – for example, if synaptic transmission implements an addition operation via the postsynaptic membrane potential. In other places, though, the task-relevant dynamics in the brain may not map directly to basic arithmetic; rather, they may be more complicated, and require multiple FLOPs to capture. If we include these FLOPs (as we should, if we want the question to be relevant to the hardware requirements for advanced AI systems), we’re back to something closely akin to the mechanistic method, and to the same questions about brain-like-ness.\n\n\n*Usefulness limits*\n\n\nI’ll consider one final option, which seems to me (a) promising and (b) somewhat difficult to think about.\n\n\nSuppose you were confronted with a computer performing various tasks, programmed by a programmer of unclear skill, using operations quite dissimilar from FLOP/s. You want some way of quantifying this computer’s computational capacity in FLOP/s. How would you do it?\n\n\nAs discussed above, using the minimum FLOP/s sufficient to perform any of the tasks the computer is currently programmed to perform seems dicey: this depends on where the theoretical limits of algorithmic efficiency lie, relative to algorithms the computer is running. But suppose we ask, instead, about the minimum FLOP/s sufficient to perform any useful task that the computer *could in principle be programmed to perform*, given arbitrary programming skill. An arbitrarily skillful programmer, after all, would presumably employ maximally efficient algorithms to use this computer to its fullest capacity.\n\n\nApplied to a computer actually performing FLOP/s, this approach does well on the “One-FLOP-per-FLOP” criterion. That is, even an arbitrarily skillful programmer still cannot wring more FLOP/s out of a V100 than the computer actually performs, assuming this programmer is restricted to the computational mechanisms intended by the system’s designers. So the minimum FLOP/s sufficient to do any of the tasks that this programmer could use a V100 to perform would presumably be 1e14.\n\n\nAnd it also fits well with what we’re intuitively doing when we ask about a system’s computational capacity: that is, we’re asking how useful this system can be for computational tasks. 
For instance, if a task requires 1e17 FLOP/s, can I do it with this machine? This approach gives the answers you would get if the machine actually performed FLOP/s itself.\n\n\nCan we apply this approach to the brain? The main conceptual challenge, I think, is defining what sorts of interventions would count as “programming” the brain.[686](https://www.openphilanthropy.org/brain-computation-report#footnote686_6ii2fio \"If we construe the type of task-performance at stake in the \\\"no constraints\\\" option above as including any task the brain can perform in the sense at stake here, then the two collapse into each other. However, my sense is that when people talk about matching human-level task-performance, they generally have in mind the type of task-performance humans do in fact display, rather than the type of task-performance they could display in principle if \\\"programmed\\\" with arbitrary skill.\")\n\n\n* One option would be a restriction to external stimulation such as talking, reading, etc. The tasks in question would be the set of tasks that any human could in principle be trained to perform, given arbitrary training time/arbitrarily skilled trainers. This would be limited by the brain’s existing methods of learning.\n* Another option would be to allow direct intervention on biophysical variables in the brain. Here, the main problem would be putting limits on which variables can be intervened on, and by how much. Intuitively, we want to disallow completely remoulding the brain into a fundamentally different device, or “use” of mechanisms and variables that the brain does not currently “use” to store or process information. I think it possible that this sort of restriction can be formulated with reasonable precision, but I haven’t tried.\n\n\nOne might also object that this approach will focus attention on tasks that are overall much more difficult than the ones that we generally have in mind when we’re thinking about human-level task performance.[687](https://www.openphilanthropy.org/brain-computation-report#footnote687_umk89hh \"My thanks to Ajeya Cotra for discussion.\") I think that this is very likely true, but this seems quite compatible with using it as a concept of the brain’s FLOP/s capacity, as it seems fine (indeed, intuitive) if this concept indicates the limitations on the brain’s task performance imposed by hardware constraints alone, as opposed to other ways the system is sub-optimal.\n\n\n#### 7.5 Summing up\n\n\nHere is a summary of the various concepts I’ve discussed:\n\n\n \n\n\n\n\n\n\n\n| *CONCEPT* | *ADVANTAGES* | *DISADVANTAGES* |\n| --- | --- | --- |\n| Minimum FLOP/s sufficient to match the brain’s task-performance | Simple; broad; focuses directly on task-performance. | Existing brains and AI systems provide only indirect evidence about the theoretical limits of algorithmic efficiency; questionably relevant to the FLOP/s we should expect human engineers to actually use. 
|\n| Minimum FLOP/s sufficient to run a task-functional model that meets some brain-like-ness constraint, such as being a:* “simulation of the brain”\n* “reasonably brain-like model”\n* model with X-very specific type of brain-like-ness\n* model that captures “the algorithmic level”\n* … “the lowest algorithmic level”\n* … “the highest algorithmic level”\n* model with no states/transitions that don’t map to the brain\n | Restricted space of models makes theoretical limits of algorithmic efficiency somewhat less relevant, and neuroscientific evidence more directly relevant; connection to evolution may indicate a type of findability (without needing to include such findability in the definition). | Non-arbitrary brain-like-ness constraints are difficult to define with precision adequate to pick out a single number of FLOP/s; the systems we ultimately care about don’t need to be any particular degree of brain-like; functional method estimates are not based on systems designed to be brain-like; analogous standards, applied to a human-engineered computer, struggle to identify the FLOP/s that computer actually performs; the connection between evolutionary find-ability and specific computational models of the brain is often unclear. |\n| Minimum FLOP/s sufficient to run a task-functional model that meets some findability constraint, such as being:* the first such model humans will in fact create\n* creatable by humans using X-type of training/resources/data etc.\n* findable by X-type of hypothetical, evolution-like process\n* creatable by an engineer “as good as evolution” at engineering\n | More directly relevant to the FLOP/s costs of models that we might expect humans to create, as opposed to ones that could exist in principle. “First model humans will in fact create” seems especially relevant (and functional method estimates may provide some purchase on it). | Implicates difficult further questions about which models are what kinds of findable; findability constraints based on evolutionary hypotheticals/evolution-level engineers are also difficult to define precisely, and they are fairly alien to mainstream neuroscientific discourse – a fact which makes them difficult to solicit expert opinion about and/or evaluate using evidence of the type surveyed in the report. |\n| Other computer analogies:* “Operations per second in the brain”\n* “FLOP/s the brain performs”\n* “Minimum FLOP/s sufficient to perform any task the brain could be programmed to perform”\n | Variable. Focusing on the tasks that the brain can be “programmed” to perform does fairly well on *One-FLOP-per-FLOP*, and it fits well with what we might want a notion of “FLOP/s capacity” to do, while also side-stepping questions about the degree of algorithmic inefficiency in the brain. | In order to retain relevance to task-functional systems running on FLOP/s, “operations per second in the brain” and “FLOP/s the brain performs” seem to me to collapse back into something like the mechanistic method, and to correspondingly difficult questions about the theoretical limits of algorithmic efficiency, and/or brain-like-ness. Focusing on the tasks that the brain can be programmed to perform requires defining what interventions count as “programming” as opposed to reshaping – e.g., distinguishing between hardware and software, which is hard in the brain. |\n**Figure 22: Concepts of “brain FLOP/s”**\n\nAll these options have pros and cons. 
I don’t find any of them particularly satisfying, or obviously privileged as a way of thinking about the FLOP/s “equivalent” to the human brain. I’ve tried, in the body of the report, to use a broad framing; to avoid getting too bogged down in conceptual issues; and to survey evidence relevant to many narrower points of focus.\n\n\nThat said, it may be useful to offer some specific (though loose) probabilities for at least one of these. The point of focus I feel most familiar with is the FLOP/s required to run a task-functional model that satisfies a certain type of (somewhat arbitrary and ill-specified) brain-like-ness constraint, so I’ll offer some probabilities for that, keyed to the different mechanistic method ranges discussed above.\n\n\n**Best-guess probabilities for the minimum FLOP/s sufficient to run a task-functional model that satisfies the following conditions:**\n\n\n1. *It includes units and connections between units corresponding to each neuron and synapse in the human brain (these units can have further internal structure, and the model can include other things as well).*[688](https://www.openphilanthropy.org/brain-computation-report#footnote688_lsmybmg \"Strictly, they would need to correspond to the neurons and synapses in a particular human brain; but as I noted in Section 1.5, at the level of precision relevant to this report, I’m treating normal adult human brains as equivalent.\")\n2. *The functional role of these units and connections in task-performance is roughly similar to the functional role of the corresponding neurons and synapses in the brain.*[689](https://www.openphilanthropy.org/brain-computation-report#footnote689_p4dnrjw \"This is meant to exclude the possibility of using some other part of the model to do what is intuitively “all of the work,” but in some hyper-efficient manner.\")\n\n\nCaveats:\n\n\n* These are rough subjective probabilities offered about unsettled science. Hold them lightly.[690](https://www.openphilanthropy.org/brain-computation-report#footnote690_ycmtlwk \"In particular, despite the amount of evidence discussed in the report, I don't think of these probabilities as particularly \\\"robust.\\\" Even in the final stages of this project, they've continued to vary somewhat as I've been exposed to new evidence, and as different considerations have become more or less salient to me (for example, whether 1e15 has fallen above or below my median has varied), and I expect that they will continue to do so, especially in response to more data about expert opinion. The numbers offered here are just a coarse-grained snap-shot. I've also erred on the side of round numbers to avoid suggesting too much precision.\")\n* (2) is admittedly imprecise. 
My hope is that these numbers can be a helpful supplement to the more specific evidence surveyed in the report, but those who think the question ill-posed are free to ignore them.[691](https://www.openphilanthropy.org/brain-computation-report#footnote691_q5zcgaa \"The estimate can be seen as keyed to a concept that combines “just pick a degree of brain-like-ness” with “reasonably brain-like.” It has the disadvantages of both -- namely, arbitrariness and vagueness.\")\n* This is not an estimate of the “FLOP/s equivalent to the brain.” It’s an estimate of “the FLOP/s required to run a specific type of model of the brain.” See Sections [7.1](#section_7.1)–[7.4](#section_7.4) on why I think the concept of “the FLOP/s equivalent to the brain” is underspecified.\n* I also think it very plausible that modeling every neuron/synapse is in some sense overkill (see [Section 2.4.2](#section_2.4.2) above), even in the context of various types of brain-like-ness constraints; and even more so without them.\n* I assume access to “sparse FLOP/s,” as discussed in [Section 2.1.1.2.2](#section_2.1.1.2.2).\n\n\n\n\n| **FLOP/S RANGE** | **BEST-GUESS PROBABILITY** | **CENTRAL CONSIDERATIONS I HAVE IN MIND** |\n| --- | --- | --- |\n| <1e13 | ~15% | This is less than the estimate I’ve used for the spikes through synapses per second in the brain, so this range requires either that (a) this estimate is too high, or (b) satisfying the conditions above requires less than 1 FLOP per spike through synapse. (a) seems possible, as these parameters seem fairly unknown and I wouldn’t be that surprised if e.g. the average firing rate was <0.1 Hz, especially given the estimates in [Lennie (2003)](http://www2.bcs.rochester.edu/sites/plennie/pdfs/Lennie03a.pdf#page=3). And (b) seems quite possible as well: a single FLOP might cover multiple spikes (for example, if what matters is a firing rate encoded in multiple spikes), and in general, it might well be possible to simplify what matters about the interactions between neurons in ways that aren’t salient to me (though note that simplifications that summarize groups of neurons are ruled out by the definition of the models in question).\nThis sort of range also requires <100 FLOP/s per neuron for firing decisions, which, assuming at least 1 FLOP per firing decision, means you have to be computing firing decisions less than 100 times per second. My naive guess would’ve been that you need to do it more frequently, if a neuron is operating on e.g. 1 ms timescales, but I don’t have a great sense of the constraints here, and [Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) and Dr. Paul Christiano both seemed to think it possible to compute firing decisions less than once per timestep (see [Section 2.1.2.5](#section_2.1.2.5)).\nAnd finally, this sort of range requires that the FLOP/s required to capture the contributions of all the other processes described in the mechanistic method section (e.g., dendritic computation, learning, alternative signaling mechanisms, etc.) are <1 FLOP per spike through synapse and <100 FLOP/s per neuron. Learning seems to me like the strongest contender for requiring more than this, but maybe it’s in the noise due to slower timescales, and/or only a small factor (e.g., 2× for something akin to gradient descent methods) on top of a very low-end baseline.\nSo overall, it doesn’t seem like this range is ruled out, even assuming that we’re modeling individual neurons and synapses. 
But it requires that the FLOPs costs of everything be on the low side. And my very vague impression is that many experts (even those sympathetic to the adequacy of comparatively simple models) would think this range too low. That said, it also covers possible levels of simplification that current theories/models do not countenance. And it seems generally reasonable, in contexts with this level of uncertainty, to keep error bars (in both directions) wide. |\n| 1e13-1e15 | ~30% | This is the range that emerges from the most common type of methodology in the literature, which budgets one operation per spike through synapse, and seems to assume that (i) operations like firing decisions, which scale with the number of neurons (~1e11) rather than number of synapses (~1e14-1e15), are in the noise, and (ii) so is everything else (including learning, alternative signaling mechanisms, and so on).\nAs I discuss in [Section 2.1.2.5](#section_2.1.2.5), I think that assumption (i) is less solid if we budget FLOPs at synapses based on spike rates rather than timesteps, since the FLOPs costs of processes in a neuron could scale with timesteps per neuron per second, and timesteps are plausibly a few orders of magnitude more frequent than spikes, on average. Still, this range covers all neuron models with FLOP/s costs less than an Izhikevich spiking neuron model run with 1 ms timesteps (~1e15 FLOP/s for 1e11 neurons) – a set that includes many models in the integrate-and-fire family (run at similar temporal resolutions). So it still seems like a decent default budget for fairly simple models of neuron/synapse dynamics.\nDendritic computation and learning seem like prominent processes missing from such a basic model, so this range requires that these don’t push us beyond 1e15 FLOP/s. If we would end up on the low end of this range (or below) absent those processes, this would leave at least one or two orders of magnitude for them to add, which seems like a reasonable amount of cushion to me, given the considerations surveyed in Sections [2.1.2.2](#section_2.1.2.2) and [2.2](#section_2.2). That said, my best guess would be that we need at least a few FLOPs per spike through synapse to cover short-term synaptic plasticity, so there would need to be less than ~3e14 spikes through synapses per second to leave room for this. And the most basic type of integrate-and-fire neuron model already puts us at ~5e14 FLOP/s (assuming 1 ms timesteps), so this doesn’t leave much room for increases from dendritic computation.[692](https://www.openphilanthropy.org/brain-computation-report#footnote692_fihq3uf \"See Izhikevich (2004) (p. 1066); and the chart in Section 2.1.2.3.\")\nOverall, this range represents a simple default model that seems fairly plausible to me, despite not budgeting explicitly for these other complexities; and various experts appear to find this type of simple default persuasive.[693](https://www.openphilanthropy.org/brain-computation-report#footnote693_48mj2qy \"See endnotes in Section 2.1.2.4 for examples.\") |\n| 1e15-1e17 | ~30% | This range is similar to the last, but with an extra factor of 100x budgeted to cover various possible complexities that came up in my research. 
Specifically, assuming the number of spikes through synapses falls in the range I’ve used (1e13-1e15), it covers 100-10,000 FLOPs per spike through synapse (this would cover [Sarpeshkar’s (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) 50 FLOPs per spike through synapse for synaptic filtering and learning; along with various models of learning discussed in [Section 2.2.2](#section_2.2.2)) as well as 1e4-1e6 FLOP/s per neuron (this would cover, on the top end, single-compartment Hodgkin-Huxley models run with 0.1 ms timesteps – a level of modeling detail/complexity that I expect many computational neuroscientists to consider unnecessary).\nOverall, this range seems very plausibly adequate to me, and various experts I engaged with seemed to agree.[694](https://www.openphilanthropy.org/brain-computation-report#footnote694_i1k83xq \"See endnotes in Section 2.1.2.4.\") I’m much less confident that it’s required, but as mentioned above, my best guess is that you need at least a few FLOPs per spike through synapse to cover short-term synaptic plasticity, and plausibly more for more complex forms of learning; and it seems plausible to me that ultimately, FLOPs budgets for firing decisions (including dendritic computation) are somewhere between Izhikevich spiking neurons and Hodgkin-Huxley models. But as discussed above, lower ranges seem plausible as well. |\n| 1e17-1e21 | ~20% | As I noted in the report, I don’t see a lot of strong positive evidence that budgets this high are required. The most salient considerations for me are (a) the large FLOP/s costs of various DNN models of neuron behavior discussed in the report, which could indicate types of complexity that lower budgets do not countenance, and (b) if you budget at least one FLOP per *timestep* per synapse (as opposed to per spike through synapse), along with <1 ms timesteps, and >1e14 synapses, then you get above 1e17 FLOP/s, and it seems possible that sufficiently important and unsimplifiable changes are taking place at synapses this frequently (for example, changes involved in learning). Some experts also seem to treat “time-steps per second per variable” as a default method of generating FLOP/s estimates (and there may be many variables per synapse – see e.g. [Benna and Fusi (2016)](https://www.nature.com/articles/nn.4401)).\nBeyond this, the other central pushes in this direction I feel involve (a) the general costliness of low-level modeling of biological and chemical processes; (b) the possibility that learning and dendritic computation introduce more complexity than 1e17 FLOP/s budgets for; (c) the fact that this range covers four orders of magnitude; (d) the possibility of some other type of unknown error or mistake, not currently on my radar, that pushes required FLOP/s into this range, and (e) an expectation that a decent number of experts would give estimates in this range as well. |\n| >1e21 | ~5% | Numbers this high start to push past the upper bounds discussed in the limit method section. These bounds don’t seem airtight to me, but I feel reasonably persuaded by the hardware arguments discussed in [Section 4.2.2](#section_4.2.2) (e.g., I expect the brain to be dissipating at least a few *k*T per FLOP required to meet the conditions above, and to use at least 1 ATP, of which it has a maximum of ~1e20/s available). 
I also don’t see a lot of positive reason to go this high (though the DNN models I mentioned are one exception to this); other methods generally point to lower numbers; and some experts I spoke to were very confident that numbers in this range are substantial overkill. That said, I also put macroscopic probability on the possibility that these experts and arguments (possibly together with the broader paradigms they assume) are misguided in some way; that the conditions above, rightly understood, somehow end up requiring very large FLOP/s budgets (though this last one feels more like uncertainty about the concepts at stake in the question than uncertainty about the answer); and/or that the task-relevant causal structure in the brain is just intrinsically very difficult to replicate using FLOP/s (possibly because it draws on analog physical primitives, continuous/very fine-grained temporal dynamics, and/or complex biochemical interactions that are cheap for the brain, but very expensive to capture with FLOP/s). And in general, long tails seem appropriate in contexts with this level of uncertainty. |\n\n\n8 Sources\n---------\n\n\n\n\n| DOCUMENT | SOURCE |\n| --- | --- |\n| Aaronson (2011) | [Source](https://arxiv.org/abs/1108.1791) |\n| Abraham and Philpot (2009) | [Source](http://www.scholarpedia.org/article/Metaplasticity) |\n| Achard and De Schutter (2006) | [Source](https://pubmed.ncbi.nlm.nih.gov/16848639/) |\n| Adam (2019) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6613938/) |\n| Adams (2013) | [Source](https://lips.cs.princeton.edu/what-is-the-computational-capacity-of-the-brain/) |\n| Agarwal et al. (2017) | [Source](http://www.sciencedirect.com/science/article/pii/S0896627316310078) |\n| AI Impacts, “Brain performance in FLOPS” | [Source](https://aiimpacts.org/brain-performance-in-flops/) |\n| AI Impacts, “Brain performance in TEPS” | [Source](https://aiimpacts.org/brain-performance-in-teps/) |\n| AI Impacts, “Glial Signaling” | [Source](https://aiimpacts.org/glial-signaling/) |\n| AI Impacts, “Neuron firing rates in humans” | [Source](https://aiimpacts.org/rate-of-neuron-firing/) |\n| AI Impacts, “Scale of the Human Brain” | [Source](https://aiimpacts.org/scale-of-the-human-brain/) |\n| AI Impacts, “The cost of TEPS” | [Source](https://aiimpacts.org/cost-of-teps/#Relationship_between_TEPS_and_FLOPS) |\n| AI Impacts, “How AI timelines are estimated” | [Source](https://aiimpacts.org/how-ai-timelines-are-estimated/) |\n| Aiello (1997) | [Source](http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0100-84551997000100023) |\n| Aiello and Wheeler (1995) | [Source](https://www.jstor.org/stable/2744104) |\n| Ajay and Bhalla (2006) | [Source](https://journals.physiology.org/doi/full/10.1152/physiol.00009.2006) |\n| Alger (2002) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/12498988) |\n| Amodei and Hernandez (2018) | [Source](https://openai.com/blog/ai-and-compute/#lookingforward) |\n| Amodei et al. (2016) | [Source](http://proceedings.mlr.press/v48/amodei16.pdf) |\n| Ananthanarayanan et al. (2009) | [Source](https://people.eecs.berkeley.edu/~demmel/cs267_Spr10/Lectures/RajAnanthanarayanan_SC09-a63.pdf) |\n| Anastassiou and Koch (2015) | [Source](https://www.sciencedirect.com/science/article/abs/pii/S0959438814001809?via%3Dihub) |\n| Anastassiou et al. (2011) | [Source](https://pubmed.ncbi.nlm.nih.gov/21240273/) |\n| Andrade-Moraes et al. (2013) | [Source](https://academic.oup.com/brain/article/136/12/3738/442715) |\n| Angel et al. 
(2012) | [Source](https://pdfs.semanticscholar.org/803b/73d66fb0a940905c91ff955dc1c9963459c0.pdf) |\n| Antolík et al. (2016) | [Source](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004927) |\n| Araque and Navarrete (2010) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2894949/pdf/rstb20090313.pdf) |\n| Araque et al. (2000) | [Source](https://www.jneurosci.org/content/20/2/666) |\n| Araque et al. (2001) | [Source](https://pubmed.ncbi.nlm.nih.gov/11181976/#:~:text=Astrocytes%2C%20a%20sub%2Dtype%20of,and%20can%20modulate%20neighboring%20neurons.) |\n| Arizona Power Authority, “History of Hoover” | [Source](http://www.powerauthority.org/about-us/history-of-hoover/) |\n| Arkhipov et al. (2018) | [Source](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006535) |\n| Asadi and Navi (2007) | [Source](https://www.idosi.org/wasj/wasj2(4)/12.pdf) |\n| Aschoff et al. (1971) | [Source](https://books.google.com/books/about/Energiehaushalt_und_Temperaturregulation.html?id=00dWGwAACAAJ) |\n| Ashida et al. (2007) | [Source](https://journals.physiology.org/doi/full/10.1152/jn.00399.2006) |\n| Astrup et al. (1981a) | [Source](https://www.ahajournals.org/doi/10.1161/01.STR.12.6.726) |\n| Attwell and Laughlin (2001) | [Source](https://journals.sagepub.com/doi/pdf/10.1097/00004647-200110000-00001) |\n| Azevedo et al. (2009) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/19226510) |\n| Backyard Brains, “Experiment: Comparing Speeds of Two Nerve Fiber Sizes” | [Source](https://backyardbrains.com/experiments/comparingnervespeed) |\n| Balasubramanian and Berry (2002) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/12463343?dopt=Abstract) |\n| Balasubramanian et al. (2001) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/11255570) |\n| Baldwin and Eroglu (2017) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5573249/pdf/nihms880422.pdf) |\n| Banino et al. (2018) | [Source](https://www.nature.com/articles/s41586-018-0102-6) |\n| Barbu et al. (2019) | [Source](https://objectnet.dev/objectnet-a-large-scale-bias-controlled-dataset-for-pushing-the-limits-of-object-recognition-models.pdf) |\n| Barth and Poulet (2012) | [Source](https://www.bio.cmu.edu/labs/barth/papers/TINS_2012.pdf) |\n| Bartheld et al. (2016) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5063692/pdf/nihms799882.pdf) |\n| Bartol et al. (2015) | [Source](https://elifesciences.org/articles/10778) |\n| Bartol Jr et al. (2015) | [Source](https://elifesciences.org/articles/10778) |\n| Bartunov et al. (2018) | [Source](https://arxiv.org/pdf/1807.04587.pdf) |\n| Bashivan et al. (2019) | [Source](https://www.gwern.net/docs/ai/2019-bashivan.pdf) |\n| Batty et al. (2017) | [Source](https://openreview.net/pdf?id=HkEI22jeg) |\n| Bell (1999) | [Source](https://redwood.berkeley.edu/wp-content/uploads/2018/08/bell-levels-loops.pdf) |\n| Bengio et al. (2015) | [Source](https://arxiv.org/abs/1502.04156) |\n| Beniaguev et al. (2019) | [Source](https://www.biorxiv.org/content/biorxiv/early/2019/04/18/613141.full.pdf) |\n| Beniaguev et al. 
(2020) | [Source](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf) |\n| Benna and Fusi (2016) | [Source](https://www.nature.com/articles/nn.4401) |\n| Bennett (1973) | [Source](https://www.math.ucsd.edu/~sbuss/CourseWeb/Math268_2013W/Bennett_Reversibiity.pdf) |\n| Bennett (1981) | [Source](https://www.pitt.edu/~jdnorton/lectures/Rotman_Summer_School_2013/thermo_computing_docs/Bennett_1982.pdf) |\n| Bennett (1989) | [Source](https://epubs.siam.org/doi/abs/10.1137/0218053?casa_token=vnD0zJclKZQAAAAA%3AK7-WmLzZs0hMB9f0RLP4QxScEYJ1S5lPtVdmT6QeFfF8ND24mDbadlMU5KzhivkC372qCMTHUw&journalCode=smjcat) |\n| Bennett (2003) | [Source](https://www.cs.princeton.edu/courses/archive/fall06/cos576/papers/bennett03.pdf) |\n| Bennett and Zukin (2004) | [Source](https://www.sciencedirect.com/science/article/pii/S0896627304000431) |\n| Bennett et al. (1991) | [Source](https://www.cell.com/neuron/abstract/0896-6273(91)90241-Q) |\n| Bernardinell et al. (2004) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC522032/) |\n| Berry et al. (1999) | [Source](https://www.nature.com/articles/18678.pdf?origin=ppub) |\n| Bezzi et al. (2004) | [Source](https://www.nature.com/articles/nn1246) |\n| Bhalla (2004) | [Source](http://www.sciencedirect.com/science/article/pii/S0006349504735596) |\n| Bhalla (2014) | [Source](https://www.sciencedirect.com/science/article/abs/pii/S0959438813002171) |\n| Bi and Poo (2001) | [Source](https://pubmed.ncbi.nlm.nih.gov/11283308/) |\n| Bialowas et al. (2015) | [Source](https://pubmed.ncbi.nlm.nih.gov/25394682/) |\n| Biederman (1987) | [Source](https://psycnet.apa.org/record/1987-20898-001) |\n| Bileh et al. (2020) | [Source](https://www.cell.com/neuron/fulltext/S0896-6273(20)30067-2) |\n| Bindocci et al. (2017) | [Source](https://science.sciencemag.org/content/356/6339/eaai8185) |\n| Bischofberger et al. (2002) | [Source](https://pubmed.ncbi.nlm.nih.gov/12486151/) |\n| Blanding (2017) | [Source](https://news.vanderbilt.edu/vanderbiltmagazine/brainiac-with-her-innovative-brain-soup-suzana-herculano-houzel-is-changing-neuroscience-one-species-at-a-time/) |\n| Blinkow and Glezer (1968) | [Source](https://onlinelibrary.wiley.com/doi/abs/10.1002/ajpa.1330290327) |\n| Bliss and Lømo (1973) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1350458/pdf/jphysiol00958-0128.pdf) |\n| Bollmann et al. (2000) | [Source](https://pubmed.ncbi.nlm.nih.gov/10937999/) |\n| Bomash et al. (2013) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3544815/) |\n| Bostrom (1998) | [Source](https://nickbostrom.com/superintelligence.html) |\n| Bouhours et al. (2011) | [Source](https://www.jneurosci.org/content/31/15/5804) |\n| Bower and Beeman (1995) | [Source](https://www.amazon.com/Book-GENESIS-Exploring-Realistic-SImulations/dp/0387940197) |\n| Brain-Score | [Source](http://www.brain-score.org/) |\n| Brain-Score, “Leaderboard” | [Source](http://www.brain-score.org/#leaderboard) |\n| Brains in Silicon, “Publications” | [Source](https://web.stanford.edu/group/brainsinsilicon/pubs.html) |\n| Braitenberg and Schüz (1998) | [Source](https://link.springer.com/book/10.1007/978-3-662-03733-1) |\n| Branco, Clark, and Häusser (2010) | [Source](https://science.sciencemag.org/content/329/5999/1671/tab-article-info) |\n| Brette (2015) | [Source](https://www.frontiersin.org/articles/10.3389/fnsys.2015.00151/full) |\n| Brette and Gerstner (2005) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/16014787) |\n| Brody and Yue (2000) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/10729328) |\n| Brown et al. 
(2020) | [Source](https://arxiv.org/abs/2005.14165) |\n| Brownlee (2019a) | [Source](https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/) |\n| Brownlee (2019b) | [Source](https://machinelearningmastery.com/object-recognition-with-deep-learning/) |\n| Bruzzone et al. (1996) | [Source](https://pubmed.ncbi.nlm.nih.gov/8665925/) |\n| Bub (2002) | [Source](https://arxiv.org/pdf/quant-ph/0203017.pdf) |\n| Bucurenciu et al. (2008) | [Source](https://pubmed.ncbi.nlm.nih.gov/18304483/) |\n| Bullock et al. (1990) | [Source](https://pubmed.ncbi.nlm.nih.gov/2230933/) |\n| Bullock et al. (1994) | [Source](https://pubmed.ncbi.nlm.nih.gov/7517843/) |\n| Bullock et al. (2005) | [Source](http://utw10020.utweb.utexas.edu/djlab/pdfs/Bullocketal2005.pdf) |\n| Burgoyne and Morgan (2003) | [Source](https://pubmed.ncbi.nlm.nih.gov/12663867/) |\n| Burke (2000) | [Source](https://pubmed.ncbi.nlm.nih.gov/10946991/) |\n| Burkitt (2006) | [Source](https://pubmed.ncbi.nlm.nih.gov/16622699/#:~:text=The%20integrate%2Dand%2Dfire%20neuron%20model%20is%20one%20of%20the,injected%20current%20that%20it%20receives.) |\n| Burr et al. (1994) | [Source](https://pubmed.ncbi.nlm.nih.gov/7935763/) |\n| Burrows (1996) | [Source](https://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780198523444.001.0001/acprof-9780198523444-chapter-5) |\n| Bush et al. (2015) | [Source](https://www.cell.com/neuron/fulltext/S0896-6273(15)00628-5?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0896627315006285%3Fshowall%3Dtrue) |\n| Bushong et al. (2002) | [Source](https://www.jneurosci.org/content/22/1/183?ijkey=12959cc2fb497700c703bcaefeaf254f4a8ec157&keytype2=tf_ipsecsha) |\n| Bussler (2020) | [Source](https://towardsdatascience.com/will-gpt-3-kill-coding-630e4518c04d) |\n| Büssow (1980) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/6771013) |\n| Butt et al. (2004) | [Source](https://www.nature.com/articles/6701595) |\n| Button et al. (2013) | [Source](https://www.nature.com/articles/nrn3475) |\n| Buzaki and Mizuseki (2014) | [Source](http://www.buzsakilab.com/content/PDFs/Mizuseki2014.pdf#page=5) |\n| Cadena et al. (2017) | [Source](https://www.biorxiv.org/content/10.1101/201764v1) |\n| Cadena et al. (2019) | [Source](https://journals.plos.org/ploscompbiol/article?rev=2&id=10.1371/journal.pcbi.1006897) |\n| Cantero et al. (2018) | [Source](https://www.nature.com/articles/s41598-018-30453-2) |\n| Carandini (2012) | [Source](http://www.scholarpedia.org/article/Area_V1) |\n| Carandini et al. (2005) | [Source](https://www.jneurosci.org/content/25/46/10577?ijkey=b1aeab6b756dc871b809f168b632df61554970d5&keytype2=tf_ipsecsha) |\n| Cariani (2011) | [Source](http://www.scholarpedia.org/article/Jeffress_model) |\n| Carp (2012) | [Source](https://www.sciencedirect.com/science/article/abs/pii/S1053811912007057) |\n| Carr and Boudreau (1993b) | [Source](https://pubmed.ncbi.nlm.nih.gov/8313166/) |\n| Carr and Konishi (1990) | [Source](https://www.jneurosci.org/content/10/10/3227?ijkey=d4987df0788fd215557034462d162ed702c3cf78&keytype2=tf_ipsecsha) |\n| Castet and Masson (2000) | [Source](https://pubmed.ncbi.nlm.nih.gov/10649574/) |\n| Cell Biology By The Numbers, “How much energy is released in ATP hydrolysis?” | [Source](http://book.bionumbers.org/how-much-energy-is-released-in-atp-hydrolysis/) |\n| Cerebras, “Cerebras Wafer Scale Engine: An Introduction” | [Source](https://www.cerebras.net/wp-content/uploads/2019/08/Cerebras-Wafer-Scale-Engine-Whitepaper.pdf) |\n| Chaigneau et al. 
(2003) | [Source](https://www.pnas.org/content/100/22/13081) |\n| Chang (2019) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6312416/pdf/TJP-597-249.pdf) |\n| Cheng et al. (2018) | [Source](https://www.frontiersin.org/articles/10.3389/fnsyn.2018.00033/full) |\n| Cheramy (1981) | [Source](https://pubmed.ncbi.nlm.nih.gov/6258083/) |\n| Chiang et al. (2019) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6312416/pdf/TJP-597-249.pdf) |\n| Chong et al. (2016) | [Source](https://www.pnas.org/content/113/5/1453?__cf_chl_jschl_tk__=612d27ddc7851b49ceff9efe8dc52400d0e8e5e0-1584569029-0-AaEs1cZBOcpicm3r9lztFzIG1JqZRgJaQO1LxkUFWiGFPXr4TFBFhOiXj2CdSJTDgD05btg9OL3drZjWz3Cy5rtBERY8A8KpsNhFwaPggn6KiUFdnEdTV7X56HuwOZ2898hcDUS9n4OCRf_r1k8x7G50JrLgrbpP26AYXq6cLzcOL_ouqkPms6PhcHJR2JwfU4oq3R13nnDAIGz-nzJfVqoyMYKk9m-B5TJ2Ts7-KMdh9rghoLJqDtXDmTaTvzWg2qhnjQsjoUIV1smZ2ZTWnsvh5nD-xtlC3Zg569ZJj3Lg) |\n| Christie and Jahr (2009) | [Source](https://pubmed.ncbi.nlm.nih.gov/19759293/) |\n| Christie et al. (2011) | [Source](https://www.nature.com/articles/nn.2718) |\n| Citri and Malenka (2008) | [Source](https://www.nature.com/articles/1301559) |\n| Clark (2020) | [Source](https://www.nytimes.com/2020/06/22/technology/japanese-supercomputer-fugaku-tops-american-chinese-machines.html) |\n| Clopath (2012) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3368062/pdf/11571_2011_Article_9177.pdf) |\n| Cochran et al. (1984) | [Source](https://science.sciencemag.org/content/226/4678/1080.abstract) |\n| Collel and Fauquet (2015) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4468356/) |\n| Collins et al. (2016) | [Source](https://www.pnas.org/content/113/3/740) |\n| Compute Canada, “Technical Glossary” | [Source](https://www.computecanada.ca/research-portal/accessing-resources/glossary/#:~:text=Core%20year%3A%20The%20equivalent%20of,based%20on%20core%20year%20allocations.) |\n| Cooke and Bear (2014) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3843896/) |\n| Cooke et al. (2015) | [Source](https://www.nature.com/articles/nn.3920) |\n| Crick (1984) | [Source](https://pubmed.ncbi.nlm.nih.gov/6589612/) |\n| Crick (1989) | [Source](https://pubmed.ncbi.nlm.nih.gov/2911347/) |\n| Critch (2016) | [Source](http://acritch.com/credence/) |\n| Cudmore and Desai (2008) | [Source](http://www.scholarpedia.org/article/Intrinsic_plasticity) |\n| Cueva and Wei (2018) | [Source](https://arxiv.org/pdf/1803.07770.pdf) |\n| Dalrymple (2011) | [Source](https://www.lesswrong.com/posts/XhHetxjWxZ6b85HK9/whole-brain-emulation-looking-at-progress-on-c-elgans?commentId=wwwhhRufNfuNTSmQy) |\n| Daniel et al. (2013) | [Source](https://www.nature.com/articles/nature12148) |\n| Dayan and Abbott (2001) | [Source](https://www.amazon.com/Theoretical-Neuroscience-Computational-Mathematical-Modeling/dp/0262541858) |\n| De Castro (2013) | [Source](https://link.springer.com/article/10.1007/s11023-013-9302-x) |\n| de Faria, Jr. et al. (2019) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6587454/) |\n| Deans et al. (2007) | [Source](https://pubmed.ncbi.nlm.nih.gov/17599962/) |\n| Debanne et al. (2013) | [Source](https://pubmed.ncbi.nlm.nih.gov/23187813/) |\n| Deli et al. (2017) | [Source](https://vixra.org/pdf/1710.0168v1.pdf) |\n| Deneve et al. (2001) | [Source](https://pubmed.ncbi.nlm.nih.gov/11477429/) |\n| Dermietzel et al. (1989) | [Source](https://pubmed.ncbi.nlm.nih.gov/2557621/) |\n| Dettmers (2015) | [Source](https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/) |\n| Di Castro et al. 
(2011) | [Source](https://pubmed.ncbi.nlm.nih.gov/21909085/) |\n| Diamond (1996) | [Source](https://www.nature.com/articles/382756a0.pdf) |\n| Dix (2005) | [Source](https://alandix.com/academic/papers/brain-and-web-2005/) |\n| Dongerra et al. (2003) | [Source](https://www.netlib.org/utk/people/JackDongarra/PAPERS/146_2003_the-linpack-benchmark-past-present-and-future.pdf) |\n| Doose et al. (2016) | [Source](https://www.jneurosci.org/content/36/43/11120) |\n| Doron et al. (2017) | [Source](https://www.cell.com/cell-reports/pdf/S2211-1247(17)31467-5.pdf) |\n| Dowling (2007) | [Source](http://www.scholarpedia.org/article/Retina) |\n| Drescher (2006) | [Source](https://www.gwern.net/docs/statistics/decision/2006-drescher-goodandreal.pdf) |\n| Drexler (2019) | [Source](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf) |\n| Dreyfus (1972) | [Source](https://www.amazon.com/What-Computers-Cant-Artificial-Intelligence/dp/0060906138) |\n| Dugladze et al. (2012) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/22700932) |\n| Dunn et al. (2005) | [Source](https://pubmed.ncbi.nlm.nih.gov/15925522/) |\n| Earman and Norton (1998) | [Source](https://www.sciencedirect.com/science/article/pii/S1355219898000239) |\n| Einevoll et al. (2015) | [Source](https://arxiv.org/pdf/1906.06189.pdf) |\n| Eliasmith (2013) | [Source](https://www.amazon.com/How-Build-Brain-Architecture-Architectures/dp/0190262125) |\n| Eliasmith et al. (2012) | [Source](https://science.sciencemag.org/content/338/6111/1202.abstract) |\n| Elliott (2011) | [Source](https://www.mitpressjournals.org/doi/abs/10.1162/NECO_a_00088) |\n| Elsayed et al. (2018) | [Source](http://papers.nips.cc/paper/7647-adversarial-examples-that-fool-both-computer-vision-and-time-limited-humans.pdf) |\n| Engl and Attwell (2015) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4560575/pdf/tjp0593-3417.pdf) |\n| Enoki et al. (2009) | [Source](https://www.cell.com/neuron/fulltext/S0896-6273(09)00204-9?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0896627309002049%3Fshowall%3Dtrue) |\n| Erdem and Hasselmo (2012) | [Source](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1460-9568.2012.08015.x) |\n| Fain et al. (2001) | [Source](https://pubmed.ncbi.nlm.nih.gov/11152756/) |\n| Faisal (2012) | [Source](https://www.semanticscholar.org/paper/Noise-in-Neurons-and-Other-Constraints-Faisal/e0cb8d65ef6ea5c69d79c99505c49ee73c81430f) |\n| Faisal et al. (2008) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2631351/) |\n| Faria et al. (2019) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6587454/) |\n| Fathom Computing | [Source](https://www.fathomcomputing.com/) |\n| Fedchyshyn and Wang (2005) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/15843616) |\n| Feyman (1996) | [Source](https://www.amazon.com/Feynman-Lectures-Computation-Frontiers-Physics/dp/0738202967) |\n| Fiete et al. (2008) | [Source](https://www.jneurosci.org/content/28/27/6858) |\n| Fischer et al. 
(2008) | [Source](https://www.jneurosci.org/content/28/32/8107?ijkey=2efd2f61d0209fa1aa537f664eec72bfdd4028bc&keytype2=tf_ipsecsha) |\n| Fisher (2015) | [Source](https://arxiv.org/pdf/1508.05929.pdf) |\n| Fortune and Rose (2001) | [Source](http://www.sciencedirect.com/science/article/pii/S016622360001835X) |\n| Fotowat (2010) | [Source](https://www.researchgate.net/profile/Haleh_Fotowat/publication/50362225_Collision_Detection_as_a_Model_for_Sensory-Motor_Integration/links/00b4953a9717f744e2000000.pdf) |\n| Fotowat and Gabbiani (2011) | [Source](https://pdfs.semanticscholar.org/325b/cd539461767e59149bca2803059c89b30d3d.pdf) |\n| Francis et al. (2003) | [Source](https://pubmed.ncbi.nlm.nih.gov/12917358/) |\n| Frank (2018) | [Source](https://arxiv.org/pdf/1901.10327.pdf) |\n| Frank and Ammer (2001) | [Source](http://www.eng.fsu.edu/~mpf/revsep.pdf) |\n| Frankle and Carbin (2018) | [Source](https://arxiv.org/abs/1803.03635) |\n| Fredkin and Toffoli (1982) | [Source](https://link.springer.com/article/10.1007/BF01857727) |\n| Freitas (1996) | [Source](http://www.rfreitas.com/Nano/TheFutureOfComputers--Analog--March1996.htm) |\n| Friston (2010) | [Source](https://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20principle%20A%20unified%20brain%20theory.pdf) |\n| Fröhlich and McCormick (2010) | [Source](https://pubmed.ncbi.nlm.nih.gov/20624597/) |\n| Fuhrmann et al. (2001) | [Source](https://lobster.ls.huji.ac.il//idan/files/Fuhrmann_etal_2002.pdf) |\n| Funabiki et al. (1998) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2230923/) |\n| Funabiki et al. (2011) | [Source](https://www.jneurosci.org/content/31/43/15245) |\n| Funke et al. (2020) | [Source](https://arxiv.org/pdf/2004.09406.pdf) |\n| Fusi and Abbott (2007) | [Source](https://pubmed.ncbi.nlm.nih.gov/17351638/) |\n| Future of Life, “Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI” | [Source](https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/) |\n| Gütig and Sompolinsky (2006) | [Source](https://www.nature.com/articles/nn1643) |\n| Gabbiani et al. (2002) | [Source](https://www.nature.com/articles/nature01190.pdf) |\n| Gallant et al. (1993) | [Source](https://pubmed.ncbi.nlm.nih.gov/8418487/) |\n| Gallant et al. (1996) | [Source](https://pubmed.ncbi.nlm.nih.gov/8899641/) |\n| Gallego et al. (2017) | [Source](https://pubmed.ncbi.nlm.nih.gov/28595054/) |\n| Gardner‐Medwin (1983) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1197360/) |\n| Garg (2015) | [Source](http://www.neuwritewest.org/blog/2015/1/3/ask-a-neuroscientist-whats-a-spike-train) |\n| Garis et al. (2010) | [Source](https://www.sciencedirect.com/science/article/abs/pii/S0925231210003279?via%3Dihub) |\n| Gatys et al. (2015) | [Source](https://arxiv.org/pdf/1508.06576.pdf) |\n| Geiger and Jonas (2000) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/11163277) |\n| Geirhos et al. (2018) | [Source](https://arxiv.org/pdf/1706.06969.pdf) |\n| Geirhos et al. (2020) | [Source](https://arxiv.org/pdf/2004.07780.pdf) |\n| Gelal et al. (2016) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4888693/) |\n| Georgopoulos et al. (1986) | [Source](https://pubmed.ncbi.nlm.nih.gov/3749885/) |\n| Gerstner and Naud (2009) | [Source](https://science.sciencemag.org/content/326/5951/379.long) |\n| Gerstner et al. 
(2018) | [Source](https://www.frontiersin.org/articles/10.3389/fncir.2018.00053/full) |\n| Get Body Smart, “Visual Cortex Areas” | [Source](https://www.getbodysmart.com/the-brain/visual-cortex-areas) |\n| Ghanbari et al. (2017) | [Source](https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1005738&type=printable) |\n| Giaume (2010) | [Source](https://www.frontiersin.org/articles/10.3389/fnene.2010.00129/full) |\n| Giaume et al. (2010) | [Source](https://www.nature.com/articles/nrn2757/) |\n| Gidon et al. (2020) | [Source](https://science.sciencemag.org/content/367/6473/83.long) |\n| Gilbert (2013) | [Source](https://neurology.mhmedical.com/content.aspx?bookid=1049§ionid=59138653) |\n| GitHub, “convnet-burden” | [Source](https://github.com/albanie/convnet-burden) |\n| GitHub, “neuron\\_as\\_deep\\_net” | [Source](https://github.com/SelfishGene/neuron_as_deep_net/blob/master/fit_CNN.py) |\n| GitHub, “Report for resnet-101” | [Source](https://github.com/albanie/convnet-burden/blob/master/reports/resnet-101.md) |\n| GitHub, “Report for SE-ResNet-152” | [Source](https://github.com/albanie/convnet-burden/blob/master/reports/SE-ResNet-152.md) |\n| Gittis et al. (2010) | [Source](https://pubmed.ncbi.nlm.nih.gov/20592126/) |\n| Goldman et al. (2001) | [Source](https://pubmed.ncbi.nlm.nih.gov/11438598/) |\n| Gollisch and Meister (2008) | [Source](https://www.cell.com/iscience/fulltext/S2589-0042(18)30064-6?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS2589004218300646%3Fshowall%3Dtrue#) |\n| Gollisch and Meister (2010) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3717333/pdf/nihms488912.pdf) |\n| Goodenough et al. (1996) | [Source](https://pubmed.ncbi.nlm.nih.gov/8811187/) |\n| Google Cloud, “Tensor Processing Unit” | [Source](https://storage.googleapis.com/nexttpu/index.html) |\n| Grace et al. (2018) | [Source](https://arxiv.org/pdf/1705.08807.pdf) |\n| Graph 500 | [Source](https://graph500.org/) |\n| Graubard et al. (1980) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC349693/pdf/pnas00493-0675.pdf) |\n| Green and Swets (1966) | [Source](https://www.amazon.com/Signal-Detection-Theory-Psychophysics-Marvin/dp/0932146236) |\n| Greenberg and Ziff (1984) | [Source](https://www.nature.com/articles/311433a0) |\n| Greenberg et al. (1985) | [Source](https://www.jbc.org/content/260/26/14101.short) |\n| Greenberg et al. (1986) | [Source](https://science.sciencemag.org/content/234/4772/80.abstract) |\n| Greenemeier (2009) | [Source](https://blogs.scientificamerican.com/news-blog/computers-have-a-lot-to-learn-from-2009-03-10/) |\n| Greydanus (2017) | [Source](https://arxiv.org/abs/1711.00138) |\n| Gross (2008) | [Source](http://www.scholarpedia.org/article/Inferior_temporal_cortex) |\n| Grossberg (1987) | [Source](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1551-6708.1987.tb00862.x) |\n| Grutzendler et al. (2002) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/12490949) |\n| Guerguiev et al. (2017) | [Source](https://elifesciences.org/articles/22901) |\n| Guo et al. (2014) | [Source](http://www.dl.begellhouse.com/journals/4b27cbfc562e21b8,64a3e6f7290a8a6e,64cd0e236c1f5579.html) |\n| Guthrie et al. (1999) | [Source](https://www.jneurosci.org/content/19/2/520) |\n| Hänninen and Takala (2010) | [Source](https://ieeexplore.ieee.org/document/5697744) |\n| Hänninen et al. (2011) | [Source](https://www3.nd.edu/~lent/pdf/nd/IrreversibleBitErasuresHanninenLent2011.pdf) |\n| Hafting et al. 
(2005) | [Source](https://www.nature.com/articles/nature03721) |\n| Halassa et al. (2007b) | [Source](https://www.jneurosci.org/content/27/24/6473) |\n| Halassa et al. (2009) | [Source](https://www.pnas.org/content/106/35/15037) |\n| Hamilton (2015) | [Source](https://www.npr.org/sections/health-shots/2015/03/16/392789753/a-man-s-incomplete-brain-reveals-cerebellum-s-role-in-thought-and-emotion) |\n| Hamzelou (2020) | [Source](https://www.newscientist.com/article/mg24532693-800-teen-born-without-half-her-brain-has-above-average-reading-skills/) |\n| Hansel et al. (1998) | [Source](https://www.mitpressjournals.org/doi/10.1162/089976698300017845) |\n| Hanson (2011) | [Source](https://www.overcomingbias.com/2011/01/signal-processors-decouple.html) |\n| Hanson (2016) | [Source](https://www.amazon.com/Age-Em-Work-Robots-Earth/dp/1536619590) |\n| Hanson et al. (2019) | [Source](https://elifesciences.org/articles/42392) |\n| Harris (2008) | [Source](https://pubmed.ncbi.nlm.nih.gov/18255165/) |\n| Harris and Attwell (2012) | [Source](https://www.jneurosci.org/content/32/1/356) |\n| Hasenstaub et al. (2010) | [Source](https://www.pnas.org/content/107/27/12329) |\n| Hassabis et al. (2017) | [Source](https://www.cell.com/neuron/pdf/S0896-6273(17)30509-3.pdf) |\n| Haug (1986) | [Source](https://pubmed.ncbi.nlm.nih.gov/3540464/) |\n| Hay et al. (2011) | [Source](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002107) |\n| Hayworth (2019) | [Source](https://www.brainpreservation.org/quotes-on-synaptic-encoding-of-memory/) |\n| He et al. (2002) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/11826057) |\n| Héja et al. (2009) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2744931/) |\n| Hemmo and Shenker (2019) | [Source](https://philpapers.org/rec/HEMTPO-7) |\n| Hendricks et al. (2020) | [Source](https://arxiv.org/pdf/1907.07174.pdf) |\n| Henneberger et al. (2010) | [Source](https://www.nature.com/articles/nature08673) |\n| Herculano-Houzel (2009) | [Source](https://www.frontiersin.org/articles/10.3389/neuro.09.031.2009/full#:~:text=Cognitive%20Abilities%2C%20Brain%20Size%20and,is%20unremarkable%20in%20its%20capabilities.) |\n| Herculano-Houzel and Lent (2005) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6725175/pdf/00252518.pdf) |\n| Herz et al. (2006) | [Source](http://www.ini.ethz.ch/~cwang/ModelingSingleNeuron.pdf) |\n| Hess et al. (2000) | [Source](https://www.jneurosci.org/content/20/9/3328.short) |\n| Hines and Carnevale (1997) | [Source](https://pubmed.ncbi.nlm.nih.gov/9248061/) |\n| Hinton (2011) | [Source](https://www.cs.toronto.edu/~hinton/backpropincortex2014.pdf) |\n| Hinton et al. (2006) | [Source](https://elifesciences.org/articles/22901#bib23) |\n| Hochberg (2012) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3640850/pdf/nihms366580.pdf) |\n| Hoffmann and Pfeifer (2012) | [Source](https://arxiv.org/pdf/1202.0440.pdf) |\n| Hollemans (2018) | [Source](https://machinethink.net/blog/how-fast-is-my-model/) |\n| Holtmaat et al. (2005) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/15664179) |\n| Hood (1998) | [Source](https://europepmc.org/article/med/9496631) |\n| Hoppensteadt and Izhikevich (2001) | [Source](https://www.izhikevich.org/publications/arbib.pdf) |\n| Hossain et al. (2018) | [Source](https://arxiv.org/pdf/1810.04020.pdf) |\n| Howarth et al. (2010) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/19888288/) |\n| Howarth et al. 
(2012) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3390818/pdf/jcbfm201235a.pdf) |\n| Howell et al. (2000) | [Source](https://www.researchgate.net/publication/220549289_A_large-scale_model_of_the_cerebellar_cortex_using_PGENESIS) |\n| Hu and Wu (2004) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/15325008) |\n| Huang and Neher (1996) | [Source](https://pubmed.ncbi.nlm.nih.gov/8755485/) |\n| Hubel and Wiesel (1959) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1363130/) |\n| Hubel and Wisel (1959) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1363130/pdf/jphysiol01298-0128.pdf) |\n| Huys et al. (2006) | [Source](https://pubmed.ncbi.nlm.nih.gov/16624998/) |\n| ImageNet | [Source](http://image-net.org/) |\n| ImageNet Winning CNN Architectures (ILSVRC) | [Source](https://www.kaggle.com/getting-started/149448) |\n| ImageNet, “Summary and Statistics” | [Source](http://www.image-net.org/about-stats) |\n| Irvine (2000) | [Source](http://cstl-csm.semo.edu/xzhang/Class%20Folder/CS280/Workbook_HTML/FLOATING_tut.htm) |\n| Izhikevich (2003) | [Source](https://www.izhikevich.org/publications/spikes.pdf) |\n| Izhikevich (2004) | [Source](https://www.izhikevich.org/publications/whichmod.pdf) |\n| Izhikevich and Edelman (2007) | [Source](https://www.izhikevich.org/publications/large-scale_model_of_human_brain.pdf) |\n| Izhikevich et al., “why did I do that?” | [Source](https://www.izhikevich.org/human_brain_simulation/why.htm) |\n| Jabr (2012a) | [Source](https://www.scientificamerican.com/article/thinking-hard-calories/) |\n| Jabr (2012b) | [Source](https://www.scientificamerican.com/article/c-elegans-connectome/) |\n| Jackson et al. (1991) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/1988937) |\n| Jadi et al. (2014) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4279447/) |\n| Jeffreys (1995) | [Source](https://pubmed.ncbi.nlm.nih.gov/7480159/) |\n| Jenkins et al. (2018) | [Source](https://royalsocietypublishing.org/doi/full/10.1098/rspb.2018.1319) |\n| Johansson et al. (2014) | [Source](https://www.pnas.org/content/pnas/111/41/14930.full.pdf) |\n| Johnson (1999) | [Source](https://www.nytimes.com/1999/06/15/science/a-radical-computer-learns-to-think-in-reverse.html) |\n| Jolivet et al. (2006a) | [Source](https://www.zora.uzh.ch/id/eprint/156190/1/ZORA_NL_156190.pdf) |\n| Jolivet et al. (2008a) | [Source](https://www.sciencedirect.com/science/article/abs/pii/S0165027007005535?via%3Dihub) |\n| Jolivet et al. (2008b) | [Source](https://papers.nips.cc/paper/2858-integrate-and-fire-models-with-adaptation-are-good-enough.pdf) |\n| Jonas (2014) | [Source](http://ericjonas.com/publication/thesis/thesis.pdf) |\n| Jonas and Kording (2016) | [Source](https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1005268&type=printable) |\n| Jones and Gabbiani (2012) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3752046/) |\n| Jourdain et al. (2007) | [Source](https://www.nature.com/articles/nn1849) |\n| Journal of Evolution and Technology, “Peer Commentary on Moravec’s Paper” | [Source](https://jetpress.org/volume1/commentary.htm) |\n| Juusola et al. (1996) | [Source](https://www.cell.com/trends/neurosciences/pdf/S0166-2236(96)10028-X.pdf) |\n| Káradóttir et al. (2008) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/18311136) |\n| Kahn and Mann (2020) | [Source](https://cset.georgetown.edu/wp-content/uploads/AI-Chips%E2%80%94What-They-Are-and-Why-They-Matter.pdf) |\n| Kandel et al. 
(2013a) | [Source](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138622) |\n| Kandel et al. (2013b) | [Source](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138626) |\n| Kandel et al. (2013c) | [Source](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138631) |\n| Kaplan (2018) | [Source](https://sites.krieger.jhu.edu/jared-kaplan/files/2018/11/StatisticalMechanicsNotes.pdf) |\n| Kaplan et al. (2020) | [Source](https://arxiv.org/pdf/2001.08361.pdf) |\n| Kaplanis et al. (2018) | [Source](https://arxiv.org/pdf/1802.07239.pdf) |\n| Karpathy (2012) | [Source](https://karpathy.github.io/2012/10/22/state-of-computer-vision/) |\n| Karpathy (2014a) | [Source](https://cs.stanford.edu/people/karpathy/ilsvrc/) |\n| Karpathy (2014b) | [Source](https://karpathy.github.io/2014/09/02/what-i-learned-from-competing-against-a-convnet-on-imagenet/) |\n| Kawaguchi and Sakaba (2015) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/25728570) |\n| Keat et al. (2001) | [Source](https://www.sciencedirect.com/science/article/pii/S0896627301003221) |\n| Kell et al. (2018) | [Source](http://mcdermottlab.mit.edu/papers/Kell_etal_2018_DNN_auditory_cortex.pdf) |\n| Kempes et al. (2017) | [Source](https://arxiv.org/pdf/1706.05043.pdf) |\n| Kety (1957) | [Source](https://www.sciencedirect.com/science/article/pii/B9780080090627500266?via%3Dihub) |\n| Keysers et al. (2001) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/11224911) |\n| Khaligh-Razavi and Kriegeskorte (2014) | [Source](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003915) |\n| Khan (2020) | [Source](https://cset.georgetown.edu/wp-content/uploads/Why-AI-Chips-Matter.pdf) |\n| Khan Academy, “Neurotransmitters and receptors” | [Source](https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/neurotransmitters-their-receptors) |\n| Khan Academy, “Overview of neuron structure and function” | [Source](https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/overview-of-neuron-structure-and-function) |\n| Khan Academy, “Q & A: Neuron depolarization, hyperpolarization, and action potentials” | [Source](https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/depolarization-hyperpolarization-and-action-potentials) |\n| Khan Academy, “The membrane potential” | [Source](https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/the-membrane-potential) |\n| Khan Academy, “The synapse” | [Source](https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/the-synapse) |\n| Kim (2014) | [Source](https://pubmed.ncbi.nlm.nih.gov/25409299/) |\n| Kindel et al. (2019) | [Source](https://jov.arvojournals.org/article.aspx?articleid=2732380) |\n| Kish (2016) | [Source](https://arxiv.org/pdf/1606.09493.pdf) |\n| Kleinfeld et al. (2019) | [Source](https://www.cell.com/neuron/pdf/S0896-6273(19)30695-6.pdf) |\n| Kleinjung et al. (2010) | [Source](https://eprint.iacr.org/2010/006.pdf) |\n| Klindt et al. (2017) | [Source](https://papers.nips.cc/paper/6942-neural-system-identification-for-large-populations-separating-what-and-where.pdf) |\n| Knudsen et al. 
(1979) | [Source](https://link.springer.com/article/10.1007%2FBF00663105) |\n| Knuth (1997) | [Source](https://doc.lagout.org/science/0_Computer%20Science/2_Algorithms/The%20Art%20of%20Computer%20Programming%20(vol.%201_%20Fundamental%20Algorithms)%20(3rd%20ed.)%20%5BKnuth%201997-07-17%5D.pdf) |\n| Kobayashi et al. (2009) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2722979/) |\n| Koch (1999) | [Source](https://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999) |\n| Koch (2016) | [Source](https://www.scientificamerican.com/article/does-brain-size-matter1/) |\n| Koch et al. (2004) | [Source](https://www.cell.com/fulltext/S0960-9822(04)00656-6) |\n| Kole et al. (2007) | [Source](https://pubmed.ncbi.nlm.nih.gov/17698015/) |\n| Kolesnikov et al. (2020) | [Source](https://arxiv.org/pdf/1912.11370.pdf) |\n| Kostyaev (2016) | [Source](https://blog.kostyaev.me/computer%20vision/2016/03/01/Why-top-5-error-is-more-fair-metric-than-top-1-for-ImageNet-classification-task.html) |\n| Kozlov et al. (2006) | [Source](https://www.pnas.org/content/103/26/10058) |\n| Kriegeskorte (2015) | [Source](https://www.annualreviews.org/doi/full/10.1146/annurev-vision-082114-035447?url_ver=Z39.88-2003&rfr_id=ori%3Arid%3Acrossref.org&rfr_dat=cr_pub%3Dpubmed) |\n| Krizhevsky et al. (2009) | [Source](https://www.cs.toronto.edu/~kriz/cifar.html) |\n| Krizhevsky et al. (2012) | [Source](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) |\n| Krueger (2008) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2586424/) |\n| Kruijer et al. (1984) | [Source](https://www.nature.com/articles/312711a0) |\n| Kuba et al. (2005) | [Source](https://www.jneurosci.org/content/25/8/1924?ijkey=4a29f33e8283ea454996ffc0a173434343810eb6&keytype2=tf_ipsecsha) |\n| Kuba et al. (2006) | [Source](https://pubmed.ncbi.nlm.nih.gov/17136099/) |\n| Kuga et al. (2011) | [Source](https://www.jneurosci.org/content/31/7/2607) |\n| Kumar (2020) | [Source](https://cloud.google.com/blog/products/ai-machine-learning/google-breaks-ai-performance-records-in-mlperf-with-worlds-fastest-training-supercomputer) |\n| Kurzweil (1999) | [Source](https://www.amazon.com/Age-Spiritual-Machines-Computers-Intelligence/dp/B000OYDNBA) |\n| Kurzweil (2005) | [Source](https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook/dp/B000QCSA7C) |\n| Kurzweil (2012) | [Source](https://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/1491518839) |\n| López-Suárex et al. (2016) | [Source](https://www.nature.com/articles/ncomms12068) |\n| Lahiri and Ganguli (2013) | [Source](https://papers.nips.cc/paper/4872-a-memory-frontier-for-complex-synapses.pdf) |\n| Lake et al. (2015) | [Source](https://web.mit.edu/cocosci/Papers/Science-2015-Lake-1332-8.pdf) |\n| Lamb et al. (2019) | [Source](https://arxiv.org/pdf/1912.11570.pdf) |\n| Landauer (1961) | [Source](http://worrydream.com/refs/Landauer%20-%20Irreversibility%20and%20Heat%20Generation%20in%20the%20Computing%20Process.pdf) |\n| Langille and Brown (2018) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6212519/) |\n| Lau and Nathans (1987) | [Source](https://www.pnas.org/content/84/5/1182.short) |\n| Laughlin (2001) | [Source](https://www.sciencedirect.com/science/article/abs/pii/S0959438800002373) |\n| Laughlin et al. 
(1998) | [Source](https://pubmed.ncbi.nlm.nih.gov/10195106/) |\n| Lauritzen (2001) | [Source](https://pubmed.ncbi.nlm.nih.gov/11740198/) |\n| LeCun and Bengio (2007) | [Source](http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf) |\n| LeCun et al. (2015) | [Source](https://www.nature.com/articles/nature14539) |\n| Lee (2011) | [Source](http://timothyblee.com/2011/01/13/emulation-simulation-and-the-human-brain/) |\n| Lee (2016) | [Source](https://www.macleans.ca/society/science/the-meaning-of-alphago-the-ai-program-that-beat-a-go-champ/) |\n| Lee et al. (1988) | [Source](https://pubmed.ncbi.nlm.nih.gov/3352733/) |\n| Lee et al. (2010) | [Source](https://science.sciencemag.org/content/330/6005/790) |\n| Lee et al. (2015) | [Source](https://link.springer.com/chapter/10.1007/978-3-319-23528-8_31) |\n| Leng and Ludwig (2008) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/18845614/) |\n| Lennie (2003) | [Source](http://www2.bcs.rochester.edu/sites/plennie/pdfs/Lennie03a.pdf) |\n| Levy and Baxter (1996) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/8868566?dopt=Abstract) |\n| Levy and Baxter (2002) | [Source](https://www.jneurosci.org/content/22/11/4746?ijkey=e7c9b28abb3dd8f022cfbe7c7c2ab07b7a1949b3&keytype2=tf_ipsecsha) |\n| Levy et al. (2014) | [Source](https://arxiv.org/abs/1408.6777) |\n| Li et al. (2019) | [Source](https://www.pnas.org/content/pnas/116/30/15244.full.pdf) |\n| Liao et al. (2015) | [Source](https://arxiv.org/abs/1510.05067) |\n| Lillicrap and Kording (2019) | [Source](https://arxiv.org/pdf/1907.06374.pdf) |\n| Lillicrap et al. (2016) | [Source](https://pubmed.ncbi.nlm.nih.gov/27824044/) |\n| Lind et al. (2018) | [Source](https://onlinelibrary.wiley.com/doi/abs/10.1002/glia.23246) |\n| Lindsay (2020) | [Source](https://arxiv.org/abs/2001.07092) |\n| Litt et al. (2006) | [Source](http://watarts.uwaterloo.ca/~pthagard/Articles/quantum.pdf) |\n| Llinás (2008) | [Source](http://www.scholarpedia.org/article/Neuron) |\n| Llinás et al. (2004) | [Source](https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780195159561.001.1/acprof-9780195159561-chapter-7) |\n| Lloyd (2000) | [Source](https://arxiv.org/pdf/quant-ph/9908043.pdf) |\n| Lodish et al. (2000) | [Source](https://scholar.google.com/scholar?cluster=198058569078716943&hl=en&as_sdt=2005&sciodt=0,5) |\n| Lodish et al. (2008) | [Source](https://books.google.com/books?hl=en&lr=&id=K3JbjG1JiUMC&oi=fnd&pg=PA1&dq=lodish+et+al+2008&ots=asG8ZUys4H&sig=ysgdCjfzBAJCpglEURb1P-_M1sY#v=snippet&q=neurons%20and%20glia&f=false) |\n| London and Häusser (2005) | [Source](https://pdfs.semanticscholar.org/81cf/8182d5725d5fdd9a33b65843f2f9fdb6a9f6.pdf) |\n| Lucas (1961) | [Source](http://users.ox.ac.uk/~jrlucas/mmg.html) |\n| Luczak et al. (2015) | [Source](https://www.nature.com/articles/nrn4026) |\n| Lumen Learning, “Action Potential” | [Source](https://courses.lumenlearning.com/wm-biology2/chapter/action-potential/) |\n| Lumen Learning, “Resting Membrane Potential” | [Source](https://courses.lumenlearning.com/wm-biology2/chapter/resting-membrane-potential/) |\n| Luscher and Malenka (2012) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3367554/) |\n| Machine Intelligence Research Institute, “Erik DeBenedictis on supercomputing” | [Source](https://intelligence.org/2014/04/03/erik-debenedictis/#endnote_0_10946) |\n| Machine Intelligence Research Institute, “Mike Frank on reversible computing” | [Source](https://intelligence.org/2014/01/31/mike-frank-on-reversible-computing/) |\n| Macleod, Horiuchi et al. 
(2007) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3268177/) |\n| Maheswaranathan et al. (2019) | [Source](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf) |\n| Mainen and Sejnowski (1995) | [Source](http://www.math.pitt.edu/~bard/classes/compneuro/mainensej.pdf) |\n| Mains and Eipper (1999) | [Source](https://www.ncbi.nlm.nih.gov/books/NBK28247/) |\n| Major, Larkum, and Schiller (2013) | [Source](https://pubmed.ncbi.nlm.nih.gov/23841837/) |\n| Malickas (2007) | [Source](https://www.aleph.se/Trans/Global/Uploading/gupload.html) |\n| Malonek et al. (1997) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC25122/) |\n| Marblestone et al. (2013) | [Source](https://www.frontiersin.org/articles/10.3389/fncom.2013.00137/full) |\n| Marcus (2015) | [Source](https://www.nytimes.com/2015/06/28/opinion/sunday/face-it-your-brain-is-a-computer.html) |\n| Marder (2012) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3482119/) |\n| Marder and Goaillard (2006) | [Source](http://www.ccnss.org/ccn_2010/materials/pdf/marder/MarderGoaillard2006.pdf) |\n| Markram et al. (1997) | [Source](https://science.sciencemag.org/content/275/5297/213.long) |\n| Markram et al. (2015) | [Source](https://www.cell.com/cell/pdf/S0092-8674%2815%2901191-5.pdf) |\n| Maroney (2005) | [Source](https://arxiv.org/abs/physics/0406137) |\n| Maroney (2018) | [Source](https://arxiv.org/abs/physics/0406137) |\n| Marr (1982) | [Source](https://www.amazon.com/Vision-Computational-Investigation-Representation-Information/dp/0262514621) |\n| Martin et al. (2006) | [Source](https://pubmed.ncbi.nlm.nih.gov/16725349/) |\n| Martins (2012) | [Source](https://repositorium.sdum.uminho.pt/bitstream/1822/20756/1/NanoroboticBrainMonitoring2012_%20draft%20with%20page%20numbers.pdf) |\n| Martins et al. (2012) | [Source](https://repositorium.sdum.uminho.pt/bitstream/1822/20756/1/NanoroboticBrainMonitoring2012_%20draft%20with%20page%20numbers.pdf) |\n| Mathematical Association of America, “Putnam Competition” | [Source](https://www.maa.org/math-competitions/putnam-competition) |\n| Mathis et al. (2012) | [Source](https://www.mitpressjournals.org/doi/abs/10.1162/NECO_a_00319) |\n| Matsuura et al. (1999) | [Source](https://pubmed.ncbi.nlm.nih.gov/10529490/) |\n| Maturna et al. (1960) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2195076/pdf/129.pdf) |\n| McAnany and Alexander (2009) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2682626/) |\n| McCandlish et al. (2018) | [Source](https://arxiv.org/pdf/1812.06162.pdf) |\n| McDermott (2014) | [Source](http://www.cs.yale.edu/homes/dvm/papers/humongous.pdf) |\n| McDonnel and Ward (2011) | [Source](https://www.nature.com/articles/nrn3061) |\n| McFadden and Al-Khalili (2018) | [Source](https://royalsocietypublishing.org/doi/10.1098/rspa.2018.0674) |\n| McLaughlin (2000) | [Source](https://www.pnas.org/content/97/14/8087) |\n| McNaughton et al. (2006) | [Source](https://www.nature.com/articles/nrn1932) |\n| Mead (1989) | [Source](https://www.amazon.com/Analog-VLSI-Neural-Systems-Carver/dp/0201059924) |\n| Mead (1990) | [Source](https://web.stanford.edu/group/brainsinsilicon/documents/MeadNeuroMorphElectro.pdf) |\n| Medina et al. 
(2000) | [Source](https://www.jneurosci.org/content/20/14/5516.long) |\n| Medlock (2017) | [Source](https://aeon.co/ideas/the-body-is-the-missing-link-for-truly-intelligent-machines) |\n| Mehar (2020) | [Source](https://www.inceptivemind.com/nvidia-dgx-a100-world-first-5-petaflops-system/13267/#:~:text=NVIDIA%20DGX%20A100%20packs%20record%205%20petaflops%20of%20AI%20performance.&text=NVIDIA%20has%20unveiled%20the%20third,the%20new%20NVIDIA%20DGX%20A100) |\n| Mehta and Schwab (2012) | [Source](https://www.pnas.org/content/109/44/17978) |\n| Mehta et al. (2016) | [Source](https://link.springer.com/article/10.1007%2Fs10955-015-1431-6) |\n| Meister et al. (2013) | [Source](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138654) |\n| Merel et al. (2020) | [Source](https://openreview.net/forum?id=SyxrxR4KPS) |\n| Merkle (1989) | [Source](https://www.merkle.com/brainLimits.html) |\n| Mermillod et al. (2013) | [Source](https://www.frontiersin.org/articles/10.3389/fpsyg.2013.00504/full) |\n| Metaculus, “What will the necessary computational power to replicate human mental capability turn out to be?” | [Source](https://www.metaculus.com/questions/2646/what-will-the-necessary-computational-power-to-replicate-human-mental-capability-turn-out-to-be/) |\n| Metric Conversions, “Celsius to Kelvin” | [Source](https://www.metric-conversions.org/temperature/celsius-to-kelvin.htm) |\n| Miller (2018) | [Source](https://www.amazon.com/Introductory-Course-Computational-Neuroscience/dp/0262038250) |\n| Miller et al. (2014) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4032965/) |\n| Min and Nevian (2012) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3485583/) |\n| Min et al. (2012) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3485583/pdf/fncom-06-00093.pdf) |\n| Ming and Song (2011) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3106107/) |\n| MIT Open Courseware, “Lecture 1.2: Gabriel Kreiman – Computational Roles of Neural Feedback” | [Source](https://ocw.mit.edu/resources/res-9-003-brains-minds-and-machines-summer-course-summer-2015/unit-1.-neural-circuits-of-intelligence/lecture-1.2-gabriel-kreiman-computational-roles-of-neural-feedback/) |\n| Mnih et al. (2015) | [Source](https://www.nature.com/articles/nature14236) |\n| Moehlis et al. (2006) | [Source](http://www.scholarpedia.org/article/Periodic_Orbit) |\n| Monday et al. (2018) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6238218/) |\n| Moore and Cao (2008) | [Source](https://journals.physiology.org/doi/pdf/10.1152/jn.01366.2006) |\n| Moore et al. (2017) | [Source](https://science.sciencemag.org/content/355/6331/eaaj1497) |\n| Mora-Bermúdez et al. (2016) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5110243/#:~:text=The%20human%20brain%20is%20about,the%20same%20region%20in%20chimpanzees.) |\n| Mora-Bermúdez (2016) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5110243/#:~:text=The%20human%20brain%20is%20about,the%20same%20region%20in%20chimpanzees.) |\n| Moravčík et al. (2017) | [Source](https://arxiv.org/pdf/1701.01724.pdf) |\n| Moravec (1988) | [Source](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2) |\n| Moravec (1998) | [Source](https://jetpress.org/volume1/moravec.pdf) |\n| Moravec (2008) | [Source](https://www.scientificamerican.com/article/rise-of-the-robots-2008-02/) |\n| Moreno-Jimenez et al. 
(2019) | [Source](https://www.nature.com/articles/s41591-019-0375-9) |\n| Moser and Moser (2007) | [Source](http://www.scholarpedia.org/article/Grid_cells) |\n| Movshon et al. (1978a) | [Source](https://pubmed.ncbi.nlm.nih.gov/722570/) |\n| Mu et al. (2019) | [Source](https://www.cell.com/cell/pdf/S0092-8674(19)30621-X.pdf) |\n| Muehlhauser (2017a) | [Source](https://www.openphilanthropy.org/blog/technical-and-philosophical-questions-might-affect-our-grantmaking) |\n| Muehlhauser (2017b) | [Source](https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood) |\n| Müller and Hoffmann (2017) | [Source](https://www.mitpressjournals.org/doi/full/10.1162/ARTL_a_00219) |\n| Müller et al. (1984) | [Source](https://www.nature.com/articles/312716a0) |\n| Munno and Syed (2003) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2343306/) |\n| Nadim and Bucher (2014) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4252488/pdf/nihms603280.pdf) |\n| Nadim and Manor (2000) | [Source](https://pubmed.ncbi.nlm.nih.gov/11240276/) |\n| Napper and Harvey (1988) | [Source](https://onlinelibrary.wiley.com/doi/abs/10.1002/cne.902740204?sid=nlm%3Apubmed) |\n| Nature Communications, “Building brain-inspired computing” | [Source](https://www.nature.com/articles/s41467-019-12521-x) |\n| Nature, “Far To Go” | [Source](https://www.nature.com/news/482456a-i3-0-jpg-7.2933?article=1.10066) |\n| Naud and Gerstner (2012a) | [Source](https://www.researchgate.net/publication/264893074_The_Performance_and_Limits_of_Simple_Neuron_Models_Generalizations_of_the_Leaky_Integrate-and-Fire_Model) |\n| Naud and Gerstner (2012b) | [Source](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.381.6258&rep=rep1&type=pdf) |\n| Naud et al. (2009) | [Source](https://pdfs.semanticscholar.org/cb2c/7a2ff006349e763b08d7067de00f0308657d.pdf) |\n| Naud et al. (2014) | [Source](https://www.frontiersin.org/articles/10.3389/fncom.2014.00090/full) |\n| Neishabouri and Faisal (2014) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/24809823) |\n| Nelson and Nunneley (1998) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/9754976) |\n| Nett et al. (2002) | [Source](https://pubmed.ncbi.nlm.nih.gov/11784768/#:~:text=Hippocampal%20astrocytes%20in%20situ%20exhibit%20calcium,occur%20independent%20of%20neuronal%20activity.&text=Results%20presented%20in%20this%20study,the%20absence%20of%20neuronal%20activity.) |\n| Next Big Future, “Henry Markram Calls the IBM Cat Scale Brain Simulation a Hoax” | [Source](https://www.nextbigfuture.com/2009/11/henry-markram-calls-ibm-cat-scale-brain.html) |\n| Nicolesis and Circuel (2015) | [Source](https://www.amazon.com/Relativistic-Brain-cannot-simulated-machine-ebook/dp/B00VXGFBI6) |\n| Nielsen (2015) | [Source](http://neuralnetworksanddeeplearning.com/index.html) |\n| Nimmerjahn et al. (2009) | [Source](http://www.sciencedirect.com/science/article/pii/S089662730900244X) |\n| Nirenberg and Pandarinath (2012) | [Source](https://www.pnas.org/content/pnas/early/2012/08/08/1207035109.full.pdf) |\n| Niven et al. 
(2007) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/17373859?dopt=Abstract) |\n| Nordhaus (2001) | [Source](https://web.archive.org/web/20160222082744/http://www.econ.yale.edu/~nordhaus/homepage/prog_083001a.pdf) |\n| Norton (2004) | [Source](http://philsci-archive.pitt.edu/1729/2/Norton.pdf) |\n| Norup Nielsen and Lauritzen (2001) | [Source](https://pubmed.ncbi.nlm.nih.gov/11410634/) |\n| NVIDIA, “Steel for the AI Age: DGX SuperPOD Reaches New Heights with NVIDIA DGX A100” | [Source](https://blogs.nvidia.com/blog/2020/05/14/dgx-superpod-a100/) |\n| NVIDIA, “NVIDIA Tesla V100 GPU Architecture” | [Source](https://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf) |\n| NVIDIA, “NVIDIA V100 Tensor Core GPU” | [Source](https://www.nvidia.com/en-us/data-center/v100/) |\n| Oberheim et al. (2006) | [Source](https://www.cell.com/trends/neurosciences/fulltext/S0166-2236(06)00175-5?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0166223606001755%3Fshowall%3Dtrue) |\n| Okun et al. (2015) | [Source](https://www.nature.com/articles/nature14273) |\n| Olah et al. (2018) | [Source](https://distill.pub/2018/building-blocks/) |\n| Olah et al. (2020a) | [Source](https://distill.pub/2020/circuits/zoom-in/) |\n| Olah et al. (2020b) | [Source](https://distill.pub/2020/circuits/early-vision/) |\n| Olshausen and Field (2005) | [Source](http://ling.umd.edu/~ellenlau/courses/nacs642/Olshausen_2005.pdf) |\n| OpenAI et al. (2019) | [Source](https://arxiv.org/pdf/1912.06680.pdf) |\n| OpenAI, “Solving Rubik’s Cube with a Robot Hand” | [Source](https://openai.com/blog/solving-rubiks-cube/#understandingourneuralnetworks) |\n| OpenStax, “Anatomy and Physiology” | [Source](https://openstax.org/books/anatomy-and-physiology/pages/12-5-communication-between-neurons) |\n| Otsu et al. (2015) | [Source](https://www.nature.com/articles/nn.3906) |\n| Ouldridge (2017) | [Source](https://arxiv.org/abs/1702.00360) |\n| Ouldridge and ten Wolde (2017) | [Source](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.118.158103) |\n| Pakkenberg and Gundersen (1997) | [Source](https://pubmed.ncbi.nlm.nih.gov/9215725/) |\n| Pakkenberg et al. (2002) | [Source](https://www.sciencedirect.com/science/article/abs/pii/S0531556502001511?via%3Dihub) |\n| Pakkenberg et al. (2003) | [Source](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.332.5850&rep=rep1&type=pdf) |\n| Panatier et al. (2011) | [Source](https://www.cell.com/action/showPdf?pii=S0092-8674%2811%2900820-8) |\n| Papers with Code, “Object Detection on COCO test-dev” | [Source](https://paperswithcode.com/sota/object-detection-on-coco) |\n| Park and Dunlap (1998) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/9712647) |\n| Parpura and Zorec (2010) | [Source](http://www.sciencedirect.com/science/article/pii/S0165017309001283) |\n| Pascual et al. (2005) | [Source](https://pubmed.ncbi.nlm.nih.gov/16210541/) |\n| Pasupathy and Connor (1999) | [Source](https://pubmed.ncbi.nlm.nih.gov/10561421/) |\n| Pasupathy and Connor (2001) | [Source](https://pubmed.ncbi.nlm.nih.gov/11698538/) |\n| Pavone et al. (2013) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3564735/pdf/1824-7288-39-3.pdf) |\n| Payeur et al. (2019) | [Source](https://www.sciencedirect.com/science/article/abs/pii/S0959438818302162) |\n| Peña et al. 
(1996) | [Source](https://www.jneurosci.org/content/16/21/7046?ijkey=91b4d4043c5c1546894b9dbfeb713c140c6eded0&keytype2=tf_ipsecsha) |\n| Penrose (1994) | [Source](https://www.amazon.com/Emperors-New-Mind-Concerning-Computers/dp/0192861980) |\n| Penrose and Hameroff (2011) | [Source](http://www.neurohumanitiestudies.eu/archivio/penrose_consciousness.pdf) |\n| Perea and Araque (2005) | [Source](https://pubmed.ncbi.nlm.nih.gov/15745945/) |\n| Peterson (2009) | [Source](https://www.amazon.com/Introduction-Decision-Cambridge-Introductions-Philosophy-ebook/dp/B00E3URCAE/ref=sr_1_5?dchild=1&keywords=introduction+to+decision-theory&qid=1586030921&sr=8-5) |\n| Piccinini (2017) | [Source](https://plato.stanford.edu/entries/computation-physicalsystems/) |\n| Piccinini and Scarantino (2011) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3006465/pdf/10867_2010_Article_9195.pdf) |\n| Pillow et al. (2005) | [Source](https://www.jneurosci.org/content/25/47/11003) |\n| Poirazi and Papoutsi (2020) | [Source](https://www.nature.com/articles/s41583-020-0301-7) |\n| Poirazi et al. (2003) | [Source](https://www.sciencedirect.com/science/article/pii/S0896627303001491) |\n| Poldrack et al. (2017) | [Source](https://www.nature.com/articles/nrn.2016.167) |\n| Polsky, Mel, and Schiller (2004) | [Source](https://pubmed.ncbi.nlm.nih.gov/15156147/) |\n| Porter and McCarthy (1997) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/9106901) |\n| Potter et al. (2013) | [Source](https://link.springer.com/article/10.3758%2Fs13414-013-0605-z) |\n| Pozzorini et al. (2015) | [Source](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004275) |\n| Prakriya and Mennerick (2000) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/26657943) |\n| Principles of Computational Modelling in Neuroscience, “Figure Code examples.all” | [Source](https://www.compneuroprinciples.org/code-examples/all/all?page=1) |\n| Prinz et al. (2004) | [Source](https://www.nature.com/articles/nn1352) |\n| Pulsifer et al. (2004) | [Source](https://onlinelibrary.wiley.com/doi/full/10.1111/j.0013-9580.2004.15303.x?sid=nlm%3Apubmed) |\n| Purves et al. (2001) | [Source](https://www.ncbi.nlm.nih.gov/books/NBK11164/) |\n| Putnam Problems (2018) | [Source](https://www.maa.org/sites/default/files/pdf/Putnam/Competition_Archive/2018PutnamProblems.pdf) |\n| Qiu et al. (2015) | [Source](https://pubmed.ncbi.nlm.nih.gov/26631463/) |\n| Queensland Brain Institute, “Long-term synaptic plasticity” | [Source](https://qbi.uq.edu.au/brain-basics/brain/brain-physiology/long-term-synaptic-plasticity) |\n| Radford et al. (2019) | [Source](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) |\n| Rakic (2008) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2527871/) |\n| Rall (1964) | [Source](https://scinapse.io/papers/93995994) |\n| Rama et al. (2015a) | [Source](https://www.nature.com/articles/ncomms10163) |\n| Rama et al. (2015b) | [Source](https://pubmed.ncbi.nlm.nih.gov/25461842/) |\n| Raphael et al. (2010) | [Source](https://pubmed.ncbi.nlm.nih.gov/20631172/) |\n| Rauch et al. (2003) | [Source](https://journals.physiology.org/doi/pdf/10.1152/jn.00293.2003) |\n| Ravi (2018) | [Source](https://ai.googleblog.com/2018/05/custom-on-device-ml-models.html) |\n| Raymond et al. (1996) | [Source](https://pubmed.ncbi.nlm.nih.gov/8638157/) |\n| Reardon et al. (2018) | [Source](https://science.sciencemag.org/content/360/6394/1222) |\n| Recht et al. 
(2019) | [Source](https://arxiv.org/abs/1902.10811) |\n| Reyes (2001) | [Source](https://www.annualreviews.org/doi/full/10.1146/annurev.neuro.24.1.653) |\n| Reyes et al. (1996) | [Source](https://www.jneurosci.org/content/16/3/993.short) |\n| Rieke and Rudd (2009) | [Source](https://pubmed.ncbi.nlm.nih.gov/20005818/) |\n| Rieke et al. (1997) | [Source](https://www.amazon.com/Spikes-Exploring-Neural-Computational-Neuroscience/dp/0262681080) |\n| Roe et al. (2020) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4912377/) |\n| Rolfe and Brown (1997) | [Source](https://journals.physiology.org/doi/pdf/10.1152/physrev.1997.77.3.731) |\n| Rosenfeld et al. (2018) | [Source](https://arxiv.org/pdf/1808.03305.pdf) |\n| Roska and Werblin (2003) | [Source](https://pubmed.ncbi.nlm.nih.gov/12740583/) |\n| Rupprecht et al. (2019) | [Source](https://arxiv.org/pdf/1904.01318.pdf) |\n| Russakovsky et al. (2014) | [Source](https://arxiv.org/pdf/1409.0575.pdf) |\n| Russo (2017) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5424629/pdf/nihms860267.pdf) |\n| Sabatini and Regehr (1997) | [Source](https://pubmed.ncbi.nlm.nih.gov/9133368/) |\n| Sadtler et al. (2014) | [Source](https://www.nature.com/articles/nature13665) |\n| Sagawa (2014) | [Source](https://arxiv.org/pdf/1311.1886.pdf) |\n| Sakry et al. (2014) | [Source](https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1001993) |\n| Saleem et al. (2017) | [Source](https://www.biorxiv.org/content/biorxiv/early/2017/12/18/235648.full.pdf) |\n| Sandberg (2013) | [Source](https://link.springer.com/chapter/10.1007/978-3-642-31674-6_19) |\n| Sandberg (2016) | [Source](https://arxiv.org/pdf/1602.04019.pdf) |\n| Sandberg and Bostrom (2008) | [Source](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) |\n| Santello et al. (2011) | [Source](https://pubmed.ncbi.nlm.nih.gov/21382557/) |\n| Santos-Carvalho et al. (2015) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/25797468) |\n| Sarma et al. (2018) | [Source](https://royalsocietypublishing.org/doi/10.1098/rstb.2017.0382#RSTB20170382TB2) |\n| Sarpeshkar (1997) | [Source](https://thesis.library.caltech.edu/3063/1/Sarpeshkar_R_1997.pdf) |\n| Sarpeshkar (1998) | [Source](https://ieeexplore.ieee.org/document/6790538) |\n| Sarpeshkar (2010) | [Source](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) |\n| Sarpeshkar (2013) | [Source](https://www.nature.com/articles/nature12148?proof=true&platform=oscar&draft=collection) |\n| Sarpeshkar (2014) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3928905/) |\n| Sartori et al. (2014) | [Source](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003974) |\n| Sasaki et al. (2012) | [Source](https://pubmed.ncbi.nlm.nih.gov/22357869/) |\n| Scellier and Bengio, 2016 | [Source](https://elifesciences.org/articles/22901#bib55) |\n| Schecter et al. (2017) | [Source](https://www.jneurosci.org/content/37/44/10541) |\n| Schlaepfer et al. 
(2006) | [Source](https://watermark.silverchair.com/9-2-147.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAAlowggJWBgkqhkiG9w0BBwagggJHMIICQwIBADCCAjwGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQM7tVtAcbSAZN_fuqeAgEQgIICDULOOL-A8EtpbxSlz1Tm1g2Mu-ApcOl4SoPlDq8TsxsVkrI942Z4QxqxBrKjCxki2BfoTmBvideuPuNVvSq74jY7R_QWunUUhCESx4ez_DVbIu_pWX3a2XFWimQuY79o9xoA45xFEjtOIKHv04jloN_gI7-80ACxE7LfMM2wQHRwCjT3vfN5fjED6qzr-a1fk9tim3iXXR-88IT_vlyOUURGKXzH2Vj1HoOfQJAfGBvLb76Ay-Tmt7XHveLmx1Vc2TU0em4TvvQ61KOxM_aYT4Egb5K_TRrjkSJ2W0gzJiKZIV2MU80kvtfbVSoQgPXceOYBNC15QcNsXfRMx4TTNNIVUf9UHo5XPUJCMionysPNTRmK83zUUm0isdX1-YasUR501FHuYG6ibf-_FdeGpO_cBp2P4xzqlxwmM-3WmNy8e-6SGHcijS7Y5LNVg96wFs6wX3UxbsCqwUN2i8qzmEcR8x23POg6N2ZtH1dWdmZ03YChoPkjqCUm_n7MGwFbW2p2UAGnNTPckJAkbq2oNlZuTs5u0WWbUcnNkFsCayK_KH3LfpEciOlgkJv6g6pCxvswiJLyvebY8cCpKXsTox78qUkIbZf0CP3hIv0Isrr0Rx9Sgllf6oNKd9yLXbOSavz5aLwYSgpAZahIKk7-039YE4ZxpkEWFeEIdzL8oHdLbQCLr3yAMfCLzMSiTw) |\n| Schmidt-Hiever et al. (2017) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/28628104) |\n| Schneider and Gersting (2018) | [Source](https://www.amazon.com/Invitation-Computer-Science-G-Michael-Schneider/dp/1337561916) |\n| Schrimpf et al. (2018) | [Source](https://www.biorxiv.org/content/10.1101/407007v1.full.pdf) |\n| Schroeder (2000) | [Source](https://www.amazon.com/Introduction-Thermal-Physics-Daniel-Schroeder/dp/0201380277) |\n| Schubert et al. (2011) | [Source](https://onlinelibrary.wiley.com/doi/abs/10.1002/glia.21190) |\n| Schultz (2007) | [Source](http://www.scholarpedia.org/article/Signal-to-noise_ratio_in_neuroscience) |\n| Schulz (2010) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3059710/) |\n| Schummers et al. (2008) | [Source](https://science.sciencemag.org/content/320/5883/1638) |\n| Schwartz and Javitch (2013) | [Source](https://neurology.mhmedical.com/content.aspx?bookid=1049§ionid=59138637) |\n| Science Direct, “Membrane Potential” | [Source](https://www.sciencedirect.com/topics/neuroscience/membrane-potential) |\n| Science Direct, “Pyramidal Cell” | [Source](https://www.sciencedirect.com/topics/neuroscience/pyramidal-cell) |\n| ScienceDirect, “Endocannabinoids” | [Source](https://www.sciencedirect.com/topics/neuroscience/endocannabinoids) |\n| Scott et al. (2008) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/18667608) |\n| Segev and Rall (1998) | [Source](https://pubmed.ncbi.nlm.nih.gov/9829684/) |\n| Selverston (2008) | [Source](http://www.scholarpedia.org/article/Stomatogastric_ganglion) |\n| Semiconductor Industry Association, “2015 International Technology Roadmap for Semiconductors (ITRS)” | [Source](https://www.semiconductors.org/resources/2015-international-technology-roadmap-for-semiconductors-itrs/) |\n| Serre (2019) | [Source](https://www.annualreviews.org/doi/abs/10.1146/annurev-vision-091718-014951) |\n| Seung (2012) | [Source](https://www.amazon.com/Connectome-How-Brains-Wiring-Makes/dp/0547508182/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=&sr=) |\n| Shadlen and Newsome (1998) | [Source](https://www.frontiersin.org/articles/10.3389/fnsys.2015.00151/full#B68) |\n| Shapley and Enroth-Cugell (1984) | [Source](https://linkinghub.elsevier.com/retrieve/pii/0278432784900117) |\n| Sheffield (2011) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3030701/) |\n| Shenoy et al. (2013) | [Source](https://www.annualreviews.org/doi/full/10.1146/annurev-neuro-062111-150509) |\n| Shepherd (1990) | [Source](https://www.amazon.com/Synaptic-Organization-Brain-Gordon-Shepherd/dp/019515956X) |\n| Sheth et al. 
(2004) | [Source](https://pubmed.ncbi.nlm.nih.gov/14736849/) |\n| Shoham et al. (2005) | [Source](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.457.6826&rep=rep1&type=pdf) |\n| Shouval (2007) | [Source](http://www.scholarpedia.org/article/Models_of_synaptic_plasticity) |\n| Shu et al. (2006) | [Source](https://www.nature.com/articles/nature04720) |\n| Shu et al. (2007) | [Source](https://www.pnas.org/content/104/27/11453) |\n| Shulz and Jacob (2010) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3059710/#B12) |\n| Siegelbaum and Koester (2013a) | [Source](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138627) |\n| Siegelbaum and Koester (2013b) | [Source](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138629) |\n| Siegelbaum and Koester (2013c) | [Source](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138628) |\n| Siegelbaum and Koester (2013d) | [Source](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138632) |\n| Siegelbaum et al. (2013a) | [Source](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138634) |\n| Siegelbaum et al. (2013b) | [Source](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138635) |\n| Siegelbaum et al. (2013c) | [Source](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138636) |\n| Silver et al. (2016) | [Source](https://www.nature.com/articles/nature16961) |\n| Sipser (2013) | [Source](https://www.amazon.com/Introduction-Theory-Computation-Michael-Sipser/dp/113318779X) |\n| Sjöström and Gerstner (2010) | [Source](http://www.scholarpedia.org/article/Spike-timing_dependent_plasticity) |\n| Skora et al. (2017) | [Source](https://www.med.upenn.edu/ngg/assets/user-content/documents/paper2-energy-scarcity-promotes-a-brain-wide-sleep-state-modulated-by-insulin-signaling-in-c.-elegans.pdf) |\n| Slee et al. (2010) | [Source](https://journals.physiology.org/doi/full/10.1152/jn.00678.2009) |\n| Smith et al. (2019) | [Source](https://elifesciences.org/articles/47889) |\n| Sokoloff (1960) | [Source](https://www.semanticscholar.org/paper/The-metabolism-of-the-central-nervous-system-in-Sokoloff/afb75236457912a504a1ed3bb3a2270ede3b2113) |\n| Sokoloff et al. (1977) | [Source](https://pubmed.ncbi.nlm.nih.gov/864466/) |\n| Song et al. (2007) | [Source](https://bmsr.usc.edu/files/2012/09/1053.pdf) |\n| Sorrells et al. (2018) | [Source](https://www.nature.com/articles/nature25975) |\n| Srinivasan et al. (2015) | [Source](https://pubmed.ncbi.nlm.nih.gov/28213444/) |\n| Stack Exchange, “Number of FLOPs (floating point operations) for exponentiation” | [Source](https://cs.stackexchange.com/questions/105026/number-of-flops-floating-point-operations-for-exponentiation) |\n| Stack Overflow, “How many FLOPs does tanh need?” | [Source](https://stackoverflow.com/questions/41251698/how-many-flops-does-tanh-need#:~:text=The%20key%20takeaway%3A%20the%20costs,between%2010%20and%20100%20FLOPs.) |\n| Stanford Encyclopedia of Philosophy, “Embodied Cognition” | [Source](https://plato.stanford.edu/entries/embodied-cognition/) |\n| Stanford Medicine, “Stanford Artificial Retina Project \| Competition” | [Source](https://med.stanford.edu/artificial-retina/research/competition.html) |\n| Steil (2011) | [Source](http://www.pagetable.com/?p=517) |\n| Stevenson and Kording (2011) | [Source](https://www.nature.com/articles/nn.2731) |\n| Stobart et al. (2018a) | [Source](https://academic.oup.com/cercor/article/28/1/184/2572087) |\n| Stobart et al. 
(2018b) | [Source](https://www.cell.com/action/showPdf?pii=S0896-6273%2818%2930284-8) |\n| Stopfer et al. (2003) | [Source](https://www.sciencedirect.com/science/article/pii/S089662730300535X) |\n| Storrs et al. (2020) | [Source](https://www.biorxiv.org/content/10.1101/2020.05.07.082743v1.full.pdf) |\n| Street (2016) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5108784/) |\n| Stringer et al. (2018) | [Source](https://www.biorxiv.org/content/biorxiv/early/2018/04/22/306019.full.pdf) |\n| Stuart and Spruston (2015) | [Source](https://www.nature.com/articles/nn.4157) |\n| Su et al. (2012) | [Source](https://pubmed.ncbi.nlm.nih.gov/23172146/) |\n| Such et al. (2018) | [Source](https://arxiv.org/abs/1812.07069) |\n| Sun (2017) | [Source](https://arxiv.org/pdf/1707.02968.pdf) |\n| Swaminathan (2008) | [Source](https://www.scientificamerican.com/article/why-does-the-brain-need-s/) |\n| Swanson (1995) | |\n| Swenson (2006) | [Source](https://www.dartmouth.edu/~rswenson/NeuroSci/chapter_11.html) |\n| Szegedy et al. (2013) | [Source](https://arxiv.org/pdf/1312.6199.pdf) |\n| Szegedy et al. (2014) | [Source](https://arxiv.org/pdf/1409.4842.pdf) |\n| Szucs and Ioannidis (2017) | [Source](https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2000797) |\n| Takahashi (2012) | [Source](https://venturebeat.com/2012/12/11/copper-wires-might-be-the-bottleneck-in-the-way-of-moores-law/) |\n| Tan and Le (2019) | [Source](https://arxiv.org/pdf/1905.11946.pdf) |\n| Tan et al. (2019) | [Source](https://arxiv.org/pdf/1911.09070v6.pdf) |\n| Tan et al. (2020) | [Source](https://arxiv.org/pdf/1911.09070v6.pdf) |\n| Tang et al. (2001) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/11418939) |\n| Tao and Poo (2001) | [Source](https://www.pnas.org/content/98/20/11009) |\n| Taylor et al. (2000) | [Source](https://science.sciencemag.org/content/289/5488/2347) |\n| TED, “Robin Hanson: What would happen if we upload our brains to computers?” | [Source](https://www.ted.com/talks/robin_hanson_what_would_happen_if_we_upload_our_brains_to_computers?language=en#t-91367) |\n| Tegmark (1999) | [Source](https://arxiv.org/pdf/quant-ph/9907009.pdf) |\n| Tegmark (2017) | [Source](https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1586106499&sr=8-1) |\n| Thagard (2002) | [Source](http://cogsci.uwaterloo.ca/Articles/molecules.html) |\n| The Physics Factbook, “Energy in ATP” | [Source](https://hypertextbook.com/facts/2000/AmberIqbal.shtml#:~:text=Hydrolysis%20of%20one%20gram%20mole,about%2010%E2%88%9219%20J.%22&text=All%20of%20the%20biosynthesis%20activities,the%20capacity%20to%20do%20work) |\n| The Physics Factbook, “Power of a Human Brain” | [Source](https://hypertextbook.com/facts/2001/JacquelineLing.shtml) |\n| The Physics Factbook, “Power of a Human” | [Source](https://hypertextbook.com/facts/2003/WeiLiangMok.shtml) |\n| The Physics Factbook, “Volume of a Human” | [Source](https://hypertextbook.com/facts/2001/ViktoriyaShchupak.shtml) |\n| Theodosis et al. 
(2008) | [Source](https://pubmed.ncbi.nlm.nih.gov/18626065/) |\n| Thinkmate, “NVIDIA® Tesla™ V100 GPU Computing Accelerator” | [Source](https://www.thinkmate.com/product/nvidia/900-2g500-0010-000) |\n| Thomé (2019) | [Source](https://lists.gforge.inria.fr/pipermail/cado-nfs-discuss/2019-December/001139.html) |\n| Thomson and Kristan (2006) | [Source](https://pubmed.ncbi.nlm.nih.gov/16870746/) |\n| Thorpe, Fize, and Marlot (1996) | [Source](https://www.nature.com/articles/381520a0) |\n| Top 500, “June 2020” | [Source](https://www.top500.org/lists/top500/2020/06/) |\n| Top 500, “November 2019” | [Source](https://www.top500.org/lists/2019/11/) |\n| Toutounian and Ataei (2009) | [Source](http://www.sciencedirect.com/science/article/pii/S0377042708005062) |\n| Trafton (2014) | [Source](https://news.mit.edu/2014/in-the-blink-of-an-eye-0116) |\n| Trenholm and Awatramani (2019) | [Source](https://webvision.med.utah.edu/book/part-iii-retinal-circuits/myriad-roles-for-gap-junctions-in-retinal-circuits/) |\n| Trenholm et al. (2013) | [Source](https://www.nature.com/articles/nn.3308?draft=marketing) |\n| Trettenbrein (2016) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5112247/) |\n| Trussell (1999) | [Source](https://pubmed.ncbi.nlm.nih.gov/10099698/) |\n| Tsien (2013) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3725115/pdf/pnas.201310158.pdf) |\n| Tsodyks and Wu (2013) | [Source](http://www.scholarpedia.org/article/Short-term_synaptic_plasticity) |\n| Tsodyks et al. (1999) | [Source](https://science.sciencemag.org/content/286/5446/1943.abstract) |\n| Tsubo et al. (2012) | [Source](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002461) |\n| Tuszynski (2006) | [Source](https://www.terasemjournals.org/GNJournal/GN0104/tuszynski_01e.html) |\n| Twitter, “David Pfau” | [Source](https://twitter.com/pfau/status/1105443964423938049) |\n| Twitter, “Kevin Lacker” | [Source](https://twitter.com/lacker/status/1279136788326432771/photo/1) |\n| Twitter, “Sharif Shameem” | [Source](https://twitter.com/sharifshameem/status/1282676454690451457) |\n| Twitter, “Tim Brady” | [Source](https://twitter.com/timothyfbrady/status/1289397905623674881/photo/1) |\n| Tzilivaki et al. (2019) | [Source](https://www.nature.com/articles/s41467-019-11537-7) |\n| Ujfalussy et al. (2018) | [Source](https://www.sciencedirect.com/science/article/pii/S0896627318307372) |\n| Urbanczik and Senn (2009) | [Source](https://pubmed.ncbi.nlm.nih.gov/19219040/) |\n| Uttal (2012) | [Source](https://mitpress.mit.edu/books/reliability-cognitive-neuroscience) |\n| Vaccaro and Barnett (2011) | [Source](https://arxiv.org/pdf/1004.5330.pdf) |\n| Vallbo et al. (1984) | [Source](https://pubmed.ncbi.nlm.nih.gov/6478176/) |\n| van den Oord et al. (2016) | [Source](https://arxiv.org/pdf/1609.03499.pdf) |\n| van Steveninck et al. (1997) | [Source](https://pubmed.ncbi.nlm.nih.gov/9065407/) |\n| Vanzetta et al. (2004) | [Source](https://pubmed.ncbi.nlm.nih.gov/15182722/) |\n| Varpula (2013) | [Source](http://www.m-hikari.com/asb/asb2013/asb1-4-2013/annilaASB1-4-2013.pdf) |\n| Venance et al. (1997) | [Source](https://www.jneurosci.org/content/17/6/1981) |\n| Verkhratsky and Butt, eds. (2013) | [Source](https://onlinelibrary.wiley.com/doi/book/10.1002/9781118402061) |\n| Vinyals et al. 
(2019) | [Source](https://www.nature.com/articles/s41586-019-1724-z.epdf?author_access_token=lZH3nqPYtWJXfDA10W0CNNRgN0jAjWel9jnR3ZoTv0PSZcPzJFGNAZhOlk4deBCKzKm70KfinloafEF1bCCXL6IIHHgKaDkaTkBcTEv7aT-wqDoG1VeO9-wO3GEoAMF9bAOt7mJ0RWQnRVMbyfgH9A%3D%3D) |\n| VisualChips, “6502 – simulating in real time on an FPGA” | [Source](http://visual6502.org/wiki/index.php?title=6502_-_simulating_in_real_time_on_an_FPGA&oldid=608) |\n| VisualChips, “Visual Transistor-level Simulation of the 6502 CPU and other chips!” | [Source](http://www.visual6502.org/) |\n| Volkmann (1986) | [Source](https://www.sciencedirect.com/science/article/abs/pii/0042698986901641?via%3Dihub) |\n| Volterra and Meldolesi (2005) | [Source](https://www.nature.com/articles/nrn1722) |\n| von Bartheld et al. (2016) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5063692/pdf/nihms799882.pdf) |\n| von Neumann (1958) | [Source](https://www.amazon.com/Computer-Brain-Silliman-Memorial-Lectures/dp/0300181116) |\n| Vroman et al. (2013) | [Source](https://pubmed.ncbi.nlm.nih.gov/24068997/) |\n| Vul and Pashler (2017) | [Source](https://books.google.com/books?id=VMbXDQAAQBAJ&lpg=PA196&ots=js2Q-GfBZY&lr=lang_en&pg=PA196#v=onepage&q&f=false) |\n| Waldrop (2012) | [Source](https://www.nature.com/news/computer-modelling-brain-in-a-box-1.10066) |\n| Walsh (1999) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC33934/) |\n| Wang et al. (2006) | [Source](https://www.nature.com/articles/nn1703#:~:text=Astrocytic%20Ca2%2B%20signaling%20was%20a,in%20response%20to%20sensory%20stimulation.&text=Thus%2C%20astrocytes%20are%20activated%20by,responses%20are%20reduced%20or%20absent.) |\n| Wang et al. (2009) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3638986/) |\n| Wang et al. (2010) | [Source](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0010253) |\n| Wang et al. (2014) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4189373/pdf/fnins-08-00307.pdf) |\n| Wang et al. (2016) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5067378/) |\n| Wärnberg and Kumar (2017) | [Source](https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1007074&type=printable) |\n| Watts et al. (2018) | [Source](https://www.frontiersin.org/articles/10.3389/fnmol.2018.00216/full) |\n| Weiss and Faber (2010) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2876880/) |\n| Weiss et al. (2018) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6147046/pdf/main.pdf) |\n| White et al. 
(1984) | [Source](https://royalsocietypublishing.org/doi/pdf/10.1098/rstb.1986.0056) |\n| Wikimedia, “Receptive field.png” | [Source](https://commons.wikimedia.org/wiki/File:Receptive_field.png) |\n| Wikipedia, “Action potential” | [Source](https://en.wikipedia.org/wiki/Action_potential) |\n| Wikipedia, “Allocortex” | [Source](https://en.wikipedia.org/wiki/Allocortex) |\n| Wikipedia, “Angular diameter” | [Source](https://en.wikipedia.org/wiki/Angular_diameter) |\n| Wikipedia, “Astrocyte” | [Source](https://en.wikipedia.org/wiki/Astrocyte) |\n| Wikipedia, “Boltzmann’s constant” | [Source](https://en.wikipedia.org/wiki/Boltzmann_constant) |\n| Wikipedia, “Boolean satisfiability problem” | [Source](https://en.wikipedia.org/wiki/Boolean_satisfiability_problem) |\n| Wikipedia, “Brain size” | [Source](https://en.wikipedia.org/wiki/Brain_size) |\n| Wikipedia, “Breadth-first search” | [Source](http://en.wikipedia.org/wiki/Breadth-first_search) |\n| Wikipedia, “Caenorhabditis elegans” | [Source](https://en.wikipedia.org/wiki/Caenorhabditis_elegans) |\n| Wikipedia, “Cerebellar agenesis” | [Source](https://en.wikipedia.org/wiki/Cerebellar_agenesis) |\n| Wikipedia, “Cerebellar granule cell” | [Source](https://en.wikipedia.org/wiki/Cerebellar_granule_cell) |\n| Wikipedia, “Cerebral cortex” | [Source](https://en.wikipedia.org/wiki/Cerebral_cortex) |\n| Wikipedia, “Chemical synapse” | [Source](https://en.wikipedia.org/wiki/Chemical_synapse) |\n| Wikipedia, “Conditional entropy” | [Source](https://en.wikipedia.org/wiki/Conditional_entropy) |\n| Wikipedia, “Convolutional neural network” | [Source](https://en.wikipedia.org/wiki/Convolutional_neural_network) |\n| Wikipedia, “Decapoda” | [Source](https://en.wikipedia.org/wiki/Decapoda) |\n| Wikipedia, “Electrical synapse” | [Source](https://en.wikipedia.org/wiki/Electrical_synapse) |\n| Wikipedia, “Electroencephalography” | [Source](https://en.wikipedia.org/wiki/Electroencephalography) |\n| Wikipedia, “Entropy (information theory)” | [Source](https://en.wikipedia.org/wiki/Entropy_(information_theory)) |\n| Wikipedia, “Entropy (statistical thermodynamics)” | [Source](https://en.wikipedia.org/wiki/Entropy_(statistical_thermodynamics)#Boltzmann's_principle) |\n| Wikipedia, “Excitatory postsynaptic potential” | [Source](https://en.wikipedia.org/wiki/Excitatory_postsynaptic_potential#Miniature_EPSPs_and_quantal_analysis) |\n| Wikipedia, “Exponential decay” | [Source](http://en.wikipedia.org/wiki/Exponential_decay) |\n| Wikipedia, “Extended mind thesis” | [Source](https://en.wikipedia.org/wiki/The_Extended_Mind) |\n| Wikipedia, “Floating-point arithmetic” | [Source](https://en.wikipedia.org/wiki/Floating-point_arithmetic) |\n| WIkipedia, “Fugaku (supercomputer)” | [Source](https://en.wikipedia.org/wiki/Fugaku_(supercomputer)) |\n| Wikipedia, “Functional magnetic resonance imaging” | [Source](https://en.wikipedia.org/wiki/Functional_magnetic_resonance_imaging) |\n| Wikipedia, “Gabor filter” | [Source](https://en.wikipedia.org/wiki/Gabor_filter#:~:text=In%20image%20processing%2C%20a%20Gabor,point%20or%20region%20of%20analysis.) 
|\n| Wikipedia, “Gap junction” | [Source](https://en.wikipedia.org/wiki/Gap_junction) |\n| Wikipedia, “Glia” | [Source](https://en.wikipedia.org/wiki/Glia) |\n| Wikipedia, “Grid cell” | [Source](https://en.wikipedia.org/wiki/Grid_cell) |\n| Wikipedia, “Hemispherectomy” | [Source](https://en.wikipedia.org/wiki/Hemispherectomy) |\n| Wikipedia, “Hodgkin-Huxley model” | [Source](https://en.wikipedia.org/wiki/Hodgkin%E2%80%93Huxley_model) |\n| Wikipedia, “Human body temperature” | [Source](https://en.wikipedia.org/wiki/Human_body_temperature) |\n| Wikipedia, “Injective function” | [Source](https://en.wikipedia.org/wiki/Injective_function) |\n| Wikipedia, “Ion” | [Source](https://en.wikipedia.org/wiki/Ion) |\n| Wikipedia, “Landauer’s principle” | [Source](https://en.wikipedia.org/wiki/Landauer%27s_principle) |\n| Wikipedia, “Membrane” | [Source](https://en.wikipedia.org/wiki/Membrane) |\n| Wikipedia, “Microstates (statistical mechanics) | [Source](https://en.wikipedia.org/wiki/Microstate_(statistical_mechanics)) |\n| Wikipedia, “MOS Technology 6502” | [Source](https://en.wikipedia.org/wiki/MOS_Technology_6502) |\n| Wikipedia, “Multiply-accumulate operation” | [Source](https://en.wikipedia.org/wiki/Multiply%E2%80%93accumulate_operation) |\n| Wikipedia, “Neocortex” | [Source](https://en.wikipedia.org/wiki/Neocortex) |\n| Wikipedia, “Neural circuit” | [Source](https://en.wikipedia.org/wiki/Neural_circuit) |\n| Wikipedia, “Neuromorphic engineering” | [Source](https://en.wikipedia.org/wiki/Neuromorphic_engineering) |\n| Wikipedia, “Neuropeptide” | [Source](https://en.wikipedia.org/wiki/Neuropeptide) |\n| Wikipedia, “Perineuronal net” | [Source](https://en.wikipedia.org/wiki/Perineuronal_net) |\n| Wikipedia, “Pyramidal cell” | [Source](https://en.wikipedia.org/wiki/Pyramidal_cell) |\n| Wikipedia, “Recurrent neural network” | [Source](https://en.wikipedia.org/wiki/Recurrent_neural_network) |\n| Wikipedia, “RSA numbers” | [Source](https://en.wikipedia.org/wiki/RSA_numbers) |\n| Wikipedia, “Scientific notation” | [Source](https://en.wikipedia.org/wiki/Scientific_notation) |\n| Wikipedia, “Synapse” | [Source](https://en.wikipedia.org/wiki/Synapse) |\n| Wikipedia, “Synaptic weight” | [Source](https://en.wikipedia.org/wiki/Synaptic_weight) |\n| Wikipedia, “Thermodynamic temperature” | [Source](https://en.wikipedia.org/wiki/Thermodynamic_temperature#Definition_of_thermodynamic_temperature) |\n| Wikipedia, “Traversed edges per second” | [Source](https://en.wikipedia.org/wiki/Traversed_edges_per_second) |\n| Wikipedia, “Visual cortex” | [Source](https://en.wikipedia.org/wiki/Visual_cortex) |\n| Wikipedia, “White matter” | [Source](https://en.wikipedia.org/wiki/White_matter) |\n| Wilson and Foglia (2015) | [Source](https://plato.stanford.edu/entries/embodied-cognition/) |\n| Winship et al. (2007) | [Source](https://www.jneurosci.org/content/jneuro/27/23/6268.full.pdf) |\n| WolframAlpha | [Source](https://www.wolframalpha.com/) |\n| Wolpert (2016) | [Source](https://www.mdpi.com/1099-4300/18/4/138) |\n| Wolpert (2019a) | [Source](https://arxiv.org/pdf/1905.05669.pdf) |\n| Wolpert (2019b) | [Source](https://arxiv.org/pdf/1901.00386.pdf) |\n| Wong-Riley (1989) | [Source](http://www.sciencedirect.com/science/article/pii/0166223689901653) |\n| Wu et al. (2016) | [Source](https://arxiv.org/pdf/1609.08144.pdf) |\n| Yamins and DiCarlo (2016) | [Source](https://www.nature.com/articles/nn.4244) |\n| Yamins et al. 
(2014) | [Source](https://www.pnas.org/content/111/23/8619) |\n| Yang and Calakos (2013) | [Source](https://www.frontiersin.org/articles/10.3389/fnsyn.2013.00008/full) |\n| Yang and Wang (2006) | [Source](https://www.ncbi.nlm.nih.gov/pubmed/16723526) |\n| Yang et al. (1998) | [Source](https://www.pnas.org/content/95/13/7715/) |\n| Yap and Greenberg (2018) | [Source](https://www.cell.com/neuron/pdf/S0896-6273(18)30901-2.pdf) |\n| YouTube, “Analog Supercomputers: From Quantum Atom to Living Body | Rahul Sarpeshkar | TEDxDartmouth” | [Source](https://youtu.be/ZycidN_GYo0) |\n| YouTube, “Biophysics of object segmentation in a collision-detecting neuron” | [Source](https://www.youtube.com/watch?v=5E5MYf9Z8R0) |\n| YouTube, “Bush dodges flying shoes” | [Source](https://www.youtube.com/watch?v=TxNprnas7i8) |\n| YouTube, “Homo digitalis – Henry Markram” | [Source](https://youtu.be/DvE-nphgswY?t=1183) |\n| YouTube, “Hubel and Wiesel Cat Experiment” | [Source](https://www.youtube.com/watch?v=IOHayh06LJ4) |\n| YouTube, “Jonathan Pillow – Tutorial: Statistical models for neural data – Part 1 (Cosyne 2018)” | [Source](https://youtu.be/NFeGW5ljUoI?t=968) |\n| YouTube, “Lecture 7: Information Processing in the Brain” | [Source](https://www.youtube.com/watch?v=Bm40BSZJRck#t=10m10s) |\n| YouTube, “Markus Meister, Neural computations in the retina: from photons to behavior: 2016 Sharp Lecture” | [Source](https://www.youtube.com/watch?v=2UpiWMukZeI) |\n| YouTube, “Matt Botvinick: Neuroscience, Psychology, and AI at DeepMind | Lex Fridman Podcast #106” | [Source](https://www.youtube.com/watch?v=3t06ajvBtl0) |\n| YouTube, “Neural networks and the brain: from the retina to semantic cognition – Surya Ganguli” | [Source](https://www.youtube.com/watch?v=FKi6sWK9Qo0&feature=youtu.be&t=295) |\n| YouTube, “Neuralink Launch Event” | [Source](https://www.youtube.com/watch?v=r-vbh3t7WVI) |\n| YouTube, “Quantum Processing in the Brain? (Matthew PA Fisher)” | [Source](https://www.youtube.com/watch?v=IP_GmTKYlsc) |\n| YouTube, “Stanford Seminar – Generalized Reversible Computing and the Unconventional Computing Landscape” | [Source](https://youtu.be/IQZ_bQbxSXk) |\n| YouTube, “The Stilwell Brain” | [Source](https://www.youtube.com/watch?v=rA5qnZUXcqo&vl=en) |\n| YouTube, “Yann LeCun – How does the brain learn so much so quickly? (CCN 2017)” | [Source](https://www.youtube.com/watch?v=cWzi38-vDbE) |\n| Yu et al. (2009) | [Source](https://journals.physiology.org/doi/full/10.1152/jn.90941.2008) |\n| Yue et al. (2016) | [Source](https://ezproxy-prd.bodleian.ox.ac.uk:2056/science/article/pii/S1350946216300271) |\n| Yuste (2015) | [Source](https://www.nature.com/articles/nrn3962) |\n| Zador (1998) | [Source](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.125.8765&rep=rep1&type=pdf) |\n| Zador (1999) | [Source](https://journals.physiology.org/doi/pdf/10.1152/jn.1998.79.3.1219) |\n| Zador (2019) | [Source](http://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2019/08/A-critique-of-pure-learning-and-what-artificial-neuralnetworks-can-learn-from-animal-brains.pdf) |\n| Zaghloul and Boahen (2006) | [Source](https://web.stanford.edu/group/brainsinsilicon/pdf/06_ZaghloulBoahenJNE06.pdf) |\n| Zbili and Debanne (2019) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6492051/pdf/fncel-13-00160.pdf) |\n| Zbili et al. (2016) | [Source](https://www.frontiersin.org/articles/10.3389/fncel.2016.00278/full) |\n| Zenke et al. (2017) | [Source](https://arxiv.org/pdf/1703.04200.pdf) |\n| Zhang et al. 
(2014) | [Source](https://pubmed.ncbi.nlm.nih.gov/24453330/) |\n| Zhang et al. (2019) | [Source](https://www.biorxiv.org/content/10.1101/296301v1) |\n| Zhou et al. (2013) | [Source](https://www.pnas.org/content/pnas/110/29/E2714.full.pdf) |\n| Zhu et al. (2012) | [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3325488/pdf/nihms-357436.pdf) |\n| Zilberter et al. (2005) | [Source](https://pubmed.ncbi.nlm.nih.gov/16061520/) |\n| Zuo et al. (2005) | [Source](https://pubmed.ncbi.nlm.nih.gov/15848798/) |\n| Zuo et al. (2015) | [Source](https://www.cell.com/current-biology/fulltext/S0960-9822(14)01560-7?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0960982214015607%3Fshowall%3Dtrue) |\n\n\n\n[1.](https://www.openphilanthropy.org/brain-computation-report#footnoteref1_2nn9xh3)The names “mechanistic method” and “functional method” were suggested by our technical advisor Dr. Dario Amodei, though he uses a somewhat more specific conception of the mechanistic method. [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) also distinguish between “straightforward multiplicate estimates” and those that are based on “analogy or constraints” (p. 84, Appendix A).\n\n\n[2.](https://www.openphilanthropy.org/brain-computation-report#footnoteref2_hqs4jxi)Here I am using “software” in a way that includes trained models in addition to hand-coded programs. Some forms of hardware (including neuromorphic hardware – see [Mead (1989)](https://www.amazon.com/Analog-VLSI-Neural-Systems-Carver/dp/0201059924)) complicate traditional distinctions between hardware and software, but the broader consideration at stake here – e.g., that task-performance requires organizing available computational power in the right way – remains applicable.\n\n\n[3.](https://www.openphilanthropy.org/brain-computation-report#footnoteref3_10d52qz)Though it also seems easier, in general, to show that X is enough, than that X is strictly required – an asymmetry present throughout the report.\n\n\n[4.](https://www.openphilanthropy.org/brain-computation-report#footnoteref4_xne824i)The probabilities reported here should be interpreted as subjective levels of confidence or “credences,” not as claims about objective frequencies, statistics, or “propensities” (see [Peterson (2009)](https://www.amazon.com/Introduction-Decision-Cambridge-Introductions-Philosophy-ebook/dp/B00E3URCAE/ref=sr_1_5?dchild=1&keywords=introduction+to+decision-theory&qid=1586030921&sr=8-5), Chapter 7, for discussion of various alternative interpretations of probability judgments). One way of defining these credences is via preferences over lotteries – a definition I find useful (though not fully satisfactory). On such a definition, “I think it more likely than not” means that, for example, if I had the option to win $10,000 if 10^15 FLOP/s is sufficient, in principle, to match human-level task-performance, or the option to win $10,000 if 10^15 FLOP/s is *not* sufficient, I would choose the former option. Skepticism about my answer should go in proportion to confidence that 1e15 FLOP/s is not sufficient (e.g., those who disagree should prefer the latter option rather than the former), rather than with dissatisfaction with the evidence available either way (I too am quite dissatisfied in this regard), or disinclination to take real-world bets (why turn down a free chance at $10,000?). 
That said, for various reasons, I don’t find this definition of subjective probability judgments fully satisfactory (in particular, it transforms probabilistic claims about the world into true/false claims about one’s betting behavior – and it’s not clear exactly what sort of betting behavior is implied, or what consistency in such behavior is assumed), so I offer it more as a gesture at a way of soliciting subjective credences than as an endorsed definition. See [Peterson (2009)](https://www.amazon.com/Introduction-Decision-Cambridge-Introductions-Philosophy-ebook/dp/B00E3URCAE/ref=sr_1_5?dchild=1&keywords=introduction+to+decision-theory&qid=1586030921&sr=8-5), section 7.5, for discussion of lotteries of this type in the context of the literature on decision-theory. See also [this blog post by Andrew Critch](http://acritch.com/credence/) for more informal discussion; and see [Muehlhauser (2017a)](https://www.openphilanthropy.org/blog/technical-and-philosophical-questions-might-affect-our-grantmaking#Making_decisions_under_different_kinds_of_uncertainty), section 2, for discussion of some complexities involved in using these probabilities in practice.\n\n\n[5.](https://www.openphilanthropy.org/brain-computation-report#footnoteref5_376tjis)I focus on this model in particular because I think it fits best with the mechanistic method evidence I’ve thought about most and take most seriously. Offering specific probabilities keyed to the minimum FLOP/s sufficient for task-performance, by contrast, requires answering further questions about the theoretical limits of algorithmic efficiency that I haven’t investigated.\n\n\n[6.](https://www.openphilanthropy.org/brain-computation-report#footnoteref6_8qle859)See [here](https://www.thinkmate.com/product/nvidia/900-2g500-0010-000) for V100 prices (currently ~$8,799); and [here](https://www.nytimes.com/2020/06/22/technology/japanese-supercomputer-fugaku-tops-american-chinese-machines.html) for the $1 billion Fugaku price tag: “The six-year budget for the system and related technology development totaled about $1 billion, compared with the $600 million price tags for the biggest planned U.S. systems.” Fugaku FLOP/s performance is listed [here](https://www.top500.org/lists/top500/2020/06/), at around ~4×10^17 to 5×10^17 FLOP/s. Google’s TPU supercomputer, which recently broke records in training ML systems, can also do ~4×10^17 FLOP/s, though I’m not sure of the costs. See [Kumar (2020)](https://cloud.google.com/blog/products/ai-machine-learning/google-breaks-ai-performance-records-in-mlperf-with-worlds-fastest-training-supercomputer): “In total, this system delivers over 430 PFLOPs of peak performance.” The A100, for ~$200,000, can do 5×10^15 FLOP/s – see [Mehar (2020)](https://www.inceptivemind.com/nvidia-dgx-a100-world-first-5-petaflops-system/13267/#:~:text=NVIDIA%20DGX%20A100%20packs%20record%205%20petaflops%20of%20AI%20performance.&text=NVIDIA%20has%20unveiled%20the%20third,the%20new%20NVIDIA%20DGX%20A100.). 
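As a rough illustration of what these figures imply in FLOP/s per dollar, here is a minimal sketch in Python. It uses only the prices and peak-performance numbers quoted in this footnote; the Fugaku figure takes the midpoint of the quoted ~4–5×10^17 FLOP/s range, and the quoted prices are treated as total system cost (both assumptions of mine, not claims from the cited sources).

```python
# Rough FLOP/s-per-dollar implied by the figures quoted in this footnote.
# Sketch only: peak FLOP/s and prices are the quoted values; the Fugaku
# number is the assumed midpoint of the ~4e17-5e17 FLOP/s range cited above.
systems = {
    "Fugaku (~$1B budget)": (4.5e17, 1e9),
    "NVIDIA A100 system (~$200,000)": (5e15, 2e5),
}

for name, (peak_flops, price_usd) in systems.items():
    print(f"{name}: ~{peak_flops / price_usd:.1e} FLOP/s per dollar")
# Prints roughly 4.5e+08 for Fugaku and 2.5e+10 for the A100 system.
```

These are peak numbers measured under different precisions and workloads, so they are not directly comparable; the sketch is only meant to make the orders of magnitude concrete.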
NVIDIA’s newest SuperPOD can deliver ~7×10^17 FLOP/s of AI performance – see [Paikeday (2020)](https://blogs.nvidia.com/blog/2020/05/14/dgx-superpod-a100/).\n\n\n[7.](https://www.openphilanthropy.org/brain-computation-report#footnoteref7_hqg8qk3)See discussion in [Section 1.3](https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#Context) below.\n\n\n[8.](https://www.openphilanthropy.org/brain-computation-report#footnoteref8_i2q7fw5)Selection effects include: expertise related to an issue relevant to the report, willingness to talk with me about the subject, recommendation by one of the other experts I spoke with as a possible source of helpful input, and connection (sometimes a few steps removed) with the professional and social communities that intersect at Open Philanthropy.\n\n\n[9.](https://www.openphilanthropy.org/brain-computation-report#footnoteref9_7pczl9i)See [Poldrack et al. (2017)](https://www.nature.com/articles/nrn.2016.167); [Vul and Pashler (2017)](https://books.google.com/books?id=VMbXDQAAQBAJ&lpg=PA196&ots=js2Q-GfBZY&lr=lang_en&pg=PA196#v=onepage&q&f=false); [Uttal (2012)](https://mitpress.mit.edu/books/reliability-cognitive-neuroscience); [Button et al. (2013)](https://www.nature.com/articles/nrn3475); [Szucs and P.A. Ioannidis (2017)](https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2000797); and [Carp (2012)](https://www.sciencedirect.com/science/article/abs/pii/S1053811912007057). And see also [Muehlhauser (2017b)](https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood), Appendix Z.8, for discussion of his reasons for default skepticism of published studies. My thanks to Luke Muehlhauser for suggesting this type of consideration and these references.\n\n\n[10.](https://www.openphilanthropy.org/brain-computation-report#footnoteref10_wg3tydz)This effort is itself part of a project at Open Philanthropy currently called [Worldview Investigations](https://www.openphilanthropy.org/blog/our-progress-2019-and-plans-2020#Worldview_investigations), which aims to investigate key questions informing our grant-making.\n\n\n[11.](https://www.openphilanthropy.org/brain-computation-report#footnoteref11_w8x4o51)See, for example, [Moravec (1998)](https://jetpress.org/volume1/moravec.pdf), chapter 2; and [Kurzweil (2005)](https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook/dp/B000QCSA7C), chapter 3. See [this list](https://aiimpacts.org/list-of-analyses-of-time-to-human-level-ai/) from AI Impacts for related forecasts.\n\n\n[12.](https://www.openphilanthropy.org/brain-computation-report#footnoteref12_7fre80b)See, for example, [Malcolm (2000)](https://web.archive.org/web/20100722014446/http://www.dai.ed.ac.uk/homes/cam/Robots_Wont_Rule2.shtml); [Lanier (2000)](https://www.edge.org/conversation/one-half-a-manifesto) (“Belief # 5”); [Russell (2019)](https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem-ebook/dp/B07N5J5FTS) (p. 78). [AI Impacts](https://aiimpacts.org/how-ai-timelines-are-estimated/) offers a framework that I find helpful, which uses indifference curves indicating which combinations of hardware and software capability yield the same overall task-performance. 
This framework (see especially Figure 3) makes clear that the first human-level AI systems could use much more or much less hardware than the amount “equivalent” to the human brain (at least assuming that this amount is not the absolute minimum) – though see figure 4 for a scenario in which brain-equivalent hardware is a better basis for forecasts.\n\n\n[13.](https://www.openphilanthropy.org/brain-computation-report#footnoteref13_cw60b9y)Moravec argues [here](https://jetpress.org/volume1/commentary.htm) that “under current circumstances, I think computer power is the pacing factor for AI” (see his second reply to Robin Hanson). [Kurzweil (2005)](https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook/dp/B000QCSA7C) devotes all of Chapter 4 to the question of software.\n\n\n[14.](https://www.openphilanthropy.org/brain-computation-report#footnoteref14_m780czs)For example: a ResNet-152 uses ~1e10 FLOP to classify an image, but took ~1e19 FLOP (a billion times more) to train, according to [Hernandez and Amodei (2018)](https://openai.com/blog/ai-and-compute/) (see appendix, though see also [Hernandez and Brown (2020)](https://openai.com/blog/ai-and-efficiency/) for discussion of decreasing training costs for vision models over time).\n\n\n[15.](https://www.openphilanthropy.org/brain-computation-report#footnoteref15_24im16f)[Silver et al. (2017)](https://www.nature.com/articles/nature24270): “Over the course of training, 4.9 million games of self-play were generated” (see “Empirical analysis of AlphaGo Zero training”). A bigger version of the model trained on 29 million games. See [Kaplan et al. (2020)](https://arxiv.org/pdf/2001.08361.pdf) and [Hestness et al. (2017)](https://arxiv.org/pdf/1712.00409.pdf) for more on the scaling properties for training in deep learning.\n\n\n[16.](https://www.openphilanthropy.org/brain-computation-report#footnoteref16_f17b6og)The question of what sorts of task-performance will result from what sorts of training is centrally important in this context, and I am not here assuming any particular answers to it.\n\n\n[17.](https://www.openphilanthropy.org/brain-computation-report#footnoteref17_3oh6ac5)The fact that training a model requires running it a lot makes this clear. But there are also more complex relationships between e.g. model size and amount of training data. See [Kaplan et al. (2020)](https://arxiv.org/pdf/2001.08361.pdf) and [Hestness et al. (2017)](https://arxiv.org/pdf/1712.00409.pdf).\n\n\n[18.](https://www.openphilanthropy.org/brain-computation-report#footnoteref18_baro8pn)See e.g. [Dongarra et al. (2003)](https://www.netlib.org/utk/people/JackDongarra/PAPERS/146_2003_the-linpack-benchmark-past-present-and-future.pdf): “the performance of a computer is a complicated issue, a function of many interrelated quantities. These quantities include the application, the algorithm, the size of the problem, the high-level language, the implementation, the human level of effort used to optimize the program, the compiler’s ability to optimize, the age of the compiler, the operating system, the architecture of the computer and the hardware characteristics” (p. 805); [Moravec (1988)](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2): “Any particular formula for estimating power may be grossly misled by an unlucky or diabolic counterexample. 
For instance, if a computer’s power were defined simply by how many additions per second it could do, an otherwise useless special circuit made of an array of fast adders, and nothing else, costing a few hundred dollars, could outperform a $10-million supercomputer” (p. 169); [Nordhaus (2001)](https://web.archive.org/web/20160222082744/http://www.econ.yale.edu/~nordhaus/homepage/prog_083001a.pdf): “Measuring computer power has bedeviled analysts because computer characteristics are multidimensional and evolve rapidly over time.” (p. 5).\n\n\n[19.](https://www.openphilanthropy.org/brain-computation-report#footnoteref19_0f829fd)An operation, here, is an abstract mapping from inputs to outputs that can be implemented by a computer, and that is treated as basic for the purpose of the analysis in question (see [Schneider and Gersting (2018)](https://www.amazon.com/Invitation-Computer-Science-G-Michael-Schneider/dp/1337561916) (p. 96-100)). A FLOP is itself composed out of a series of much simpler logic operations, which are in some contexts a more natural and basic computational unit. See e.g. [Sipser (2013)](https://www.amazon.com/Introduction-Theory-Computation-Michael-Sipser/dp/113318779X), section 9.3, for discussion of analyzing the complexity of algorithms in terms of the number of AND, OR, and NOT gates required to construct a functional circuit. The report’s analysis could in principle be converted into these units instead – or, indeed, into any computational unit capable of simulating a FLOP.\n\n\n[20.](https://www.openphilanthropy.org/brain-computation-report#footnoteref20_ybj43f6)See e.g. [Kahn and Mann (2020)](https://cset.georgetown.edu/wp-content/uploads/AI-Chips%E2%80%94What-They-Are-and-Why-They-Matter.pdf): “The success of modern AI techniques relies on computation on a scale unimaginable even a few years ago. Training a leading AI algorithm can require a month of computing time and cost $100 million” (p. 3); and Geoffrey Hinton’s comments in [Lee (2016)](https://www.macleans.ca/society/science/the-meaning-of-alphago-the-ai-program-that-beat-a-go-champ/): “In deep learning, the algorithms we use now are versions of the algorithms we were developing in the 1980s, the 1990s. People were very optimistic about them, but it turns out they didn’t work too well. Now we know the reason is they didn’t work too well is that we didn’t have powerful enough computers, we didn’t have enough data sets to train them. If we want to approach the level of the human brain, we need much more computation, we need better hardware.” For more discussion of the compute burdens of contemporary AI applications, see e.g. [Kaplan et al. (2020)](https://arxiv.org/pdf/2001.08361.pdf), [Amodei and Hernandez (2018)](https://openai.com/blog/ai-and-compute/#lookingforward), and [McCandlish et al. (2018)](https://arxiv.org/pdf/1812.06162.pdf). Note that the dominant costs here are from *training* the relevant systems, not from running them. However, the costs of training depend centrally on the costs of running (along with other factors). This relationship is central to my colleague Ajeya Cotra’s investigation.\n\n\n[21.](https://www.openphilanthropy.org/brain-computation-report#footnoteref21_yubq0il)I say a little bit about communication bandwidth in [Section 5](https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#TheCommunicationMethod). See [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) (p. 
84-85), for a literature review of memory estimates. See [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/) (“FLOP/s”) for some discussion of other relevant factors.\n\n\n[22.](https://www.openphilanthropy.org/brain-computation-report#footnoteref22_0lgamzp)[Eugene Izhikevich](https://www.izhikevich.org/human_brain_simulation/why.htm), for example, reports that in running his brain simulation, he did not have the memory required to store all of the synaptic weights (10,000 terabytes), and so had to regenerate the anatomy of his simulated brain every time step; and Stephen Larson suggested that one of the motivations behind the Blue Brain project’s reliance on a supercomputer was the need to reduce latency between computation units (see [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Larson](https://www.openphilanthropy.org/research/dr-stephen-larson-ceo-of-metacell-and-co-founder-of-openworm/) (p. 5)). See also Fathom Computing’s comment [here](https://www.fathomcomputing.com/): “Data movements, not math or logic operations, are the bottleneck in computing” (though this is hardly an unbiased source); Hollemans’ comments [here](https://machinethink.net/blog/how-fast-is-my-model/): “The number of computations — whether you count them as MACCs or FLOPS — is only part of the story. Memory bandwidth is the other part, and most of the time is even more important!”; and various citations from [AI Impacts](https://aiimpacts.org/brain-performance-in-teps/), e.g. [Angel et al. (2012)](https://pdfs.semanticscholar.org/803b/73d66fb0a940905c91ff955dc1c9963459c0.pdf), and [Takahashi (2012)](https://venturebeat.com/2012/12/11/copper-wires-might-be-the-bottleneck-in-the-way-of-moores-law/).\n\n\n[23.](https://www.openphilanthropy.org/brain-computation-report#footnoteref23_je4jj2a)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “the architecture of a given computer (especially e.g. a standard von Neumann architecture) might create significant overhead. For example, the actual brain co-locates long-term memory and computing. If you had to store longer-term data in a conventional RAM instead, many additional operations might be necessary in order to locate, address, and update relevant variables” (p. 1). One option for reducing overheads might involve neuromorphic computing architectures (see [Mead (1989)](https://www.amazon.com/Analog-VLSI-Neural-Systems-Carver/dp/0201059924), descriptions [here](https://en.wikipedia.org/wiki/Neuromorphic_engineering), and papers [here](https://web.stanford.edu/group/brainsinsilicon/pubs.html); [Zaghloul and Boahen (2006)](https://web.stanford.edu/group/brainsinsilicon/pdf/06_ZaghloulBoahenJNE06.pdf) report a “100-fold improvement over conventional microprocessors” (p. 266)). There is also a growing industry of chips designed specifically for AI applications (see [Khan (2020)](https://cset.georgetown.edu/wp-content/uploads/Why-AI-Chips-Matter.pdf): “AI-specialized chip designs are an additional 10 to 1,000 more cost-effective for training AI algorithms than ordinary chips” (p. 
2)).\n\n\n[24.](https://www.openphilanthropy.org/brain-computation-report#footnoteref24_h8e70n1)An example of “unrealistically extreme abundance” would be the type of abundance of memory required by a [giant look-up table](http://www.cs.yale.edu/homes/dvm/papers/humongous.pdf). Even bracketing such obviously extreme scenarios, though, it seems possible that trade-offs between FLOP/s and other computational resources might complicate talk about the minimum FLOP/s sufficient to do X, absent further more specific constraints on the other resources available. I haven’t delved into this issue much: my hope is that insofar as it’s a problem in theory, the actual evidence surveyed in the report will still be useful in practice.\n\n\n[25.](https://www.openphilanthropy.org/brain-computation-report#footnoteref25_2tuoex6)See [Ananthanarayanan et al. (2009)](https://people.eecs.berkeley.edu/~demmel/cs267_Spr10/Lectures/RajAnanthanarayanan_SC09-a63.pdf) for discussion of the hardware complexities involved in brain simulation.\n\n\n[26.](https://www.openphilanthropy.org/brain-computation-report#footnoteref26_d6hdhnd)Objections focused on general differences between brains and various human-engineered computers (e.g., the brain lacks a standardized clock, the brain is very parallel, the brain is analog, the brain is stochastic, the brain is chaotic, the brain is embodied, the brain’s memory works differently, the brain lacks a sharp distinction between hardware and software, etc.) are therefore relevant only insofar as they are incompatible with particular claims in the report; they are not, as far as I can tell, incompatible with any underlying assumptions of the project as a whole (except insofar as they are taken to suggest that no human-engineered computer can perform the tasks the brain performs – a form of skepticism the report does not attempt to address). See [Marcus (2015)](https://www.nytimes.com/2015/06/28/opinion/sunday/face-it-your-brain-is-a-computer.html) for discussion of some such objections. The different methods I consider rely on their own, somewhat more substantive assumptions.\n\n\n[27.](https://www.openphilanthropy.org/brain-computation-report#footnoteref27_tcs4hmd)My impression is that the content reviewed here is basically settled science, though see [Section 1.5.1](https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#151-uncertainty-in-neuroscience) for discussion of various types of ongoing neuroscientific uncertainty.\n\n\n[28.](https://www.openphilanthropy.org/brain-computation-report#footnoteref28_redzbhw)[Azevedo et al. (2009)](https://www.ncbi.nlm.nih.gov/pubmed/19226510): “We find that the adult male human brain contains on average 86.1 ± 8.1 billion NeuN-positive cells (“neurons”) and 84.6 ± 9.8 billion NeuN-negative (“nonneuronal”) cells” (532). My understanding is that the best available method of counting neurons is isotropic fractionation, which proceeds by dissolving brain structures into a kind of homogenous “[brain soup](https://news.vanderbilt.edu/vanderbiltmagazine/brainiac-with-her-innovative-brain-soup-suzana-herculano-houzel-is-changing-neuroscience-one-species-at-a-time/),” and then counting cell nuclei (see [Herculano-Houzel and Lent (2005)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6725175/pdf/00252518.pdf) for a more technical description of the process, and [Bartheld et al. 
(2016)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5063692/pdf/nihms799882.pdf) for a history of cell-counting in the brain). Note that there may be substantial variation in cell counts between individuals (for example, according to [Bartheld et al. (2016)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5063692/pdf/nihms799882.pdf) (p. 9), citing [Haug (1986)](https://pubmed.ncbi.nlm.nih.gov/3540464/) and [Pakkenberg and Gundersen (1997)](https://pubmed.ncbi.nlm.nih.gov/9215725/), neocortical neuron count may vary by a factor of more than two, though I haven’t checked these further citations). At one point it was widely thought that the ratio of glial cells (a type of non-neuronal cell) to neurons in the brain was 10:1, but this is wrong (see [Bartheld et al. (2016)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5063692/pdf/nihms799882.pdf)).\n\n\n[29.](https://www.openphilanthropy.org/brain-computation-report#footnoteref29_awm3f8h)I do not have a rigorous definition of “signaling” between cells, though there may be one available. A central example would be when one cell has a specialized mechanism for sending out a particular type of chemical to another cell, which in turn has a specialized receptor for receiving that chemical. See [Lodish et al. (2008)](https://books.google.com/books?hl=en&lr=&id=K3JbjG1JiUMC&oi=fnd&pg=PA1&dq=lodish+et+al+2008&ots=asG8ZUys4H&sig=ysgdCjfzBAJCpglEURb1P-_M1sY#v=snippet&q=neurons%20and%20glia&f=false), ch. 15 and 16, for lengthy discussion of biological signaling mechanisms. For examples of signaling by non-neuronal cells, see the [section on glia](https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#Glia). Jess Riedel suggested a definition on which the functionally-structured impact of one cell on another counts as signaling if the impact on the second cell varies based on the state of the first (as opposed to, e.g., one cell sending the other one resources irrespective of the first cell’s state) – a case in which the impact on the second cell provides information about the state of the first (see [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/research/dr-jess-riedel-senior-research-scientist-physics-ntt-research/), p. 5).\n\n\n[30.](https://www.openphilanthropy.org/brain-computation-report#footnoteref30_jtcnpkq)The texts I have engaged with in cognitive science and neuroscience do not attempt to give necessary and sufficient conditions for a physical system to count as “processing information,” and I will not attempt a rigorous definition here (see [Piccinini and Scarantino (2011)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3006465/pdf/10867_2010_Article_9195.pdf) for an attempt to disambiguate and evaluate a few possible interpretations, based on different possible conceptions of the relevant type of “information”). My impression, though, is that the intuitive notion is roughly as follows. The brain’s activity makes what you do sensitive to sensory input, past and present (someone [throws a shoe at your head](https://www.youtube.com/watch?v=TxNprnas7i8), and you duck; you see an old friend at a coffee shop, and you stop to chat). 
Such sensitivity requires that when the brain receives one set of sensory inputs, rather than another, this difference is reflected somehow in the state of the nervous system in a manner available, at least initially, to make a reliable difference between one macroscopically-specified behavioral response and another (though lots of information is quickly discarded). In this sense, the brain takes in or “encodes” information about sensory inputs using different biophysical variables (that is, aspects of the biophysical system that can be in different states). The brain then processes this information in the sense that the states of these variables serve as inputs to further causal processes in the brain which combine to create behavioral sensitivity to high-level properties of an organism’s environment and history. Thus, for example, if you want to set up a brain that causes an organism to run from a tiger, but not from a tree, you need to have more than a set of biophysical variables that correlate with the type of light hitting different parts of the eye – you also need causal processes that “extract” from that light an answer to the question “is this a tiger or a tree?”, and then cause the relevant behavioral response. For more discussion in this vein, see e.g. [London and Häusser (2005)](https://pdfs.semanticscholar.org/81cf/8182d5725d5fdd9a33b65843f2f9fdb6a9f6.pdf) (p. 209); [Koch (1999)](https://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999) (p. 1); [Hanson (2016)](https://www.amazon.com/Age-Em-Work-Robots-Earth/dp/1536619590) (p. 50); and [Marr (1982)](https://www.amazon.com/Vision-Computational-Investigation-Representation-Information/dp/0262514621) (p. 3). See [this video](https://www.youtube.com/watch?v=rA5qnZUXcqo&vl=en) for a vivid illustration of feature extraction; and [this video](https://www.youtube.com/watch?v=obAHnwp9tOM) for a nice example of neural information-processing.\n\n\n[31.](https://www.openphilanthropy.org/brain-computation-report#footnoteref31_xzuh3db)See the “anatomy of a neuron” section [here](https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/overview-of-neuron-structure-and-function) for a quick description. See [Kandel et al. (2013)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138626), ch. 4-8, [Lodish et al. (2008)](https://books.google.com/books?hl=en&lr=&id=K3JbjG1JiUMC&oi=fnd&pg=PA1&dq=lodish+et+al+2008&ots=asG8ZUys4H&sig=ysgdCjfzBAJCpglEURb1P-_M1sY#v=snippet&q=neurons%20and%20glia&f=false), ch. 
23, and [this series of videos](https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/overview-of-neuron-structure-and-function), for detailed descriptions of basic neuron structure and function.\n\n\n[32.](https://www.openphilanthropy.org/brain-computation-report#footnoteref32_0qequig)Neurons can also synapse onto blood vessels, muscle cells, neuron cell bodies, axons, and axon terminals (at least according to the [medical gallery of Blausen Medical 2014](https://en.wikipedia.org/wiki/Synapse#/media/File:Blausen_0843_SynapseTypes.png)), but for simplicity, I will focus on synapses between axon terminals and dendrites in what follows.\n\n\n[33.](https://www.openphilanthropy.org/brain-computation-report#footnoteref33_xq4ycfp)See [Siegelbaum and Koester (2013a)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138627): “In addition to ion channels, nerve cells contain a second important class of proteins specialized for moving ions across cell membranes, the ion transporters or pumps. These proteins do not participate in rapid neuronal signaling but rather are important for establishing and maintaining the concentration gradients of physiologically important ions between the inside and outside of the cell” (p. 100). See also the section on “Where does the resting membrane potential come from?” [here](https://neurology.mhmedical.com/book.aspx?bookID=1049).\n\n\n[34.](https://www.openphilanthropy.org/brain-computation-report#footnoteref34_hdxry16)See [Siegelbaum and Koester (2013c)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138628) (p. 126-147); and the section “Where does the resting membrane potential come from?” [here](https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/the-membrane-potential).\n\n\n[35.](https://www.openphilanthropy.org/brain-computation-report#footnoteref35_dbl2189)See [Siegelbaum and Koester (2013a)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138627) (p. 100-124), for detailed description of ion channel dynamics.\n\n\n[36.](https://www.openphilanthropy.org/brain-computation-report#footnoteref36_ra0rre0)See [Kandel et al. (2013)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138622) (p. 31-35); and [Siegelbaum and Koester (2013b)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138629) (p. 148-171), for description. See also [here](https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/the-membrane-potential).\n\n\n[37.](https://www.openphilanthropy.org/brain-computation-report#footnoteref37_dmg140o)See [Siegelbaum and Koester (2013d)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138632) (p. 184-187); [Siegelbaum et al. (2013c)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138636) (p. 260-287); and description [here](https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/the-synapse) in the section “overview of transmission at chemical synapses”. See also [Lodish et al. (2008)](https://books.google.com/books?hl=en&lr=&id=K3JbjG1JiUMC&oi=fnd&pg=PA1&dq=lodish+et+al+2008&ots=asG8ZUys4H&sig=ysgdCjfzBAJCpglEURb1P-_M1sY#v=snippet&q=neurons%20and%20glia&f=false) (p. 1020). 
Note that action potentials do not always trigger synaptic transmission: see [section 2.1.1.2.2.](https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#PossibleComplications)\n\n\n[38.](https://www.openphilanthropy.org/brain-computation-report#footnoteref38_7e5dirw)I’ll refer to the event of a spike arriving at a synapse as a “spike through synapse.” A network of interacting neurons is sometimes called a *[neural circuit](https://en.wikipedia.org/wiki/Neural_circuit)*. A series of spikes from a single neuron is sometimes called a *[spike train](http://www.neuwritewest.org/blog/2015/1/3/ask-a-neuroscientist-whats-a-spike-train)*. From [Khan Academy](https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/neurotransmitters-their-receptors): “we can divide the receptor proteins that are activated by neurotransmitters into two broad classes: Ligand-activated ion channels: These receptors are membrane-spanning ion channel proteins that open directly in response to ligand binding. Metabotropic receptors: These receptors are not themselves ion channels. Neurotransmitter binding triggers a signaling pathway, which may indirectly open or close channels (or have some other effect entirely)” (see section “Two types of neurotransmitter receptors”). See [Siegelbaum et al. (2013)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138634) (p. 210-235), for more on the first class of receptors; and [Siegelbaum et al. (2013b)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138635) (p. 236-255), for more on the second.\n\n\n[39.](https://www.openphilanthropy.org/brain-computation-report#footnoteref39_7iq0k42)This particular picture appears to show one neuron synapsing onto the cell body of another, as opposed to the dendrites. But dendrites are generally taken to be the main receivers of synaptic signals.\n\n\n[40.](https://www.openphilanthropy.org/brain-computation-report#footnoteref40_a1zs2zb)See [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/): “Setting aside plasticity, most people assume that modeling the immediate impact of a pre-synaptic spike on the post-synaptic neuron is fairly simple. Specifically, you can use a single synaptic weight, which reflects the size of the impact of a spike through that synapse on the post-synaptic membrane potential” (p. 1). [Lahiri and Ganguli (2013)](https://papers.nips.cc/paper/4872-a-memory-frontier-for-complex-synapses.pdf) note that the theoretical models often treat synapses as “described solely by a single scalar value denoting the size of a post-synaptic potential” (p. 1), though they do not endorse this.\n\n\n[41.](https://www.openphilanthropy.org/brain-computation-report#footnoteref41_i4amfgc)See discussion and citations in [Section 2.2](https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#Learning) for more details.\n\n\n[42.](https://www.openphilanthropy.org/brain-computation-report#footnoteref42_0gq6kl0)[Cudmore and Desai (2008)](http://www.scholarpedia.org/article/Intrinsic_plasticity): “Intrinsic plasticity is the persistent modification of a neuron’s intrinsic electrical properties by neuronal or synaptic activity. 
It is mediated by changes in the expression level or biophysical properties of ion channels in the membrane, and can affect such diverse processes as synaptic integration, subthreshold signal propagation, spike generation, spike backpropagation, and meta-plasticity” (opening section).\n\n\n[43.](https://www.openphilanthropy.org/brain-computation-report#footnoteref43_s5updbo)See e.g. [Munno and Syed (2003)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2343306/), [Ming and Song (2011)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3106107/), [Grutzendler et al. (2002)](https://www.ncbi.nlm.nih.gov/pubmed/12490949), [Holtmaat et al. (2005)](https://www.ncbi.nlm.nih.gov/pubmed/15664179).\n\n\n[44.](https://www.openphilanthropy.org/brain-computation-report#footnoteref44_n6gow68)See [Schwartz and Javitch (2013)](https://neurology.mhmedical.com/content.aspx?bookid=1049§ionid=59138637), (p. 297-301); [Russo (2017)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5424629/pdf/nihms860267.pdf); and [Leng and Ludwig (2008)](https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/18845614/): “Neurones use many different molecules to communicate with each other, acting in many different ways via specific receptors. Amongst these molecules are more than a hundred different peptides, expressed in different subpopulations of neurons, and many of these peptides are known for the distinctive effects on specific physiological functions that follow central administration of peptide agonists or antagonists.” (p. 5625). See also [Mains and Eipper (1999)](https://www.ncbi.nlm.nih.gov/books/NBK28247/).\n\n\n[45.](https://www.openphilanthropy.org/brain-computation-report#footnoteref45_mt0wn09)[Burrows (1996)](https://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780198523444.001.0001/acprof-9780198523444-chapter-5): “A neuromodulator is a messenger released from a neuron in the central nervous system, or in the periphery, that affects groups of neurons, or effector cells that have the appropriate receptors. It may not be released at synaptic sites, often acts through second messengers and can produce long-lasting effects. The release may be local so that only nearby neurons or effectors are influenced, or may be more widespread, which means that the distinction with a neurohormone can become very blurred. The act of neuromodulation, unlike that of neurotransmission, does not necessarily carry excitation of inhibition from one neuron to another, but instead alters either the cellular or synaptic properties of certain neurons so that neurotransmission between them is changed” (p. 195).\n\n\n[46.](https://www.openphilanthropy.org/brain-computation-report#footnoteref46_sgbsmsn)[Araque and Navarrete (2010)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2894949/pdf/rstb20090313.pdf) (p. 2375); [Bullock et al. (2005)](http://utw10020.utweb.utexas.edu/djlab/pdfs/Bullocketal2005.pdf), (p. 792); [Mu et al. (2019)](https://www.cell.com/cell/pdf/S0092-8674(19)30621-X.pdf); and the rest of the discussion in [Section 2.3.2](https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#Glia).\n\n\n[47.](https://www.openphilanthropy.org/brain-computation-report#footnoteref47_y6mh18x)See e.g. [Anastassiou et al. 
(2011)](https://www.ncbi.nlm.nih.gov/pubmed/21240273) and [Chang (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6312416/pdf/TJP-597-249.pdf), along with the other citations in [Section 2.3.4](https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#EphapticEffects).\n\n\n[48.](https://www.openphilanthropy.org/brain-computation-report#footnoteref48_40cg5iw)See [Bullock et al. (2005)](http://utw10020.utweb.utexas.edu/djlab/pdfs/Bullocketal2005.pdf), describing the history of early neuroscience: “physiological studies established that conduction of electrical activity along the neuronal axon involved brief, all-or-nothing, propagated changes in membrane potential called action potentials. It was thus often assumed that neuronal activity was correspondingly all-or-nothing and that action potentials spread over all parts of a neuron. The neuron was regarded as a single functional unit: It either was active and “firing” or was not” (p. 791).\n\n\n[49.](https://www.openphilanthropy.org/brain-computation-report#footnoteref49_qes5ysp)See [Zbili and Debanne (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6492051/pdf/fncel-13-00160.pdf) for a review, together with the other citations in [Section 2.3.5](https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#OtherFormsOfAxonSignaling).\n\n\n[50.](https://www.openphilanthropy.org/brain-computation-report#footnoteref50_crouzqe)See [Moore and Cao (2008)](https://journals.physiology.org/doi/pdf/10.1152/jn.01366.2006): “we propose that hemodynamics also play a role in information processing through modulation of neural activity… We predict that hemodynamics alter the gain of local cortical circuits, modulating the detection and discrimination of sensory stimuli. This novel view of information processing—that includes hemodynamics as an active and significant participant— has implications for understanding neural representation and the construction of accurate brain models” (p. 2035).\n\n\n[51.](https://www.openphilanthropy.org/brain-computation-report#footnoteref51_bar1658)A few others I am not discussing include: quantum dynamics (see endnote in [section 1.6](https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#ClarifyingTheQuestion)), the [perineuronal net](https://en.wikipedia.org/wiki/Perineuronal_net) (see [Tsien (2013)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3725115/pdf/pnas.201310158.pdf) for discussion), and classical dynamics in microtubules (see [Cantero et al. (2018)](https://www.nature.com/articles/s41598-018-30453-2)). I am leaving quantum dynamics aside mostly for the reasons listed in the endnote in section 1.6. I leave out the other two mechanisms partly because of time constraints, and partly because my impression is that they do not feature very prominently in the discourse on this topic. 
I bucket all the possible alternative mechanisms I am not discussing under the uncertainties discussed in [Section 2.3.7](https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#OverallComputeForAlternativeSignalingMechanisms).\n\n\n[52.](https://www.openphilanthropy.org/brain-computation-report#footnoteref52_u3uo2xw)A few representative summaries: [Marcus (2015)](https://www.nytimes.com/2015/06/28/opinion/sunday/face-it-your-brain-is-a-computer.html): “Neuroscience today is collection of facts, rather than ideas; what is missing is connective tissue. We know (or think we know) roughly what neurons do, and that they communicate with one another, but not what they are communicating. We know the identities of many of the molecules inside individual neurons and what they do. We know from neuroanatomy that there are many repeated structures (motifs) throughout the neocortex. Yet we know almost nothing about what those motifs are for, or how they work together to support complex real-world behavior. The truth is that we are still at a loss to explain how the brain does all but the most elementary things. We simply do not understand how the pieces fit together” (p. 205): [Einevoll et al. (2015)](https://arxiv.org/pdf/1906.06189.pdf): “Despite decades of intense research efforts investigating the brain at the molecular, cell, circuit and system levels, the operating principles of the human brain, or any brain, remain largely unknown… At present we do not have any well-grounded, and certainly not generally accepted, theory about how networks of millions or billions of neurons work together to provide the salient brain functions in animals or humans. We do not even have a well-established model for how neurons in primary visual cortex of mammals work together to form the intriguing neuronal representations with, for example, orientation selectivity and direction selectivity that were discovered by Hubel and Wiesel sixty years ago ([Hubel and Wiesel (1959)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1363130/)).” (p. 2, and p. 8).\n\n\n[53.](https://www.openphilanthropy.org/brain-computation-report#footnoteref53_w6hc027)See especially [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/research/professor-eric-jonas-assistant-professor-of-computer-science-at-the-university-of-chicago/), [Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/), [Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/), [Prof. Konrad Kording](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Konrad%20Kording,%20September%2011,%202019.pdf); [Prof. Eve Marder](https://www.openphilanthropy.org/research/professor-eve-marder-university-professor-and-victor-and-gwendolyn-beinfield-professor-of-neuroscience-brandeis-university/); [Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/); and [Dr. Stephen Larson](https://www.openphilanthropy.org/research/dr-stephen-larson-ceo-of-metacell-and-co-founder-of-openworm/).\n\n\n[54.](https://www.openphilanthropy.org/brain-computation-report#footnoteref54_yjm233l)[Kleinfield et al. 
(2019)](https://www.cell.com/neuron/pdf/S0896-6273(19)30695-6.pdf), (p. 1005), for description of various techniques and their limitations. See also [Marblestone et al. (2013)](https://www.frontiersin.org/articles/10.3389/fncom.2013.00137/full): “Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience… Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters” (p. 1); and [Adam (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6613938/): “A technology that simultaneously records membrane potential from multiple neurons in behaving animals will have a transformative effect on neuroscience research” (p. 413), a quote which suggests that at the least, such a technology is at the cutting edge of what’s available (the paper appears to describe progress on this front). [Stevenson and Kording (2011)](https://www.nature.com/articles/nn.2731) found that “the number of simultaneously recorded single neurons has been growing rapidly, doubling approximately every 7 years. The trend described here predicts that in 15 years physiologists should be able to record from approximately 1,000 neurons” (p. 141). Their data shows that as of 2010, the maximum was a few hundred, though I’m not sure where it is now (see p. 140).\n\n\n[55.](https://www.openphilanthropy.org/brain-computation-report#footnoteref55_qjpsj9i)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/): “At this point, we have no way to reliably measure the input-output transformation of a neuron, where the input is defined as a specific spatio-temporal pattern of synaptic input. You can build models and test their input-output mappings, but you don’t really know how accurate these models are… In live imaging, it’s very difficult to see what’s happening at synapses. Some people do calcium imaging of pre-synaptic terminals, but this is only for one part of the overall synaptic input (and it may create artefacts). Currently, you cannot get a global picture of all the synaptic inputs to a single neuron. You can’t stain all the inputs, and for a big neuron you wouldn’t be able to image the whole relevant volume of space… you don’t actually know what the physiological pattern of inputs is.” See also [Ujfalussy et al. (2018)](https://www.sciencedirect.com/science/article/pii/S0896627318307372): “Our understanding of neuronal input integration remains limited because it is either based on data from *in vitro* experiments, studying neurons under highly simplified input conditions, or on *in vivo* approaches in which synaptic inputs were not observed or controlled, and thus a systematic characterization of the input-output transformation of neurons was not possible” (2018); and notes from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/): “It is very difficult to tell what spatio-temporal patterns of inputs are actually arriving at a neuron’s synapses *in vivo*. You can use imaging techniques, but this is very messy” (p. 
2)\n\n\n[56.](https://www.openphilanthropy.org/brain-computation-report#footnoteref56_f4g2u2f)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/): “Using glutamate uncaging, you can reliably activate single dendritic spines *in vitro*, and you can even do this in a sequence of spines, thereby generating patterns of synaptic input. However, even these patterns are limited. For example, you can’t actually activate synapses simultaneously, because your laser beam needs to move; there’s only so much you can do in a certain timeframe; and because it’s glutamate, you can only activate excitatory neurons” (p. 2). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/):”it is very difficult to tell how a neuron responds to arbitrary patterns of synaptic input. You can stimulate a pre-synaptic neuron and observe the response, but you can’t stimulate all pre-synaptic neurons in different combinations. And you can only patch-clamp one dendrite while also patch-clamping the soma (and this already requires world-class skill)” (p. 2).\n\n\n[57.](https://www.openphilanthropy.org/brain-computation-report#footnoteref57_mn29j0b)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/): “Technology for measuring the properties relevant to detailed biophysical modeling has improved very little in the past 20 years … Neurons can have a few dozen of some 200-300 types of ions channels, which are strongly non-linear, with large effects, and which are spread out across the neuron. These cannot be modeled based on recordings of neuron spiking activity alone, and staining neurons for these ion channels is very difficult” (p. 2). And from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/research/professor-konrad-kording-penn-integrated-knowledge-professor-university-of-pennsylvania/): “current techniques are very bad at measuring ion channel plasticity. Neuroscientists don’t tend to focus on it for this reason” (p. 5).\n\n\n[58.](https://www.openphilanthropy.org/brain-computation-report#footnoteref58_j1990lz)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/research/professor-eric-jonas-assistant-professor-of-computer-science-at-the-university-of-chicago/): “a lot of our animal models are wrong in clinically-relevant ways” (p. 5). And from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. E.J. Chichilnisky](https://www.openphilanthropy.org/research/professor-e-j-chichilnisky-john-r-adler-professor-of-neurosurgery-and-professor-of-ophthalmology-at-stanford-university/): “There is variability in retinal function both across species and between individuals of the same species. Mouse retinas are very different from human retinas (a difference that is often ignored), and there is variability amongst monkey retinas as well” (p. 
3).\n\n\n[59.](https://www.openphilanthropy.org/brain-computation-report#footnoteref59_2dbinr2)For example, [spike-timing dependent plasticity](http://www.scholarpedia.org/article/Spike-timing_dependent_plasticity) – a form of synaptic plasticity – can be reliably elicited *in vitro* (see [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/research/professor-eric-jonas-assistant-professor-of-computer-science-at-the-university-of-chicago/) (p. 3)), but Schulz argues that “Direct evidence for STDP *in vivo* is limited and suffers from the fact that the used protocols significantly deviate, more often than not, from the traditional pairing of single pre- and postsynaptic spikes ([Shulz and Jacob (2010)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3059710/#B12)). Thus, many studies use long-lasting large-amplitude postsynaptic potentials (PSP), and pairing usually involves multiple postsynaptic spikes or high repetition rates. Our own experience from cortico-striatal synaptic plasticity experiments indicates that classic STDP may be less effective *in vivo* than commonly expected (Schulz et al., 2010)” (p. 1).\n\n\n[60.](https://www.openphilanthropy.org/brain-computation-report#footnoteref60_xp03igt)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/): “The tasks that neuroscientists tend to study in model animals are very simple. Many, for example, are some variant on a two-alternative forced choice task (e.g., teaching an animal to act differently, depending on which of two stimuli it receives). This task is extremely easy to model, both with a small number of highly simplified neurons, and with models that do not look like neurons at all. In this sense, tasks like these provide very little evidence about what level of modeling detail is necessary for reproducing more interesting behavior.” (p. 2). And from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/research/professor-eric-jonas-assistant-professor-of-computer-science-at-the-university-of-chicago/): “In an experiment with a model animal like a rat, which has a very complicated brain, the number of input/output bits we can control/observe is extremely small. This makes it very hard to do informative, high-throughput experiments. Even if you had a billion rats doing your experiment 24/7, you’d still only have a small number of bits going in and out” (p. 2).\n\n\n[61.](https://www.openphilanthropy.org/brain-computation-report#footnoteref61_48zgkqe)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “Neuroscience is extremely limited by available tools. For example, we have the concept of a post-synaptic potential because we can patch-clamp the post-synaptic neuron and see a change in voltage. When we become able to see every individual dendritic spine, we might see that each has a different response; or when we become able to see molecules, we might see faster state transitions, more interesting spatial organization, or more complicated logic at the synapses. We don’t really know, because we haven’t been able to measure” (p. 
9).\n\n\n[62.](https://www.openphilanthropy.org/brain-computation-report#footnoteref62_uqg5ge3)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/research/professor-konrad-kording-penn-integrated-knowledge-professor-university-of-pennsylvania/): “current techniques are very bad at measuring ion channel plasticity. Neuroscientists don’t tend to focus on it for this reason” (p. 5). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/): “The history of neuroscience sometimes seems like a process in which even though some process or level of detail is important, if it is very difficult to understand it, the community often shifts away from that level, and moves on to another level.. … he thinks that people don’t do detailed modeling because these models are ill-constrained at the current level of data that can be collected and it would require major investment to get the relevant data.” (p. 7).\n\n\n[63.](https://www.openphilanthropy.org/brain-computation-report#footnoteref63_gqq9trg)[Jonas and Kording (2017)](https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1005268&type=printable): “There is a popular belief in neuroscience that we are primarily data limited…here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data” (p. 1). Though see also [Merel et al. (2020)](https://openreview.net/forum?id=SyxrxR4KPS) (p. 2), who use a virtual rodent as a model system, and who take a more optimistic view.\n\n\n[64.](https://www.openphilanthropy.org/brain-computation-report#footnoteref64_8a8up8n)See e.g. [Lillicrap and Kording (2019)](https://arxiv.org/pdf/1907.06374.pdf): “…We can have a complete description of the network and its computations. And yet, neither we, nor anyone we know feels that they grasp how processing in these networks truly works. Said another way, besides gesturing to a network’s weights and elementary operations, we cannot say how it classifies an image as a cat or a dog, or how it chooses one Go move over another” (p. 1). That said, [research](https://distill.pub/2018/building-blocks/) on this topic is just getting underway, and some participants are optimistic. See e.g. [Olah et al. 
(2020a)](https://distill.pub/2020/circuits/zoom-in/): “thousands of hours of studying individual neurons have led us to believe the typical case is that neurons (or in some cases, other directions in the vector space of neuron activations) are understandable… our experience is that there’s usually a simple explanation behind these neurons, and that they’re actually doing something quite natural” (see “Claim 1: Features” and “Claim 2: Circuits”). Some of this work focuses on the type of feature detection that neuroscience already has some preliminary handle on, but efforts to explore the interpretability of other types of models are underway as well (see [Greydanus (2017)](https://arxiv.org/abs/1711.00138), [Such et al. (2018)](https://arxiv.org/abs/1812.07069), [Rupprecht et al. (2019)](https://arxiv.org/pdf/1904.01318.pdf), [here](https://openai.com/blog/solving-rubiks-cube/#understandingourneuralnetworks) and [OpenAI et al. (2019)](https://arxiv.org/pdf/1912.06680.pdf) (p. 30-35), for examples). Personally, I would not be at all surprised if this work ends up quite neuroscientifically informative.\n\n\n[65.](https://www.openphilanthropy.org/brain-computation-report#footnoteref65_yd8raxz)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eve Marder](https://www.openphilanthropy.org/research/professor-eve-marder-university-professor-and-victor-and-gwendolyn-beinfield-professor-of-neuroscience-brandeis-university/): “It’s been hard to make progress in understanding neural circuits, because in order to know what details matter, you have to know what the circuit is doing, and in most parts of the brain, we don’t know this…It’s not that you can’t make simplifying assumptions. It’s that absent knowledge of what a piece of nervous system needs to be able to do, you have no way of assessing whether you’ve lost something fundamental or not” (p. 4); from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/research/professor-eric-jonas-assistant-professor-of-computer-science-at-the-university-of-chicago/): “One level of uncertainty comes from the difficulty of defining the high-level task that neural systems are trying to perform (e.g., the “computational level” in the hierarchy proposed by David Marr). Our attempts to capture cognitive tasks with objective functions we can fit machine learning models to are all extreme simplifications. For example, Prof. Jonas is fairly confident that the visual system is not classifying objects into one of k categories” (p. 1); and the notes from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. E.J. Chichilnisky](https://www.openphilanthropy.org/research/professor-e-j-chichilnisky-john-r-adler-professor-of-neurosurgery-and-professor-of-ophthalmology-at-stanford-university/): “It’s hard to know when to stop fine-tuning the details of your model. A given model may be inaccurate to some extent, but we don’t know whether a given inaccuracy matters, or whether a human wouldn’t be able to tell the difference (though focusing on creating usable retinal prostheses can help with this)” (p. 3).\n\n\n[66.](https://www.openphilanthropy.org/brain-computation-report#footnoteref66_le47ua2)Dr. 
Stephen Larson suggested that one benefit of successfully simulating a simple nervous system would be that you could then bound the complexity necessary for such a simulation, and proceed with attempting to simplify it in a principled way (see [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Larson](https://www.openphilanthropy.org/research/professor-stephen-baccus-professor-of-neurobiology-stanford-university/), p. 2). Prof. Shaul Druckmann (see [here](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Shaul%20Druckmann,%20September%205,%202019.pdf), p. 6) and Prof. Erik De Schutter appeared sympathetic to a similar research program. From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/):”The best way forward is to try to explore and understand the function of the brain’s underlying mechanisms – a project that may eventually lead to an understanding of what can be simplified. But to try to simplify things too early, before you understand them, is a dangerous game” (p. 1). Exactly what level of modeling success has been achieved by brain simulations as yet is a complicated issue, but many appear to lack any capacity for complex task-performance ([Eliasmith et al. (2012)](https://science.sciencemag.org/content/338/6111/1202) is one exception; see [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Chris Eliasmith](https://www.openphilanthropy.org/research/professor-chris-eliasmith-professor-of-philosophy-and-systems-design-engineering-canada-research-chair-in-theoretical-neuroscience-and-director-of-the-centre-for-theoretical-neuroscience-at-the-uni/) for some discussion). Example brain simulations include: [Arkhipov et al. (2018)](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006535), [Bileh et al. (2020)](https://www.cell.com/neuron/fulltext/S0896-6273(20)30067-2), [Markram et al. (2015)](https://www.cell.com/cell/pdf/S0092-8674%2815%2901191-5.pdf); [Izhikevich and Edelman (2007)](https://www.izhikevich.org/publications/large-scale_model_of_human_brain.pdf); [Ananthanarayanan et al. (2009)](https://people.eecs.berkeley.edu/~demmel/cs267_Spr10/Lectures/RajAnanthanarayanan_SC09-a63.pdf), [Howell et al. (2000)](https://www.researchgate.net/publication/220549289_A_large-scale_model_of_the_cerebellar_cortex_using_PGENESIS), [Medina et al. (2000)](https://www.jneurosci.org/content/20/14/5516.long), [McLaughlin (2000)](https://www.pnas.org/content/97/14/8087). See [Garis et al. (2010)](https://www.sciencedirect.com/science/article/abs/pii/S0925231210003279?via%3Dihub) and [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) for surveys.\n\n\n[67.](https://www.openphilanthropy.org/brain-computation-report#footnoteref67_2fs9wa7)See [White et al. (1984)](https://royalsocietypublishing.org/doi/pdf/10.1098/rstb.1986.0056). See [Jabr (2012b)](https://www.scientificamerican.com/article/c-elegans-connectome/)for some history, as well as [Seung (2012)](https://www.amazon.com/Connectome-How-Brains-Wiring-Makes/dp/0547508182/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=&sr=): “Mapping the *C. 
elegans* nervous system took over a dozen years, though it contains only 7,000 connections” (“Introduction”).\n\n\n[68.](https://www.openphilanthropy.org/brain-computation-report#footnoteref68_gscesaz)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Larson](https://www.openphilanthropy.org/research/dr-stephen-larson-ceo-of-metacell-and-co-founder-of-openworm/), who works on the [OpenWorm project](https://royalsocietypublishing.org/doi/10.1098/rstb.2017.0382): “Despite its small size, we do not yet have a model that captures even 50% of the biological behavior of the *C. elegans* nervous system. This is partly because we’re just getting to the point of being able to measure what the worm’s nervous system is doing well enough. It is possible to replicate certain kinds of worm behaviors, such as a crawling forward motion, using a very simple neural network. However, the same model cannot be used to make the worm shift into crawling backwards. Rather, you have to re-train it, and even then, you don’t know if the model makes the decision to crawl backward with the same frequency, and for the same reasons, that the real worm does. In general, evolution has equipped the worm to respond to a very wide range of conditions, and the worm’s biology has all of these intricate and complex mechanisms that could potentially be involved in the behaviors you care about” (p. 1). David Dalrymple, who used to work on emulating *C. elegans*, [writes](https://www.lesswrong.com/posts/XhHetxjWxZ6b85HK9/whole-brain-emulation-looking-at-progress-on-c-elgans?commentId=wwwhhRufNfuNTSmQy): “Contrary to popular belief, connectomes are not the biological equivalent of circuit schematics. Connectomes are the biological equivalent of what you’d get if you removed all the component symbols from a circuit schematic and left only the wires… What you actually need is to functionally characterize the system’s dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.” [Sarma et al. (2018)](https://royalsocietypublishing.org/doi/10.1098/rstb.2017.0382#RSTB20170382TB2), in an overview of OpenWorm’s progress, write: “The level of detail that we have incorporated to date is inadequate for biological research. A key remaining component is to complete the curation and parameter extraction of Hodgkin–Huxley models for ion channels to produce realistic dynamics in neurons and muscles” (Section 3). [Merel et al. (2020)](https://openreview.net/forum?id=SyxrxR4KPS) create a “virtual rodent,” but this is not a bottom up emulation of a rodent brain.\n\n\n[69.](https://www.openphilanthropy.org/brain-computation-report#footnoteref69_09dybag)Example approaches in this vein include Prof. Markus Meister, see [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Markus Meister](https://www.openphilanthropy.org/research/professor-markus-meister-anne-p-and-benjamin-f-biaggini-professor-of-biological-sciences-at-the-california-institute-of-technology/): “It is theoretically possible that the brain’s task-performance draws on complex chemical computations, implemented by protein circuits, that would require models much more complicated than those that have been successful in the retina. But Prof. Meister’s approach is to ask: is there any evidence that forces us to think in this more complicated way? 
That is, he starts with the simplest possible explanation of the phenomena, and then adds to this explanation when necessary. Some neuroscientists take a different approach. That is, they ask “what is the most complicated way that this thing could work?”, and then assume that nature is doing that” (p. 4); and from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Chris Eliasmith](https://www.openphilanthropy.org/research/professor-chris-eliasmith-professor-of-philosophy-and-systems-design-engineering-canada-research-chair-in-theoretical-neuroscience-and-director-of-the-centre-for-theoretical-neuroscience-at-the-uni/): “Prof. Eliasmith’s general approach is to see what simple models are able to do, and to introduce additional complexity only when doing so becomes necessary. In his models, he has thus far been able to successfully replicate various types of high-level behavior, along with various types of neuro-physiological data, without recourse to highly complex neuron models – a result that he thinks substantially less likely in worlds where the brain’s performance on these tasks proceeds via biophysical mechanisms his models do not include. However, this doesn’t mean that we won’t discover contexts in which greater complexity is necessary. And we are very far away from being able to test what is required to capture high-level behavior on the scale of the full human brain” (p. 2).\n\n\n[70.](https://www.openphilanthropy.org/brain-computation-report#footnoteref70_cy6o3yk)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Larson](https://www.openphilanthropy.org/research/professor-stephen-baccus-professor-of-neurobiology-stanford-university/): “the jury is still out on how much simplification is available, and Dr. Larson thinks that in this kind of uncertain context, you should focus on the worst-case, most conservative compute estimates as your default. This means worrying about all of the information-processing present in cell biology. In general, in studying complex biological mechanisms, Dr. Larson thinks that the burden of proof is on those who want to say that a given type of simplification is possible” (p. 2). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/): “Many common simplifications do not have solid scientific foundations, and are more at the level of “the way we do things.” The best way forward is to try to explore and understand the function of the brain’s underlying mechanisms – a project that may eventually lead to an understanding of what can be simplified. But to try to simplify things too early, before you understand them, is a dangerous game … The brain was not engineered. Rather, it evolved, and evolution works by adding complexity, rather than by simplification. There are good reasons for this complexity. In order to evolve, you can’t have systems, at any level (proteins, channels, cells, brain regions), with unique functions. If you did, and a single mutation knocked out the function, the whole system would crash… Indeed, in general, many scientists who approach the brain from an engineering perspective end up on the wrong footing. Engineering is an appropriate paradigm for building AI systems, but if you want to understand the brain, you need to embrace the fact that it works because it is so complicated. 
Otherwise, it will be impossible to understand the system” (p. 1).\n\n\n[71.](https://www.openphilanthropy.org/brain-computation-report#footnoteref71_mzce5ud)I will not attempt a definition of which tasks count as “cognitive,” but the category should be construed as excluding tasks that are intuitively particular to the brain’s biological substrate – for example, the task of implementing an input-output transformation that will serve as an effective means of predicting how the biological brain will respond to a certain kind of drug, or the task of serving as a good three-pound weight. [LeCun and Bengio (2007)](http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf) gesture at a somewhat similar subset of tasks, which they call the “AI-set”: “Among the set of all possible functions, we are particularly interested in a subset that contains all the tasks involved in intelligent behavior. Examples of such tasks include visual perception, auditory perception, planning, control, etc. The set does not just include specific visual perception tasks (e.g human face detection), but the set of all the tasks that an intelligent agent should be able to learn. In the following, we will call this set of functions the AI-set. Because we want to achieve AI, we prioritize those tasks that are in the AI-set” (p. 4-5). I am also excluding microscopically specified input-output relationships that an actual brain, operating in the type of noisy environments brains evolved in, cannot implement reliably.\n\n\n[72.](https://www.openphilanthropy.org/brain-computation-report#footnoteref72_pw0wfc2)See [Grace et al. (2018)](https://arxiv.org/pdf/1705.08807.pdf) for discussion of a simple version of this task, which involves writing “concise, efficient, and human-readable Python code to implement simple algorithms like quicksort” (p. 19). The median estimate by the experts she surveyed for when AI systems will be able to perform this task was 8.2 years from the time of the survey. GPT-3, a language model released by OpenAI in 2020, is capable of at least some forms of coding (see [here](https://twitter.com/sharifshameem/status/1282676454690451457) for an especially vivid demonstration, [here](https://twitter.com/lacker/status/1279136788326432771/photo/1) for another example, and [here](https://towardsdatascience.com/will-gpt-3-kill-coding-630e4518c04d) for more discussion).\n\n\n[73.](https://www.openphilanthropy.org/brain-computation-report#footnoteref73_ycpxatk)Depending on one’s opinions of the peer review process, perhaps it is debatable whether GPT-3 can do this as well. See [here](https://twitter.com/timothyfbrady/status/1289397905623674881/photo/1) for examples. I chose both the “complex software problem” task and the “review a nature paper” task before the GPT-3 results came out, and they were selected to be tasks that we *couldn’t* yet do with AI systems.\n\n\n[74.](https://www.openphilanthropy.org/brain-computation-report#footnoteref74_ctrfimt)See [Grace et al. (2018)](https://arxiv.org/pdf/1705.08807.pdf) (p. 16), for discussion of a version of this task. 
The median estimate by the experts she surveyed for when AI systems will be able to perform this task was 33.8 years from the time of the survey.\n\n\n[75.](https://www.openphilanthropy.org/brain-computation-report#footnoteref75_ab5n17s)It has been occasionally hypothesized that some form of quantum-level information processing is occurring in the brain (see, for example, [Hu and Wu (2004)](https://www.ncbi.nlm.nih.gov/pubmed/15325008), [Penrose and Hameroff (2011)](http://www.neurohumanitiestudies.eu/archivio/penrose_consciousness.pdf), and [Fisher (2015)](https://arxiv.org/pdf/1508.05929.pdf) for suggestions in this vein, and see [Tegmark (1999)](https://arxiv.org/pdf/quant-ph/9907009.pdf) and [Litt et al. (2006)](http://watarts.uwaterloo.ca/~pthagard/Articles/quantum.pdf) for counterarguments). My understanding, though, is that the large majority of experts believe that the brain’s information-processing is purely classical. For example, [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) write that: “Practically all neuroscientists subscribe to the dogma that neural activity is a phenomenon that occurs on a classical scale” (37). My impression is that the most influential arguments against quantum computation have been in the vein of [Tegmark (1999)](https://arxiv.org/pdf/quant-ph/9907009.pdf), who argues that the timescales of quantum decoherence in the brain (~10^-13 to 10^-20 seconds) are too short to play a role in various possible methods of neural information processing, which proceed on much longer timescales (~10^-3 to 10^-1 seconds) (p. 1). That said, there is at least some evidence that non-trivial quantum dynamics play a role in some biological contexts (e.g., photosynthesis, enzyme catalysis, and avian navigation) where arguments that appeal solely to the fact that a biological system is warm/wet/noisy might have ruled them out (my thanks to Prof. David Wallace for suggesting I address this): see, e.g., [McFadden and Al-Khalili (2018)](https://royalsocietypublishing.org/doi/10.1098/rspa.2018.0674) for a review. Indeed, [Fisher (2015)](https://arxiv.org/pdf/1508.05929.pdf) presents his hypothesis about quantum dynamics in the brain as immune to timescale-based objections. However, my impression at a glance is that his research at this stage is mostly at the level of establishing the theoretical possibility of some form of quantum computation in the brain, as opposed to verifying that such computation is actually occurring. 
Thus, for example, in [this 2019 talk](https://youtu.be/IP_GmTKYlsc?t=2202) (36:40), he comments: “What I’ve offered is a story at this stage, if you want it’s a partly formed picture puzzle, and what’s needed are experiments to discern the precise shapes of the various pieces in this puzzle, and to see whether they actually exist as pieces, what shapes they are, and whether they start fitting together.” In general, the possibility of quantum computation in the brain is a further category of uncertainty; but it’s an additional can of worms, and because the hypothesis appears to play a comparatively small role in mainstream neuroscience, I’m not going to address it in depth.\n\n\n[76.](https://www.openphilanthropy.org/brain-computation-report#footnoteref76_trwaeux)See [Nicolesis and Circuel (2015)](https://www.amazon.com/Relativistic-Brain-cannot-simulated-machine-ebook/dp/B00VXGFBI6), [Lucas (1961)](http://users.ox.ac.uk/~jrlucas/mmg.html), [Dreyfus (1972)](https://www.amazon.com/What-Computers-Cant-Artificial-Intelligence/dp/0060906138) and [Penrose (1994)](https://www.amazon.com/Emperors-New-Mind-Concerning-Computers/dp/0192861980) for various forms of skepticism.\n\n\n[77.](https://www.openphilanthropy.org/brain-computation-report#footnoteref77_jyn925w)Note that F does not need to be enough to match the task-performance of a “superbrain” trained and ready to perform any task that any human can perform: e.g., a brain that represents peak human performance on every task simultaneously. Einstein may do physics that requires *x* FLOP/s, and Toni Morrison may write novels that require *y* FLOP/s, but F only needs to be greater than or equal to both *x* and *y*: it doesn’t need to be greater than or equal to *x*+*y*.\n\n\n[78.](https://www.openphilanthropy.org/brain-computation-report#footnoteref78_3n859o0)[Herculano-Houzel (2009)](https://www.frontiersin.org/articles/10.3389/neuro.09.031.2009/full#:~:text=Cognitive%20Abilities%2C%20Brain%20Size%20and,is%20unremarkable%20in%20its%20capabilities.) reports variation in neuron number within a species at around 10-50%. [Reardon et al. (2018)](https://science.sciencemag.org/content/360/6394/1222) write: “Brain size among normal humans varies as much as twofold.” [Koch (2016)](https://www.scientificamerican.com/article/does-brain-size-matter1/) cites numbers ranging from 1,017 grams to 2,021 grams (though these are for post-mortem measures), and from 975 cm3 to 1499 cm3.\n\n\n[79.](https://www.openphilanthropy.org/brain-computation-report#footnoteref79_g2sh3nw)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/research/dr-paul-christiano-researcher-openai/): “If you include a sufficiently broad range of tasks that the human brain can perform, and require similarly useful task-performance across the full range of inputs to which the brain could be exposed, it is likely that for at least one of the tasks in the relevant profile, for some set of inputs, the brain’s method will (a) be close to maximally algorithmically efficient (e.g., within an order of magnitude or two), and (b) use a substantial portion of the computational resources that the brain has available. For example, if you take a computer from the 60s, and you look at all of the tasks it could perform, Dr. Christiano expects that many of the algorithms it was running (for example: sorting), were close to optimally efficient. 
As another example, there is a very inefficient algorithm for SAT solving, which takes 2^n time. For many inputs, we can improve on this algorithm by a huge amount, but we can’t for every input: indeed, there is a rough consensus amongst computer scientists that the very inefficient algorithm is close to the best one can do. Indeed, Dr. Christiano expects that for most algorithms, there will be some family of instances on which it does reasonably well. And given how large the space of possible tasks the brain performs is (we can imagine a very wide set of evaluation metrics and input regimes), the density of roughly-optimal-on-some-inputs algorithms doesn’t need to be that high for them to appear in the brain” (p. 7).\n\n\n[80.](https://www.openphilanthropy.org/brain-computation-report#footnoteref80_rkec41c)It’s not entirely clear which concept Moravec and Kurzweil have in mind, but (1) has some support. See [Moravec (1998)](https://jetpress.org/volume1/moravec.pdf): “How much further must this evolution proceed until our machines are powerful enough to approximate the human intellect?” (p. 52), and his reply to Anders Sandberg [here](https://jetpress.org/volume1/commentary.htm): “It is the final computation that matters, not the fuss in doing it.” [Kurzweil (2005)](https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook/dp/B000QCSA7C): “if two methods achieve the same result but one uses more computation than the other, the more computationally intensive method will be considered to use only the amount of computation of the less intensive method” (p. 137).\n\n\n[81.](https://www.openphilanthropy.org/brain-computation-report#footnoteref81_8l1haef)See [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) (p. 11), for a taxonomy of possible brain-emulation success criteria. See [Muehlhauser (2017)](https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood) for an investigation at Open Philanthropy of consciousness and moral patienthood.\n\n\n[82.](https://www.openphilanthropy.org/brain-computation-report#footnoteref82_292smym)There is a fairly widespread discourse related to the importance of “[embodiment](https://plato.stanford.edu/entries/embodied-cognition/)” in AI and cognitive science more broadly, which I have not engaged with in depth. At a glance, central points seem to be: (a) that the computation a brain performs is importantly adapted to the physical environment in which it operates, and the representations it employs are constrained by the body that implements them (see e.g. 
[Hoffmann and Pfeifer (2012)](https://arxiv.org/pdf/1202.0440.pdf), and the discussion of “Body as constraint” in [Wilson and Foglia (2015)](https://plato.stanford.edu/entries/embodied-cognition/)), (b) that the morphology of body itself can contribute to control, perception, and computation proper, and that not all information-processing or storage takes place “inside the head” ([Müller and Hoffmann (2017)](https://www.mitpressjournals.org/doi/full/10.1162/ARTL_a_00219), the discussion of “Body as distributor” in [Wilson and Foglia (2015)](https://plato.stanford.edu/entries/embodied-cognition/), the literature on the “[extended mind](https://en.wikipedia.org/wiki/The_Extended_Mind)”), (c) that the body functions to coordinate/regulate the relationship between cognition and action (see “Body as Regulator” in [Wilson and Foglia (2015)](https://plato.stanford.edu/entries/embodied-cognition/)), and (d) that advanced AI systems won’t be developed until we make it possible for them to learn via engagement in with real-time, complex environments, possibly via robotic bodies (see [Medlock (2017)](https://aeon.co/ideas/the-body-is-the-missing-link-for-truly-intelligent-machines); Prof. Anthony Zador also suggested something like this in conversation, see [here](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/)). These points may well be true, but I do not think they disrupt the conceptual foundations of the present investigation, which aims to estimate the compute sufficient to replicate *the brain’s contribution* to (possibly embodied) task-performance. If points related to embodiment are thought to extend to the claim that e.g. artificial systems without bodies are incapable, in principle, of solving software problems, competing in Math competitions, or reviewing science papers, then I simply disagree.\n\n\n[83.](https://www.openphilanthropy.org/brain-computation-report#footnoteref83_lcms571)This literature review draws from the reviews offered by [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) (p. 84-85); and [Martins (2012)](https://repositorium.sdum.uminho.pt/bitstream/1822/20756/1/NanoroboticBrainMonitoring2012_%20draft%20with%20page%20numbers.pdf), (p. 3-6). I have supplemented it with other estimates I encountered in my research. In order to limit its scope, I focus on direct attempts to estimate the computation sufficient to run a task-functional model.\n\n\n[84.](https://www.openphilanthropy.org/brain-computation-report#footnoteref84_wjleuqg)The estimates that I think most worth taking seriously are generally the ones I discuss in the report itself.\n\n\n[85.](https://www.openphilanthropy.org/brain-computation-report#footnoteref85_nhjltas)[Merkle (1989)](https://www.merkle.com/brainLimits.html) attempts to estimate the number of spikes through synapses by estimating the energy dissipated by propagating a spike a certain distance, together with the number of synapses per unit distance, rather than counting spikes and synapses directly. He gets ~2e15 synaptic operations, assuming 1 synapse every millimeter, though it is unclear to me what grounds his estimate of synapses per unit distance: “To translate Ranvier ops (1-millimeter jumps) into synapse operations we must know the average distance between synapses, which is not normally given in neuroscience texts. 
We can estimate it: a human can recognize an image in about 100 milliseconds, which can take at most 100 one-millisecond synapse delays. A single signal probably travels 100 millimeters in that time (from the eye to the back of the brain, and then some). If it passes 100 synapses in 100 millimeters then it passes one synapse every millimeter–which means one synapse operation is about one Ranvier operation” (1989).\n\n\n[86.](https://www.openphilanthropy.org/brain-computation-report#footnoteref86_tezkcp4)[Merkle (1989)](https://www.merkle.com/brainLimits.html): “We might count the number of synapses, guess their speed of operation, and determine synapse operations per second. There are roughly 10^15 synapses operating at about 10 impulses/second, giving roughly 10^16 synapse operations per second” (see “Other Estimates”).\n\n\n[87.](https://www.openphilanthropy.org/brain-computation-report#footnoteref87_peg2hsk)[Mead (1990)](https://web.stanford.edu/group/brainsinsilicon/documents/MeadNeuroMorphElectro.pdf): “There are about 10^16 synapses in the brain. A nerve pulse arrives at each synapse about ten times/s, on average. So in rough numbers, the brain accomplishes 10^16 complex operations/s” (p. 1629). Some aspect of this estimate appears to be in error, however, as it seems to suggest the calculation 10^16 synapses × 10 spikes/sec = 10^16 spikes per synapse/sec.\n\n\n[88.](https://www.openphilanthropy.org/brain-computation-report#footnoteref88_boayzm3)[Freitas (1996)](http://www.rfreitas.com/Nano/TheFutureOfComputers--Analog--March1996.htm): “A fair estimate is that the 1.5 kilogram organ has 10^10 neurons with 10^3 synapses firing an average 10 times per second, which is about 10^14 bits/second. Using 64-bit words like the largest supercomputers, that’s about one teraflop” (see opening section).\n\n\n[89.](https://www.openphilanthropy.org/brain-computation-report#footnoteref89_k50ybja)[Sarpeshkar (1997)](https://thesis.library.caltech.edu/3063/1/Sarpeshkar_R_1997.pdf): “From the numbers in the first paragraph of Section 5.6.1, we know that there are about 2.4 × 10^14 synapses in each cortex of the brain. The average firing rate of cortex is about 5-10 Hz – we shall use 7.5 Hz. Assuming that each synapse is always operational and constantly computing, then the number of synaptic operations per second is 2 × 2.4 × 10^14 × 7.5 = 3.6 × 10^15” (p. 202-203).\n\n\n[90.](https://www.openphilanthropy.org/brain-computation-report#footnoteref90_ujtenio)[Bostrom (1998)](https://nickbostrom.com/superintelligence.html): “The human brain contains about 10^11 neurons. Each neuron has about 5 × 10^3 synapses, and signals are transmitted along these synapses at an average frequency of about 10^2 Hz. Each signal contains, say, 5 bits. This equals 10^17 ops” (see “Hardware Requirements” section).\n\n\n[91.](https://www.openphilanthropy.org/brain-computation-report#footnoteref91_qyi74oy)[Kurzweil (1999)](https://www.amazon.com/Age-Spiritual-Machines-Computers-Intelligence/dp/B000OYDNBA): “With an estimated average of one thousand connections between each neuron and its neighbors, we have about 100 trillion connections, each capable of a simultaneous calculation… With 100 trillion connections, each computing at 200 calculations per second, we get 20 million billion calculations per second. 
This is a conservatively high estimate; other estimates are lower by one to three orders of magnitude” (see Chapter 6, section “Achieving the Hardware Capacity of the Human Brain”).\n\n\n[92.](https://www.openphilanthropy.org/brain-computation-report#footnoteref92_a89cm02)[Dix (2005)](https://alandix.com/academic/papers/brain-and-web-2005/): “At a simplified level each neuron’s level of activation is determined by pulses generated at the (1000 to 10,000) synapses connected to it. Some have a positive excitatory effect [sic] some are inhibitory. A crude model simply adds the weighted sum and ‘fires’ the neuron if the sum exceeds a value. The rate of this activity, the ‘clock period’ of the human brain is approximately 100 Hz – very slow compared to the GHz of even a home PC, but of course this happens simultaneously for all 10 billion neurons! If we think of the adding of the weighted synaptic value as a single neural operation (nuop) then each neuron has approximately 10,000 nuops per cycle, that is 1 mega-nuop per second. In total the 10 billion neurons in the brain perform 10 peta-nuop per second.”\n\n\n[93.](https://www.openphilanthropy.org/brain-computation-report#footnoteref93_cmsmgsb)[Malickas (2007)](https://www.aleph.se/Trans/Global/Uploading/gupload.html): “The evaluation of the computational power of [sic] human brain [sic] very uncertain at this time. Some estimates of brain power could be based on the brain synapses number and neurons [sic] firing rate. The human brain have [sic] a 10^11 neurons and each neuron has [sic] average of 10^2 – 10^4 synapses. The average firing rate of brain neurons is about 100-1000 Hz. As result the brain modeling would require the computational power of 10^11 neurons × (10^2-10^4 synapses/neuron) × (100-1000 Hz) = 10^15 – 10^18 synapses/second” (see section “Computer”).\n\n\n[94.](https://www.openphilanthropy.org/brain-computation-report#footnoteref94_h3c4bw8)[Tegmark (2017)](https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1586106499&sr=8-1): “Multiplying together about 10^11 neurons, about 10^4 connections per neuron and about one (10^0) firing per neuron each second might suggest that about 10^15 FLOPS (1 petaFLOPS) suffice to simulate a human brain, but there are many poorly understood complications, including the detailed timing of firings and the question of whether small parts of neurons and synapses need to be simulated too” (see endnote 58, p. 340). That said, Tegmark presents this less as an independent estimate of his own, and more as an example of a certain methodology.\n\n\n[95.](https://www.openphilanthropy.org/brain-computation-report#footnoteref95_o5m6su3)[Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) also cite Fiala (2007) as estimating “10^14 synapses, identity coded by 48 bits plus 2 × 36 bits for pre‐and postsynaptic neuron id, 1 byte states. 10 ms update time… 256,000 terabytes/s” (p. 85), and Seitz (no date) as estimating “50-200 billion neurons, 20,000 shared synapses per neuron with 256 distinguishable levels, 40 Hz firing” (p. 85). However, I wasn’t able to find the original papers on a quick search. [Adams (2013)](https://lips.cs.princeton.edu/what-is-the-computational-capacity-of-the-brain/) estimates ~1e15 FLOP/s in a blog post, but his estimate of neuron count is off by two orders of magnitude.
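\n\nMost of the estimates in footnotes 85-95 share a single piece of arithmetic: a synapse (or connection) count, multiplied by an average firing rate, and sometimes multiplied again by an assumed number of operations per spike. As a purely illustrative sketch, the calculation below plugs in the round numbers quoted above (the parameter values are the cited authors’ own, not this report’s figures, and the footnote labels are for orientation only):\n\n```python\n# Minimal sketch of the "synapse count x firing rate x ops per spike" arithmetic\n# shared by the mechanistic estimates above. Parameter values are the round\n# numbers quoted in the footnotes, not this report's own figures.\n\ndef mechanistic_estimate(synapses, rate_hz, ops_per_spike=1):\n    """Operations per second implied by the simple mechanistic method."""\n    return synapses * rate_hz * ops_per_spike\n\nexamples = {\n    "fn. 86, Merkle (1989): 1e15 synapses at 10 Hz": mechanistic_estimate(1e15, 10),\n    "fn. 87, Mead (1990): 1e16 synapses at 10 Hz": mechanistic_estimate(1e16, 10),\n    "fn. 88, Freitas (1996): 1e10 neurons x 1e3 synapses at 10 Hz": mechanistic_estimate(1e10 * 1e3, 10),\n    "fn. 89, Sarpeshkar (1997): 2 x 2.4e14 synapses at 7.5 Hz": mechanistic_estimate(2 * 2.4e14, 7.5),\n    "fn. 94, Tegmark (2017): 1e11 neurons x 1e4 connections at 1 Hz": mechanistic_estimate(1e11 * 1e4, 1),\n}\n\nfor label, ops in examples.items():\n    print(f"{label}: ~{ops:.1e} ops/s")\n\n# Note: the Mead (1990) inputs yield ~1e17, not the ~1e16 he states -- the\n# apparent factor-of-ten discrepancy flagged in footnote 87.\n```\n\nThe spread across these estimates (roughly 10^14 to 10^18 ops/s) comes largely from the choice of synapse count and average firing rate, which is why the concern about average spike rates noted in footnote 97 below matters so much for this class of estimate.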
\n\n\n[96.](https://www.openphilanthropy.org/brain-computation-report#footnoteref96_m0qn8pp)I haven’t investigated comparisons between these different units and FLOP/s (though see [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf), p. 91, for some discussion of the relationship between FLOP/s and MIPS).\n\n\n[97.](https://www.openphilanthropy.org/brain-computation-report#footnoteref97_kogm211)As I note in [Section 2.1.1.1](https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#SpikesThroughSynapsesPerSecond), many of these estimates rely on average spike rates that seem to me too high.\n\n\n[98.](https://www.openphilanthropy.org/brain-computation-report#footnoteref98_8j3fb59)[Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA): “The brain’s neuronal cells output ~1ms pulses (spikes) at an average rate of 5 Hz [55]. The 240 trillion synaptic connections [1] amongst the brain’s neurons thus lead to a computational rate of at least 10^15 synaptic operations per second. A synapse implements multiplication and filtering operations on every spike and sophisticated learning operations over multiple spikes. If we assume that synaptic multiplication is at least one floating-point operation (FLOP), the 20 ms second-order filter impulse response due to each synapse is 40 FLOPS, and that synaptic learning requires at least 10 FLOPS per spike, a synapse implements at least 50 FLOPS of computation per spike. The nonlinear adaptation-and-thresholding computations in the somatic regions of a neuron implement almost 1200 floating-point operations (FLOPS) per spike [66]. Thus, the brain is performing at least 50 FLOPS × 5Hz × 240 × 10^12 + 1200 FLOPS × 5Hz × 22 × 10^9 = [approximate] 6 × 10^16 FLOPS per second” (p. 748-749).\n\n\n[99.](https://www.openphilanthropy.org/brain-computation-report#footnoteref99_cp6rkin)[Martins et al. (2012)](https://repositorium.sdum.uminho.pt/bitstream/1822/20756/1/NanoroboticBrainMonitoring2012_%20draft%20with%20page%20numbers.pdf): “These data may be combined using Eqns. (1) and (2) to yield an estimate of the synaptic-processed spike rate of Tss = (4.31 ± 0.86) × 10^15 spikes/sec and the synaptic-processed bit rate of Tsb = (5.52 ± 1.13) × 10^16 bits/sec for the entire human brain” (p. 14).\n\n\n[100.](https://www.openphilanthropy.org/brain-computation-report#footnoteref100_44gf4xt)[Kurzweil (2005)](https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook/dp/B000QCSA7C): “The ‘fan out’ (number of interneuronal connections) per neuron is estimated at 10^3. With an estimated 10^11 neurons, that’s about 10^14 connections. With a reset time of five milliseconds, that comes to about 10^16 synaptic transactions per second. Neuron-model simulations indicate the need for about 10^3 calculations per synaptic transaction to capture the nonlinearities (complex interactions) in the dendrites and other neuron regions, resulting in an overall estimate of about 10^19 cps for simulating the human brain at this level. We can therefore consider this an upper bound, but 10^14 to 10^16 cps to achieve functional equivalence of all brain regions is likely to be sufficient” (p. 124-125).
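\n\nThe two composite formulas just quoted can be checked directly against the figures given in them; a quick sketch (all numbers are taken from the quotes in footnotes 98 and 100):\n\n```python\n# Reproducing the arithmetic quoted in footnotes 98 and 100, using only the\n# figures given in those quotes.\n\n# Sarpeshkar (2010), fn. 98: at least 50 FLOPs per spike across 240e12 synapses\n# at 5 Hz, plus ~1200 FLOPs per spike in the somatic regions of 22e9 neurons at 5 Hz.\nsarpeshkar_2010 = 50 * 5 * 240e12 + 1200 * 5 * 22e9\nprint(f"Sarpeshkar (2010): ~{sarpeshkar_2010:.1e} FLOP/s")  # ~6.0e16, as stated\n\n# Kurzweil (2005), fn. 100: 1e11 neurons x 1e3 fan-out = 1e14 connections; a 5 ms\n# "reset time" implies ~200 transactions per second per connection; ~1e3\n# calculations per synaptic transaction.\nconnections = 1e11 * 1e3\ntransactions_per_sec = connections * (1 / 5e-3)\nkurzweil_2005 = transactions_per_sec * 1e3\nprint(f"Kurzweil (2005): ~{kurzweil_2005:.1e} calc/s")  # ~2e19, i.e. his ~1e19 cps upper bound\n```\n\nThese checks only confirm that the quoted totals follow from the stated inputs; they say nothing about whether the assumed per-spike FLOP counts or firing rates are themselves reasonable.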
\n\n\n[101.](https://www.openphilanthropy.org/brain-computation-report#footnoteref101_1c0argn)[Thagard (2002)](http://cogsci.uwaterloo.ca/Articles/molecules.html): “If we count the number of processors in the brain as not just the number of neurons in the brain, but the number of proteins in the brain, we get a figure of around a billion times 100 billion, or 10^17. Even if it is not legitimate to count each protein as a processor all by itself, it is still evident from the discussion in Section 3 that the number of computational elements in the brain is more than the 10^11 or 10^12 neurons. Moreover, the discussion of hormones and other neuroregulators discussed in Section 5 shows that the number of computationally relevant causal connections is far greater than the thousand or so synaptic connections per neuron. I do not know how to estimate the number of neurons with hormonal receptors that can be influenced by a single neuron that secretes hormones or that activates glands which secrete hormones, but the number must be huge. If it is a million, and if every brain protein is viewed as a mini-processor, then the computational speed of the brain is on the order of 10^23 calculations per second, far larger than the 10^15 calculations per second that Kurzweil expects to be available by 2020, although less than where he expects computers to be by 2060. Thus quantitatively it appears that digital computers are much farther away than Kurzweil and Moravec estimate from reaching the raw computational power of the human brain” (see Section 7, “Artificial Intelligence”).\n\n\n[102.](https://www.openphilanthropy.org/brain-computation-report#footnoteref102_i8dgg7m)[Tuszynski (2006)](https://www.terasemjournals.org/GNJournal/GN0104/tuszynski_01e.html): “There are four c-termini states per dimer because we have two states per monomer. There could be at least four states per electron inside the tubulin dimer, as they hop between two locations. There could be at least two computational changes due to the GTP hydrolysis. Thus there are 4 × 4 × 2, which is 32 states per dimer; thirteen dimers per ring; and 1,250 rings per midsize microtubule. If you do the math, the result is about 100 kilobytes per microtubule. Calculating the number of microtubules per neuron, you get one gigabyte of processing power per neuron. There are ten billion neurons. You have ten to the 19th bytes per brain and they oscillate or make transitions in this state on the order of nanoseconds, and ten to the 28th flops per brain” (p. 4-5 on the website).\n\n\n[103.](https://www.openphilanthropy.org/brain-computation-report#footnoteref103_i1kkgg4)[von Neumann (1958)](https://www.amazon.com/Computer-Brain-Silliman-Memorial-Lectures/dp/0300181116): “Thus the standard receptor would seem to accept about 14 distinct digital impressions per second, which can probably be reckoned as the same number of bits. Allowing 10^10 nerve cells, assuming that each one of them is under suitable conditions essentially an (inner or outer) receptor, a total input of 14 × 10^10 bits per second results” (p. 
63).\n\n\n[104.](https://www.openphilanthropy.org/brain-computation-report#footnoteref104_qogkimq)[Dettmers (2015)](https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/): “So my estimate would be 1.075×10^21 FLOPS for the brain, the fastest computer on earth as of July 2013 has 0.58×10^15 FLOPS for practical application (more about this below)” (see section “estimation of cerebellar input/output dimensions”).\n\n\n[105.](https://www.openphilanthropy.org/brain-computation-report#footnoteref105_baa2ca3)See [Ananthanarayanan et al. (2009)](https://people.eecs.berkeley.edu/~demmel/cs267_Spr10/Lectures/RajAnanthanarayanan_SC09-a63.pdf), Figure 8 (p. 10). [Greenemeier (2009)](https://blogs.scientificamerican.com/news-blog/computers-have-a-lot-to-learn-from-2009-03-10/) cites IBM’s Dharmendra Modha (one of the authors on the paper) as estimating that a computer comparable to the human brain would need to perform 4e16 operations per second, but I’m not sure of his methodology.\n\n\n[106.](https://www.openphilanthropy.org/brain-computation-report#footnoteref106_ugg4eka)[Waldrop (2012)](https://www.nature.com/news/computer-modelling-brain-in-a-box-1.10066): “The computer power required to run such a grand unified theory of the brain would be roughly an exaflop, or 10^18 operations per second — hopeless in the 1990s. But Markram was undaunted: available computer power doubles roughly every 18 months, which meant that exascale computers could be available by the 2020s (see [‘Far to go’](https://www.nature.com/news/computer-modelling-brain-in-a-box-1.10066#far)). And in the meantime, he argued, neuroscientists ought to be getting ready for them” (see section “Markram’s big idea”). See also [this chart](https://www.nature.com/news/482456a-i3-0-jpg-7.2933?article=1.10066).\n\n\n[107.](https://www.openphilanthropy.org/brain-computation-report#footnoteref107_lgmt7y6)He also discusses a possible lower estimate around [19:43](https://youtu.be/DvE-nphgswY?t=1183), but the video is too blurry for me to read the numbers.\n\n\n[108.](https://www.openphilanthropy.org/brain-computation-report#footnoteref108_c6ytj55)See [here](https://www.izhikevich.org/human_brain_simulation/why.htm). See also [Izhikevich and Edelman (2007)](https://www.izhikevich.org/publications/large-scale_model_of_human_brain.pdf).\n\n\n[109.](https://www.openphilanthropy.org/brain-computation-report#footnoteref109_shkthll)See [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) (p. 80-81). My impression is that these estimates were very rough, and their 1e18 estimate for a spiking neural network seems inconsistent with the estimate methodology they use elsewhere in the chart, since 1e15 entities × 10 FLOPs per entity × 1e3 time-steps per second = 1e19 FLOP/s.\n\n\n[110.](https://www.openphilanthropy.org/brain-computation-report#footnoteref110_kld6skz)Strong selection effects were likely at work in determining who was present at the workshop.\n\n\n[111.](https://www.openphilanthropy.org/brain-computation-report#footnoteref111_xu192cs)See [Moravec (1988)](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2), Chapter 2 (p. 51-74). See also [Moravec (1988)](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2), [Moravec (2008)](https://www.scientificamerican.com/article/rise-of-the-robots-2008-02/). 
I discuss this estimate in detail in [Section 3.1](https://www.openphilanthropy.org/brain-computation-report#TheRetina).\n\n\n[112.](https://www.openphilanthropy.org/brain-computation-report#footnoteref112_y2n1wbk)[Kurzweil (2005)](https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook/dp/B000QCSA7C) also cites [Zaghloul and Boahen (2006)](https://web.stanford.edu/group/brainsinsilicon/pdf/06_ZaghloulBoahenJNE06.pdf) as an example of replicating retinal functionality, but does not attempt a quantitative estimate using it (endnote 41, p. 532).\n\n\n[113.](https://www.openphilanthropy.org/brain-computation-report#footnoteref113_77eoep9)[Kurzweil (2005)](https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook/dp/B000QCSA7C): “Another estimate comes from the work of Lloyd Watts and his colleagues on creating functional simulations of regions of the human auditory system, which I discuss further in chapter 4… Watts’s own group has created functionally equivalent re-creations of these brain regions derived from reverse engineering. He estimates that 10^11 cps are required to achieve human-level localization of sounds. The auditory cortex regions responsible for this processing comprise at least 0.1 percent of the brain’s neurons. So we again arrive at a ballpark estimate of around 10^14 cps (10^11 cps × 10^3)” (p. 123).\n\n\n[114.](https://www.openphilanthropy.org/brain-computation-report#footnoteref114_m2kycos)[Kurzweil (2005)](https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook/dp/B000QCSA7C): “Yet another estimate comes from a simulation at the University of Texas that represents the functionality of a cerebellum region containing 10^4 neurons; this required about 10^8 cps, or about 10^4 cps per neuron. Extrapolating this over an estimated 10^11 neurons results in a figure of about 10^15 cps for the entire brain” (p. 123).\n\n\n[115.](https://www.openphilanthropy.org/brain-computation-report#footnoteref115_uziij9h)[Kurzweil (2012)](https://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/1491518839): “emulating one cycle in a single pattern recognizer in the biological brain’s neocortex would require about 3,000 calculations. Most simulations run at a fraction of this estimate. With the brain running at about 10^2 (100) cycles per second, that comes to 3 × 10^5 (300,000) calculations per second per pattern recognizer. Using my estimate of 3 × 10^8 (300 million) pattern recognizers, we get about 10^14 (100 trillion) calculations per second” (p. 195).\n\n\n[116.](https://www.openphilanthropy.org/brain-computation-report#footnoteref116_ab1y3do)[Drexler (2019)](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf): “In light of the above comparisons, all of which yield values of RPFLOP in the 10 to 1000 range, it seems likely that 1 PFLOP/s machines equal or exceed the human brain in raw computation capacity. To draw the opposite conclusion would require that the equivalents of a wide range of seemingly substantial perceptual and cognitive tasks would consistently require no more than an implausibly small fraction of total neural activity” (p. 188).\n\n\n[117.](https://www.openphilanthropy.org/brain-computation-report#footnoteref117_nfzjc5y)[Sandberg (2016)](https://arxiv.org/pdf/1602.04019.pdf): “20 W divided by 1.3 × 10^-21 J (the Landauer limit at body temperature) suggests a limit of no more than 1.6·10^22 irreversible operations per second” (p. 
5).\n\n\n[118.](https://www.openphilanthropy.org/brain-computation-report#footnoteref118_gjxsb8g)[De Castro (2013)](https://link.springer.com/article/10.1007/s11023-013-9302-x): “If system 1 is considered to be a powerful computer operating at maximum Landauer efficiency—i.e., at a minimum energy cost equal to kBT ln(2)—that works at an average brain temperature, the number of perceptual operations per second that it could perform is on the order of 1023 (1/kB), depending on the idiosyncratic power of the brain” (p. 483).\n\n\n[119.](https://www.openphilanthropy.org/brain-computation-report#footnoteref119_dl1zb98)Though there is some discussion of it on [Metaculus](https://www.metaculus.com/questions/2646/what-will-the-necessary-computational-power-to-replicate-human-mental-capability-turn-out-to-be/).\n\n\n[120.](https://www.openphilanthropy.org/brain-computation-report#footnoteref120_k8el95a)For example, [Laughlin et al. (1998)](https://pubmed.ncbi.nlm.nih.gov/10195106/) estimate that “synapses and cells are using 105 to 108 times more energy than the thermodynamic minimum” (the minimum they have in mind is on the order of a *k*T per bit “observed”); and [Levy et al. (2014)](https://arxiv.org/abs/1408.6777) argue that once the costs of communication and computation in the brain are adequately distinguished, it is possible to identify places in which the energy efficiency of neural computation approaches the minimum set by Landauer. For more on the energy efficiency of neural computation, see also [Laughlin (2001)](https://www.sciencedirect.com/science/article/abs/pii/S0959438800002373), [Attwell and Laughlin (2001)](https://journals.sagepub.com/doi/full/10.1097/00004647-200110000-00001), [Balasubramanian et al. (2001)](https://www.ncbi.nlm.nih.gov/pubmed/11255570), [Hasenstaub et al. (2010)](https://www.pnas.org/content/107/27/12329), [Levy and Baxter (1996)](https://www.ncbi.nlm.nih.gov/pubmed/8868566?dopt=Abstract), [Skora et al. (2017)](https://www.med.upenn.edu/ngg/assets/user-content/documents/paper2-energy-scarcity-promotes-a-brain-wide-sleep-state-modulated-by-insulin-signaling-in-c.-elegans.pdf), [Levy and Baxter (2002)](https://www.jneurosci.org/content/22/11/4746?ijkey=e7c9b28abb3dd8f022cfbe7c7c2ab07b7a1949b3&keytype2=tf_ipsecsha), [Balasubramanian and Berry (2002)](https://www.ncbi.nlm.nih.gov/pubmed/12463343?dopt=Abstract), [Niven et al. (2007)](https://www.ncbi.nlm.nih.gov/pubmed/17373859?dopt=Abstract), [Lennie (2003)](http://www2.bcs.rochester.edu/sites/plennie/pdfs/Lennie03a.pdf#page=3), [Howarth et al. (2010)](https://www.ncbi.nlm.nih.gov/pubmed/19888288/), and [Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA), Chapter 23. For discussions of thermodynamics in the brain in particular, see [Collel and Fauquet (2015)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4468356/), [Varpula (2013)](http://www.m-hikari.com/asb/asb2013/asb1-4-2013/annilaASB1-4-2013.pdf), [Deli et al. (2017)](https://vixra.org/pdf/1710.0168v1.pdf), and [Street (2016)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5108784/). Work on the “free energy principle” (see e.g. [Friston (2010)](https://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20principle%20A%20unified%20brain%20theory.pdf)) in the context of the brain also has connection to thermodynamics. In a not-specifically-neural context, [Kempes et al. 
(2017)](https://arxiv.org/pdf/1706.05043.pdf) argue: “Here we show that the computational efficiency of translation, defined as free energy expended per amino acid operation, outperforms the best supercomputers by several orders of magnitude, and is only about an order of magnitude worse than the Landauer bound” (p. 1); and [Wolpert (2016)](https://www.mdpi.com/1099-4300/18/4/138) attempts to extend a version of Landauer’s reasoning to derive the minimal free energy required by an organism to run a stochastic map from sensor inputs to actuator outputs. See also [Ouldridge and ten Wolde (2017)](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.118.158103), [Ouldridge (2017)](https://arxiv.org/abs/1702.00360), [Sartori et al. (2014)](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003974), [Mehta and Schwab (2012)](https://www.pnas.org/content/109/44/17978), and [Mehta et al. (2016)](https://link.springer.com/article/10.1007%2Fs10955-015-1431-6).\n\n\n[121.](https://www.openphilanthropy.org/brain-computation-report#footnoteref121_29ofeeo)[AI Impacts](https://aiimpacts.org/brain-performance-in-flops/#easy-endnote-bottom-4-596): “Among a small number of computers we compared[4](https://aiimpacts.org/brain-performance-in-flops/#easy-endnote-bottom-4-596), FLOPS and TEPS seem to vary proportionally, at a rate of around 1.7 GTEPS/TFLOP. We also [estimate](http://aiimpacts.org/brain-performance-in-teps/) that the human brain performs around 0.18 – 6.4 × 10^14 TEPS. Thus if the FLOPS:TEPS ratio in brains is similar to that in computers, a brain would perform around 0.9 – 33.7 × 10^16 FLOPS.[5](https://aiimpacts.org/brain-performance-in-flops/#easy-endnote-bottom-5-596) We have not investigated how similar this ratio is likely to be.” (See section “Conversion from brain performance in TEPS”).\n\n\n[122.](https://www.openphilanthropy.org/brain-computation-report#footnoteref122_g9dmi27)See e.g. the rough estimates from [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) (p. 80-81), to the effect that emulating the states of the protein complexes in the brain would require 1e27 FLOP/s, and that emulating the stochastic behavior of single molecules in the brain would require 1e43 FLOP/s. Henry Markram, in a [2018 video (18:28)](https://youtu.be/DvE-nphgswY?t=1112), estimates the FLOP/s burdens of running a “real-time molecular simulation of the human-brain” at 4e29 FLOP/s. [Today’s top supercomputers](https://www.top500.org/lists/2019/11/) can do roughly 1e17 FLOP/s. [Mike Frank projects](https://youtu.be/IQZ_bQbxSXk?t=727) that 1e21 FLOP/s would require more than a gigawatt of power in 2030 – [comparable to the power generated by the Hoover Dam](http://www.powerauthority.org/about-us/history-of-hoover/) – and his chart suggests that physical limits would begin to cause serious problems for performing many orders of magnitude more than that on currently-reasonable amounts of power.\n\n\n[123.](https://www.openphilanthropy.org/brain-computation-report#footnoteref123_jzdna4c)I first encountered the idea that the computational relevance of processes within the neuron is bottlenecked by intercellular signaling via one of our technical advisors, Dr. Dario Amodei. See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Dong Song](https://www.openphilanthropy.org/research/professor-dong-song-research-associate-professor-department-of-biomedical-engineering-university-of-southern-california/): “Prof. 
Song thinks that everyone should agree that neurons are the fundamental computational unit of the brain. If you can replicate all the neuron activity, you’ll probably be able to replicate brain function. Neurons communicate with each other via spikes. Variables internal to a neuron are important to determining the neuron’s spiking behavior in response to inputs, but the other neurons do not know or care about these internal variables. So as long as you can replicate the input-output mapping at the level of spiking, you are basically replicating the relevant function of a single neuron. So if you have a good spiking neuron model, and you connect your neurons correctly, you should be able to replicate brain function” (p. 2). Robin Hanson gestures at a similar idea in the beginning of [his 2017 TED talk](https://www.ted.com/talks/robin_hanson_what_would_happen_if_we_upload_our_brains_to_computers?language=en#t-91367). My general impression was that almost all of the neuroscientists I spoke to took something like this kind of paradigm for granted.\n\n\n[124.](https://www.openphilanthropy.org/brain-computation-report#footnoteref124_c3am1ey)“Standard” here indicates “the type of neuron signaling people tend to focus on.” Whether it is the signaling method that the brain relies on most heavily is a more substantive question.\n\n\n[125.](https://www.openphilanthropy.org/brain-computation-report#footnoteref125_z0yp9du)In particular, the categories plausibly overlap: much of the standard neuron signaling in the brain may be in the service of what would generally be folk-theoretically understood as “learning” (see [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “it might be that all of the neurons and synapses in the brain are there in order to make the brain more likely to converge on a solution while learning,” (p. 7)); various alternative signaling mechanisms (for example, neuromodulation, and signaling in certain types of glial cells) may themselves be central to learning as well.\n\n\n[126.](https://www.openphilanthropy.org/brain-computation-report#footnoteref126_chajgdw)[Azevedo et al. (2009)](https://www.ncbi.nlm.nih.gov/pubmed/19226510): “We find that the adult male human brain contains on average 86.1 ± 8.1 billion NeuN-positive cells (“neurons”) and 84.6 ± 9.8 billion NeuN-negative (“nonneuronal”) cells” (p. 532). My understanding is that the best available method of counting neurons is isotropic fractionation, which proceeds by dissolving brain structures into a kind of homogenous “[brain soup](https://news.vanderbilt.edu/vanderbiltmagazine/brainiac-with-her-innovative-brain-soup-suzana-herculano-houzel-is-changing-neuroscience-one-species-at-a-time/),” and then counting cell nuclei (see [Herculano-Houzel and Lent (2005)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6725175/pdf/00252518.pdf) for a more technical description of the process, and [Bartheld et al. (2016)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5063692/pdf/nihms799882.pdf) for a history of cell-counting in the brain). Note that there may be substantial variation in cell counts between individuals (for example, according to [Bartheld et al. (2016)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5063692/pdf/nihms799882.pdf) (p. 
9), citing [Haug (1986)](https://pubmed.ncbi.nlm.nih.gov/3540464/) and [Pakkenberg and Gundersen (1997)](https://pubmed.ncbi.nlm.nih.gov/9215725/), neocortical neuron count may vary by a factor of more than two, though I haven’t checked these further citations).\n\n\n[127.](https://www.openphilanthropy.org/brain-computation-report#footnoteref127_o6sytwb)See e.g. [Pakkenberg et al. (2002)](https://www.sciencedirect.com/science/article/abs/pii/S0531556502001511?via%3Dihub): “Synapses have a diameter of 200–500 nm and can only be seen by electron microscopy. The primary problem in assessing the number of synapses in human brains is their lack of resistance to the decay starting shortly after death” (p. 98).\n\n\n[128.](https://www.openphilanthropy.org/brain-computation-report#footnoteref128_upexhe0)[Kandel et al. (2013)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138631): “An average neuron forms and receives 1,000 to 10,000 synaptic connections. Thus 10^14 to 10^15 synaptic connections are formed in the brain” (p. 175). Henry Markram uses 1e15 total synapses in this video (18:31); [AI Impacts](https://aiimpacts.org/scale-of-the-human-brain/#Number_of_synapses_in_the_brain) suggests 1.8-3.2e14. A number of synapse estimates focus on the [cerebral cortex](https://en.wikipedia.org/wiki/Cerebral_cortex), and in particular on the neocortex (the cerebral cortex is divided into two parts, the [neocortex](https://en.wikipedia.org/wiki/Neocortex), and the [allocortex](https://en.wikipedia.org/wiki/Allocortex), but [Swenson (2006)](https://www.dartmouth.edu/~rswenson/NeuroSci/chapter_11.html) suggests that “most of the cerebral cortex is neocortex”). For example, [Tang et al. (2001)](https://www.ncbi.nlm.nih.gov/pubmed/11418939) write that “The average total number of synapses in the neocortex of five young male brains was 164 × 10^12 (CV = 0.17)” (p. 258); [Pakkenberg et al. (2003)](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.332.5850&rep=rep1&type=pdf): “The total number of synapses in the human neocortex is approximately 0.15 × 10^15 (0.15 quadrillion) … On average, the neocortical neurons thus have about 7000 synapses each for intracortical reception and exchange of information” (p. 95 and 98); [Zador (1999)](https://journals.physiology.org/doi/pdf/10.1152/jn.1998.79.3.1219) writes that “A [pyramidal neuron](https://en.wikipedia.org/wiki/Pyramidal_cell) in the cortex receives excitatory synaptic input from 1e3 to 1e4 other neurons” (p. 1219) (he cites [Shepherd (1990)](https://www.amazon.com/Synaptic-Organization-Brain-Gordon-Shepherd/dp/019515956X) for this number, though I haven’t followed up on the citation); [Ananthanarayanan et al. (2009)](https://people.eecs.berkeley.edu/~demmel/cs267_Spr10/Lectures/RajAnanthanarayanan_SC09-a63.pdf): “Cognition and computation arise from the cerebral cortex; a truly complex system that contains roughly 20 billion neurons and 200 trillion synapses” (Section 6). [AI Impacts](https://aiimpacts.org/scale-of-the-human-brain/#Number_of_synapses_in_the_brain) suggests that their impression is that this focus on the neocortex derives “from the assumption that the neocortex contains the great bulk of synapses in the brain” – an impression that I share. They suggest that this assumption may derive in part from the fact that the neocortex represents the bulk of the brain’s volume. The cerebral cortex contains a minority of the brain’s neurons (about 19%, according to [Azevedo et al. 
(2009)](https://www.ncbi.nlm.nih.gov/pubmed/19226510) (p. 536)), but almost all of the rest reside in the cerebellum, and about 50 billion of those are non-neocortical [cerebellar granule cells](https://en.wikipedia.org/wiki/Cerebellar_granule_cell) (at least according to [Llinás et al. (2004)](https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780195159561.001.1/acprof-9780195159561-chapter-7) (p. 277)), which appear to have a comparatively small number of synapses each: “[Granule] cells are the most numerous in the CNS; there are about 5 × 10^10 cerebellar granule cells in the human brain. Each cell has four or five short dendrites (each less than 30 μm long) that end in an expansion called a dendritic claw (see fig. [7.4C](https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780195159561.001.1/acprof-9780195159561-chapter-7) in chapter 7).” [Wikipedia](https://en.wikipedia.org/wiki/Cerebellar_granule_cell) cites [Llinás et al. (2004)](https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780195159561.001.1/acprof-9780195159561-chapter-7) as grounds for attributing 80-100 synaptic connections to granule cells, but I haven’t been able to find the relevant number. The cerebellum also contains Purkinje cells (up to 1.5e7, according to [Llinás et al. (2004)](https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780195159561.001.1/acprof-9780195159561-chapter-7) (p. 276)), which can have over 100,000 synapses each, though I’m not sure of the average number (see [Napper and Harvey (1988)](https://onlinelibrary.wiley.com/doi/abs/10.1002/cne.902740204?sid=nlm%3Apubmed): “We conclude that there are some 175,000 parallel fiber synapses on an individual Purkinje cell dendritic tree in the cerebellar cortex of the rat” (abstract), though this is an old estimate). I have not attempted to estimate the synapses in the cerebellum in particular, and I am not sure of the extent to which synapse counts for granule cells and Purkinje cells overlap (a possibility that could lead to double counting). AI Impacts, on the basis of energy consumption and volume estimates for the neocortex, guesses the number of synapses in the entire brain is “somewhere between 1.3 and 2.3 times the number in the cerebral cortex.”\n\n\n[129.](https://www.openphilanthropy.org/brain-computation-report#footnoteref129_8jywjmc)[Wang et al. (2016)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5067378/): “By recording in human, monkey, and mouse neocortical slices, we revealed that FS neurons in human association cortices (mostly temporal) could generate APs at a maximal mean frequency (F_mean) of 338 Hz and a maximal instantaneous frequency (F_inst) of 453 Hz, and they increase with age” (p. 1). [Marblestone et al. (2013)](https://www.frontiersin.org/articles/10.3389/fncom.2013.00137/full): “certain neurons spike at 500 Hz or faster ([Gittis et al. (2010)](https://pubmed.ncbi.nlm.nih.gov/20592126/))” (section 2.2).\n\n\n[130.](https://www.openphilanthropy.org/brain-computation-report#footnoteref130_hqxkyyt)[Barth and Poulet (2012)](https://www.bio.cmu.edu/labs/barth/papers/TINS_2012.pdf) (p. 4-5) list a large number of firing rates observed in rat neurons, almost all of which appear to be below 10 Hz. 
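The quotation from Buzsáki and Mizuseki (2014) below describes firing-rate distributions as long-tailed and typically lognormal. As a minimal sketch of why that matters for “average rate” figures, the following toy calculation (my own, using purely hypothetical parameters rather than numbers from the sources cited in this footnote) shows how the mean of a lognormal rate distribution can sit well above its median:

```python
import numpy as np

# Toy illustration with hypothetical parameters (not taken from the cited sources):
# if per-neuron firing rates are lognormally distributed, the population mean
# can sit well above the median ("typical") rate.
median_rate_hz = 1.0   # hypothetical median firing rate
sigma_log = 1.5        # hypothetical spread of log-rates (natural-log units)

mu_log = np.log(median_rate_hz)

# Analytic mean of a lognormal distribution: exp(mu + sigma^2 / 2)
analytic_mean = np.exp(mu_log + sigma_log ** 2 / 2)

# Check the analytic value by sampling
rng = np.random.default_rng(0)
rates = rng.lognormal(mean=mu_log, sigma=sigma_log, size=1_000_000)

print(f"median rate:          {np.median(rates):.2f} Hz")
print(f"mean rate (sampled):  {rates.mean():.2f} Hz")
print(f"mean rate (analytic): {analytic_mean:.2f} Hz")
```

With these made-up parameters the mean is roughly three times the median; heavier tails widen the gap, which is one reason the choice of “average” rate matters for estimates that multiply a rate by a synapse count.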
[Buzsáki and Mizuseki (2014)](http://www.buzsakilab.com/content/PDFs/Mizuseki2014.pdf#page=5): “Recent quantifications of firing patterns of cortical pyramidal neurons in the intact brain have shown that the mean spontaneous and evoked firing rates of individual neurons span at least four orders of magnitude and that the distribution of both stimulus-evoked and spontaneous activity in cortical neurons obeys a long-tailed, typically lognormal, pattern” (p. 266). I have not attempted to calculate mean rates using the numbers in [Buzsáki and Mizuseki (2014)](http://www.buzsakilab.com/content/PDFs/Mizuseki2014.pdf#page=5). See also the studies cited by [AI Impacts](https://aiimpacts.org/rate-of-neuron-firing/#:~:text=So%20based%20on%20this%20rough,less%20than%201.82%20per%20second.) in the section titled “estimates of the rate of firing in non-human visual cortex.”\n\n\n[131.](https://www.openphilanthropy.org/brain-computation-report#footnoteref131_63gjxz9)Anthony Zador used an average rate of 1 Hz (see [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/), p. 4). Konrad Kording suggested that neurons run at roughly 10 Hz (see [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/research/professor-konrad-kording-penn-integrated-knowledge-professor-university-of-pennsylvania/)). Sarpeshkar (citing [Attwell and Laughlin (2001)](https://journals.sagepub.com/doi/pdf/10.1097/00004647-200110000-00001)) uses 5 Hz. [Ananthanarayanan et al. (2009)](https://people.eecs.berkeley.edu/~demmel/cs267_Spr10/Lectures/RajAnanthanarayanan_SC09-a63.pdf) suggest that the average neural firing rate is “typically at least 1 Hz” (3.1.2).\n\n\n[132.](https://www.openphilanthropy.org/brain-computation-report#footnoteref132_gla22en)See p. 494-495.\n\n\n[133.](https://www.openphilanthropy.org/brain-computation-report#footnoteref133_9jn7tcs)P. 495.\n\n\n[134.](https://www.openphilanthropy.org/brain-computation-report#footnoteref134_4pub7il)[Barth and Poulet (2012)](https://www.bio.cmu.edu/labs/barth/papers/TINS_2012.pdf): “accumulating experimental evidence, using non-selective methods to assess the activity of identified, individual neurons, indicates that traditional extracellular recordings may have been strongly biased by selection of the most active cells” (p. 1). [Buzsáki and Mizuseki (2014)](http://www.buzsakilab.com/content/PDFs/Mizuseki2014.pdf#page=5): “Each recording technique has some caveat. For example, patch-clamping of neurons may affect the firing patterns of neurons. Cell-attached methods are less invasive, but here the identity of the recorded cell often remains unknown and one might argue that the skewed distribution simply reflects the recording of large numbers of slow-firing pyramidal cells and a smaller number of faster-discharging interneurons. Furthermore, long-term recordings are technically difficult to obtain, and this may result in biased sampling of more-active neurons. Extracellular recording of spikes with sharp metal electrodes typically offers reliable single neuron isolation; however, as in cell-attached recordings, sampling of single neurons is often biased towards selecting fast-firing cells because neurons with low firing rates are often not detected during short recording sessions. 
Moreover, in many cases, only evoked firing patterns in very short time windows are examined. Chronic recordings with tetrodes and silicon probes can reduce such bias towards cells with a high firing rate, as the electrodes are moved infrequently and large numbers of neurons can be monitored from hours to days. In addition, one can separate the recorded population into excitatory and inhibitory neuron types *in vivo* through physiological characterization or by using optogenetic methods. Caveats of the extracellular probe methods include the lack of objective quantification of spike contamination and omission, the difficulty in isolating exceedingly slow-firing neurons and the lack of objective segregation of different neuron types. The left tail of the firing-rate distribution can especially vary across studies because neurons with low firing rates are often not detected during short recording sessions or because an arbitrary cut-off rate eliminates slow-firing cells. The differences in the right tail of the distribution across studies and species are probably the result of inadequate segregation of principal cells and interneurons” (p. 276).\n\n\n[135.](https://www.openphilanthropy.org/brain-computation-report#footnoteref135_u77pyc1)[Shoham et al. (2005)](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.457.6826&rep=rep1&type=pdf): “To summarize, the existence of large populations of silent neurons has been suggested recently by experimental evidence from diverse systems. Only some regions and neuron types show this phenomenon: as counterexamples, interneurons and cerebellar Purkinje cells are active most or all of the time. Nonetheless, the diversity of cases in which many neurons appear to be silent includes major neuron types in the mammalian neocortex and hippocampus, the cerebellum, and the zebra finch song system. Silent neurons may be a recurring principle of brain organization” (see Conclusion, p. 6). They also suggest that their estimate of the “recordable radius” around an electrode implies “a silent fraction of at least 90%” of neurons in the cat primary visual cortex (see Conclusion, p. 6).\n\n\n[136.](https://www.openphilanthropy.org/brain-computation-report#footnoteref136_k3yrezi)It’s also possible that metabolic considerations could be used as evidence about which combinations of synapse count and average spiking rate would be compatible with the brain’s energy budget. For example, it’s possible that 10,000 synapses per neuron is incompatible with higher average spiking rates. However, I have not investigated this. Thanks to Carl Shulman for suggesting this possibility.\n\n\n[137.](https://www.openphilanthropy.org/brain-computation-report#footnoteref137_65x5da1)Examples include: [Bostrom (1998)](https://nickbostrom.com/superintelligence.html): “signals are transmitted along these synapses at an average frequency of about 10^2 Hz” (“Hardware requirements”); [Mead (1990)](https://web.stanford.edu/group/brainsinsilicon/documents/MeadNeuroMorphElectro.pdf): “A nerve pulse arrives at each synapses about ten times/s, on average” (p. 
1629); [Merkle (1989)](https://www.merkle.com/brainLimits.html): “There are roughly 10^15 synapses operating at about 10 impulses/second”; [Dix (2005)](https://alandix.com/academic/papers/brain-and-web-2005/): “The rate of this activity, the ‘clock period’ of the human brain is approximately 100 Hz”; [Kurzweil (1999)](https://www.amazon.com/Age-Spiritual-Machines-Computers-Intelligence/dp/B000OYDNBA): “With 100 trillion connections, each computing at 200 calculations per second, we get 20 million billion calculations per second” (Chapter 6, “Achieving the Hardware Capacity of the Human Brain”).\n\n\n[138.](https://www.openphilanthropy.org/brain-computation-report#footnoteref138_xdjqs5r)This model of synaptic transmission was suggested by our technical advisor, Dr. Dario Amodei. See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/): “Setting aside plasticity, most people assume that modeling the immediate impact of a pre-synaptic spike on the post-synaptic neuron is fairly simple. Specifically, you can use a single synaptic weight, which reflects the size of the impact of a spike through that synapse on the post-synaptic membrane potential.”\n\n\n[139.](https://www.openphilanthropy.org/brain-computation-report#footnoteref139_od5a01p)The bullet points below were inspired by comments from Dr. Dario Amodei as well.\n\n\n[140.](https://www.openphilanthropy.org/brain-computation-report#footnoteref140_y4g0w5q)See Matt Botvinick’s comments on [this podcast](https://www.youtube.com/watch?v=3t06ajvBtl0): “The activity of units in a deep learning system is broadly analogous to the spike rate of a neuron” (see 57.20 [here](https://www.youtube.com/watch?v=3t06ajvBtl0)).\n\n\n[141.](https://www.openphilanthropy.org/brain-computation-report#footnoteref141_1s1wnmp)Precision, here, refers to the number of bits used to represent the floating point numbers in question.\n\n\n[142.](https://www.openphilanthropy.org/brain-computation-report#footnoteref142_qngerpw)[Koch (1999)](https://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999): “It is doubtful whether the effective resolution, that is, the ratio of minimal change in any one variable, such as V_m or [Ca2+]_i, relative to the noise amplitude associated with this variable, exceeds a factor of 100. Functionally, this corresponds to between 6 and 7 bits of resolution, a puny number compared to a standard 32-bit machine architecture” (p. 471).\n\n\n[143.](https://www.openphilanthropy.org/brain-computation-report#footnoteref143_akdpn9p)See [Bartol et al. (2015)](https://elifesciences.org/articles/10778) (abstract): “Signal detection theory holds that at a Signal-to-Noise Ratio (SNR) of 1, a common detection threshold used in psychophysical experiments, an ideal observer can correctly detect whether a signal is higher or lower than some threshold 69% of the time ([Green and Swets (1966)](https://www.amazon.com/Signal-Detection-Theory-Psychophysics-Marvin/dp/0932146236); [Schultz (2007)](http://www.scholarpedia.org/article/Signal-to-noise_ratio_in_neuroscience)). Put another way, if random samples are drawn from two Gaussian distributions whose areas overlap by 31%, an ideal observer will correctly assign a given sample to the correct distribution 69% of the time. 
Using this logic, we found that ~26 different mean synaptic strengths could span the entire range, assuming CV = 0.083 for each strength level, and a 69% discrimination threshold ([Figure 8](https://elifesciences.org/articles/10778#fig8), see Materials and methods)” (this quote is from the “Results” section of the paper). The “[e-life digest](https://elifesciences.org/articles/10778)” for the paper also suggests that previous estimates were lower than this: “This estimate is markedly higher than previous suggestions. It implies that the total memory capacity of the brain – with its many trillions of synapses – may have been underestimated by an order of magnitude. Additional measurements in the same and other brain regions are needed to confirm this possibility” (see “e-life digest”).\n\n\n[144.](https://www.openphilanthropy.org/brain-computation-report#footnoteref144_ny7brcb)[Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf): “Assumption on the order of one bit of information per synapse has some support on theoretical grounds. Models of associative neural networks have an information storage capacity slightly under 1 bit per synapse depending on what kind of information is encoded ([Nadal (1991)](https://iopscience.iop.org/article/10.1088/0305-4470/24/5/023/meta); [Nadal and Toulouse (1990)](https://www.tandfonline.com/doi/abs/10.1088/0954-898X_1_1_005)). Extending the dynamics of synapses for storing sequence data does not increase this capacity ([Rehn and Lansner (2004)](https://www.sciencedirect.com/science/article/abs/pii/S0925231204000517)). Geometrical and combinatorial considerations suggest 3‐5 bits per synapse ([Stepanyants, Hof et al. (2002)](https://pubmed.ncbi.nlm.nih.gov/11970869/); [Kalisman, Silberberg et al. (2005)](https://link.springer.com/article/10.1007/s00422-002-0377-3)). Fitting theoretical models to Purkinje cells suggests that they can reach 0.25 bits/synapse ([Brunel, Hakim et al. (2004)](https://www.sciencedirect.com/science/article/pii/S0896627304005288))” (p. 84).\n\n\n[145.](https://www.openphilanthropy.org/brain-computation-report#footnoteref145_e8r6qhq)[Zador (2019)](http://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2019/08/A-critique-of-pure-learning-and-what-artificial-neuralnetworks-can-learn-from-animal-brains.pdf): “a few extra bits/synapse would be required to specify graded synaptic strengths. But because of synaptic noise and for other reasons, synaptic strength may not be specified very precisely” (p. 5).\n\n\n[146.](https://www.openphilanthropy.org/brain-computation-report#footnoteref146_5u8l0dq)[Lahiri and Ganguli (2013)](https://papers.nips.cc/paper/4872-a-memory-frontier-for-complex-synapses.pdf): “recent experimental work has shown that many synapses are more digital than analog; they cannot robustly assume an infinite continuum of analog values, but rather can only take on a finite number of distinguishable strengths, a number that can be as small as two [[4](https://www.nature.com/articles/361031a0)[–](https://pubmed.ncbi.nlm.nih.gov/9539807/)[6](https://www.pnas.org/content/102/27/9679)] (though see [[7](https://www.cell.com/neuron/fulltext/S0896-6273(09)00204-9?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0896627309002049%3Fshowall%3Dtrue)])”.\n\n\n[147.](https://www.openphilanthropy.org/brain-computation-report#footnoteref147_qtcr5g8)[Enoki et al. 
(2009)](https://www.cell.com/neuron/fulltext/S0896-6273(09)00204-9?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0896627309002049%3Fshowall%3Dtrue): “The results demonstrate that individual Schaffer collateral synapses on CA1 pyramidal neurons behave in an incremental rather than binary fashion, sustaining graded and bidirectional long-term plasticity” (“summary”).\n\n\n[148.](https://www.openphilanthropy.org/brain-computation-report#footnoteref148_ienw4ky)[Siegelbaum et al. (2013c)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138636): “The mean probability of transmitter release from a single active zone also varies widely among different presynaptic terminals, from less than 0.1 (that is, a 10% chance that a presynaptic action potential will trigger release of a vesicle) to greater than 0.9” … “Thus central neurons vary widely in the efficacy and reliability of synaptic transmission. Synaptic reliability is defined as the probability that an action potential in a pre-synaptic cell leads to some measurable response in the post-synaptic cell – that is, the probability that a presynaptic action potential releases one or more quanta of transmitter. Efficacy refers to the mean amplitude of the synaptic response, which depends on both the reliability of synaptic transmission and on the mean size of the response when synaptic transmission does occur” (p. 271). [Koch (1999)](https://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999): “We have seen that single synapses in the mammalian cortex appear to be unreliable: release at single sites can occur as infrequently as one out of every 10 times (or even less) that an action potential invades the presynaptic terminal (Fig. 4.3)” (p. 327).\n\n\n[149.](https://www.openphilanthropy.org/brain-computation-report#footnoteref149_lmzpkhk)See e.g. [McDonnell and Ward (2011)](https://www.nature.com/articles/nrn3061), [Jonas (2014, unpublished)](http://ericjonas.com/publication/thesis/thesis.pdf), and [Faisal et al. (2008)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2631351/pdf/ukmss-3512.pdf) (p. 3) for discussion of the benefits of noise.\n\n\n[150.](https://www.openphilanthropy.org/brain-computation-report#footnoteref150_3p2waz9)As [Siegelbaum et al. (2013c)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138636) note, “in synaptic connections where a low probability of release is deleterious for function, this limitation is overcome by simply having many active zones [that is, neurotransmitter release sites] in one synapse” (p. 271). The fact that the brain can choose to have reliable synapses if necessary leads [Koch (1999)](https://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999) to suggest that there may be some “computational advantage to having unreliable synapses” – for example, increasing the number of distinguishable states a synapse can be in (p. 327).\n\n\n[151.](https://www.openphilanthropy.org/brain-computation-report#footnoteref151_a3oqmr9)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/research/dr-paul-christiano-researcher-openai/): “One way of modeling synaptic stochasticity is by assigning a fixed release probability to each synaptic vesicle, conditional on presynaptic activity. Dr. 
Christiano does not think that modeling spikes through synapses in this way would constitute a significant increase in required compute, relative to modeling each spike through synapse deterministically. Sampling from a normal distribution is cheap unless you need a lot of precision, and even then, Dr. Christiano believes that the cost is just linear in the number of bits of precision that you want. At 8 bits of precision and 10 vesicles, he expects that it would be possible to perform the relevant sampling with about the same amount of energy as a FLOP” (p. 5).\n\n\n[152.](https://www.openphilanthropy.org/brain-computation-report#footnoteref152_hy7p3jc)See the Siegelbaum et al. (2013) quotes above. From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/): “Some hypothesize that it’s about energy efficiency, but there is no proof of this.” (p. 3).\n\n\n[153.](https://www.openphilanthropy.org/brain-computation-report#footnoteref153_ut288pm)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/): “[synaptic stochasticity] is almost never included in neural network models” (p. 3). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Chris Eliasmith](https://www.openphilanthropy.org/research/professor-chris-eliasmith-professor-of-philosophy-and-systems-design-engineering-canada-research-chair-in-theoretical-neuroscience-and-director-of-the-centre-for-theoretical-neuroscience-at-the-uni/): “Pretty much everything Prof. Eliasmith does with his models works fine in a stochastic regime, but stochastic approaches require more synapses, so he does not bother with them. This decision is driven primarily by the availability of deterministic large-scale computational platforms. If there were cheap stochastic computers available, Prof. Eliasmith would probably use stochastic approaches” (p. 3).\n\n\n[154.](https://www.openphilanthropy.org/brain-computation-report#footnoteref154_iyyadpx)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/): “It’s an open question whether you could capture this stochasticity by drawing from a relatively simple distribution, or whether the brain manipulates synaptic stochasticity in more computationally complex ways” (p. 3).\n\n\n[155.](https://www.openphilanthropy.org/brain-computation-report#footnoteref155_eeue7aw)This change can be modeled in different ways (for example, as an exponential decay, or as a difference of exponentials), and different post-synaptic receptors exhibit different behaviors. See [Dayan and Abbott (2001)](https://www.amazon.com/Theoretical-Neuroscience-Computational-Mathematical-Modeling/dp/0262541858) (p. 182), Figure 5.15, and the pictures of different models [here](https://www.compneuroprinciples.org/code-examples/all/all?page=1).\n\n\n[156.](https://www.openphilanthropy.org/brain-computation-report#footnoteref156_irupfq7)[Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA): “Synapses are effectively spike-dependent electrochemical gm generators [my understanding is that “gm” stands for conductance]. 
They convert the input digital spike impulse arriving from a presynaptic transmitting neuronal axon into an exponential analog impulse-response current on the receiving dendrite of the postsynaptic neuron” (p. 739).\n\n\n[157.](https://www.openphilanthropy.org/brain-computation-report#footnoteref157_919tkzq)[Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA): “A synapse implements multiplication and filtering operations on every spike and sophisticated learning operations over multiple spikes. If we assume that synaptic multiplication is at least one floating-point operation (FLOP), the 20 ms second-order filter impulse response due to each synapse is 40 FLOPS, and that synaptic learning requires at least 10 FLOPS per spike, a synapse implements at least 50 FLOPS of computation per spike” (p. 748-749).\n\n\n[158.](https://www.openphilanthropy.org/brain-computation-report#footnoteref158_if5q4rq)I’m partly influenced here by comments from Dr. Adam Marblestone, see [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “If you neglect this temporal shape, you’ll get the wrong output: it matters that incoming spikes coincide and add up properly” (p. 3).\n\n\n[159.](https://www.openphilanthropy.org/brain-computation-report#footnoteref159_88dbbzg)See [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/): “the long time-constant of NMDA receptors increases the complexity of the neuron’s input-output transformation” (p. 3). [Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf): “Detailed studies of synaptic integration in dendrites of cortical pyramidal neurons suggested the primary role of the voltage-dependent current through synaptic NMDA receptors, including at the subthreshold and suprathreshold (the NMDA-spike) regimes ([Polsky, Mel, and Schiller (2004)](https://pubmed.ncbi.nlm.nih.gov/15156147/); [Branco, Clark, and Häusser (2010)](https://science.sciencemag.org/content/329/5999/1671/tab-article-info)). As NMDA receptors depend nonlinearly on voltage it is highly sensitive not only to the activity of the synapse in which the receptors are located but also to the activity of (and the voltage generated by) neighboring synapses and to their dendritic location. Moreover, the NMDA-current has slow dynamics, promoting integration over a time window of tens of milliseconds ([Major, Larkum, and Schiller (2013)](https://pubmed.ncbi.nlm.nih.gov/23841837/); [Doron et al. (2017)](https://www.cell.com/cell-reports/pdf/S2211-1247(17)31467-5.pdf))” (p. 8).\n\n\n[160.](https://www.openphilanthropy.org/brain-computation-report#footnoteref160_8z3ghmx)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/): “He does not think that … we need to include the details of synaptic conductances in our models” (p. 1). From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. 
Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “Dr. Marblestone is not sure that you need the exact shape [of the synaptic conductance], or that it needs to be re-computed every time. Specialized hardware could also be helpful (though one can say this for everything). Overall, Dr. Marblestone expects it to be possible to either leave out or simplify this computation” (p. 3).\n\n\n[161.](https://www.openphilanthropy.org/brain-computation-report#footnoteref161_n5slur7)My discussion of this assumption is inspired by some comments from Dr. Dario Amodei.\n\n\n[162.](https://www.openphilanthropy.org/brain-computation-report#footnoteref162_6hmdngt)See, for example, the recent [Cerebras whitepaper](https://www.cerebras.net/wp-content/uploads/2019/08/Cerebras-Wafer-Scale-Engine-Whitepaper.pdf): “Multiplying by zero is a waste—a waste of silicon, power, and time, all while creating no new information. In deep learning, the data are often very sparse. Half to nearly all the elements in the vectors and matrices that are to be multiplied together are zeros. The source of the zeros are fundamental deep learning operations, such as the rectified linear unit nonlinearity (ReLU) and dropout, both of which introduce zeros into neural network tensors…when the data is 50 to 98% zeros, as it often is in neural networks, then 50 to 98% of your multiplications are wasted. Because the Cerebras SLA core was designed specifically for the sparse linear algebra of neural networks, it never multiplies by zero. To take advantage of this sparsity, the core has built-in, fine-grained dataflow scheduling, so compute is triggered by the data. The scheduling operates at the granularity of a single data value so only non-zero data triggers compute. All zeros are filtered out and can be skipped in the hardware. In other words, the SLA core never multiplies by zero and never propagates a zero across the fabric” (p. 5).\n\n\n[163.](https://www.openphilanthropy.org/brain-computation-report#footnoteref163_hybeweg)[Ananthanarayanan et al. (2009)](https://people.eecs.berkeley.edu/~demmel/cs267_Spr10/Lectures/RajAnanthanarayanan_SC09-a63.pdf): “The basic algorithm of our cortical simulator C2 [2] is that neurons are simulated in a clock-driven fashion whereas synapses are simulated in an event-driven fashion. For every neuron, at every simulation time step (say 1 ms), we update the state of each neuron, and if the neuron fires, generate an event for each synapse that the neuron is post-synaptic to and presynaptic to. For every synapse, when it receives a pre- or post-synaptic event, we update its state and, if necessary, the state of the post-synaptic neuron” (p. 3).\n\n\n[164.](https://www.openphilanthropy.org/brain-computation-report#footnoteref164_x5musf2)See e.g. [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) (p. 80-81); and Henry Markram, in a [2018 video (18:28)](https://youtu.be/DvE-nphgswY?t=1112).\n\n\n[165.](https://www.openphilanthropy.org/brain-computation-report#footnoteref165_2soem9q)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Blake Richards](https://www.openphilanthropy.org/research/professor-blake-richards-assistant-professor-in-the-montreal-neurological-institute-and-the-school-of-computer-science-at-mcgill-university/): “Some neuroscientists are interested in the possibility that a lot of computation is occurring via molecular processes in the brain. 
For example, very complex interactions could be occurring in a structure known as the post-synaptic density, which involves molecular machinery that could in principle implicate many orders of magnitude of additional compute per synapse. We don’t yet know what this molecular machinery is doing, because we aren’t yet able to track the states of the synapses and molecules with adequate precision. There is evidence that perturbing the molecular processes within the synapse alters the dynamics of synaptic plasticity, but this doesn’t necessarily provide much evidence about whether these processes are playing a computational role. For example, their primary role might just be to maintain and control a single synaptic weight, which is itself a substantive task for a biological system” (p. 2). See also [Bhalla (2014)](https://www.sciencedirect.com/science/article/abs/pii/S0959438813002171): “Neurons perform far more computations than the conventional framework of summation and propagation of electrical signals from dendrite to soma to axon. There is an enormous and largely hidden layer of molecular computation, and many aspects of neuronal plasticity have been modeled in chemical terms. Memorable events impinge on a neuron as special input patterns, and the neuron has to decide if it should ‘remember’ this event. This pattern-decoding decision is mediated by kinase cascades and signaling networks over millisecond to hour-long timescales. The process of cellular memory itself is rooted in molecular changes that give rise to life-long, stable physiological changes. Modeling studies show how cascades of synaptic molecular switches can achieve this, despite stochasticity and molecular turnover. Such biochemically detailed models form a valuable conceptual framework to assimilate the complexities of chemical signaling in neuronal computation” (abstract).\n\n\n[166.](https://www.openphilanthropy.org/brain-computation-report#footnoteref166_9pm5y18)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Barak Pearlmutter](https://www.openphilanthropy.org/research/professor-barak-pearlmutter-professor-of-computer-science-maynooth-university/): “Prof. Pearlmutter thought that the compute for firing decisions would be “in the noise” relative to compute for spikes through synapses, because there are so many fewer neurons than synapses” (p. 2). And from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/): “There is a big difference, computationally, between processes that happen at every synapse, and processes that only happen at the soma, because there are orders of magnitude fewer somas than synapses” (p. 2).\n\n\n[167.](https://www.openphilanthropy.org/brain-computation-report#footnoteref167_c8rjwso)See Fig. 1 (p. 80).\n\n\n[168.](https://www.openphilanthropy.org/brain-computation-report#footnoteref168_hyw5s0o)See figure 2.\n\n\n[169.](https://www.openphilanthropy.org/brain-computation-report#footnoteref169_beg4yu8)See figure 2. Integrate-and-fire models are roughly 5-15 FLOPs per ms; Hodgkin-Huxley is roughly 1,200 FLOPs per ms.\n\n\n[170.](https://www.openphilanthropy.org/brain-computation-report#footnoteref170_bso0024)One expert I spoke to said this, though the comment didn’t end up in the conversation notes.\n\n\n[171.](https://www.openphilanthropy.org/brain-computation-report#footnoteref171_yr22pja)See Fig. 3 (p. 
83), in [Herz et al. (2006)](http://www.ini.ethz.ch/~cwang/ModelingSingleNeuron.pdf). The two-layer cascade model they discuss resembles the one suggested by [Poirazi et al. (2003)](https://www.sciencedirect.com/science/article/pii/S0896627303001491). See [Section 2.1.2.2](https://www.openphilanthropy.org/brain-computation-report#DendriticComputation) for more discussion of dendritic computation in particular.\n\n\n[172.](https://www.openphilanthropy.org/brain-computation-report#footnoteref172_b4piqbd)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/): “Old multi-compartmental models, based on cable theory, described voltage in one dimension, and the typical resolution was on the order of tens of microns per compartment. That is adequate for modeling voltage, but molecular events happen on much smaller scales. Researchers now have much more computing power available to them, and so can build more ambitious models. For instance, they can now use fully stochastic, three-dimensional “mesh” models with sub-micron resolution (typically on the order of 100 nanometers). These can incorporate molecular reactions, as well as features of cell biology like spatial models of synaptic vesicles” (p. 1-2).\n\n\n[173.](https://www.openphilanthropy.org/brain-computation-report#footnoteref173_aasbfz6)From a review article by [Brette (2015)](https://www.frontiersin.org/articles/10.3389/fnsys.2015.00151/full): “Do individual spikes matter or can neural computation be essentially described in terms of rates, with spikes physically instantiating this description? This contentious question has generated considerable debate in neuroscience, and is still unsettled” (p. 1). Brette lists a large number of citations relevant to the debate. It’s also possible that something else altogether matters as well (see, e.g., the discussion of other forms of axon signaling in [Section 2.3.5](https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#OtherFormsOfAxonSignaling)).\n\n\n[174.](https://www.openphilanthropy.org/brain-computation-report#footnoteref174_5k465h6)[Koch (1999)](https://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999) describes a standard procedure: “In a typical physiological experiment, the same stimulus is presented multiple times to a neuron and its response is recorded (Fig. 14.1). One immediately notices that the detailed response of the cell changes from trial to trial….Given the pulselike nature of spike trains, the standard procedure to quantify the neuronal response is to count how many spikes arrived within some sampling window Δt and to divide this number by the number of presentations” (p. 331). One example of a plausible role of firing rates comes from neurons in the visual cortex, whose firing rates correlate with features of visual images. Classic results in this respect include motion-sensitive neurons in the frog visual system (sometimes characterized as “bug-detectors”) (see [Maturana et al. (1960)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2195076/pdf/129.pdf) (p. 
148), and [Yuste (2015)](https://www.nature.com/articles/nrn3962), in the section on “History of the neuron doctrine”) and the orientation-selectivity of neurons in V1 ([Hubel and Wiesel (1959)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1363130/pdf/jphysiol01298-0128.pdf), also see video [here](https://www.youtube.com/watch?v=IOHayh06LJ4)). [Maheswaranathan et al. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf) also discuss various computations performed in the retina, all of which are expressed in terms of spike rates. Examples include Latency Coding, Motion Reversal, Motion Anticipation, and the Omitted Stimulus Response. See p. 14. See also Surya Ganguli’s description of the results at 4:56 [here](https://www.youtube.com/watch?v=FKi6sWK9Qo0&feature=youtu.be&t=295). Markus Meister, in a [2016 talk (34:04)](https://youtu.be/2UpiWMukZeI?t=2048), also discusses a retinal ganglion cell whose firing rate appears to respond to the average of the center of the images in a naturalistic movie (its firing rate remains roughly the same when the entire movie is reduced to this simple summary).\n\n\n[175.](https://www.openphilanthropy.org/brain-computation-report#footnoteref175_50ysqpj)See e.g. [Hochberg (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3640850/pdf/nihms366580.pdf): “Raw neural signals for each channel were sampled at 30 kHz and fed through custom Simulink (Mathworks Inc., Natick, MA) software in 100 ms bins (S3) or 20 ms bins (T2) to extract threshold crossing rates; these threshold crossing rates were used as the neural features for real-time decoding and for filter calibration” (p. 5). See also the discussion at 1:02:00-1:05:00 of the [Neuralink Launch Event](https://youtu.be/r-vbh3t7WVI?t=3708) on July 16, 2019.\n\n\n[176.](https://www.openphilanthropy.org/brain-computation-report#footnoteref176_7zk9r1h)See e.g. [Weiss et al. (2018)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6147046/pdf/main.pdf): “many sensory systems use millisecond or even sub-millisecond precise spike timing across sensory neurons to rapidly encode stimulus features (e.g., visual patterns in salamanders [[Gollisch and Meister (2008)](https://www.cell.com/iscience/fulltext/S2589-0042(18)30064-6?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS2589004218300646%3Fshowall%3Dtrue#)], direction of sound in barn owls [[Carr and Konishi (1990)](https://www.jneurosci.org/content/10/10/3227?ijkey=d4987df0788fd215557034462d162ed702c3cf78&keytype2=tf_ipsecsha)], and touch location in leeches [[Thomson and Kristan (2006)](https://pubmed.ncbi.nlm.nih.gov/16870746/)])” (p. 76). [Zuo et al. (2015)](https://www.cell.com/current-biology/fulltext/S0960-9822(14)01560-7?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0960982214015607%3Fshowall%3Dtrue), in a discussion of perceptual decisions in the rat somatosensory cortex: “These results indicate that spike timing makes crucial contributions to tactile perception, complementing and surpassing those made by rate” (abstract). See [Funabiki et al. (2011)](https://www.jneurosci.org/content/31/43/15245) for *very* temporally precise *in vivo* sensitivity in the auditory system of owls, though this could emerge from combining many imprecise inputs: “In owls, NL neurons change their firing rates with changes in ITD of <10 μs ([Carr and Konishi (1990)](https://www.jneurosci.org/content/10/10/3227?ijkey=d4987df0788fd215557034462d162ed702c3cf78&keytype2=tf_ipsecsha); [Peña et al. 
(1996)](https://www.jneurosci.org/content/16/21/7046?ijkey=91b4d4043c5c1546894b9dbfeb713c140c6eded0&keytype2=tf_ipsecsha)), far below the spike duration of the neurons (e.g., ∼1 ms).”\n\n\n[177.](https://www.openphilanthropy.org/brain-computation-report#footnoteref177_03l045b)[Brette (2015)](https://www.frontiersin.org/articles/10.3389/fnsys.2015.00151/full): “Perhaps the most used argument against spike-based theories is the fact that spike trains *in vivo* are variable both temporally and over trials ([Shadlen and Newsome (1998)](https://www.frontiersin.org/articles/10.3389/fnsys.2015.00151/full#B68)), and yet this might well be the least relevant argument. This assertion is what philosophers call a ‘category error’, when things of one kind are presented as if they belonged to another. Specifically, it presents the question as if it were about variability vs. reproducibility. I will explain how variability can arise in spike-based theories, but first an important point to make is that the rate-based view does not explain variability, but rather it simply states that there is variability” (see section on “Assertion #2”). Brette goes on to list a number of objections to appeals to variability as evidence for rate-based theories.\n\n\n[178.](https://www.openphilanthropy.org/brain-computation-report#footnoteref178_0t746zz)One expert suggested this type of thought.\n\n\n[179.](https://www.openphilanthropy.org/brain-computation-report#footnoteref179_18tt7id)See e.g. [Izhikevich and Edelman (2007)](https://www.izhikevich.org/publications/large-scale_model_of_human_brain.pdf), in the context of a neural network simulation: “We perturbed a single spike (34, 35) in this regime (out of millions) and showed that the network completely reorganized its firing activity within half a second. It is not clear, however, how to interpret this sensitivity in response to perturbations (Fig. 5). On one hand, one could say that this sensitivity indicates that only firing patterns in a statistical sense should be considered, and individual spikes are too volatile. On the other hand, one could say that this result demonstrates that every spike of every neuron counts in shaping the state of the brain, and hence the details of the behavior, at any particular moment. This conclusion would be consistent with the experimental observations that microstimulation of a single tactile afferent is detectable in human subjects (36), and that microstimulation of single neurons in somatosensory cortex of rats affects behavioral responses in detection tasks (37)” (p. 3597).\n\n\n[180.](https://www.openphilanthropy.org/brain-computation-report#footnoteref180_nzmknxi)E.g., stochastic processes in the brain can cause a neuron to spike at one time, rather than another, without the brain’s cognitive processing breaking down. See [Faisal et al. (2008)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2631351/) for discussion of a number of these processes.\n\n\n[181.](https://www.openphilanthropy.org/brain-computation-report#footnoteref181_7nq6g36)See [Doose et al. (2016)](https://www.jneurosci.org/content/36/43/11120) for one study of *in vivo* stimulation in rats. [Sandberg (2013)](https://link.springer.com/chapter/10.1007/978-3-642-31674-6_19) argues for a more general point in this vicinity: “Brains sensitive to microscale properties for their functioning would exhibit erratic and non-adaptive behavior” (p. 260). See also Hanson (2011) for comments in a somewhat similar vein. 
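As a toy illustration of the single-spike perturbation described in footnote 179, one can run the same small spiking network twice on identical input, force one extra spike in one run, and compare the subsequent spike rasters. The sketch below is my own minimal leaky integrate-and-fire construction with made-up parameters, not Izhikevich and Edelman’s simulation; it only illustrates the protocol, and whether (and how fast) the two runs diverge depends on the parameters chosen:

```python
import numpy as np

# Toy sketch (my own construction, not Izhikevich and Edelman's model): perturb a
# single spike in a small random leaky integrate-and-fire network and compare the
# two runs' spike rasters afterwards. All parameters are hypothetical, chosen only
# to make the qualitative effect visible.

rng = np.random.default_rng(1)
n, steps = 200, 1000                            # neurons, number of 1 ms timesteps
tau, v_thresh, v_reset = 20.0, 1.0, 0.0         # leak time constant (ms), threshold, reset
w = rng.normal(0.0, 0.3, (n, n)) / np.sqrt(n)   # random recurrent weights
drive = rng.uniform(0.0, 0.12, (steps, n))      # external input, identical in both runs

def run(extra_spike_at=None):
    v = np.zeros(n)                    # membrane potentials
    prev = np.zeros(n)                 # last step's spikes, as floats
    spikes = np.zeros((steps, n), dtype=bool)
    for t in range(steps):
        v += -v / tau + drive[t] + w @ prev     # leaky integration of input
        fired = v >= v_thresh
        if t == extra_spike_at:
            fired[0] = True                     # force one extra spike in neuron 0
        v[fired] = v_reset
        spikes[t] = fired
        prev = fired.astype(float)
    return spikes

base = run()
perturbed = run(extra_spike_at=100)
diff_per_step = (base != perturbed).sum(axis=1)
print("mean differing spikes per step, before vs. after the inserted spike:")
print(diff_per_step[:100].mean(), diff_per_step[100:].mean())
```

With these hypothetical parameters the two runs are identical before the inserted spike and typically drift apart afterwards; as footnote 179 notes, this kind of sensitivity by itself does not settle whether individual spikes are functionally meaningful.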
Though note that single impulse stimulation to nerve fibers can result in sensory responses in humans: [Vallbo et al. (1984)](https://pubmed.ncbi.nlm.nih.gov/6478176/): “It was confirmed that a single impulse in a single FA I unit may elicit a sensory response in the attending subject, whereas a much larger input was required from SA I units, which are also less sensitive to mechanical stimuli. This was one of several findings supporting the impression that differential receptive properties, even within a group of afferents, were associated with different sensory responses. It was concluded that a train of impulses in a single tactile unit may produce within the brain of the subject a construct which specifies with great accuracy the skin area of the unit’s terminals as well as a tactile subquality which is related to unit properties” (abstract).\n\n\n[182.](https://www.openphilanthropy.org/brain-computation-report#footnoteref182_0w89g43)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Chris Eliasmith](https://www.openphilanthropy.org/research/professor-chris-eliasmith-professor-of-philosophy-and-systems-design-engineering-canada-research-chair-in-theoretical-neuroscience-and-director-of-the-centre-for-theoretical-neuroscience-at-the-uni/): “There is no “magical answer” to the question of how accurate a model of neuron spiking needs to be. In experiments fitting neuron models to spike timing data, neuroscientists pick a metric, optimize their model according to that metric, and then evaluate the model according to that metric as well, leaving ongoing uncertainty about the importance of the aspects of neural activity that the relevant metric doesn’t capture” (p. 2).\n\n\n[183.](https://www.openphilanthropy.org/brain-computation-report#footnoteref183_bsq2qbl)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eve Marder](https://www.openphilanthropy.org/research/professor-eve-marder-university-professor-and-victor-and-gwendolyn-beinfield-professor-of-neuroscience-brandeis-university/): “It’s been hard to make progress in understanding neural circuits, because in order to know what details matter, you have to know what the circuit is doing, and in most parts of the brain, we don’t know this…It’s not that you can’t make simplifying assumptions. It’s that absent knowledge of what a piece of nervous system needs to be able to do, you have no way of assessing whether you’ve lost something fundamental or not” (p. 4); and [Open Philanthropy’s non-verbatim notes from a conversation with Prof. E.J. Chichilnisky](https://www.openphilanthropy.org/research/professor-e-j-chichilnisky-john-r-adler-professor-of-neurosurgery-and-professor-of-ophthalmology-at-stanford-university/): “It’s hard to know when to stop fine-tuning the details of your model. A given model may be inaccurate to some extent, but we don’t know whether a given inaccuracy matters, or whether a human wouldn’t be able to tell the difference (though focusing on creating usable retinal prostheses can help with this)” (p. 3).\n\n\n[184.](https://www.openphilanthropy.org/brain-computation-report#footnoteref184_nko203c)[Keat et al. (2001)](https://www.sciencedirect.com/science/article/pii/S0896627301003221): “Is this level of accuracy sufficient? In the real world, the visual system operates exclusively on single trials, without the luxury of improving resolution by averaging many responses to identical stimuli. 
Nor is there much opportunity to average across equivalent cells, because neurons in the early visual system tend to tile the visual field with little redundancy. Consequently, operation of the visual system under natural conditions does not require the properties of these neurons to be specified more precisely than their trial-to-trial fluctuations. To understand a neuron’s role in visual behavior, we therefore suggest that a model of the light response can be deemed successful if its systematic errors are as small as the neuron’s random errors” (p. 810). See also [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Baccus](https://www.openphilanthropy.org/research/professor-stephen-baccus-professor-of-neurobiology-stanford-university/): “Prof. Baccus expects that there would be consensus in the field that if a model’s correlation with an individual cell’s response to a stimulus matches the correlation between that cell’s responses across different trials with that stimulus, and the model also captures all of the higher-order correlations across different cells, this would suffice to capture everything that the retina is communicating to the brain. Indeed, it would do so almost by definition” (p. 2).\n\n\n[185.](https://www.openphilanthropy.org/brain-computation-report#footnoteref185_mplj43e)[Brette (2015)](https://www.frontiersin.org/articles/10.3389/fnsys.2015.00151/full): “The lack of reproducibility of neural responses to sensory stimuli does not imply that neurons respond randomly to those stimuli. There are a number of sensible arguments supporting the hypothesis that a large part of this variability reflects changes in the state of the neuron or of its neighbors, changes that are functionally meaningful” (see the section on the “State-Dependence”). See also the discussion in [Faisal (2012)](https://www.semanticscholar.org/paper/Noise-in-Neurons-and-Other-Constraints-Faisal/e0cb8d65ef6ea5c69d79c99505c49ee73c81430f): “The question whether this neuronal trial-to-trial variability is[:] Indeed just noise (defined in the following as individually unpredictable, random events that corrupt signals) [;] Results because the brain is to [sic] complex to control the conditions across trials (e.g. the organisms may become increasingly hungry or tired across trials) [;] Or rather the reflection of a highly efficient way of coding information [;] cannot easily be answered. In fact, being able to decide whether we are measuring the neuronal activity that is underlying the logical reasoning and not just meaningless noise is a fundamental problem in neuroscience, with striking resemblance to finding the underlying message in cryptographic code breaking efforts ([Rieke et al. (1997)](https://www.amazon.com/Spikes-Exploring-Neural-Computational-Neuroscience/dp/0262681080))” (p. 231).\n\n\n[186.](https://www.openphilanthropy.org/brain-computation-report#footnoteref186_ata0g89)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Baccus](https://www.openphilanthropy.org/research/professor-stephen-baccus-professor-of-neurobiology-stanford-university/): “various correlation coefficient measures and information theory measures do not address the importance of the meaning of a given signal. For example, if your model misses a tiger hiding in the bushes, that’s pretty important, even though the difference might account for only a very small fraction of the correlation coefficient between your model and the retina’s response” (p. 
2).\n\n\n[187.](https://www.openphilanthropy.org/brain-computation-report#footnoteref187_m22okig)My thanks to Carl Shulman and Katja Grace for discussion of this analogy.\n\n\n[188.](https://www.openphilanthropy.org/brain-computation-report#footnoteref188_2jreu87)[Naud and Gerstner (2012a)](https://www.researchgate.net/publication/264893074_The_Performance_and_Limits_of_Simple_Neuron_Models_Generalizations_of_the_Leaky_Integrate-and-Fire_Model) and [Herz et al. (2006)](http://www.ini.ethz.ch/~cwang/ModelingSingleNeuron.pdf) for overviews of various models; and [Guo et al. (2014)](http://www.dl.begellhouse.com/journals/4b27cbfc562e21b8,64a3e6f7290a8a6e,64cd0e236c1f5579.html) for a review of retinal models in particular.\n\n\n[189.](https://www.openphilanthropy.org/brain-computation-report#footnoteref189_tz98ekg)See e.g. [Schulz (2010)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3059710/): “the network state *in vitro* is fundamentally different from the *in vivo* situation. In acute slices in particular, background synaptic activity is almost absent.”\n\n\n[190.](https://www.openphilanthropy.org/brain-computation-report#footnoteref190_9cl0xl9)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/): “Prof. Druckmann does not think it obvious that the kind of multi-compartmental biophysical models neuroscientists generally use are adequate to capture what a neuron does, as these models, too, involve a huge amount of simplification. Calcium dynamics are the most egregious example. Real neurons clearly do things with calcium, which moves around the cell in a manner that has consequences for e.g. calcium-dependent ion channels. Most biophysical models, however, simplify this a lot, and in general, they treat ions just as concentrations affected by currents.” (p. 4).\n\n\n[191.](https://www.openphilanthropy.org/brain-computation-report#footnoteref191_5j8hayw)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/): “At this point, we have no way to reliably measure the input-output transformation of a neuron, where the input is defined as a specific spatio-temporal pattern of synaptic input. You can build models and test their input-output mappings, but you don’t really know how accurate these models are… In live imaging, it’s very difficult to see what’s happening at synapses. Some people do calcium imaging of pre-synaptic terminals, but this is only for one part of the overall synaptic input (and it may create artefacts). Currently, you cannot get a global picture of all the synaptic inputs to a single neuron. You can’t stain all the inputs, and for a big neuron you wouldn’t be able to image the whole relevant volume of space… you don’t actually know what the physiological pattern of inputs is.” See also [Ujfalussy et al. 
(2018)](https://www.sciencedirect.com/science/article/pii/S0896627318307372): “Our understanding of neuronal input integration remains limited because it is either based on data from *in vitro* experiments, studying neurons under highly simplified input conditions, or on *in vivo* approaches in which synaptic inputs were not observed or controlled, and thus a systematic characterization of the input-output transformation of neurons was not possible” (2018); and [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/): “It is very difficult to tell what spatio-temporal patterns of inputs are actually arriving at a neuron’s synapses *in vivo*. You can use imaging techniques, but this is very messy” (p. 2).\n\n\n[192.](https://www.openphilanthropy.org/brain-computation-report#footnoteref192_l73dgkh)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/): “many dendritic non-linearities contribute more strongly when triggered by synaptic inputs arriving at similar times to similar dendritic locations (“clustering”), and there is evidence that such clustering occurs *in vivo*. In this sense, a random input regime is unrepresentative, more weakly non-linear than it should be and therefore may be particularly easy to model.” (p. 3).\n\n\n[193.](https://www.openphilanthropy.org/brain-computation-report#footnoteref193_2hel8di)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/): “Using glutamate uncaging, you can reliably activate single dendritic spines *in vitro*, and you can even do this in a sequence of spines, thereby generating patterns of synaptic input. However, even these patterns are limited. For example, you can’t actually activate synapses simultaneously, because your laser beam needs to move; there’s only so much you can do in a certain timeframe; and because it’s glutamate, you can only activate excitatory neurons” (p. 2). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/): “It is very difficult to tell how a neuron responds to arbitrary patterns of synaptic input. You can stimulate a pre-synaptic neuron and observe the response, but you can’t stimulate *all* pre-synaptic neurons in different combinations. And you can only patch-clamp one dendrite while also patch-clamping the soma (and this already requires world-class skill)” (p. 2).\n\n\n[194.](https://www.openphilanthropy.org/brain-computation-report#footnoteref194_cnpaq20)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. 
Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/): “There is a tradition of integrate and fire modeling that achieves very accurate fits of neuron firings in response to noisy current injection into the soma (more accurate, indeed, than could be achieved by current biophysical models). However, this is a very specific type of experiment, which doesn’t tell you anything about what happens to synaptic input in the dendrites” (p. 2). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/): “One neuron modeling competition proceeded by assuming that dendritic inputs are randomly distributed, and that dendrites just integrate inputs linearly – assumptions used to create a pattern of current to be injected into the soma of the neurons whose spikes were recorded. If these assumptions are true, then there is good reason to think that fairly simple models are adequate. However, these assumptions are very friendly to the possibility of non-detailed modeling. The point of complex models is to capture the possibly non-linear dendritic dynamics that determine what current goes into the soma: after that point, modeling is much easier. And we don’t know to what extent non-random inputs trigger these dendritic dynamics. There were also a few other aspects of this neuron modeling competition that were not optimal. For example, it was fairly easy to game the function used to evaluate the models” (p. 4).\n\n\n[195.](https://www.openphilanthropy.org/brain-computation-report#footnoteref195_dd98n7e)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Markus Meister](https://www.openphilanthropy.org/research/professor-markus-meister-anne-p-and-benjamin-f-biaggini-professor-of-biological-sciences-at-the-california-institute-of-technology/): “Information in the retina also flows in an almost exclusively feedforward direction (though there are some feedback signals, and it is an interesting question what those fibers do)” (p. 3).\n\n\n[196.](https://www.openphilanthropy.org/brain-computation-report#footnoteref196_nt21qt4)See [Meister et al. (2013)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138654) (p. 577-578). Note also that photoreceptor cells do not spike. [Meister et al. (2013)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138654): “Photoreceptors do not fire action potentials; like bipolar cells they release neurotransmitter in a graded fashion using a specialized structure, the ribbon synapse” (p. 592).\n\n\n[197.](https://www.openphilanthropy.org/brain-computation-report#footnoteref197_y5bocf2)[Meister et al. (2013)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138654): “The retina is a thin sheet of neurons, a few hundred micrometers thick, composed of five major cell types that are arranged in three cellular layers separated by two synaptic layers” (p. 577). See [Meister et al. (2013)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138654) (p. 578). The optic nerve also contains glial cells (see [Butt et al. 
(2004)](https://www.nature.com/articles/6701595)).\n\n\n[198.](https://www.openphilanthropy.org/brain-computation-report#footnoteref198_0j0bxpc)Note that the light actually has to travel through the ganglion cells in order to get to the photoreceptors.\n\n\n[199.](https://www.openphilanthropy.org/brain-computation-report#footnoteref199_ehchkaf)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Markus Meister](https://www.openphilanthropy.org/research/professor-markus-meister-anne-p-and-benjamin-f-biaggini-professor-of-biological-sciences-at-the-california-institute-of-technology/): “Information in the retina also flows in an almost exclusively feedforward direction (though there are some feedback signals, and it is an interesting question what those fibers do)” (p. 3)”\n\n\n[200.](https://www.openphilanthropy.org/brain-computation-report#footnoteref200_y8kcqb3)See [Section 2.1.2.2](https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#DendriticComputation) for discussion of [Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf); and see [Section 3.1](https://www.openphilanthropy.org/brain-computation-report#TheRetina) for discussion of [Maheswaranathan et al. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf) and [Batty et al. (2017)](https://openreview.net/pdf?id=HkEI22jeg)).\n\n\n[201.](https://www.openphilanthropy.org/brain-computation-report#footnoteref201_9wwwpxb)See e.g. [London and Häusser (2005)](https://pdfs.semanticscholar.org/81cf/8182d5725d5fdd9a33b65843f2f9fdb6a9f6.pdf): “In this review we argue that this model is oversimplified in view of the properties of real neurons and the computations they perform. Rather, additional linear and nonlinear mechanisms in the dendritic tree are likely to serve as computational building blocks, which combined together play a key role in the overall computation performed by the neuron” (p. 504).\n\n\n[202.](https://www.openphilanthropy.org/brain-computation-report#footnoteref202_r43rc3s)[Stuart and Spruston (2015)](https://www.nature.com/articles/nn.4157): “Rall and others found that the passive membrane properties of dendrites, that is, their resistance and capacitance as well as their geometry, influence the way neurons integrate synaptic inputs in complex ways, enabling a wide range of nonlinear operations” (p. 1713). For example: if you inject a high-frequency current into a dendrite, the local voltage response in that dendrite will be higher frequency and larger amplitude than the response recorded in the soma (see [London and Häusser (2005)](https://pdfs.semanticscholar.org/81cf/8182d5725d5fdd9a33b65843f2f9fdb6a9f6.pdf) (p. 508)); when multiple inputs arrive in a similar dendritic location at the same time, the impact on the membrane potential of the first can reduce the size of the impact on the membrane potential of the other (see [London and Häusser (2005)](https://pdfs.semanticscholar.org/81cf/8182d5725d5fdd9a33b65843f2f9fdb6a9f6.pdf) (p. 507)); and when excitatory and inhibitory inputs arrive at a similar location in the dendrite, the inhibitory input can “shunt” the excitatory input, reducing its impact on somatic membrane potential in a manner distinct from a linear sum, and perhaps even cancelling the excitatory signal entirely (see [London and Häusser (2005)](https://pdfs.semanticscholar.org/81cf/8182d5725d5fdd9a33b65843f2f9fdb6a9f6.pdf) (p. 
509)).\n\n\n[203.](https://www.openphilanthropy.org/brain-computation-report#footnoteref203_1iu6pxc)See [London and Häusser (2005)](https://pdfs.semanticscholar.org/81cf/8182d5725d5fdd9a33b65843f2f9fdb6a9f6.pdf) (p. 509-516), and [Stuart and Spruston (2015)](https://www.nature.com/articles/nn.4157) (p. 1713-1714). If a back-propagating action potential occurs at the same time as a certain type of input to the dendrite, this can trigger a burst of somatic action potentials (see [London and Häusser (2005)](https://pdfs.semanticscholar.org/81cf/8182d5725d5fdd9a33b65843f2f9fdb6a9f6.pdf) (p. 509)). A new class of calcium-mediated dendritic action-potentials (dCaAPs) was recently discovered in humans, and shown to make possible a type of input-output relation previously thought to require a network of neurons. [Gidon et al. (2020)](https://science.sciencemag.org/content/367/6473/83.long): “we investigated the dendrites of layer 2 and 3 (L2/3) pyramidal neurons of the human cerebral cortex ex vivo. In these neurons, we discovered a class of calcium-mediated dendritic action potentials (dCaAPs) whose waveform and effects on neuronal output have not been previously described…. These dCaAPs enabled the dendrites of individual human neocortical pyramidal neurons to classify linearly non-separable inputs—a computation conventionally thought to require multilayered networks” (from the abstract).\n\n\n[204.](https://www.openphilanthropy.org/brain-computation-report#footnoteref204_1yiousr)See [Reyes (2001)](https://www.annualreviews.org/doi/full/10.1146/annurev.neuro.24.1.653), [London and Häusser (2005)](https://pdfs.semanticscholar.org/81cf/8182d5725d5fdd9a33b65843f2f9fdb6a9f6.pdf), [Stuart and Spruston (2015)](https://www.nature.com/articles/nn.4157), [Payeur et al. (2019)](https://www.sciencedirect.com/science/article/abs/pii/S0959438818302162), and [Poirazi and Papoutsi (2020)](https://www.nature.com/articles/s41583-020-0301-7) for reviews.\n\n\n[205.](https://www.openphilanthropy.org/brain-computation-report#footnoteref205_cukag7x)See discussion of synaptic clustering on p. 310 of [Poirazi and Papoutsi (2020)](https://www.nature.com/articles/s41583-020-0301-7), though they also suggest that “The above predictions suggest that dendritic — and, consequently, somatic — spiking is not necessarily facilitated by synaptic clustering, as was previously assumed” (p. 310).\n\n\n[206.](https://www.openphilanthropy.org/brain-computation-report#footnoteref206_xtzfmqx)[Moore et al. (2017)](https://science.sciencemag.org/content/355/6331/eaaj1497): “The dendritic spike rates, however, were fivefold greater than the somatic spike rates of pyramidal neurons during slow-wave sleep and 10-fold greater during exploration. The high stability of dendritic signals suggested that these large rates are unlikely to arise due to the injury caused by the electrodes” (p. 1 of “Research Article Summary”).\n\n\n[207.](https://www.openphilanthropy.org/brain-computation-report#footnoteref207_pjfgr2b)[Moore et al. (2017)](https://science.sciencemag.org/content/355/6331/eaaj1497): “the total energy consumption in neural tissue … could be dominated by the dendritic spikes” (p. 8). 
The *Science* summary [here](https://science.sciencemag.org/content/355/6331/eaaj1497) also notes that dendrites occupy more than 90% of neuronal tissue.\n\n\n[208.](https://www.openphilanthropy.org/brain-computation-report#footnoteref208_8oeycmd)See [London and Häusser (2005)](https://pdfs.semanticscholar.org/81cf/8182d5725d5fdd9a33b65843f2f9fdb6a9f6.pdf) (p. 516-524), and [Payeur et al. (2019)](https://www.sciencedirect.com/science/article/abs/pii/S0959438818302162) for examples. See also [Schmidt-Hiever et al. (2017)](https://www.ncbi.nlm.nih.gov/pubmed/28628104): “Our results suggest that active dendrites may therefore constitute a key cellular mechanism for ensuring reliable spatial navigation” (abstract).\n\n\n[209.](https://www.openphilanthropy.org/brain-computation-report#footnoteref209_y7u1r72)Stephen Baccus recalled estimates from Bartlett Mel to the effect that something in the range of five dendritic sub-units would be sufficient (see [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Baccus](https://www.openphilanthropy.org/research/professor-stephen-baccus-professor-of-neurobiology-stanford-university/), p. 3). Markus Meister also suggested that models of cortical pyramidal cells that include two point neurons – one for the dynamics at the soma, and the other for the dynamics in the apical tuft – can account for a lot of what’s going on (see [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Markus Meister](https://www.openphilanthropy.org/research/professor-markus-meister-anne-p-and-benjamin-f-biaggini-professor-of-biological-sciences-at-the-california-institute-of-technology/), p. 4). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/): “Much of Prof. Zador’s PhD work was devoted to the hypothesis that dendritic computation is the key difference between artificial neural networks and real brains. However, at the end of the day, he was led to the conclusion that dendritic computation does not make a qualitative difference to the computational capacity of a neuron. There is some computational boost, but the same effect could be achieved by replacing each biological neuron with a handful of artificial neurons” (p. 3). See also [Naud et al. (2014)](https://www.frontiersin.org/articles/10.3389/fncom.2014.00090/full): “We conclude that a simple two-compartment model can predict spike times of pyramidal cells stimulated in the soma and dendrites simultaneously. Our results support that regenerating activity in the apical dendritic is required to properly account for the dynamics of layer 5 pyramidal cells under in-vivo-like conditions” (abstract). See also [Ujfalussy et al. (2018)](https://www.sciencedirect.com/science/article/pii/S0896627318307372), though I’m not sure exactly how complex their model was: “We used the hLN to predict the somatic [membrane potential](https://www.sciencedirect.com/topics/neuroscience/membrane-potential) of an *in vivo*-validated detailed biophysical model of a L2/3 [pyramidal cell](https://www.sciencedirect.com/topics/neuroscience/pyramidal-cell). Linear input integration with a single global dendritic nonlinearity achieved above 90% prediction accuracy.” (abstract).\n\n\n[210.](https://www.openphilanthropy.org/brain-computation-report#footnoteref210_y3olldq)See [Li et al. 
(2019)](https://www.pnas.org/content/pnas/116/30/15244.full.pdf): “We derive an effective point neuron model, which incorporates an additional synaptic integration current arising from the nonlinear interaction between synaptic currents across spatial dendrites. Our model captures the somatic voltage response of a neuron with complex dendrites and is capable of performing rich dendritic computations” (p. 15246).\n\n\n[211.](https://www.openphilanthropy.org/brain-computation-report#footnoteref211_ztde5jk)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Chris Eliasmith](https://www.openphilanthropy.org/research/professor-chris-eliasmith-professor-of-philosophy-and-systems-design-engineering-canada-research-chair-in-theoretical-neuroscience-and-director-of-the-centre-for-theoretical-neuroscience-at-the-uni/): “There are also arguments that certain forms of active dendritic computation function to “linearize” the inputs – e.g., to combat the attenuation of an input signal as it travels through the dendritic tree, such that the overall result looks more like direct injection into the soma” (p. 3-4).\n\n\n[212.](https://www.openphilanthropy.org/brain-computation-report#footnoteref212_lamjoq9)For example, various results explore the computational role of active computation in the apical dendrite of cortical pyramidal cells (see [London and Häusser (2005)](https://pdfs.semanticscholar.org/81cf/8182d5725d5fdd9a33b65843f2f9fdb6a9f6.pdf) for examples). For results related to dendritic computation that does happen in the retina, see [Taylor et al. (2000)](https://science.sciencemag.org/content/289/5488/2347) and [Hanson et al. (2019)](https://elifesciences.org/articles/42392).\n\n\n[213.](https://www.openphilanthropy.org/brain-computation-report#footnoteref213_r4bf1x7)I’m not sure exactly what grounds this suggestion, but it is consistent with a number of abstract models of dendritic computation. See [Poirazi et al. (2003)](https://www.sciencedirect.com/science/article/pii/S0896627303001491); [Tzilivaki et al. (2019)](https://www.nature.com/articles/s41467-019-11537-7); [Jadi et al. (2014)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4279447/); and [Ujfalussy et al. (2018)](https://www.sciencedirect.com/science/article/pii/S0896627318307372). All of these use sigmoidal non-linearities in dendritic subunits. See e.g. [Ujfalussy et al. (2018)](https://www.sciencedirect.com/science/article/pii/S0896627318307372)“We chose a sigmoid nonlinearity for several reasons. First, the sigmoid has been proposed elsewhere as an appropriate dendritic nonlinearity (Poirazi et al., 2003a, Polsky et al., 2004). Second, under different parameter settings and input statistics, the sigmoid is sufficiently flexible to capture purely linear, sublinear, and supralinear behavior, as well as combinations thereof.”\n\n\n[214.](https://www.openphilanthropy.org/brain-computation-report#footnoteref214_hsy7kch)It is possible to formulate and prove this sort of limitation using graph theory. However, the proof is quite long, and I won’t include it here.\n\n\n[215.](https://www.openphilanthropy.org/brain-computation-report#footnoteref215_xfzl14q)Some assumption is required here to the effect that the non-linearities themselves can’t be that expensive, and/or performed many times in a row. I haven’t explored this much, but I could imagine questions about the interchangeability of nonlinearities in artificial neural networks being relevant (see discussion in next section). [Poirazi et al. 
(2003)](https://www.sciencedirect.com/science/article/pii/S0896627303001491), [Tzilivaki et al. (2019)](https://www.nature.com/articles/s41467-019-11537-7), [Jadi et al. (2014)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4279447/), and [Ujfalussy et al. (2018)](https://www.sciencedirect.com/science/article/pii/S0896627318307372) all use sigmoidal non-linearities, a standard version of which (y = 1 / (1 + exp-x)) appears to be ~4 FLOPs (see “Activation Functions” [here](https://machinethink.net/blog/how-fast-is-my-model/)).\n\n\n[216.](https://www.openphilanthropy.org/brain-computation-report#footnoteref216_tc37def)See the notes from [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/) (p. 5):\n\n\nAs Dr. Marblestone understands this argument, the idea is that while there may well be dendritic non-linearities, you should expect a tree-like structure of local interactions, and activity in one part of the tree can’t exert fast, long-range influence on activity in another part. This rules out scenarios where, for example, any synapse can communicate with any other – a scenario in which required compute could scale with the square of the number of synapses. This argument is consistent with Dr. Marblestone’s perspective, and he thinks it is very interesting, though it would be nice to formalize it more precisely.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Barak Pearlmutter](https://www.openphilanthropy.org/research/professor-barak-pearlmutter-professor-of-computer-science-maynooth-university/) (p. 2):\n\n\nProf. Pearlmutter was sympathetic to the idea that the tree-structure of dendrites would limit the compute burdens that dendritic computation could introduce. There is an important distinction between causal models that are tree-structured and ones that are not tree-structured. Non-tree structured causal model can have cycles that quickly become very computationally expensive, whereas tree structured models are comparatively easy to compute. He suggested that this type of consideration applies to dendrites as well (including in the context of feedbacks between the dendrites and the soma). Prof. Pearlmutter thought it a fairly good intuition that dendritic computation would only implicate a small constant factor increase in required compute, though very complicated local interactions could introduce uncertainty.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Chris Eliasmith](https://www.openphilanthropy.org/research/professor-chris-eliasmith-professor-of-philosophy-and-systems-design-engineering-canada-research-chair-in-theoretical-neuroscience-and-director-of-the-centre-for-theoretical-neuroscience-at-the-uni/) (p. 3):\n\n\nProf. Eliasmith believes that neurons probably have non-linearities in their dendrites. In attempting to construct models of attention, for example, he has found that he needs more model neurons than seem biologically realistic, and the neuron count would go way down if he had certain kinds of non-linearities in the dendrites. Including these non-linearities would not drastically increase compute burdens (it might be equivalent to a 2× increase). A simple version would basically involve treating a single neuron as a two-layer neural network, in which dendrites collect inputs and then perform a non-linearity before passing the output to the soma. Prof. 
Eliasmith is sympathetic to the idea that the tree-structure of dendrites limits the additional complexity that dendritic computation could implicate in the context of such multi-layer networks (e.g., the tree-structure limits the outgoing connections of a dendritic sub-unit, and additional non-linearities in the neuron do not themselves add much compute in a regime where spikes through synapses are already the dominant compute burden). That said, there are many mechanisms in neurons that could in principle make everything more complicated.\n\n\n[217.](https://www.openphilanthropy.org/brain-computation-report#footnoteref217_suebofg)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/): “Prof. Druckmann does not think that appeals to the manageable compute burdens of modeling of dendrites as comparatively small multi-layer neural networks (for example, with each dendritic sub-unit performing its own non-linearity on a subset synaptic inputs) definitively address the possibility that modeling dendritic non-linearities requires very large amounts of compute. Small multi-layer network models are really just a guess about what’s required to capture the neuron’s response to realistic inputs. For example, in a recent unpublished paper, David Beniaguev, Idan Segev, and Michael London found that adding NMDA currents to the detailed model increased the size of the neural network required to replicate its outputs to seven layers (the long time-constant of NMDA receptors increases the complexity of the neuron’s input-output transformation). Adding in other neuron features could require many more layers than this. 10 layers might be manageable, but 500 is a pain, and the true number is not known” (p. 3).\n\n\n[218.](https://www.openphilanthropy.org/brain-computation-report#footnoteref218_e4dml2i)This type of illustration was also suggested by Dr. Amodei.\n\n\n[219.](https://www.openphilanthropy.org/brain-computation-report#footnoteref219_x4qj9fw)See [Poirazi et al. (2003)](https://www.sciencedirect.com/science/article/pii/S0896627303001491); [Tzilivaki et al. (2019)](https://www.nature.com/articles/s41467-019-11537-7); [Jadi et al. (2014)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4279447/); and [Ujfalussy et al. (2018)](https://www.sciencedirect.com/science/article/pii/S0896627318307372).\n\n\n[220.](https://www.openphilanthropy.org/brain-computation-report#footnoteref220_y9dib6e)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/research/dr-paul-christiano-researcher-openai/): “A ReLU costs less than a FLOP. Indeed, it can be performed with many fewer transistors than a multiply of equivalent precision” (p. 6). See [here](https://stackoverflow.com/questions/41251698/how-many-flops-does-tanh-need#:~:text=The%20key%20takeaway%3A%20the%20costs,between%2010%20and%20100%20FLOPs.) for some discussion of the FLOPs costs of a tanh, and [here](https://cs.stackexchange.com/questions/105026/number-of-flops-floating-point-operations-for-exponentiation) for discussion of exponentials. A standard sigmoid activation (y = 1 / (1 + exp-x)) appears to be ~4 FLOPs (see “Activation Functions” [here](https://machinethink.net/blog/how-fast-is-my-model/)). [Poirazi et al. 
(2003)](https://www.sciencedirect.com/science/article/pii/S0896627303001491) use various sigmoids in this vein, see Figure 5.\n\n\n[221.](https://www.openphilanthropy.org/brain-computation-report#footnoteref221_fakywoz)This factor is centrally determined by the ratio of FLOPs per input to FLOPs per non-linearity. This is 10x in the example above, but this is on the high end for non-linearities in ANNs.\n\n\n[222.](https://www.openphilanthropy.org/brain-computation-report#footnoteref222_au7fn0r)Thus, for example, assuming 1000 inputs and a 1 Hz average firing rate, on average there will be one spike through synapse per 1 ms timestep. If we budget 1 FLOP per spike through synapse, but assume 100 dendritic sub-units, each performing non-linearities on 10 synaptic input connections each, and we assume that everything but spikes through synapses must be computed every time-step, we get the following budget per 1 ms timestep:\n\n\n**Point neuron model** (assuming sparse FLOP/s for synaptic transmission): \n\nSoma: 1 FLOPs (average number of input spikes per ms) + 10 FLOPs (non-linearity) \n\nTotal: 11 FLOPs \n\n**Sub-unit model**: \n\nDendrites: 100 (subunits) × (.01 FLOPs (average number spikes through synapse per 10 synapses per ms) + 10 FLOPs (non-linearity)) \n\nSoma: 100 FLOPs (additions from sub-unit outputs) + 10 FLOPs (non-linearity) \n\nTotal: ~1110 FLOPs\n\n\n[223.](https://www.openphilanthropy.org/brain-computation-report#footnoteref223_122mthx)[Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf): “A thorough search of configurations of deep and wide fully-connected neural network architectures (FCNs) have failed to provide a good fit to the I/O characteristics of the L5PC model. These failures suggest a substantial increase in the complexity of I/O transformation compared to that of I&F. Indeed, only temporally convolutional network architecture (TCN) with 7 layers and 128 channels per layer, provided a good fit (Fig. 2B, C Fig. S5)” (p. 7).\n\n\n[224.](https://www.openphilanthropy.org/brain-computation-report#footnoteref224_smjhlmp)[Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf): “We hypothesized that removing NMDA dependent synaptic currents from our L5PC model will significantly decrease the size of the respective DNN… after removing the NMDA voltage dependent conductance, such that the excitatory input relies only on AMPA mediated conductances, we have managed to achieve a similar quality fit as in Fig. 2 when using a much smaller network – a fully connected DNN (FCN) with 128 hidden units and only a single hidden layer (Fig. 3B). This significant reduction in complexity is due to the ablation of NMDA channels” (p. 8-10).\n\n\n[225.](https://www.openphilanthropy.org/brain-computation-report#footnoteref225_nitsi6h)Here’s my estimate, which the lead author tells me looks about right. 1st layer: 1278 synaptic inputs × 35 × 128 = 5.7 million MACCs (from line 140 and lines 179-180 [here](https://github.com/SelfishGene/neuron_as_deep_net/blob/master/fit_CNN.py)); Next 6 layers: 6 layers × 128 × 35 × 128 = 3.4 million MACCs. Total per ms: ~ 10 million MACCs. Total per second: ~10 billion MACCs. Multiplied by 2 to count individual FLOPs (see “It’s dot products all the way down” [here](https://machinethink.net/blog/how-fast-is-my-model/)) = ~20 billion FLOP/s per cell. 
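The arithmetic in this footnote, and the per-timestep budgets in footnote 222 above, can be reproduced in a few lines. Here is a minimal Python sketch (the variable names are mine; every constant is taken from the text above):

```python
# Footnote 222: FLOPs per 1 ms timestep, point neuron vs. dendritic sub-unit model.
point_neuron = 1 + 10                    # 1 spike through synapse + 10 FLOP somatic non-linearity
sub_unit_dendrites = 100 * (0.01 + 10)   # 100 sub-units: 0.01 expected spikes/ms over 10 synapses + 10 FLOP non-linearity each
sub_unit_soma = 100 + 10                 # add the 100 sub-unit outputs + somatic non-linearity
print(point_neuron)                        # 11 FLOPs
print(sub_unit_dendrites + sub_unit_soma)  # ~1110 FLOPs

# Footnote 225: FLOP/s for the 7-layer TCN in Beniaguev et al. (2020),
# using the layer shapes from the cited lines of fit_CNN.py.
first_layer = 1278 * 35 * 128       # ~5.7 million MACCs per 1 ms of simulated time
later_layers = 6 * 128 * 35 * 128   # ~3.4 million MACCs
maccs_per_ms = first_layer + later_layers
flops_per_second = 2 * maccs_per_ms * 1000  # 2 FLOPs per MACC, 1000 ms per second
print(f"{flops_per_second / 1e9:.0f} billion FLOP/s")  # ~18, i.e. roughly 20 billion FLOP/s per cell
```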
Though the authors also note that “the accuracy of the model was insensitive to the temporal kernel sizes of the different DNN layers when keeping the total temporal extent of the entire network fixed, so the temporal extent of the first layer was selected to be larger than subsequent layers mainly for visualization purposes” (p. 7). I’m not sure what kind of difference this might make. Note also that this is still less than the biophysical model itself, which they say ran several orders of magnitude slower: “Note that, despite its seemingly large size, the resulting TCN represents a substantial decrease in computational resources relative to a full simulation of a detailed biophysical model (involving numerical integration of thousands of nonlinear differential equations), as indicated by a speedup of simulation time by several orders of magnitude” (p. 8).\n\n\n[226.](https://www.openphilanthropy.org/brain-computation-report#footnoteref226_5j8664a)[Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf) (p. 15):\n\n\nIt is important to emphasize that, due to optimization, the complexity measure described above is an upper bound of the true computational complexity of the I/O of a single neuron, i.e., it is possible that there exists a much smaller neural network that could mimic the biophysical neuron with a similar degree of accuracy but the training process we used could not find it. Additionally, we note that we have limited our architecture search space only to fully connected (FCN) and temporally convolutional (TCN) neural network architectures. It is likely that additional architectural search could yield even simpler and more compact models for any desired degree of prediction accuracy. In order to facilitate this search in the [sic] scientific community, we hereby release our large readymade [sic] dataset of simulated inputs and outputs of a fully complex single layer 5 cortical neuron in an invivo [sic] like regime so that the community can focus on modelling various aspects of this endeavour and avoid running the simulations themselves.\n\n\n[227.](https://www.openphilanthropy.org/brain-computation-report#footnoteref227_i4s9tjl)[Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf): “now that we estimate that a cortical L5 pyramidal neuron is equivalent to a deep network with 7 hidden layers, this DNN could be used to teach the respective neuron to implement a function which is in the scope of the capabilities of such a network, such as classifying hand written digits or a sequence of auditory sounds. One can then both validate the hypothesis that single neurons could perform complex computational tasks and investigate how these neurons can implement such complex tasks” (p. 16).\n\n\n[228.](https://www.openphilanthropy.org/brain-computation-report#footnoteref228_eq9x5h6)Though see [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/research/dr-paul-christiano-researcher-openai/): “Dr. Christiano is very skeptical of the hypothesis that a single, biological cortical neuron could be used to classify handwritten digits” (p. 6).\n\n\n[229.](https://www.openphilanthropy.org/brain-computation-report#footnoteref229_gukt45d)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. 
Eve Marder](https://www.openphilanthropy.org/research/professor-eve-marder-university-professor-and-victor-and-gwendolyn-beinfield-professor-of-neuroscience-brandeis-university/): “You can see maintaining these rhythms as the high-level function that the circuit is performing at a given time (transitions between modes of operation are discussed below). Neuroscientists had a wiring diagram for the pyloric rhythm in 1980, and there was a fairly good first-principles idea of how it worked back then. It is not too difficult to model tri-phasic rhythm” (p. 1).\n\n\n[230.](https://www.openphilanthropy.org/brain-computation-report#footnoteref230_b1k70h2)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eve Marder](https://www.openphilanthropy.org/research/professor-eve-marder-university-professor-and-victor-and-gwendolyn-beinfield-professor-of-neuroscience-brandeis-university/): “Prof. Marder and her collaborators have used single-compartment conductance models to replicate the rhythms in the stomatogastric ganglion” (p. 4). And from [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “These neurons create oscillations that can be very well modeled and understood using Hodgkin-Huxley type neuron models” (p. 4).\n\n\n[231.](https://www.openphilanthropy.org/brain-computation-report#footnoteref231_6b3e6kd)E.g., if what matters about these rhythms is just that units activate in a certain regular, rhythmic sequence (I’m not sure about the details here, and the full range of dynamics that matter could be much more complicated), it seems possible to create this sort of sequence in a very non-brain-like way. That said, achieving the brain’s level of robustness and flexibility in maintaining these rhythms across different circumstances is a different story.\n\n\n[232.](https://www.openphilanthropy.org/brain-computation-report#footnoteref232_ototbnh)[Prinz et al. (2004)](https://www.nature.com/articles/nn1352): “To determine how tightly neuronal properties and synaptic strengths need to be tuned to produce a given network output, we simulated more than 20 million versions of a three-cell model of the pyloric network of the crustacean stomatogastric ganglion using different combinations of synapse strengths and neuron properties. We found that virtually indistinguishable network activity can arise from widely disparate sets of underlying mechanisms, suggesting that there could be considerable animal-to-animal variability in many of the parameters that control network activity, and that many different combinations of synaptic strengths and intrinsic membrane properties can be consistent with appropriate network performance” (p. 1345). See also [Marder and Goaillard (2006)](http://www.ccnss.org/ccn_2010/materials/pdf/marder/MarderGoaillard2006.pdf) for review of other related findings, for example Figure 2, “Neurons with similar intrinsic properties have different ratios of conductances” (p. 566), Figure 4, “Similar network behavior with different underlying conductances” (p. 569) and Figure 6, “Constancy of network performance despite major size changes during growth” (p. 571). See also [Open Philanthropy’s non-verbatim notes from a conversation with Dr. 
Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “There are important molecular mechanisms at work, but these function to make the circuit robust. For example, across crabs, gene expression levels in equivalent stomatogastric neurons vary a lot, but they are correlated within a given crab, suggesting that there are many different gene expression solutions that can create the same functioning network, and that the cell’s mechanisms are set up to make sure the neurons find such a solution. This system has many different possible states, which can be induced by different neuromodulators. But in any given one of those states, the real-time, fast computation is fairly understandable. Perhaps the whole brain is like that” (p. 4).\n\n\n[233.](https://www.openphilanthropy.org/brain-computation-report#footnoteref233_6ucbfmw)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eve Marder](https://www.openphilanthropy.org/research/professor-eve-marder-university-professor-and-victor-and-gwendolyn-beinfield-professor-of-neuroscience-brandeis-university/): “Biology has found a series of mechanisms that allow the system to transition smoothly between different modes of operation. For example, you can walk slowly or quickly. Although eventually you will change gait. Prof. Marder believes that such smooth transitions are centrally important to understanding brains, especially big brains. The mechanisms involved allow brains to avoid having to fine-tune or find singular solutions. However, most computational models don’t capture these transitions. For example, if you want to capture the behavior of an eight channel neuron with a three channel model, you’ll hit nasty bifurcations. Indeed, one hypothesis is that neurons have many ion channels with overlapping functions because this facilitates smooth transitions between states” (p. 2).\n\n\n[234.](https://www.openphilanthropy.org/brain-computation-report#footnoteref234_9hza8a8)Locusts jump out of the way when you show them a “looming stimulus” – that is, a visual stimulus that grows in size in a manner that mimics an object on a collision course with the locust (see videos [here](https://www.youtube.com/watch?v=Bm40BSZJRck#t=10m10s) and slower-motion [here](https://www.youtube.com/watch?v=5E5MYf9Z8R0)). In a particular locust neuron known as the lobula giant movement detector (LGMD), the firing rate of this neuron increases, peaks, and decreases as collision with the object appears to become imminent, and the peak firing rate occurs with a fixed delay after the object reaches a particular threshold [angular size](https://en.wikipedia.org/wiki/Angular_diameter) on the retina (See [Fotowat and Gabbiani (2011)](https://pdfs.semanticscholar.org/325b/cd539461767e59149bca2803059c89b30d3d.pdf) (p. 4)). [Gabbiani et al. (2002)](https://www.nature.com/articles/nature01190.pdf) hypothesize that this angular size “might be the imaged-based retinal variable used to trigger escape responses in the face of an impending collision. Indeed, a leg flexion (presumably in preparation for an escape jump) has been shown to follow the peak LGMD firing rate with a fixed delay” (p. 320). The LGMD also synapses onto a further neuron – the descending contralateral movement detector (DCMD) – that connects to motor neurons responsible for jumping, and which itself fires every time the LGMD fires. 
The timing of take-off can be very well predicted from the peak firing rate of the DCMD (see [Fotowat and Gabbiani (2011)](https://pdfs.semanticscholar.org/325b/cd539461767e59149bca2803059c89b30d3d.pdf) (p. 12)). What’s more, examination of the physiology of the neuron supports a particular hypothesis about how its biological hardware implements this function. The dendritic tree of the LGMD can be divided into two portions – an excitatory portion and an inhibitory portion. The excitatory portion receives input from the visual system roughly proportionate to the angular velocity (that is, the rate of change of the angular size) of the stimulus raised to the power of two to three, and then outputs positive current roughly proportionate to the logarithm of angular velocity. The inhibitory portion, by contrast, receives input roughly proportionate to the square of the angular size of the stimulus, and outputs negative current in an approximately linear relationship to the angular size of the stimulus (the relationship is actually best described by a sigmoid, but it is treated as linear in the overall model). These positive and negative currents then combine at the spike initiation zone in a manner that results in an overall membrane potential that reflects the sum of the positive and negative currents. The average spiking rate of the neuron is then proportionate to the membrane potential raised to the power three, which is roughly equivalent to an exponential at the relevant scales (see [Jones and Gabbiani (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3752046/), Figure 8, for a description of this hypothesis, together with Christof Koch’s discussion [here](https://www.youtube.com/watch?v=Bm40BSZJRck#t=10m10s)).\n\n\n[235.](https://www.openphilanthropy.org/brain-computation-report#footnoteref235_8iz3kq7)See Fig 1 in [Jadi et al. (2014)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4279447/) for some other examples of circuit models using point neuron models. They cite [Raymond et al. (1996)](https://pubmed.ncbi.nlm.nih.gov/8638157/) for cerebellar circuit models; [Raphael et al. (2010)](https://pubmed.ncbi.nlm.nih.gov/20631172/) for a model of the spinal cord; and [Crick (1984)](https://pubmed.ncbi.nlm.nih.gov/6589612/) for a model of attention. [Grid cells](http://www.scholarpedia.org/article/Grid_cells) might be another example, and the [Jeffress model of auditory coincidence detection](http://www.scholarpedia.org/article/Jeffress_model). See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eve Marder](https://www.openphilanthropy.org/research/professor-eve-marder-university-professor-and-victor-and-gwendolyn-beinfield-professor-of-neuroscience-brandeis-university/): “There are also some circuits in leeches, *C. elegans*, flies, and electric fish that are relatively well-characterized” (p. 4).\n\n\n[236.](https://www.openphilanthropy.org/brain-computation-report#footnoteref236_f3atlod)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Larson](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Stephen%20Larson,%202019-2020.pdf): “There may be selection bias at work in appeals to the success of simple models in some contexts as evidence for their adequacy in general. With respect to phenomena that simple models have thus far failed to explain, such explanation might not be possible” (p. 
4).\n\n\n[237.](https://www.openphilanthropy.org/brain-computation-report#footnoteref237_n3kinnk)I’m partly influenced here by discussions with Dr. Adam Marblestone, see [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “Dr. Marblestone does not think that selection effects nullify the evidence provided by our understanding of peripheral sensory and motor systems. E.g., it’s not that we did experiments on a bunch of systems, and some of them we couldn’t figure out, and some of them we could. Rather, the distribution of neuroscientific success has more to do with our experimental access to peripheral sensory/motor systems, together with differences in the types of theories you would need to have in order to explain more architecturally-complex circuits deeper in the brain. Similarly, Dr. Marblestone does not think that the fact that we can’t simulate *C. elegans* is a good argument for any kind of special computation taking place within *C. elegans* neurons. Lots of other explanations are available: notably, that it’s very difficult to figure out the right parameters” (p. 8). See also the section in the notes from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Markus Meister](https://www.openphilanthropy.org/research/professor-markus-meister-anne-p-and-benjamin-f-biaggini-professor-of-biological-sciences-at-the-california-institute-of-technology/) entitled “Scientific advantages of peripheral systems” (p. 2-3), as well as [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eve Marder](https://www.openphilanthropy.org/research/professor-eve-marder-university-professor-and-victor-and-gwendolyn-beinfield-professor-of-neuroscience-brandeis-university/) (p. 4), section title: “The epistemic barriers to understanding circuits.”\n\n\n[238.](https://www.openphilanthropy.org/brain-computation-report#footnoteref238_t18s8xs)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Larson](https://www.openphilanthropy.org/research/dr-stephen-larson-ceo-of-metacell-and-co-founder-of-openworm/), who works on the [OpenWorm project](https://royalsocietypublishing.org/doi/10.1098/rstb.2017.0382): “Despite its small size, we do not yet have a model that captures even 50% of the biological behavior of the *C. elegans* nervous system. This is partly because we’re just getting to the point of being able to measure what the worm’s nervous system is doing well enough” (p. 1). David Dalrymple, who used to work on emulating *C. elegans*, [writes](https://www.lesswrong.com/posts/XhHetxjWxZ6b85HK9/whole-brain-emulation-looking-at-progress-on-c-elgans?commentId=wwwhhRufNfuNTSmQy): “What you actually need is to functionally characterize the system’s dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.” [Sarma et al. (2018)](https://royalsocietypublishing.org/doi/10.1098/rstb.2017.0382#RSTB20170382TB2), in an overview of OpenWorm’s progress, write: “The level of detail that we have incorporated to date is inadequate for biological research. 
A key remaining component is to complete the curation and parameter extraction of Hodgkin–Huxley models for ion channels to produce realistic dynamics in neurons and muscles” (Section 3).\n\n\n[239.](https://www.openphilanthropy.org/brain-computation-report#footnoteref239_6xq37ft)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “Some neural circuits, like ones in the spinal cord, are very simple. And one can imagine primitive synapses, involved in primitive computations like “if you get some dopamine, move this part of the jellyfish like so.” Genetic programs build these machines on the basis of relatively simple specifications, and you have to be able to reliably repurpose these machines without every molecule mattering. Dr. Marblestone expects that evolution proceeded by reusing and recombining these relatively simple, reliable components” (p. 4-5). See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Jared Kaplan](https://www.openphilanthropy.org/research/professor-jared-kaplan-professor-of-physics-johns-hopkins-university/): “It is theoretically possible that there is a large amount of additional computation taking place within neurons, but this seems very implausible, and Prof. Kaplan finds it difficult to evaluate arguments that condition on this possibility. One reason this seems implausible is that neurons aren’t that different across species, and it does not seem plausible to Prof. Kaplan that in simple species with very few neurons, large amounts of computation are taking place inside the neurons. One would need a story about when this complex internal computation developed in the evolutionary history of neurons” (p. 2-3).\n\n\n[240.](https://www.openphilanthropy.org/brain-computation-report#footnoteref240_gwqlnq7)Though see also comments from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/): “The brain was not engineered. Rather, it evolved, and evolution works by adding complexity, rather than by simplification. There are good reasons for this complexity. In order to evolve, you can’t have systems, at any level (proteins, channels, cells, brain regions), with unique functions. If you did, and a single mutation knocked out the function, the whole system would crash. Whereas if you have overlapping functions, performance suffers somewhat, but something else can take over. If you don’t allow for this, you can’t evolve, since evolution works by random mutations, and most mutations are not positive” (p. 4).\n\n\n[241.](https://www.openphilanthropy.org/brain-computation-report#footnoteref241_17adylm)Dr. Dario Amodei suggests considerations in this vein, though I’m not sure I’ve understood what he has in mind. See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Jared Kaplan](https://www.openphilanthropy.org/research/professor-jared-kaplan-professor-of-physics-johns-hopkins-university/): “most of his probability mass on the hypothesis that most of the computation performed by the brain is visible as information transferred between synapses… It is theoretically possible that there is a large amount of additional computation taking place within neurons, but this seems very implausible” (p. 
2); and my discussions of the communication method with Dr. Paul Christiano, see [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/research/dr-paul-christiano-researcher-openai/). That said, Amodei, Christiano, and Kaplan all work at the same organization (OpenAI), so their beliefs and arguments may be correlated due to internal discussion.\n\n\n[242.](https://www.openphilanthropy.org/brain-computation-report#footnoteref242_czywoue)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/research/dr-paul-christiano-researcher-openai/): “Neurons receive only a limited number of bits in, and they output only a limited number of bits. However, in principle, you can imagine computational elements receiving encodings of computationally intensive problems via their synaptic inputs (e.g., “is this boolean formula satisfiable?”), and then outputting one of a comparatively small set of difficult-to-arrive-at answers.” (p. 6).\n\n\n[243.](https://www.openphilanthropy.org/brain-computation-report#footnoteref243_mhyh6e0)Here I’m using a rough estimation method suggested by Dr. Paul Christiano, from [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/research/dr-paul-christiano-researcher-openai/): “You can roughly estimate the bandwidth of axon communication by dividing the firing rate by the temporal resolution of spiking. Thus, for example, if the temporal precision is 1 ms, and neurons are spiking at roughly 1 Hz, then each spike would communicate ~10 bits of information (e.g., log2(1000)). If you increase the temporal precision to every microsecond, that’s only a factor of two difference (e.g., log2(1,000,000) = ~20 bits)” (p. 2). There is a large literature on the information carried by action potentials that I’m not engaging with. See [Dayan and Abbott (2001)](https://www.amazon.com/Theoretical-Neuroscience-Computational-Mathematical-Modeling/dp/0262541858), Chapter 4 (p. 123-150); [Zador (1998)](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.125.8765&rep=rep1&type=pdf); [Tsubo et al. (2012)](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002461), [Fuhrmann et al. (2001)](https://lobster.ls.huji.ac.il//idan/files/Fuhrmann_etal_2002.pdf), [Mainen and Sejnowski (1995)](http://www.math.pitt.edu/~bard/classes/compneuro/mainensej.pdf), and [van Steveninck et al. (1997)](https://pubmed.ncbi.nlm.nih.gov/9065407/).\n\n\n[244.](https://www.openphilanthropy.org/brain-computation-report#footnoteref244_3jt96bk)See [here](https://lists.gforge.inria.fr/pipermail/cado-nfs-discuss/2019-December/001139.html), and more discussion of the difficulties [here](https://eprint.iacr.org/2010/006.pdf).\n\n\n[245.](https://www.openphilanthropy.org/brain-computation-report#footnoteref245_jsg0ogd)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Markus Meister](https://www.openphilanthropy.org/research/professor-markus-meister-anne-p-and-benjamin-f-biaggini-professor-of-biological-sciences-at-the-california-institute-of-technology/): “Prof. Meister thinks that people often overestimate the sophistication of the tasks that humans perform, which tend to involve low-bandwidth outputs. 
People have measured the bits per second involved in different types of motor outputs (e.g., typing, playing piano, athletics, speaking speed, etc.), and the numbers are in the range of 10-40 bits per second. Similarly, people have tried to measure the information rate of human thought (for example, by seeing how much information humans can retain per second in reading), and it’s in the same ballpark” (p. 5).\n\n\n[246.](https://www.openphilanthropy.org/brain-computation-report#footnoteref246_drpfhd2)[Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf): “The most common type of excitatory neuron in mammalian neocortex, namely the regular spiking (RS) cell, fires tonic spikes with decreasing frequency, as in Fig. 1(f). That is, the frequency is relatively high at the onset of stimulation, and then it adapts. Low-threshold spiking (LTS) inhibitory neurons also have this property. The interspike frequency of such cells may encode the time elapsed since the onset of the input” (p. 1064); “Most cortical neurons fire spikes with a delay that depends on the strength of the input signal. For a relatively weak but superthreshold input, the delay, also called spike latency, can be quite large, as in Fig. 1(i). The RS cells in mammalian cortex can have latencies of tens of ms. Such latencies provide a spike-timing mechanism to encode the strength of the input” (p. 1065).\n\n\n[247.](https://www.openphilanthropy.org/brain-computation-report#footnoteref247_wz798tg)[Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf): “The most efficient is the I&F model. However, the model cannot exhibit even the most fundamental properties of cortical spiking neurons, and for this reason it should be avoided by all means. The only advantage of the I&F model is that it is linear, and hence amenable to mathematical analysis. If no attempts to derive analytical results are made, then there is no excuse for using this model in simulations” (p. 1069). See also [Jolivet et al. (2008b)](https://papers.nips.cc/paper/2858-integrate-and-fire-models-with-adaptation-are-good-enough.pdf): “What follows from the results of challenge A displayed in Tables 1 and 2 is that standard leaky integrate-and-fire models or other off-the-shelf methods are not sufficient to account for the variety of firing patterns and firing rates generated by a single neuron. The conclusion is that one has to include some dynamics in the threshold so as to achieve two things: first, to account in some rough fashion for neuronal refractoriness, and, second, to gain some flexibility in matching the mean firing rates across different stimulation paradigms. We had already shown that predicting subthreshold membrane voltage is relatively easy ([Jolivet et al. (2006a)](https://www.zora.uzh.ch/id/eprint/156190/1/ZORA_NL_156190.pdf)). Predicting the exact timing of spikes is where the difficulty resides” (p. 425).\n\n\n[248.](https://www.openphilanthropy.org/brain-computation-report#footnoteref248_4yfu91t)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Dong Song](https://www.openphilanthropy.org/research/professor-dong-song-research-associate-professor-department-of-biomedical-engineering-university-of-southern-california/): “The functional impact of ion channel dynamics in the context of a Hodgkin-Huxley model is highly redundant. This makes Prof. Song think that Hodgkin-Huxley models can be simplified – e.g. 
you can replicate the input-output behavior of the Hodgkin-Huxley model, with fewer equations. Indeed, this almost has to be the case. There are also studies that show that many different combinations of ionic channels can generate the same overall behavior, both for a single neuron and a small neuronal circuit” (p. 2).\n\n\n[249.](https://www.openphilanthropy.org/brain-computation-report#footnoteref249_525djbd)He cites [Hoppensteadt and Izhikevich (2001)](https://www.izhikevich.org/publications/arbib.pdf), in which he goes into more detail: “Briefly, a model is canonical for a family if there is a continuous change of variables that transforms any other model from the family into this one, as we illustrate in Figure 1. For example, the entire family of weakly coupled oscillators of the form (1) can be converted into the canonical phase model (6), where Hij depend on the particulars of the functions fi and gij. The change of variables does not have to [be] invertible, so the canonical model is usually lower-dimensional, simple, and tractable. Yet, it retains many important features of the family. For example, if the canonical model has multiple attractors, then each member of the family has multiple attractors..” (p. 1).\n\n\n[250.](https://www.openphilanthropy.org/brain-computation-report#footnoteref250_zhg31x8)Here is a summary of recent AI progress from [Hassabis et al. (2017)](https://www.cell.com/neuron/pdf/S0896-6273(17)30509-3.pdf): “In AI, the pace of recent research has been remarkable. Artificial systems now match human performance in challenging object recognition tasks ([Krizhevsky et al. (2012)](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks)) and outperform expert humans in dynamic, adversarial environments such as Atari video games ([Mnih et al. (2015)](https://www.nature.com/articles/nature14236)), the ancient board game of Go ([Silver et al. (2016)](https://www.nature.com/articles/nature16961)), and imperfect information games such as heads-up poker ([Moravčík et al. (2017)](https://arxiv.org/pdf/1701.01724.pdf)). Machines can autonomously generate synthetic natural images and simulations of human speech that are almost indistinguishable from their real-world counterparts ([Lake et al. (2015)](https://web.mit.edu/cocosci/Papers/Science-2015-Lake-1332-8.pdf), [van den Oord et al. (2016)](https://arxiv.org/pdf/1609.03499.pdf)), translate between multiple languages ([Wu et al. (2016)](https://arxiv.org/pdf/1609.08144.pdf)), and create “neural art” in the style of well-known painters ([Gatys et al. (2015)](https://arxiv.org/pdf/1508.06576.pdf))” (p. 250). See also [LeCun et al. (2015)](https://www.nature.com/articles/nature14539) for a review of deep learning progress. Other recent advances include [OpenAI et al. (2019)](https://arxiv.org/pdf/1912.06680.pdf), [Vinyals et al. (2019)](https://www.nature.com/articles/s41586-019-1724-z.epdf?author_access_token=lZH3nqPYtWJXfDA10W0CNNRgN0jAjWel9jnR3ZoTv0PSZcPzJFGNAZhOlk4deBCKzKm70KfinloafEF1bCCXL6IIHHgKaDkaTkBcTEv7aT-wqDoG1VeO9-wO3GEoAMF9bAOt7mJ0RWQnRVMbyfgH9A%3D%3D), [Radford et al. (2019)](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf), [Brown et al. 
(2020)](https://arxiv.org/abs/2005.14165).\n\n\n[251.](https://www.openphilanthropy.org/brain-computation-report#footnoteref251_82ztpws)See [Kriegeskorte (2015)](https://www.annualreviews.org/doi/full/10.1146/annurev-vision-082114-035447?url_ver=Z39.88-2003&rfr_id=ori%3Arid%3Acrossref.org&rfr_dat=cr_pub%3Dpubmed) and [Nielsen’s “Neural Networks and Deep Learning”](http://neuralnetworksanddeeplearning.com/index.html) for general introductions.\n\n\n[252.](https://www.openphilanthropy.org/brain-computation-report#footnoteref252_mfzf562)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/research/professor-eric-jonas-assistant-professor-of-computer-science-at-the-university-of-chicago/): “Prof. Jonas does not think that there is a clear meaning to the claim that the brain is a deep learning system, and he is unconvinced by the argument that ‘the brain is doing optimization, and what is deep learning but optimization?’. He also has a long-term prior that researchers are too quick to believe that the brain is doing whatever is currently popular in machine learning, and he doesn’t think we’ve found the right paradigm yet” (p. 3).\n\n\n[253.](https://www.openphilanthropy.org/brain-computation-report#footnoteref253_4ukxme1)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/): “In the early days of neural networks, people thought you needed sigmoid activation functions, and that piecewise linear models could not work because they are not differentiable. But it turns out that computers can handle the function having one non-differentiable point, so the two are largely interchangeable, and it’s fine to go with the more convenient option. The main constraint is that the function needs to be monotonically increasing. This is an example of a case in which the precise function generating a neuron’s output does not matter” (p. 2). See also [Kriegeskorte (2015)](https://www.annualreviews.org/doi/full/10.1146/annurev-vision-082114-035447?url_ver=Z39.88-2003&rfr_id=ori%3Arid%3Acrossref.org&rfr_dat=cr_pub%3Dpubmed): “The particular shape of the nonlinear activation function does not matter to the class of input–output mappings that can be represented” (p. 422); and [Tegmark (2017)](https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1586106499&sr=8-1): “It’s been proven that almost any function will suffice as long as it’s not linear (a straight line)” (p. 
72, endnote 5).\n\n\n[254.](https://www.openphilanthropy.org/brain-computation-report#footnoteref254_xg5lnbd)See Matthew Botvinick’s comments in [this podcast](https://www.youtube.com/watch?v=3t06ajvBtl0): “I consider the networks we use in deep learning research to be a reasonable approximation to the mechanisms that carry information in the brain…If you go back to the 1980s, there’s an unbroken chain of research in which a particular strategy is taken, which is: hey, let’s train a deep learning system, let’s train a multi-layer neural network, on this task that we trained our rat on, or our monkey on, or this human being on, and let’s look at what the units deep in the system are doing, and let’s ask whether what they’re doing resembles what we know about what neurons deep in the brain are doing; and over and over and over and over and over, that strategy works, in the sense that, the learning algorithms that we have access to, which typically center on backpropagation, they give rise to patterns of activity, patterns of response, patterns of neuronal behavior in these artificial models, that look hauntingly similar to what you see in the brain. Is that a coincidence? … the circumstantial evidence is overwhelming” (see (53:00-1:00:00 [here](https://www.youtube.com/watch?v=3t06ajvBtl0)).\n\n\n[255.](https://www.openphilanthropy.org/brain-computation-report#footnoteref255_eybco2x)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/research/dr-paul-christiano-researcher-openai/), (p. 6).\n\n\n[256.](https://www.openphilanthropy.org/brain-computation-report#footnoteref256_5mfhs6m)[Sandberg (2013)](https://link.springer.com/chapter/10.1007/978-3-642-31674-6_19): “The noise level in the nervous system is fairly high, with spike-timing variability reaching milliseconds due to ion channel noise. Perceptual thresholds and motor precision are noise limited. Various noise management solutions such as redundant codes, averaging and bias have evolved ([Faisal et al. (2008)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2631351/)). In synapses the presynaptic transient influx of calcium ions as a response to an action potential corresponds to just 13,000 ions ([Koch (1999)](https://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999)) (p. 458), and on the postsynaptic side just 250 ions ([Koch (1999)](https://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999))(p. 302). These numbers are so small that numeric noise begins to be significant, and the chemical dynamics can no longer be described as average concentrations. However, biological systems can resist the discretization noise through error correction mechanisms that lead to discrete attractor dynamics, in line with the evidence that synaptic plasticity involve discrete changes rather than graded response ([Ajay and Bhalla (2006)](https://journals.physiology.org/doi/full/10.1152/physiol.00009.2006) [Bhalla (2004)](http://www.sciencedirect.com/science/article/pii/S0006349504735596) and [Elliott (2011)](https://www.mitpressjournals.org/doi/abs/10.1162/NECO_a_00088)). It is hence not implausible that there exist sufficient scale separation on the synaptic and neuronal level: information is transmitted in a discrete code (with a possible exception of timing) between discrete entities. 
At finer resolution thermal and chemical noise will be significant, suggesting that evolution would have promoted error correction and hence scale separation” (p. 261). See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/research/professor-konrad-kording-penn-integrated-knowledge-professor-university-of-pennsylvania/): “If you want upper bounds on required compute, you can look at the parts list of the computing elements in the brain, the noisiness of which will put physical limits on the amount of computation they can do. This might result in very high estimates. For example, it might say that every ion channel does a bit roughly every ten milliseconds. This approach doesn’t necessarily rule out molecules and proteins as possible avenues of computation. However, some molecules may equilibrate so fast that you can replace them with a variable that describes their average state (e.g., mean field theory is applicable). You can’t do this across a neuron: there are NMDA spikes and other complexities. So the question is: what is the compartment size where local averaging is possible? People disagree. Some think the brain has organized itself to be mean-field modelable, but they have never shown much evidence for that. Still, at some length-scale (say, ten micrometers) and some time-scale (much faster than electrophysiology), everything will equilibrate” (p. 4).\n\n\n[257.](https://www.openphilanthropy.org/brain-computation-report#footnoteref257_n1eglo4)[Gerstner and Naud (2009)](https://science.sciencemag.org/content/326/5951/379.long): “Opinions strongly diverge on what constitutes a good model of a neuron” (p. 379). [Herz et al. (2006)](http://www.ini.ethz.ch/~cwang/ModelingSingleNeuron.pdf): “Even today, it remains unclear which level of single-cell modeling is appropriate to understand the dynamics and computations carried out by such large systems” (p. 83-4). [Kriegeskorte (2015)](https://www.annualreviews.org/doi/full/10.1146/annurev-vision-082114-035447?url_ver=Z39.88-2003&rfr_id=ori%3Arid%3Acrossref.org&rfr_dat=cr_pub%3Dpubmed): “Opinions diverge as to whether more biologically detailed models will ultimately be needed” (see section: “What is meant by the term neural network?”). Gabriel Kreiman, in [this talk](https://ocw.mit.edu/resources/res-9-003-brains-minds-and-machines-summer-course-summer-2015/unit-1.-neural-circuits-of-intelligence/lecture-1.2-gabriel-kreiman-computational-roles-of-neural-feedback/) (8:00): “What’s the exact resolution at which we should study neural systems is a fundamental open question, we don’t know what’s the right level of abstraction. There are people who think about brains in the context of blood flow and millions and millions of neurons averaged together. There are people who think we need to actually pay attention to the exact details of how every single dendrite integrates information and so on. For many of us this is a sufficient level of abstraction, the notion that there is a neuron that can integrate information.” [Dayan and Abbott (2001)](https://www.amazon.com/Theoretical-Neuroscience-Computational-Mathematical-Modeling/dp/0262541858): “It is often difficult to identify the appropriate level of modeling for a particular problem” (p. xiii). See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. E.J.
Chichilnisky](https://www.openphilanthropy.org/research/professor-e-j-chichilnisky-john-r-adler-professor-of-neurosurgery-and-professor-of-ophthalmology-at-stanford-university/): “Discussion of the compute sufficient to replicate the brain’s information-processing is very speculative. We don’t know enough about the brain to give answers with confidence, and different people with neuroscientific expertise will answer differently” (p. 1); from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Barak Pearlmutter](https://www.openphilanthropy.org/research/professor-barak-pearlmutter-professor-of-computer-science-maynooth-university/): “Mr. Carlsmith asked Prof. Pearlmutter about his views about the level of modeling detail necessary to create brain models that can replicate task performance. Prof. Pearlmutter suggested that “the truth is: we don’t know,” and that while we may have intuitions, science has shown us that intuitions are not very reliable” (p. 1). See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/research/professor-konrad-kording-penn-integrated-knowledge-professor-university-of-pennsylvania/), [Prof. Eric Jonas](https://www.openphilanthropy.org/research/professor-eric-jonas-assistant-professor-of-computer-science-at-the-university-of-chicago/), and [Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/).\n\n\n[258.](https://www.openphilanthropy.org/brain-computation-report#footnoteref258_yttl1ox)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/): “Modeling neural networks at the level of simple spiking neuron models or rate-based models is very popular. Prof. De Schutter thinks the field would benefit from a greater diversity of approaches” (p. 2); from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/) “The field has basically given up on detailed biophysical modeling. In the 1990s, there were many papers in top journals on the topic, but now there are almost none. Prof. Druckmann expects that the large majority of people who do not work in early sensory systems would say that detailed biophysical modeling is unnecessary for understanding the brain’s computation” (p. 7).\n\n\n[259.](https://www.openphilanthropy.org/brain-computation-report#footnoteref259_ep1jd3l)[Herz et al. (2006)](http://www.ini.ethz.ch/~cwang/ModelingSingleNeuron.pdf): “The appropriate level of description depends on the particular goal of the model. Indeed, finding the best abstraction level is often the key to success” (p. 80). [Pozzorini et al. (2015)](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004275): “Detailed biophysical models with stochastic ion channel dynamics can in principle account for every aspect of single-neuron activity; however, due to their complexity, they require high computational power… Overall, a reliable and efficient fitting procedure for detailed biophysical models is not known” (p. 2). 
[Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf): “The [Hodgkin-Huxley] model is extremely expensive to implement… one can use the Hodgkin–Huxley formalism only to simulate a small number of neurons or when simulation time is not an issue” (p. 1069). [Dayan and Abbott (2001)](https://www.amazon.com/Theoretical-Neuroscience-Computational-Mathematical-Modeling/dp/0262541858): “A frequent mistake is to assume that a more detailed model is necessarily superior. Because models act as bridges between levels of understanding, they must be detailed enough to make contact with the lower level yet simple enough to provide clear results at the higher level” (p. xiii). [Beniaguev et al. (2019)](https://www.biorxiv.org/content/biorxiv/early/2019/04/18/613141.full.pdf): “Simulation of compartmental models entails numerically solving thousands of coupled nonlinear differential equations which is computationally intensive ([Segev and Rall (1998)](https://pubmed.ncbi.nlm.nih.gov/9829684/); [Burke (2000)](https://pubmed.ncbi.nlm.nih.gov/10946991/)). Moreover, while the simulation provides good fit to data, it is not optimized for providing conceptual understanding of the process by which it is achieved” (p. 14). [Kobayashi et al. (2009)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2722979/): “It has recently become possible to use elaborate simulation platforms, such as NEURON ([Hines and Carnevale (1997)](https://pubmed.ncbi.nlm.nih.gov/9248061/)) and GENESIS ([Bower and Beeman (1995)](https://www.amazon.com/Book-GENESIS-Exploring-Realistic-SImulations/dp/0387940197)), for reproducing experimental data. Because of nonlinearity and complexity, however, parameter optimization of the HH type models is a notoriously difficult problem ([Achard and De Schutter (2006)](https://pubmed.ncbi.nlm.nih.gov/16848639/); [Goldman et al. (2001)](https://pubmed.ncbi.nlm.nih.gov/11438598/); [Huys et al. (2006)](https://pubmed.ncbi.nlm.nih.gov/16624998/)), and these models require a high computational cost, which hinders performing the simulation of a massively interconnected network” (p. 1).\n\n\n[260.](https://www.openphilanthropy.org/brain-computation-report#footnoteref260_rs9m7et)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/): “The best way forward is to try to explore and understand the function of the brain’s underlying mechanisms – a project that may eventually lead to an understanding of what can be simplified. But to try to simplify things too early, before you understand them, is a dangerous game” (p. 1); from [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Larson](https://www.openphilanthropy.org/research/dr-stephen-larson-ceo-of-metacell-and-co-founder-of-openworm/): “OpenWorm’s approach is to throw as much complexity into the neuron models as they think is necessary (this is currently roughly at the level of a Hodgkin-Huxley model, plus some additional features), in an effort to really nail down that their model is capturing the worm’s behavior across many conditions and timescales. Success in such a project would allow you to bound the complexity necessary for such a simulation (indeed, this is one of Dr. Larson’s motivations for working on it). After that, you could attempt to simplify the model in a principled way.
However, the jury is still out on how much simplification is available, and Dr. Larson thinks that in this kind of uncertain context, you should focus on the worst-case, most conservative compute estimates as your default” (p. 2).\n\n\n[261.](https://www.openphilanthropy.org/brain-computation-report#footnoteref261_p085zba)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/) (p. 1-2):\n\n\nProf. Zador believes that integrate-and-fire neuron models, or something like them, are adequate to capture the contribution of a neuron to the brain’s information-processing. He does not think that Hodgkin-Huxley-type models are required, or that we need to include the details of synaptic conductances in our models. However, he believes that the temporal dynamics of spiking are important. That is, it matters that there are discrete spikes, occurring at particular moments in time, which are the conduit of information between neurons…That said, he does not think that the nuances of how these spikes are generated matter very much. The integrate and fire model is one mathematically tractable model, but there are others which, if more mathematically tractable, would be fine as well.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Dong Song](https://www.openphilanthropy.org/research/professor-dong-song-research-associate-professor-department-of-biomedical-engineering-university-of-southern-california/) (p. 1-2):\n\n\nIn his view, to replicate intelligence at a level similar to humans (as opposed to some more detailed level of simulation accuracy), you don’t need to model quantum phenomena, or ionic channels, or even Hodgkin-Huxley-level dynamics. Rather, a spiking neuron model, with a rich array of input-output behavior, is sufficient. That said, certain simplified spiking neuron models are probably not sufficient. These included linear integrate-and-fire neurons, the Izhikevich model (a simplified version of the Hodgkin-Huxley model), and the models used in Prof. Song’s MIMO model.\n\n\nProf. Chris Eliasmith, whose [large-scale brain model SPAUN](https://science.sciencemag.org/content/338/6111/1202.abstract) uses leaky-integrate-and-fire neurons (see p. 16 [here](https://science.sciencemag.org/content/sci/suppl/2012/11/28/338.6111.1202.DC1/1225266.Eliasmith.SM_revised.pdf)), thought such neuron models likely adequate for task-performance (see [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Chris Eliasmith](https://www.openphilanthropy.org/research/professor-chris-eliasmith-professor-of-philosophy-and-systems-design-engineering-canada-research-chair-in-theoretical-neuroscience-and-director-of-the-centre-for-theoretical-neuroscience-at-the-uni/) (p. 5)):\n\n\nProf. Eliasmith thinks that neuron models at roughly the level of detail he uses in SPAUN (possibly including some non-linearities in the dendrites), if scaled up to the size of the brain as a whole, would be able not just to replicate cognitive performance, but also to reflect a functional profile similar to biological neurons.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. 
Markus Meister](https://www.openphilanthropy.org/research/professor-markus-meister-anne-p-and-benjamin-f-biaggini-professor-of-biological-sciences-at-the-california-institute-of-technology/) (p.1-4):\n\n\nThe computations performed in the retina are fairly well-understood… If your goal is to predict the spiking outputs of the retina, you don’t need a highly intricate model (for example, you don’t have to simulate the details of every neuron using multi-compartmental models). Rather, you can use very compact models known as “point neuron models,” which you can connect together with simple synapses.… To create a functional model of the whole retina, in the extreme case you’d need a point-neuron model for every cell. However, you can probably get away with less than that, because there are a lot of regularities that can be simplified computationally.… Prof. Meister would be sympathetic to scaling up from the retina as a way of putting an upper limit on the difficulty of simulating the brain as a whole. Prof. Meister has not actually done this back-of-the-envelope calculation, but budgeting based on the rate at which action potentials arrive at synapses, multiplied by the number of synapses, seems like roughly the right approach. … There is evidence that single point neuron models are not sufficient to explain all neural phenomena. For example, in cortical pyramidal cells, the basal dendrites and soma operate with different dynamics than the apical tuft. Using two point-neuron models (one for the soma, and another for the apical tuft), you can capture this fairly well. These are more powerful models, but they are not dramatically more computationally complex: e.g., it’s basically a factor of two.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Baccus](https://www.openphilanthropy.org/research/professor-stephen-baccus-professor-of-neurobiology-stanford-university/) (p. 5):\n\n\nTo build a functional computational model of the retina as a whole, you could use a linear filter and a threshold as a model unit, and you could have something like one model unit per cell in the retina. However, in some of Prof. Baccus’s models, they have less than this. Whether you’d need e.g. one model unit for every interneuron, or one for every two or three interneurons, isn’t clear, but it’s around that order of magnitude. Prof. Baccus does not think simulating more complex aspects of neuron biology, like dendrites, compartments and ion channels, would be necessary for replicating the retina’s input-output relationship…Prof. Baccus thinks the answer is “maybe” to the question of whether the compute necessary to model neurons in the retina will be similar to the compute necessary to model neurons in the cortex. You might expect a volume by volume comparison to work as a method of scaling up from the retina to the cortex.\n\n\nDr. Adam Marblestone offered an estimate that seemed to assume that firing decisions would be in the noise. From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/) (p. 9):\n\n\nDr. Marblestone is fairly comfortable with one FLOP per spike through synapse as a low-end estimate, and ~100 FLOPs per spike through synapse (roughly comparable to the estimate offered by Prof. Rahul Sarpeshkar) as a high-end estimate. His best guess is 10-100 FLOPs per spike through synapse.\n\n\nProf. 
Barak Pearlmutter said something similar, and he was sympathetic to the idea that dendritic computation would add only a small constant factor. From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Barak Pearlmutter](https://www.openphilanthropy.org/research/professor-barak-pearlmutter-professor-of-computer-science-maynooth-university/) (p. 2-4):\n\n\nProf. Pearlmutter thought that the compute for firing decisions would be “in the noise” relative to compute for spikes through synapses, because there are so many fewer neurons than synapses… Prof. Pearlmutter thought it a fairly good intuition that dendritic computation would only implicate a small constant factor increase in required compute, though very complicated local interactions could introduce uncertainty… Overall, Prof. Pearlmutter thought that an estimate based on 100 FLOPs per spike through synapse, with a factor of two for learning, sounded fairly reasonable.\n\n\n[262.](https://www.openphilanthropy.org/brain-computation-report#footnoteref262_bq51aob)A number of experts we engaged with indicated that many in the field are sympathetic to the adequacy of models less compute-intensive than single-compartment Hodgkin-Huxley (though we have very few comments in this respect publicly documented), and it fits with my impressions more broadly. See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/): “The field has basically given up on detailed biophysical modeling. In the 1990s, there were many papers in top journals on the topic, but now there are almost none. Prof. Druckmann expects that the large majority of people who do not work in early sensory systems would say that detailed biophysical modeling is unnecessary for understanding the brain’s computation” (p. 7) (though whether Hodgkin-Huxley would fall under *“detailed”* biophysical modeling isn’t totally clear to me).\n\n\n[263.](https://www.openphilanthropy.org/brain-computation-report#footnoteref263_14y6joe)Jonathan Pillow says in a [lecture](https://youtu.be/NFeGW5ljUoI?t=968): “Obviously if I simulate the entire brain using multi-compartment Hodgkin-Huxley models that describe the opening and closing of every channel, clearly that model has the capacity to do anything that the brain can do” (16:10). [Pozzorini et al. (2015)](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004275) write: “Detailed biophysical models with stochastic ion channel dynamics can in principle account for every aspect of single-neuron activity” (p. 2). [Beniaguev et al. (2019)](https://www.biorxiv.org/content/biorxiv/early/2019/04/18/613141.full.pdf): “Thanks to the introduction of compartmental models ([Rall (1964)](https://scinapse.io/papers/93995994)) and digital anatomical reconstructions, we can now account for nearly all those experimental phenomena, as well as explore conditions that are not accessible with current experimental technique. In that sense we have developed along the last 50 or so years a faithful model of the input-output transformation of neurons” (p.
14).\n\n\n[264.](https://www.openphilanthropy.org/brain-computation-report#footnoteref264_mgq61oi)Workshop participants included: John Fiala, Robin Hanson, Kenneth Jeffrey Hayworth, Todd Huffman, Eugene Leitl, Bruce McCormick, Ralph Merkle, Toby Ord, Peter Passaro, Nick Shackel, Randall A. Koene, Robert A. Freitas Jr and Rebecca Roache. From a brief google, a number of these people appear to be involved in the Brain Preservation Foundation, and some (such as Toby Ord and Rebecca Roache) are philosophers rather than neuroscientists. [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf): “An informal poll among workshop attendees produced a range of estimates of the required resolution for WBE is. The consensus appeared to be level 4‐6. Two participants were more optimistic about high level models, while two suggested that elements on level 8‐9 may be necessary at least initially (but that the bulk of mature emulation, once the basics were understood, could occur on level 4‐5).” (p 14).\n\n\n[265.](https://www.openphilanthropy.org/brain-computation-report#footnoteref265_xs4cr17)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Larson](https://www.openphilanthropy.org/research/dr-stephen-larson-ceo-of-metacell-and-co-founder-of-openworm/) (p. 5):\n\n\nOn the basis of his experience at OpenWorm thus far, Dr. Larson thinks it unlikely that very simplified neuron models (e.g., integrate-and-fire neurons, or models akin to the artificial neurons used in deep neural networks) are going to be sufficient to describe the information-processing dynamics involved in the worm’s behavior…. Dr. Larson does not think that there is strong evidence that spikes and synaptic inputs are the most informative processes for studying information-processing in the brain… Given the many uncertainties involved in estimates of this kind, Dr. Larson believes that the right conclusion is something like: there is insufficient evidence to justify concluding anything (as opposed to, e.g., “there is some moderate evidence in favor of X FLOP/s, so maybe let’s believe that?”). In statistics, for example, one wants a P value less than 0.05, and Dr. Larson is not sure we have anything like that for these FLOP/s estimates.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/):\n\n\nProf. De Schutter thinks that at this point, we simply are not in a position to place any limits on the level of biological detail that might be relevant to replicating the brain’s task-performance. Many common simplifications do not have solid scientific foundations, and are more at the level of ‘the way we do things.’\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/research/professor-eric-jonas-assistant-professor-of-computer-science-at-the-university-of-chicago/) (p. 6):\n\n\nmany electrophysiologists would say that we don’t know what neurons are doing. And they would ask: how can we start making claims about the computational capacity of networks of neurons, if we don’t know how individual neurons work? Prof. Jonas is sympathetic to this. There are a variety of complexities that make the computations performed by a neuron extremely difficult to quantify. 
Examples include: dendritic spiking, the complex dynamics present in synapses (including large numbers of non-linearities), the diversity of ion-channel receptors, post-translational modification, alternative splicing, and various receptor trafficking regimes. Some people attempt to draw comparisons between neurons and transistors. However, even with a billion transistors, Prof. Jonas does not know how to create a reasonable simulation of a neuron.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/research/professor-konrad-kording-penn-integrated-knowledge-professor-university-of-pennsylvania/) (p. 4):\n\n\nExamination of neurons reveals that they are actually very non-linear, and the computations involved in plasticity probably include a large number of factors distributed across the cell. In this sense, a neuron might be equivalent to a three-layer neural network, internally trained using backpropagation. In that case, you’d need to add another factor of roughly 10^5 to your compute estimate, for a total of 10^20 multiplications per second. This would be much less manageable… The difference between the estimates generated by these different approaches is very large – something like ten orders of magnitude. It’s unclear where the brain is on that spectrum … Prof. Kording’s hunch is that in order to replicate firing decisions in neurons, you’d need to break the neuron into pieces of something like ten microns (this would be hundreds, maybe thousands of compartments per neuron). This hunch is grounded in a belief that neurons are very non-linear.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/) (p. 3):\n\n\nWe can distinguish between two approaches to the brain’s biophysical complexity. One camp argues: ‘let’s not assume we need to include a given type of biophysical complexity in our models, until doing so becomes necessary.’ The other argues: ‘If this complexity were in fact important, we would not currently be able to tell.’ Prof. Druckmann tends to be in this latter camp, though he thinks that the former is a fair and practical approach.\n\n\nThough note that:\n\n\nProf. Druckmann would be extremely surprised if future working models of human intelligence incorporate large amounts of biophysical detail (e.g., molecular dynamics). He is confident that the type of non-linearities generated by real biophysics can be more efficiently emulated in different ways in a model. Therefore, these models will look more like giant networks of simple artificial neurons than giant networks of Hodgkin-Huxley models.\n\n\n[266.](https://www.openphilanthropy.org/brain-computation-report#footnoteref266_h38ii39)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/):\n\n\nMany common simplifications do not have solid scientific foundations, and are more at the level of ‘the way we do things.’\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/research/professor-konrad-kording-penn-integrated-knowledge-professor-university-of-pennsylvania/) (p.
5):\n\n\nIn general, people are often willing to take a philosophical position, without much evidence, if it makes their research more important.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/) (p. 5):\n\n\nProf. Zador’s views about the relative importance of different neural mechanisms are shaped centrally by gut feeling and scientific aesthetic. Neuroscientists have debated this issue for decades, and ultimately the proof is in the pudding. Prof. Zador expects that a lot of neuroscientists would say that we just don’t know what amount of compute would be required to match human-level task performance. There is also a wide diversity of views in the field, and many people’s views are centrally shaped by their research background. For example, people with backgrounds in biology are generally more excited about incorporating biological detail; people who study humans tend to focus on the importance of learning; and people who study small animals like *C. elegans* or fruit flies will focus less on learning and more on innate behaviors.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Dong Song](https://www.openphilanthropy.org/research/professor-dong-song-research-associate-professor-department-of-biomedical-engineering-university-of-southern-california/) (p. 2):\n\n\nIt would be hard for Prof. Song to prove his view.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Barak Pearlmutter](https://www.openphilanthropy.org/research/professor-barak-pearlmutter-professor-of-computer-science-maynooth-university/) (p. 1):\n\n\nProf. Pearlmutter suggested that ‘the truth is: we don’t know,’ and that while we may have intuitions, science has shown us that intuitions are not very reliable.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. E.J. Chichilnisky](https://www.openphilanthropy.org/research/professor-e-j-chichilnisky-john-r-adler-professor-of-neurosurgery-and-professor-of-ophthalmology-at-stanford-university/) (p. 2):\n\n\nno one has been able to prove one way or another whether detailed biophysical modeling is necessary. It’s hard to know, and there isn’t a lot of evidence. There are high-quality experimental and computational efforts underway to understand this…People’s views about the right level of biophysical detail to focus on are sometimes shaped by what they’re good at (e.g., computational simplifications, vs. detailed biophysical analysis). And some people just find biophysical complexity intrinsically interesting.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Shaul Druckmann](https://www.openphilanthropy.org/research/professor-shaul-druckmann-assistant-professor-of-neurobiology-and-of-psychiatry-and-behavioral-sciences-stanford-university/) (p. 6):\n\n\nProf. Druckmann believes that at our current conceptual understanding of neural computation, many statements in neuroscience to the effect that “we can reduce X to Y” are based mostly on personal opinion, sometimes influenced in part by what current technology allows us to do, rather than on well-justified, first-principles reasoning.\n\n\n[267.](https://www.openphilanthropy.org/brain-computation-report#footnoteref267_jgaaay6)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr.
Paul Christiano](https://www.openphilanthropy.org/research/dr-paul-christiano-researcher-openai/): “A ReLU costs less than a FLOP. Indeed, it can be performed with many fewer transistors than a multiply of equivalent precision” (p. 6).\n\n\n[268.](https://www.openphilanthropy.org/brain-computation-report#footnoteref268_jcaxjqs)This number is just a ballpark for lower temporal resolutions. For example, it’s the resolution used by [Maheswaranathan et al. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf).\n\n\n[269.](https://www.openphilanthropy.org/brain-computation-report#footnoteref269_062xotz)[Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf), (p. 1068).\n\n\n[270.](https://www.openphilanthropy.org/brain-computation-report#footnoteref270_8kwfcsn)[Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf) seems to be assuming at least 1000 time-steps per second: “It takes only 13 floating point operations to simulate 1 ms of the model, so it is quite efficient in large-scale simulations of cortical networks. When and (a,b,c,d) = (0.2, 2, -56, -16) and I = -99, the model has chaotic spiking activity, though the integration time step [here Izhikevich uses a symbol that google doc endnotes can’t reproduce] should be small to achieve adequate numerical precision” (p. 1068).\n\n\n[271.](https://www.openphilanthropy.org/brain-computation-report#footnoteref271_llmwrmw)[Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf), (p. 1069).\n\n\n[272.](https://www.openphilanthropy.org/brain-computation-report#footnoteref272_sbr4erq)The FLOPs estimate for the Hodgkin-Huxley model given in [Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf) appears to assume at least 10,000 timesteps/sec: “It takes 120 floating point operations to evaluate 0.1 ms of model time (assuming that each exponent takes only ten operations), hence, 1200 operations/1 ms” (p. 1069). I’m not entirely confident that the “.1 ms of model time” Izhikevich is referring to corresponds with a .1 ms time-step, but this fits with his characterization of the model as consisting of tens of parameters and requiring at least 10 FLOPs for each exponent. And regardless, it seems unlikely that he has time-steps *larger* than .1 ms in mind, given that he budgets based on .1 ms increments.\n\n\n[273.](https://www.openphilanthropy.org/brain-computation-report#footnoteref273_n90n51a)Here’s my estimate, which the lead author of the paper tells me looks about right. 1st layer: 1278 synaptic inputs × 35 × 128 = 5.7 million MACCs (from line 140 and lines 179-180 [here](https://github.com/SelfishGene/neuron_as_deep_net/blob/master/fit_CNN.py)); next 6 layers: 6 layers × 128 × 35 × 128 = 3.4 million MACCs. Total per ms: ~ 10 million MACCs. Total per second: ~10 billion MACCs. Multiplied by 2 to count individual FLOPs (see “It’s dot products all the way down” [here](https://machinethink.net/blog/how-fast-is-my-model/)) = ~20 billion FLOP/s per cell. Though the authors also note that “the accuracy of the model was insensitive to the temporal kernel sizes of the different DNN layers when keeping the total temporal extent of the entire network fixed, so the temporal extent of the first layer was selected to be larger than subsequent layers mainly for visualization purposes” (p. 7). I’m not sure what kind of difference this might make.
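For readers who want to double-check the arithmetic in footnote 273, here is a minimal sketch that just reproduces the totals from the layer sizes quoted above (1278 inputs, 35-step temporal kernels, 128 channels, one input layer plus six hidden layers); it is illustrative arithmetic, not code from Beniaguev et al.

```python
# Reproduces the back-of-the-envelope MACC/FLOP estimate in footnote 273 (illustrative only).
inputs, kernel, channels, hidden_layers = 1278, 35, 128, 6

first_layer = inputs * kernel * channels               # ~5.7 million MACCs per ms of simulated time
hidden = hidden_layers * channels * kernel * channels  # ~3.4 million MACCs per ms

maccs_per_second = (first_layer + hidden) * 1000       # ~10 billion MACCs per simulated second
flops_per_second = 2 * maccs_per_second                # 1 MACC = 1 multiply + 1 add

print(f"~{flops_per_second:.1e} FLOP/s per cell")      # ~1.8e10, i.e. roughly 20 billion FLOP/s
```

On these inputs the total comes out to roughly 1.8 × 10^10 FLOP/s, which is the “~20 billion FLOP/s per cell” figure above.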
\n\n\n[274.](https://www.openphilanthropy.org/brain-computation-report#footnoteref274_jk1gull)This is a very loose estimate, based on scaling up the estimate for the [Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf) DNN by ~1000x, on the basis of their reporting, in the 2019 version of the paper, that “In our tests we obtained a factor of ~2000 speed up when using the DNN instead of its compartmental-model counterpart” (p. 15). In the current paper they report “a speedup of simulation time by several orders of magnitude” (p. 8).\n\n\n[275.](https://www.openphilanthropy.org/brain-computation-report#footnoteref275_y61agdw)This is somewhat analogous to the approach taken by [Ananthanarayanan et al. (2009)](https://people.eecs.berkeley.edu/~demmel/cs267_Spr10/Lectures/RajAnanthanarayanan_SC09-a63.pdf): “The basic algorithm of our cortical simulator C2 [2] is that neurons are simulated in a clock-driven fashion whereas synapses are simulated in an event-driven fashion. For every neuron, at every simulation time step (say 1 ms), we update the state of each neuron, and if the neuron fires, generate an event for each synapse that the neuron is post-synaptic to and presynaptic to. For every synapse, when it receives a pre- or post-synaptic event, we update its state and, if necessary, the state of the post-synaptic neuron” (p. 3, Section 3).\n\n\n[276.](https://www.openphilanthropy.org/brain-computation-report#footnoteref276_zpiir2b)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/research/dr-paul-christiano-researcher-openai/): “Dr. Christiano expects that in modeling a neuron’s input-output function, one would not need to compute, every time-step, whether or not the neuron fires during that time-step. Rather, you could accumulate information about the inputs to a neuron over a longer period, and then compute the timing of its spikes over that period all at once. This definitely holds in a purely feedforward context – e.g., for a given neuron, you could simply compute all of the times that the neuron fires, and then use this information to compute when all of the downstream neurons fire, and so on. The fact that the brain’s architecture is highly recurrent complicates this picture, as the firing pattern of a particular neuron may be able to influence the inputs that that same neuron receives. However, the time it takes for an action potential to propagate would be a lower bound on how long it would be possible to wait in accumulating synaptic inputs (since the timescale of a neuron’s influence on its own inputs is capped by the propagation time of its outgoing signals)” (p. 6).\n\n\n[277.](https://www.openphilanthropy.org/brain-computation-report#footnoteref277_wis0hrn)[Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) employs what appears to be a single-compartment Hodgkin-Huxley model of firing decisions as a lower bound (he cites [Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf), and uses an estimate of 1200 FLOPs per firing decision – the number that Izhikevich gives for running a Hodgkin-Huxley model for one ms (see p. 1066)), but he assumes that the model only needs to be “run” every time a neuron spikes (he uses a 5 Hz average rate) (p. 747-8). My intuition, though, would’ve been that because you do not know ahead of time whether or not the synaptic inputs are sufficient to cause an action potential, you would need to calculate this more often than spiking actually occurs.
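The division of labor described in footnote 275 (clock-driven neuron updates, event-driven synapse updates), and the question raised in footnote 277 of how often a firing decision has to be evaluated, can be made concrete with a toy sketch. Everything in it (the 1 ms step, the leaky integrate-and-fire update, the network size and weights) is an illustrative assumption, not code from Ananthanarayanan et al. (2009) or Sarpeshkar (2010).

```python
import random

DT_MS = 1.0            # clock-driven update interval for neurons (assumed ~1 ms, as in the quote above)
TAU_MS = 20.0          # membrane time constant of a toy leaky integrate-and-fire unit (assumed)
V_THRESH, V_RESET = 1.0, 0.0

class Neuron:
    def __init__(self):
        self.v = 0.0
        self.out_synapses = []   # synapses for which this neuron is presynaptic

    def step(self, input_current):
        # Clock-driven: runs every DT_MS whether or not a spike results, because we
        # don't know in advance whether the inputs will reach threshold (footnote 277).
        self.v += DT_MS * (-self.v / TAU_MS + input_current)
        if self.v >= V_THRESH:
            self.v = V_RESET
            return True
        return False

class Synapse:
    def __init__(self, target, weight):
        self.target, self.weight = target, weight

    def on_spike(self):
        # Event-driven: only touched when the presynaptic neuron actually fires.
        self.target.v += self.weight

# Tiny random network: 100 neurons, 10 outgoing synapses each (all sizes are illustrative).
neurons = [Neuron() for _ in range(100)]
for n in neurons:
    n.out_synapses = [Synapse(random.choice(neurons), 0.05) for _ in range(10)]

neuron_updates = synapse_events = 0
for _ in range(1000):                                   # one simulated second at 1 ms resolution
    for n in neurons:
        neuron_updates += 1
        if n.step(input_current=random.uniform(0.0, 0.12)):
            for s in n.out_synapses:
                synapse_events += 1
                s.on_spike()

print(neuron_updates, "neuron updates vs.", synapse_events, "synapse events")
```

Because the membrane update runs every time-step regardless of whether a spike occurs, its cost scales with the time resolution, while the synapse updates scale with the firing rate times the number of synapses; this is why the two are budgeted separately in the estimates discussed here.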
\n\n\n[278.](https://www.openphilanthropy.org/brain-computation-report#footnoteref278_cinlses)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eve Marder](https://www.openphilanthropy.org/research/professor-eve-marder-university-professor-and-victor-and-gwendolyn-beinfield-professor-of-neuroscience-brandeis-university/): “the computational power necessary to run e.g. a full Hodgkin-Huxley model depends a lot on implementation: e.g., what platform you use, what language you’re using, what method of integration, and what time-step for integration (all of your compute time goes to integrations)” (p. 4-5).\n\n\n[279.](https://www.openphilanthropy.org/brain-computation-report#footnoteref279_dp62lw2)See [Hansel et al. (1998)](https://www.mitpressjournals.org/doi/10.1162/089976698300017845): “It is shown that very small time steps are required to reproduce correctly the synchronization properties of large networks of integrate-and-fire neurons when the differential system describing their dynamics is integrated with the standard Euler or second-order Runge-Kutta algorithms” (p. 467) … “An integration time step of t = 0.001 ms is actually required to evaluate correctly the coherence of the network in this regime” (p. ). Thanks to the expert who pointed me to this paper.\n\n\n[280.](https://www.openphilanthropy.org/brain-computation-report#footnoteref280_rznhfwp)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Chris Eliasmith](https://www.openphilanthropy.org/research/professor-chris-eliasmith-professor-of-philosophy-and-systems-design-engineering-canada-research-chair-in-theoretical-neuroscience-and-director-of-the-centre-for-theoretical-neuroscience-at-the-uni/): “Prof. Eliasmith typically uses 1 ms time-steps in the simulations he builds” (p. 3); and [Eliasmith et al. (2012)](https://science.sciencemag.org/content/338/6111/1202.abstract) use leaky integrate-and-fire models (see p. 16 of the [supplementary materials](https://science.sciencemag.org/content/sci/suppl/2012/11/28/338.6111.1202.DC1/1225266.Eliasmith.SM_revised.pdf)). [Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf) reports various types of collective neuron behavior in simulations using his 13 FLOP/ms model at 1 ms resolution, and others for a different simulation at 0.5 ms for neuron simulation and 1 ms for synaptic dynamics (see Izhikevich et al. (2004), “Neuronal Dynamics”). [Ananthanarayanan et al. (2009)](https://people.eecs.berkeley.edu/~demmel/cs267_Spr10/Lectures/RajAnanthanarayanan_SC09-a63.pdf) use 0.1-1 ms (see p. 3, Section 3.1.1) for “single-compartment phenomenological spiking neurons” (they cite Izhikevich et al. (2004), which suggests to me that they are using Izhikevich models as well).\n\n\n[281.](https://www.openphilanthropy.org/brain-computation-report#footnoteref281_xmqjwai)It’s based on scaling up the estimate for the [Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf) DNN by ~1000x, on the basis of their reporting, in the 2019 version of the paper, that “In our tests we obtained a factor of ~2000 speed up when using the DNN instead of its compartmental-model counterpart” (p. 15). In the current paper they report “a speedup of simulation time by several orders of magnitude” (p. 8).
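Footnotes 278-280 turn on how strongly the cost of clock-driven models depends on the integration time-step. The sketch below just multiplies the per-step FLOP figures quoted from Izhikevich (2004) in footnotes 270-272 by the number of steps implied by different step sizes; the helper function and the extrapolation to a 0.001 ms step are illustrative assumptions, not calculations from the cited papers.

```python
# Cost per simulated second = (FLOPs per step) x (steps per second), and steps scale as 1/dt.
# FLOPs-per-step figures are the ones quoted from Izhikevich (2004); the rest is arithmetic.

def flops_per_simulated_second(flops_per_step: float, dt_ms: float) -> float:
    steps_per_second = 1000.0 / dt_ms
    return flops_per_step * steps_per_second

# Izhikevich model: 13 FLOPs per 1 ms of model time, read here as one step per ms (footnote 270).
print(flops_per_simulated_second(13, dt_ms=1.0))      # 1.3e4 FLOP/s per neuron

# Hodgkin-Huxley: 120 FLOPs per 0.1 ms of model time, i.e. 1200 FLOPs per ms (footnote 272).
print(flops_per_simulated_second(120, dt_ms=0.1))     # 1.2e6 FLOP/s per neuron

# Hansel et al. (1998)-style concern (footnote 279): if accurate synchronization requires
# dt = 0.001 ms, the same per-step cost is paid 1000x more often than at 1 ms resolution.
print(flops_per_simulated_second(13, dt_ms=0.001))    # 1.3e7 FLOP/s per neuron
```

Nothing here settles which step size is actually needed; it only makes explicit that a 1000x finer step means a 1000x larger per-neuron budget if the per-step cost is held fixed.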
\n\n\n[282.](https://www.openphilanthropy.org/brain-computation-report#footnoteref282_5rx71bi)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “It might be that all of the neurons and synapses in the brain are there in order to make the brain more likely to converge on a solution while learning, but that once learning has taken place, the brain implements a function that can be adequately approximated using much less compute” (p. 7).\n\n\n[283.](https://www.openphilanthropy.org/brain-computation-report#footnoteref283_if3mwye)[Tsodyks and Wu (2013)](http://www.scholarpedia.org/article/Short-term_synaptic_plasticity): “Compared with long-term plasticity ([Bi and Poo (2001)](https://pubmed.ncbi.nlm.nih.gov/11283308/)), which is hypothesized as the [neural](http://www.scholarpedia.org/article/Neuron) substrate for experience-dependent modification of neural circuit, STP has a shorter time scale, typically on the order of hundreds to thousands of milliseconds.” See also [Ghanbari et al. (2017)](https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1005738&type=printable), (p. 1), [Bliss and Lømo (1973)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1350458/pdf/jphysiol00958-0128.pdf), and [Citri and Malenka (2008)](https://www.nature.com/articles/1301559). It is also possible to break these categories down more finely. [Clopath (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3368062/pdf/11571_2011_Article_9177.pdf), for example, writes: “A change in synaptic strength can last for different lengths of time: we speak about short-term plasticity when the change lasts up to a few minutes, early-long-term plasticity when it lasts up to a few hours and late-long-term plasticity when it lasts beyond the experiment’s duration (which is often about 10 h) but is thought to last much longer even, possibly a life-time. This last type of plasticity is also called *synaptic consolidation* or maintenance” (p. 251). [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) suggest that short-term synaptic plasticity “likely plays a role in a variety of brain functions, such as temporal filtering ([Fortune and Rose (2001)](http://www.sciencedirect.com/science/article/pii/S016622360001835X)), auditory processing ([Macleod, Horiuchi et al. (2007)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3268177/)) and motor control ([Nadim and Manor (2000)](https://pubmed.ncbi.nlm.nih.gov/11240276/))” (p. 32). Types of synaptic plasticity can be further subdivided according to whether the relevant change increases (“facilitation”/”potentiation”) or decreases (“depression”) the size of the post-synaptic impact of a spike through that synapse: see [Tsodyks and Wu (2013)](http://www.scholarpedia.org/article/Short-term_synaptic_plasticity) and [Yang and Calakos (2013)](https://www.frontiersin.org/articles/10.3389/fnsyn.2013.00008/full).\n\n\n[284.](https://www.openphilanthropy.org/brain-computation-report#footnoteref284_8j6p0jx)[Cudmore and Desai (2008)](http://www.scholarpedia.org/article/Intrinsic_plasticity): “Intrinsic plasticity is the persistent modification of a neuron’s intrinsic electrical properties by neuronal or synaptic activity.
It is mediated by changes in the expression level or biophysical properties of ion channels in the membrane, and can affect such diverse processes as synaptic integration, subthreshold signal propagation, spike generation, spike backpropagation, and meta-plasticity.” Indeed, it has been shown that a type of neuron in the cerebellum known as a cerebellar Purkinje cell can learn timed responses to inputs in a manner that does not rely on synaptic plasticity. [Johansson et al. (2014)](https://www.pnas.org/content/pnas/111/41/14930.full.pdf): “The standard view of the mechanisms underlying learning is that they involve strengthening or weakening synaptic connections. Learned response timing is thought to combine such plasticity with temporally patterned inputs to the neuron. We show here that a cerebellar Purkinje cell in a ferret can learn to respond to a specific input with a temporal pattern of activity consisting of temporally specific increases and decreases in firing over hundreds of milliseconds without a temporally patterned input. Training Purkinje cells with direct stimulation of immediate afferents, the parallel fibers, and pharmacological blocking of interneurons shows that the timing mechanism is intrinsic to the cell itself. Purkinje cells can learn to respond not only with increased or decreased firing but also with an adaptively timed activity pattern” (p. 14930).\n\n\n[285.](https://www.openphilanthropy.org/brain-computation-report#footnoteref285_f7c4itc)See e.g. [Munno and Syed (2003)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2343306/), [Ming and Song (2011)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3106107/), [Grutzendler et al. (2002)](https://www.ncbi.nlm.nih.gov/pubmed/12490949), [Holtmaat et al. (2005)](https://www.ncbi.nlm.nih.gov/pubmed/15664179).\n\n\n[286.](https://www.openphilanthropy.org/brain-computation-report#footnoteref286_6tlzb9w)See e.g. [Markram et al. (1997)](https://science.sciencemag.org/content/275/5297/213.long).\n\n\n[287.](https://www.openphilanthropy.org/brain-computation-report#footnoteref287_306bfxn)See [Luscher and Malenka (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3367554/).\n\n\n[288.](https://www.openphilanthropy.org/brain-computation-report#footnoteref288_2krzune)See e.g. [Gerstner et al. (2018)](https://www.frontiersin.org/articles/10.3389/fncir.2018.00053/full), and [Nadim and Bucher (2014)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4252488/pdf/nihms603280.pdf).\n\n\n[289.](https://www.openphilanthropy.org/brain-computation-report#footnoteref289_ty2zt5r)See [Monday et al. (2018)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6238218/) (p. 7-8).\n\n\n[290.](https://www.openphilanthropy.org/brain-computation-report#footnoteref290_4nty86n)See [Tao and Poo (2001)](https://www.pnas.org/content/98/20/11009).\n\n\n[291.](https://www.openphilanthropy.org/brain-computation-report#footnoteref291_05a2564)See [Yap and Greenberg (2018)](https://www.cell.com/neuron/pdf/S0896-6273(18)30901-2.pdf).\n\n\n[292.](https://www.openphilanthropy.org/brain-computation-report#footnoteref292_0sg24zp)See [Bhalla (2014)](https://www.sciencedirect.com/science/article/abs/pii/S0959438813002171), Figure 1, for a diagram depicting some of this machinery.\n\n\n[293.](https://www.openphilanthropy.org/brain-computation-report#footnoteref293_ndt82lx)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof.
Blake Richards](https://www.openphilanthropy.org/research/professor-blake-richards-assistant-professor-in-the-montreal-neurological-institute-and-the-school-of-computer-science-at-mcgill-university/): “Some neuroscientists are interested in the possibility that a lot of computation is occurring via molecular processes in the brain. For example, very complex interactions could be occurring in a structure known as the post-synaptic density, which involves molecular machinery that could in principle implicate many orders of magnitude of additional compute per synapse. We don’t yet know what this molecular machinery is doing, because we aren’t yet able to track the states of the synapses and molecules with adequate precision. There is evidence that perturbing the molecular processes within the synapse alters the dynamics of synaptic plasticity, but this doesn’t necessarily provide much evidence about whether these processes are playing a computational role. For example, their primary role might just be to maintain and control a single synaptic weight, which is itself a substantive task for a biological system” (p. 2). [Monday et al. (2018)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6238218/): “The cellular basis of learning and memory is one of the greatest unsolved mysteries in neuroscience … Despite significant advancements in the molecular basis of neurotransmission, exactly how transmitter release is modified in a long-term manner remains largely unclear” (p. 1-2).\n\n\n[294.](https://www.openphilanthropy.org/brain-computation-report#footnoteref294_kfedsz3)[Lahiri and Ganguli (2013)](https://papers.nips.cc/paper/4872-a-memory-frontier-for-complex-synapses.pdf): “To understand the functional contribution of such molecular complexity to learning and memory, it is essential to expand our theoretical conception of a synapse from a single scalar to an entire dynamical system with many internal molecular functional states” (p. 1). [Benna and Fusi (2016)](https://www.nature.com/articles/nn.4401): “The molecular machinery responsible for memory consolidation at the level of synaptic connections is believed to employ a complex network of diverse biochemical processes that operate on different timescales. Understanding how these processes are orchestrated to preserve memories over a lifetime requires guiding principles to interpret the complex organization of the observed synaptic molecular interactions and explain its computational advantage. Here we present a class of synaptic models that can efficiently harness biological complexity to store and preserve a huge number of memories on long timescales, vastly outperforming all previous synaptic models of memory” (p. 1697). [Kaplanis et al. (2018)](https://arxiv.org/pdf/1802.07239.pdf): “we show that by equipping tabular and deep reinforcement learning agents with a synaptic model that incorporates this biological complexity ([Benna and Fusi (2016)](https://www.nature.com/articles/nn.4401)), catastrophic forgetting can be mitigated at multiple timescales. In particular, we find that as well as enabling continual learning across sequential training of two simple tasks, it can also be used to overcome within-task forgetting by reducing the need for an experience replay database” (p. 1). [Zenke et al.
(2017)](https://arxiv.org/pdf/1703.04200.pdf): “In this study, we introduce intelligent synapses that bring some of this biological complexity into artificial neural networks. Each synapse accumulates task relevant information over time, and exploits this information to rapidly store new memories without forgetting old ones. We evaluate our approach on continual learning of classification tasks, and show that it dramatically reduces forgetting while maintaining computational efficiency” (abstract).\n\n\n[295.](https://www.openphilanthropy.org/brain-computation-report#footnoteref295_qugpogh)Activity-dependent myelination might be one example (see e.g. [Faria et al. (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6587454/)).\n\n\n[296.](https://www.openphilanthropy.org/brain-computation-report#footnoteref296_icahuh0)Though short-term plasticity is both (a) fairly fast and (b) possibly involved in working memory, which many tasks require. See also [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf): “Since neurogenesis occurs on fairly slow timescales (> 1 week) compared to brain activity and normal plasticity, it could probably be ignored in brain emulation if the goal is an emulation that is intended to function faithfully for only a few days and not to exhibit truly long‐term memory consolidation or adaptation” (p. 35).\n\n\n[297.](https://www.openphilanthropy.org/brain-computation-report#footnoteref297_n8o3a01)[Sorrells et al. (2018)](https://www.nature.com/articles/nature25975): “In humans, some studies have suggested that hundreds of new neurons are added to the adult dentate gyrus every day, whereas other studies find many fewer putative new neurons.” See also [Moreno-Jimenez et al. (2019)](https://www.nature.com/articles/s41591-019-0375-9): “we identified thousands of immature neurons in the DG of neurologically healthy human subjects up to the ninth decade of life” (abstract).\n\n\n[298.](https://www.openphilanthropy.org/brain-computation-report#footnoteref298_3kmdprg)[Zuo et al. (2005)](https://pubmed.ncbi.nlm.nih.gov/15848798/): “In adult mice (4-6 months old), 3%-5% of spines were eliminated and formed over 2 weeks in various cortical regions. Over 18 months, only 26% of spines were eliminated and 19% formed in adult barrel cortex” (from the abstract). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/research/professor-erik-de-schutter-professor-computational-neuroscience-okinawa-institute-of-science-and-technology/): “Networks of neurons can rewire themselves fairly quickly, over timescales of tens of minutes. These changes correlate with improvements in performance on tasks” (p. 3).\n\n\n[299.](https://www.openphilanthropy.org/brain-computation-report#footnoteref299_0g0p2f7)Dr. Dario Amodei suggested considerations in this vein.\n\n\n[300.](https://www.openphilanthropy.org/brain-computation-report#footnoteref300_gmdpody)See e.g. [this diagram](https://qbi.uq.edu.au/brain-basics/brain/brain-physiology/long-term-synaptic-plasticity) of a potentiated synapse, illustrating an increased number of post-synaptic receptors\n\n\n[301.](https://www.openphilanthropy.org/brain-computation-report#footnoteref301_0lxnibf)Thus, for example, [Bliss and Lømo (1973)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1350458/pdf/jphysiol00958-0128.pdf), in an early result related to long-lasting synaptic potentiation, use conditioning spike trains of 10-15 secs, and 3-4 seconds (p. 
331).\n\n\n[302.](https://www.openphilanthropy.org/brain-computation-report#footnoteref302_ja6bkie)See discussion of the “stability – plasticity dilemma,” e.g. [Mermillod et al. (2013)](https://www.frontiersin.org/articles/10.3389/fpsyg.2013.00504/full). One possible solution is to use multiple dynamical variables operating on different timescales – see [Benna and Fusi (2016)](https://www.nature.com/articles/nn.4401).\n\n\n[303.](https://www.openphilanthropy.org/brain-computation-report#footnoteref303_21l28lg)[Koch (1999)](https://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999): “An important distinction between ionotropic and metabotropic receptors is their time scale. While members of the former class act rapidly, terminating within a very small fraction of a second, the speed of the latter class is limited by diffusion. Biochemical reactions can happen nearly instantaneously at the neuronal time scale. However, if a synaptic input to a metabotropic receptor induces the release of some messenger, such as calcium ions, which have to diffuse to the cell body in order to ‘do their thing,’ the time scale is extended to seconds or longer “ (p. 95). See also [Siegelbaum et al. (2013b)](https://neurology.mhmedical.com/content.aspx?bookid=1049§ionid=59138635): “whereas the action of ionotropic receptors is fast and brief, metabotropic receptors produce effects that begin slowly and persist for long periods, ranging from hundreds of milliseconds to many minutes” (p. 236).\n\n\n[304.](https://www.openphilanthropy.org/brain-computation-report#footnoteref304_zqd65gk)See p. 32. [Bhalla (2014)](https://www.sciencedirect.com/science/article/abs/pii/S0959438813002171) also suggests that chemical computation involves 1e6 “computations per second” per neuron.\n\n\n[305.](https://www.openphilanthropy.org/brain-computation-report#footnoteref305_ik9iwww)[Yap and Greenberg (2018)](https://www.cell.com/neuron/pdf/S0896-6273(18)30901-2.pdf): “Discovered by Greenberg and Ziff in 1984 ([Greenberg and Ziff (1984)](https://www.nature.com/articles/311433a0)), the rapid and transient induction of Fos transcription provided the first evidence that mammalian cells could respond to the outside world within minutes by means of rapid gene transcription, in particular through the activation of specific genes ([Cochran et al. (1984)](https://science.sciencemag.org/content/226/4678/1080.abstract); [Greenberg et al. (1985)](https://www.jbc.org/content/260/26/14101.short); [Greenberg et al. (1986)](https://science.sciencemag.org/content/234/4772/80.abstract); [Kruijer et al. (1984)](https://www.nature.com/articles/312711a0); [Lau and Nathans (1987)](https://www.pnas.org/content/84/5/1182.short); [Müller et al. (1984)](https://www.nature.com/articles/312716a0))” (p. 331).\n\n\n[306.](https://www.openphilanthropy.org/brain-computation-report#footnoteref306_6x3exyg)Indeed, certain models of synaptic plasticity explicitly include variables whose state is not immediately expressed in changes to synaptic efficacy (that is, in the size of the effect that a spike through that synapse has on a downstream neuron). See e.g. three-factor learning rules discussed by [Gerstner et al. (2018)](https://www.frontiersin.org/articles/10.3389/fncir.2018.00053/full). From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. 
Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “Compute increases are more likely to come from synaptic decisions that get computed on something like a per-spike basis. For example, you might need to do a lot of fast computation in order to set the synaptic “flag” variables involved in some neo-Hebbian three-factor learning rules, even if these variables take a long time to have effects” (p. 3).\n\n\n[307.](https://www.openphilanthropy.org/brain-computation-report#footnoteref307_97l1xxq)[Tsodyks and Wu (2013)](http://www.scholarpedia.org/article/Short-term_synaptic_plasticity): “Compared with long-term plasticity ([Bi and Poo (2001)](https://pubmed.ncbi.nlm.nih.gov/11283308/)), which is hypothesized as the [neural](http://www.scholarpedia.org/article/Neuron) substrate for experience-dependent modification of neural circuit, STP has a shorter time scale, typically on the order of hundreds to thousands of milliseconds.” [Cheng et al. (2018)](https://www.frontiersin.org/articles/10.3389/fnsyn.2018.00033/full): “It is well established that both augmentation and potentiation are triggered by a transient rise in calcium concentration within the presynaptic terminal.”\n\n\n[308.](https://www.openphilanthropy.org/brain-computation-report#footnoteref308_u9yjp2f)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Blake Richards](https://www.openphilanthropy.org/research/professor-blake-richards-assistant-professor-in-the-montreal-neurological-institute-and-the-school-of-computer-science-at-mcgill-university/): “it is very difficult to say at this point exactly how much compute would be required to model learning in the brain, because there is a lot of disagreement in the field as to how sophisticated the learning algorithms in the brain are. This is partly because we don’t have a good hold on how much human learning is truly general purpose, vs. constrained to particular tasks” (p. 1).\n\n\n[309.](https://www.openphilanthropy.org/brain-computation-report#footnoteref309_j8qsip2)See [Yann LeCun’s 2017 talk](https://www.youtube.com/watch?v=cWzi38-vDbE): “How does the brain learn so much so quickly?”, and Stuart Russell’s comments [here](https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/): “I think another area where deep learning is clearly not capturing the human capacity for learning, is just in the efficiency of learning. I remember in the mid ’80s going to some classes in psychology at Stanford, and there were people doing machine learning then and they were very proud of their results, and somebody asked Gordon Bower, “how many examples do humans need to learn this kind of thing?” And Gordon said “one [sic] Sometimes two, usually one”, and this is genuinely true, right? If you look for a picture book that has one to two million pictures of giraffes to teach children what a giraffe is, you won’t find one. Picture books that tell children what giraffes are have one picture of a giraffe, one picture of an elephant, and the child gets it immediately, even though it’s a very crude cartoonish drawing, of a giraffe or an elephant, they never have a problem recognizing giraffes and elephants for the rest of their lives. 
Deep learning systems are needing, even for these relatively simple concepts, thousands, tens of thousands, millions of examples, and the idea within deep learning seems to be that well, the way we’re going to scale up to more complicated things like learning how to write an email to ask for a job, is that we’ll just have billions or trillions of examples, and then we’ll be able to learn really, really complicated concepts. But of course the universe just doesn’t contain enough data for the machine to learn direct mappings from perceptual inputs or really actually perceptual input history. So imagine your entire video record of your life, and that feeds into the decision about what to do next, and you have to learn that mapping as a supervised learning problem. It’s not even funny how unfeasible that is. The longer the deep learning community persists in this, the worse the pain is going to be when their heads bang into the wall.” That said, work on this topic is ongoing, and these comparisons don’t seem straightforward.\n\n\n[310.](https://www.openphilanthropy.org/brain-computation-report#footnoteref310_6bn1yf6)See e.g., [Guerguiev et al. (2017)](https://elifesciences.org/articles/22901), [Bartunov et al. (2018)](https://arxiv.org/pdf/1807.04587.pdf), and [Hinton (2011)](https://www.cs.toronto.edu/~hinton/backpropincortex2014.pdf). From [Guerguiev et al. (2017)](https://elifesciences.org/articles/22901): “Backpropagation assigns credit by *explicitly* using current downstream synaptic connections to calculate synaptic weight updates in earlier layers, commonly termed ‘hidden layers’ ([LeCun et al., 2015](https://www.nature.com/articles/nature14539)) ([Figure 1B](https://elifesciences.org/articles/22901#fig1)). This technique, which is sometimes referred to as ‘weight transport’, involves non-local transmission of synaptic weight information between layers of the network ([Lillicrap et al. (2016)](https://pubmed.ncbi.nlm.nih.gov/27824044/); [Grossberg (1987)](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1551-6708.1987.tb00862.x)). Weight transport is clearly unrealistic from a biological perspective ([Bengio et al. (2015)](https://arxiv.org/abs/1502.04156); [Crick (1989)](https://pubmed.ncbi.nlm.nih.gov/2911347/)). It would require early sensory processing areas (e.g. V1, V2, V4) to have precise information about *billions* of synaptic connections in downstream circuits (MT, IT, M2, EC, etc.). According to our current understanding, there is no physiological mechanism that could communicate this information in the brain. Some deep learning algorithms utilize purely Hebbian rules ([Scellier and Bengio, 2016](https://elifesciences.org/articles/22901#bib55); [Hinton et al. (2006)](https://pubmed.ncbi.nlm.nih.gov/16764513/)). But, they depend on feedback synapses that are symmetric to feedforward synapses ([Scellier and Bengio, 2016](https://elifesciences.org/articles/22901#bib55); [Hinton et al. (2006)](https://pubmed.ncbi.nlm.nih.gov/16764513/)), which is essentially a version of weight transport. Altogether, these artificial aspects of current deep learning solutions to credit assignment have rendered many scientists skeptical of the proposal that deep learning occurs in the real brain ([Crick, 1989](https://elifesciences.org/articles/22901#bib12); [Grossberg (1987)](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1551-6708.1987.tb00862.x); [Harris (2008)](https://pubmed.ncbi.nlm.nih.gov/18255165/); [Urbanczik and Senn (2009)](https://pubmed.ncbi.nlm.nih.gov/19219040/)). Recent findings have shown that these problems may be surmountable, though. [Lillicrap et al. (2016)](https://pubmed.ncbi.nlm.nih.gov/27824044/), [Lee et al. (2015)](https://link.springer.com/chapter/10.1007/978-3-319-23528-8_31) and [Liao et al. (2015)](https://arxiv.org/abs/1510.05067) have demonstrated that it is possible to solve the credit assignment problem even while avoiding weight transport or symmetric feedback weights” (p. 3).
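As a concrete illustration of the “credit assignment without weight transport” result cited at the end of footnote 310 (Lillicrap et al. (2016)), here is a minimal feedback-alignment sketch of my own, not code from any of the cited papers: a toy one-hidden-layer regression in which the backward pass uses a fixed random matrix B instead of the transpose of the forward weights, so no neuron needs access to downstream synaptic weights. The network sizes, learning rate, and task are arbitrary placeholders.

```python
import numpy as np

# Toy feedback-alignment update (after Lillicrap et al. (2016)): the output error
# is propagated to the hidden layer through a fixed random matrix B rather than
# W2.T, so no synapse needs access to downstream weights ("weight transport").
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 16, 1
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback weights

lr = 0.05
x = rng.normal(size=(n_in, 32))           # a small batch of random inputs
y = np.sum(x, axis=0, keepdims=True)      # toy regression target: sum of inputs

for step in range(2000):
    h = np.tanh(W1 @ x)                   # forward pass
    y_hat = W2 @ h
    e = y_hat - y                         # output error (squared-error gradient)
    delta_h = (B @ e) * (1 - h ** 2)      # hidden "error" via fixed random B
    W2 -= lr * e @ h.T / x.shape[1]
    W1 -= lr * delta_h @ x.T / x.shape[1]

print("final mean squared error:", float(np.mean((W2 @ np.tanh(W1 @ x) - y) ** 2)))
```

The point is only that the mechanism is workable in principle; whether the brain does anything like this is, as footnote 311 notes, contested.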
\n\n\n[311.](https://www.openphilanthropy.org/brain-computation-report#footnoteref311_l5bk1na)See e.g. [David Pfau via twitter](https://twitter.com/pfau/status/1105443964423938049): “In 100 years, we’ll look back on theories of ‘how the brain does backpropagation’ the way we look at the luminiferous aether now.” See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/research/professor-eric-jonas-assistant-professor-of-computer-science-at-the-university-of-chicago/): “Prof. Jonas does not think that there is a clear meaning to the claim that the brain is a deep learning system” (p. 3).\n\n\n[312.](https://www.openphilanthropy.org/brain-computation-report#footnoteref312_nar90fi)See e.g. [Gerstner et al. (2018)](https://www.frontiersin.org/articles/10.3389/fncir.2018.00053/full) for some descriptions. From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “A lot of the learning models discussed in neuroscience are also significantly simpler than backpropagation: e.g., three-factor rules like “if the pre-synaptic neuron was active, and the post-synaptic neuron was active, and you had dopamine in the last ~3 seconds, then strengthen” (p. 6). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/): “We know the general outlines of the rules governing synaptic plasticity. The synapse gets stronger and weaker as a function of pre and post synaptic activity, and external modulation” (p. 3).\n\n\n[313.](https://www.openphilanthropy.org/brain-computation-report#footnoteref313_p7n156i)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Chris Eliasmith](https://www.openphilanthropy.org/research/professor-chris-eliasmith-professor-of-philosophy-and-systems-design-engineering-canada-research-chair-in-theoretical-neuroscience-and-director-of-the-centre-for-theoretical-neuroscience-at-the-uni/): “In the large scale brain simulations that Chris Eliasmith builds, he often uses an error-driven Hebbian rule, which computes updates to synaptic weights based on pre-synaptic activity, post-synaptic activity, and an error signal (which, in the brain, could proceed via a mechanism like dopamine modulation). This rule requires on the order of three to five operations per synapse (a couple of products, and then a weight update), though the total burden depends on how often you perform the updates” (p. 4).\n\n\n[314.](https://www.openphilanthropy.org/brain-computation-report#footnoteref314_cka8moy)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof.
Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/): “We know the general outlines of the rules governing synaptic plasticity. The synapse gets stronger and weaker as a function of pre and post synaptic activity, and external modulation. There is a lot of room for discovery there, and it may be difficult to get just right, but conceptually, it’s pretty simple. Prof. Zador expects it to be possible to capture synaptic plasticity with a small number of FLOPs per spike through synapse” (p. 3).\n\n\n[315.](https://www.openphilanthropy.org/brain-computation-report#footnoteref315_ausmoux)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Chris Eliasmith](https://www.openphilanthropy.org/research/professor-chris-eliasmith-professor-of-philosophy-and-systems-design-engineering-canada-research-chair-in-theoretical-neuroscience-and-director-of-the-centre-for-theoretical-neuroscience-at-the-uni/): “In the large scale brain simulations that Chris Eliasmith builds, he often uses an error-driven Hebbian rule, which computes updates to synaptic weights based on pre-synaptic activity, post-synaptic activity, and an error signal (which, in the brain, could proceed via a mechanism like dopamine modulation)” (p. 4).\n\n\n[316.](https://www.openphilanthropy.org/brain-computation-report#footnoteref316_f9yt0fr)[Kaplanis et al. (2018)](https://arxiv.org/pdf/1802.07239.pdf) add 30 extra dynamical variables per synapse, but manage to increase runtime by only 1.5-2 times relative to a control model, though I’m not sure about the details here. They note that “the complexity of the algorithm is O(mN), where N is the number of trainable parameters in the network and m is the number of Benna-Fusi variables per parameter.”\n\n\n[317.](https://www.openphilanthropy.org/brain-computation-report#footnoteref317_0k1ohee)See e.g. [Lahiri and Ganguli (2013)](https://papers.nips.cc/paper/4872-a-memory-frontier-for-complex-synapses.pdf): “To understand the functional contribution of such molecular complexity to learning and memory, it is essential to expand our theoretical conception of a synapse from a single scalar to an entire dynamical system with many internal molecular functional states” (p. 1). [Benna and Fusi (2016)](https://www.nature.com/articles/nn.4401): “The molecular machinery responsible for memory consolidation at the level of synaptic connections is believed to employ a complex network of diverse biochemical processes that operate on different timescales. Understanding how these processes are orchestrated to preserve memories over a lifetime requires guiding principles to interpret the complex organization of the observed synaptic molecular interactions and explain its computational advantage. Here we present a class of synaptic models that can efficiently harness biological complexity to store and preserve a huge number of memories on long timescales, vastly outperforming all previous synaptic models of memory” (p. 1697). My understanding is that [Fusi and Abbott (2007)](https://pubmed.ncbi.nlm.nih.gov/17351638/) is a precursor to some of this work.\n\n\n[318.](https://www.openphilanthropy.org/brain-computation-report#footnoteref318_b0ndkna)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. 
Blake Richards](https://www.openphilanthropy.org/research/professor-blake-richards-assistant-professor-in-the-montreal-neurological-institute-and-the-school-of-computer-science-at-mcgill-university/): “First-order gradient descent methods, like back-propagation, use the slope of the loss function to minimize the loss” (p. 1-2).\n\n\n[319.](https://www.openphilanthropy.org/brain-computation-report#footnoteref319_x4zynha)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Blake Richards](https://www.openphilanthropy.org/research/professor-blake-richards-assistant-professor-in-the-montreal-neurological-institute-and-the-school-of-computer-science-at-mcgill-university/): “[For first-order gradient descent methods], learning is basically a backwards pass through the network, so the compute required scales linearly with the number of neurons and synapses in the network, adding only a small constant factor” (p. 1-2). See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Barak Pearlmutter](https://www.openphilanthropy.org/research/professor-barak-pearlmutter-professor-of-computer-science-maynooth-university/): “Prof. Pearlmutter’s best-guess estimate was that the learning overhead (that is, the compute increase from moving from a non-adaptive system to an adaptive system) would be a factor of two. It could be more or less, but this is a number we actually understand, because the existing learning algorithms that we know work for large-scale systems, and that we have put effort into optimizing – for example, backpropagation – implicate roughly this type of overhead” (p. 3).\n\n\n[320.](https://www.openphilanthropy.org/brain-computation-report#footnoteref320_44y93lr)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Barak Pearlmutter](https://www.openphilanthropy.org/research/professor-barak-pearlmutter-professor-of-computer-science-maynooth-university/): “Prof. Pearlmutter’s best-guess estimate was that the learning overhead (that is, the compute increase from moving from a non-adaptive system to an adaptive system) would be a factor of two. It could be more or less, but this is a number we actually understand, because the existing learning algorithms that we know work for large-scale systems, and that we have put effort into optimizing – for example, backpropagation – implicate roughly this type of overhead” (p. 3). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/research/professor-konrad-kording-penn-integrated-knowledge-professor-university-of-pennsylvania/): “Prof. Kording thinks that learning in the brain requires the same amount of compute as processing. If you have a compute graph, going forwards and backwards comes at roughly the same cost” (p. 3). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Blake Richards](https://www.openphilanthropy.org/research/professor-blake-richards-assistant-professor-in-the-montreal-neurological-institute-and-the-school-of-computer-science-at-mcgill-university/): “Prof. Richards favors the hypothesis that the brain uses a learning method with compute scaling properties similar to backpropagation. This is partly because humans are capable of learning so many tasks that were not present in the evolutionary environment (and hence are unlikely to be hardwired into our brains), with comparatively little data (e.g., less than a weight-perturbation algorithm would require)” (p. 
2).\n\n\n[321.](https://www.openphilanthropy.org/brain-computation-report#footnoteref321_2ra142f)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Blake Richards](https://www.openphilanthropy.org/research/professor-blake-richards-assistant-professor-in-the-montreal-neurological-institute-and-the-school-of-computer-science-at-mcgill-university/): “More sophisticated learning algorithms, such as second-order gradient methods, take into account not just the slope of the loss function gradient but also its curvature. These require more compute (the compute per learning step scales as a polynomial with the number of neurons and synapses), which is why people don’t use these techniques, even though they are arguably much better” (p. 2).\n\n\n[322.](https://www.openphilanthropy.org/brain-computation-report#footnoteref322_a6jb6ye)See previous endnote.\n\n\n[323.](https://www.openphilanthropy.org/brain-computation-report#footnoteref323_fcf365p)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/research/dr-paul-christiano-researcher-openai/): “Based on his understanding of the brain’s physiology, Dr. Christiano thinks it extremely implausible that the brain could be implementing second-order optimization methods” (p. 7).\n\n\n[324.](https://www.openphilanthropy.org/brain-computation-report#footnoteref324_jcqc7eb)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “He has not seen proposals for how second-order gradient methods of learning could be implemented in the brain.” (p. 6).\n\n\n[325.](https://www.openphilanthropy.org/brain-computation-report#footnoteref325_yqtp9tg)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Blake Richards](https://www.openphilanthropy.org/research/professor-blake-richards-assistant-professor-in-the-montreal-neurological-institute-and-the-school-of-computer-science-at-mcgill-university/): “In the other direction, there are algorithms known as “weight-perturbation” or “node-perturbation” algorithms. These involve keeping/consolidating random changes to the network that result in reward, and getting rid of changes that result in punishment (a process akin to updating parameters based on simple signals of “hotter” and “colder”). These algorithms require less compute than first-order gradient descent methods, but they take longer to converge as the size of the network grows. In this sense, they involve trade-offs between compute and time” (p. 2). (A toy sketch of such an update appears below, after footnote 327.)\n\n\n[326.](https://www.openphilanthropy.org/brain-computation-report#footnoteref326_2kl2gw4)See previous endnote.\n\n\n[327.](https://www.openphilanthropy.org/brain-computation-report#footnoteref327_cj8hlda)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Blake Richards](https://www.openphilanthropy.org/research/professor-blake-richards-assistant-professor-in-the-montreal-neurological-institute-and-the-school-of-computer-science-at-mcgill-university/): “Prof. Richards favors the hypothesis that the brain uses a learning method with compute scaling properties similar to backpropagation. This is partly because humans are capable of learning so many tasks that were not present in the evolutionary environment (and hence are unlikely to be hardwired into our brains), with comparatively little data (e.g., less than a weight-perturbation algorithm would require)” (p. 2).
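To illustrate the weight-perturbation family of algorithms described in footnote 325, here is a minimal sketch of my own (the linear-regression task, parameter names, and perturbation scale are arbitrary placeholders, not anything from the cited conversation): propose a random change to the weights, keep it if a scalar loss improves, and discard it otherwise, with no gradient computation anywhere.

```python
import numpy as np

# Toy weight-perturbation learning (cf. footnote 325): try a random change to the
# weights, keep it if the scalar loss goes down ("hotter"), revert it otherwise
# ("colder"). No gradients are computed, but convergence slows as the number of
# parameters grows, which is the compute/time trade-off described in the footnote.
rng = np.random.default_rng(1)
x = rng.normal(size=(64, 8))
true_w = rng.normal(size=(8, 1))
y = x @ true_w                          # toy linear regression target

w = np.zeros((8, 1))
sigma = 0.1                             # perturbation scale

def loss(weights):
    return float(np.mean((x @ weights - y) ** 2))

current = loss(w)
for step in range(5000):
    trial = w + sigma * rng.normal(size=w.shape)   # random perturbation
    trial_loss = loss(trial)
    if trial_loss < current:                       # keep changes that help
        w, current = trial, trial_loss

print("final loss:", current)
```

Each accepted or rejected trial is cheap, but the number of trials needed grows with the number of parameters, which is the compute-versus-time trade-off the footnote describes.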
\n\n\n[328.](https://www.openphilanthropy.org/brain-computation-report#footnoteref328_g9ppb7s)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “There are also non-gradient methods of learning. For example, some people are interested in Bayesian belief propagation, though Dr. Marblestone is not aware of efforts to describe how this might be implemented at the level of e.g. dendrites. We shouldn’t assume that the brain is doing some sort of gradient-based learning” (p. 6). See also [Gütig and Sompolinsky (2006)](https://www.nature.com/articles/nn1643) (though I’m not sure if this would fall into one of the categories above).\n\n\n[329.](https://www.openphilanthropy.org/brain-computation-report#footnoteref329_8bturdb)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Kate Storrs](https://www.openphilanthropy.org/research/dr-kate-storrs-alexander-von-humboldt-research-fellow-justus-liebig-university/): “Dr. Storrs’ sense is that, in the parts of the field she engages with most closely (e.g., systems level modeling, visual/cognitive/perceptual modeling, human behavior), and maybe more broadly, a large majority of people treat synaptic weights as the core learned parameters in the brain. That said, she is not a neurophysiologist, and so isn’t the right person to ask about what sort of biophysical complexities could imply larger numbers of parameters. She is peripherally aware of papers suggesting that glia help store knowledge, and there are additional ideas as well. The truth probably involves mechanisms other than synaptic weights, but she believes that the consensus is that such weights hold most of the knowledge” (p. 2).\n\n\n[330.](https://www.openphilanthropy.org/brain-computation-report#footnoteref330_qyhi0sa)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/research/professor-konrad-kording-penn-integrated-knowledge-professor-university-of-pennsylvania/): “Here is one non-standard argument for this degree of non-linearity in neurons. Adjusting synapses in helpful ways requires computing how that synapse should adjust based on its contribution to whether the neuron fires. But this computation applies in basically the same way to individual ion channels in the cell: e.g., if the brain can signal to the synapse how to adjust in order to improve neuron firing, it can do the same for ion channels, at no additional cost. This makes Prof. Kording thinks that the brain is optimizing both. However, current techniques are very bad at measuring ion channel plasticity. Neuroscientists don’t tend to focus on it for this reason. There are considerably more ion channels than synapses, and ion channels change how synapses linearly and nonlinearly interact with one another. This suggests an uglier computational space” (p. 4-5).\n\n\n[331.](https://www.openphilanthropy.org/brain-computation-report#footnoteref331_ff4ho2p)See p.
494.\n\n\n[332.](https://www.openphilanthropy.org/brain-computation-report#footnoteref332_16p857n)[Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA): “Information is always represented by the states of variables in a physical system, whether that system is a sensing, actuating, communicating, controlling, or computing system or a combination of all types. It costs energy to change or to maintain the states of physical variables. These states can be in the voltage of a piezoelectric sensor, in the mechanical displacement of a robot arm, in the current of an antenna, in the chemical concentration of a regulating enzyme in a cell, or in the voltage on a capacitor in a digital processor. Hence, it costs energy to process information, whether that energy is used by enzymes in biology to copy a strand of DNA or in electronics to filter an input. To save energy, one must then reduce the amount of information that one wants to process. The higher the output precision and the higher the temporal bandwidth or speed at which the information needs to be processed, the higher is the rate of energy consumption, i.e., power. To save power, one must then reduce the rate of information processing…The art of low-power design consists of decomposing the task to be solved in an intelligent fashion such that the rate of information processing is reduced as far as is possible without compromising the performance of the system” (p. 9).\n\n\n[333.](https://www.openphilanthropy.org/brain-computation-report#footnoteref333_64735qn)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Blake Richards](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Blake%20Richards,%20September%2020,%202019.pdf) (p. 3):\n\n\nBased on Prof. Richard’s best guess, it seems reasonable to him to budget an order of magnitude of compute for learning, on top of a budget of roughly one FLOP (possibly a bit more) per spike through synapse. However, it could also be higher or lower.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/) (p. 3):\n\n\nProf. Zador expects it to be possible to capture synaptic plasticity with a small number of FLOPs per spike through synapse.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Barak Pearlmutter](https://www.openphilanthropy.org/research/professor-barak-pearlmutter-professor-of-computer-science-maynooth-university/) (p. 4):\n\n\nOverall, Prof. Pearlmutter thought that an estimate based on 100 FLOPs per spike through synapse, with a factor of two for learning, sounded fairly reasonable.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/) (p. 9):\n\n\nDr. Marblestone expects that both three-factor rules and backpropagation-type methods would imply compute burdens within an order of magnitude or two of estimates based on 1 FLOP per spike through synapse…Dr. Marblestone is fairly comfortable with one FLOP per spike through synapse as a low-end estimate, and ~100 FLOPs per spike through synapse (roughly comparable to the estimate offered by Prof. Rahul Sarpeshkar) as a high-end estimate. 
His best guess is 10-100 FLOPs per spike through synapse.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Chris Eliasmith](https://www.openphilanthropy.org/research/professor-chris-eliasmith-professor-of-philosophy-and-systems-design-engineering-canada-research-chair-in-theoretical-neuroscience-and-director-of-the-centre-for-theoretical-neuroscience-at-the-uni/) (p. 5):\n\n\nIn the large scale brain simulations that Chris Eliasmith builds, he often uses an error-driven Hebbian rule, which computes updates to synaptic weights based on pre-synaptic activity, post-synaptic activity, and an error signal (which, in the brain, could proceed via a mechanism like dopamine modulation). This rule requires on the order of three to five operations per synapse (a couple of products, and then a weight update), though the total burden depends on how often you perform the updates…Prof. Eliasmith thinks that neuron models at roughly the level of detail he uses in SPAUN (possibly including some non-linearities in the dendrites), if scaled up to the size of the brain as a whole, would be able not just to replicate cognitive performance, but also to reflect a functional profile similar to biological neurons.\n\n\n[334.](https://www.openphilanthropy.org/brain-computation-report#footnoteref334_muo5ixc)[Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA): “If we assume that synaptic multiplication is at least one floating-point operation (FLOP), the 20 ms second-order filter impulse response due to each synapse is 40 FLOPS, and that synaptic learning requires at least 10 FLOPS per spike, a synapse implements at least 50 FLOPS of computation per spike” (p. 748-749).\n\n\n[335.](https://www.openphilanthropy.org/brain-computation-report#footnoteref335_hgdnl8w)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/research/professor-eric-jonas-assistant-professor-of-computer-science-at-the-university-of-chicago/): “Prof. Jonas is not convinced by any arguments he’s heard that attempt to limit the amount of state you can store in a neuron. Indeed, some recent work explores the possibility that some information is stored using DNA. If there are actually molecular-level storage mechanisms at work in these systems, that would alter compute estimates by multiple orders of magnitude. … Prof. Jonas thinks that estimating the complexity of learning in the brain involves even more uncertainty than estimates based on firing decisions in neurons. Neuroscientists have been studying things like spike timing dependent plasticity and long-term plasticity for decades, and we can elicit versions of them reliably *in vitro*. But it’s much harder to understand the actual biological processes occurring *in vivo* in a behaving animal, because we have so much less experimental access. The machine learning community has multiple theories of the computational complexity of learning. However, these don’t seem to capture the interesting properties of natural systems or existing machine learning systems. … He also has a long-term prior that researchers are too quick to believe that the brain is doing whatever is currently popular in machine learning, and he doesn’t think we’ve found the right paradigm yet” (p. 3-4). 
One other expert I spoke with was also skeptical/agnostic, though I didn’t do notes from this conversation.\n\n\n[336.](https://www.openphilanthropy.org/brain-computation-report#footnoteref336_1ybqtqo)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/research/professor-konrad-kording-penn-integrated-knowledge-professor-university-of-pennsylvania/): “Here is one non-standard argument for this degree of non-linearity in neurons. Adjusting synapses in helpful ways requires computing how that synapse should adjust based on its contribution to whether the neuron fires. But this computation applies in basically the same way to individual ion channels in the cell: e.g., if the brain can signal to the synapse how to adjust in order to improve neuron firing, it can do the same for ion channels, at no additional cost. This makes Prof. Kording thinks that the brain is optimizing both. However, current techniques are very bad at measuring ion channel plasticity. Neuroscientists don’t tend to focus on it for this reason. There are considerably more ion channels than synapses, and ion channels change how synapses linearly and nonlinearly interact with one another. This suggests an uglier computational space” (p. 4-5).\n\n\n[337.](https://www.openphilanthropy.org/brain-computation-report#footnoteref337_nn3igzb)Dr. Dario Amodei emphasized this distinction.\n\n\n[338.](https://www.openphilanthropy.org/brain-computation-report#footnoteref338_agpn26h)A number of experts we engaged with indicated that many computational neuroscientists would not emphasize these other mechanisms very much (though their comments in this respect are not publicly documented); and the experts I interviewed didn’t tend to emphasize such mechanisms either.\n\n\n[339.](https://www.openphilanthropy.org/brain-computation-report#footnoteref339_0rkhynu)For example, Dr. Adam Marblestone noted that his own implicit ontology distinguishes between “fast, real-time computation,” – the rough equivalent of “standard neuron signaling” on the categorization I’ve been using – and other processes (see [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/) (p. 2)). And Prof. Anthony Zador suggested that processes that proceed on longer timescales won’t add much computational burden (see [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/) (p. 4)).\n\n\n[340.](https://www.openphilanthropy.org/brain-computation-report#footnoteref340_0kg10hg)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “It’s also hard to rule out the possibility that even though relevant processes (e.g., neuropeptide signaling) are proceeding on slow timescales, there are so many of them, implicating sufficiently many possible states and sufficiently complex interactions, that a lot of compute is required regardless” (p. 3).\n\n\n[341.](https://www.openphilanthropy.org/brain-computation-report#footnoteref341_szu7cnf)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. 
Eve Marder](https://www.openphilanthropy.org/research/professor-eve-marder-university-professor-and-victor-and-gwendolyn-beinfield-professor-of-neuroscience-brandeis-university/): “Both experimentalists and theorists sometimes act as though there’s a mechanistic wall between short-term, middle-term, and long-term changes in neural systems. This is partly because you have to come up with experiments that will occur over a given timeframe (two hours, two days, two weeks). But that doesn’t mean the time constants of these processes are two hours, two days, two weeks, etc.: it’s just that you designed an experimental protocol that allows you to see the difference between these periods of time. Historically, limitations on computational resources have also played a role in popularizing such separations. In the old days, people were limited by how much they could compute by the timesteps and integrators they were using, so there was tremendous pressure to separate timescales: no one wants to integrate over very long times at the rates you’d need to in order to capture fast dynamics. Thus, for example, people will take a model with eight or ten currents, and try to reduce it by separating timescales. If you’re clever, you can retain various essential features, but it’s hard to know if you’ve got them all. Whether or not such separations between timescales are biologically reasonable, though, they were computationally necessary, and they have resulted in ingrained beliefs in the field. In reality, the nervous system has an incredible ability to move seamlessly between timescales ranging from milliseconds to years, and the relevant processes interact. That is, short time-scale processes influence long time-scale processes, and vice versa. And unlike digital computers, the brain integrates over very long timescales at very fast speeds easily and seamlessly” (p. 2-3). In an ordinary differential equation model, variables that update more slowly might impose comparable FLOP/s costs to faster variables.\n\n\n[342.](https://www.openphilanthropy.org/brain-computation-report#footnoteref342_jdhjjxy)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/): “while global signals may be very important to a model’s function, they won’t add much computational burden (the same goes for processes that proceed on longer timescales). It takes fewer bits to specify a global signal, almost by definition” (p. 4). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Barak Pearlmutter](https://www.openphilanthropy.org/research/professor-barak-pearlmutter-professor-of-computer-science-maynooth-university/): “He also suggested that ephaptic effects would be ‘in the noise’ because they are bulk effects, representation of which would involve one number that covers thousands of synapses” (p. 
3).\n\n\n[343.](https://www.openphilanthropy.org/brain-computation-report#footnoteref343_as6aafh)[Leng and Ludwig (2008)](https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/18845614/): “Classical neurotransmitters are released from axon terminals by Ca2+-dependent exocytosis ([Burgoyne and Morgan (2003)](https://pubmed.ncbi.nlm.nih.gov/12663867/)); they are packaged in small synaptic vesicles which are preferentially localized at synapses, although recent evidence indicates that extrasynaptic vesicular release can also occur from the somato/dendritic regions of neurones ([Cheramy et al. (1981)](https://pubmed.ncbi.nlm.nih.gov/6258083/); [Huang and Neher (1996)](https://pubmed.ncbi.nlm.nih.gov/8755485/); [Zilberter et al. (2005)](https://pubmed.ncbi.nlm.nih.gov/16061520/)). Peptides are also released by Ca2+-dependent exocytosis, but they are packaged in large dense-core vesicles which generally are not localized to synapses; some are found at synapses, but these vesicles tend to be distributed in soma, dendrites and in axonal varicosities as well as at nerve endings” (p. 5625). See also [Mains and Eipper (1999)](https://www.ncbi.nlm.nih.gov/books/NBK28247/). [Russo (2017)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5424629/pdf/nihms860267.pdf): “All neuropeptides act as signal transducers via cell-surface receptors. Nearly all neuropeptides act at G-protein coupled receptors ([Figure 2](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5424629/figure/F2/)). This is an important distinction from ion channel-coupled receptors, since G-protein coupled signaling is consistent with neuropeptides inducing a slower and modulatory response compared to neurotransmitters. In addition, neuropeptide receptors have relatively high ligand affinities (nanomolar Kds), compared to neurotransmitter receptors. This allows a small amount of diffused peptide to still activate receptors. In summary, the combination of these features allows neuropeptides to be active at relatively large distances at relatively low concentrations” (p. 5). My impression is that neuropeptides can also diffuse through the blood (see [Mains and Eipper (1999)](https://www.ncbi.nlm.nih.gov/books/NBK28247/): “Probably the first neuropeptide to be identified was vasopressin, a nine-amino-acid peptide secreted by the nerve endings in the neural lobe of the pituitary. The source of the vasopressin is the magnocellular neurons of the hypothalamus, which send axons to the neurohypophysis, which is the site of release into the blood, in classic neurosecretory fashion”).\n\n\n[344.](https://www.openphilanthropy.org/brain-computation-report#footnoteref344_zs3z49o)See [Siegelbaum et al. (2013b)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138635) (p. 248), and [Alger (2002)](https://www.ncbi.nlm.nih.gov/pubmed/12498988).\n\n\n[345.](https://www.openphilanthropy.org/brain-computation-report#footnoteref345_4gwl7hi)[Burrows (1996)](https://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780198523444.001.0001/acprof-9780198523444-chapter-5): “A neuromodulator is a messenger released from a neuron in the central nervous system, or in the periphery, that affects groups of neurons, or effector cells that have the appropriate receptors. It may not be released at synaptic sites, often acts through second messengers and can produce long-lasting effects.
The release may be local so that only nearby neurons or effectors are influenced, or may be more widespread, which means that the distinction with a neurohormone can become very blurred. The act of neuromodulation, unlike that of neurotransmission, does not necessarily carry excitation or inhibition from one neuron to another, but instead alters either the cellular or synaptic properties of certain neurons so that neurotransmission between them is changed” (p. 195).\n\n\n[346.](https://www.openphilanthropy.org/brain-computation-report#footnoteref346_0uoiyws)See e.g. [Smith et al. (2019)](https://elifesciences.org/articles/47889): “Our analysis exposes transcriptomic evidence for dozens of molecularly distinct neuropeptidergic modulatory networks that directly interconnect all cortical neurons.”\n\n\n[347.](https://www.openphilanthropy.org/brain-computation-report#footnoteref347_9jg93c9)[Koch (1999)](https://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999): “It is difficult to overemphasize the importance of modulatory effects involving complex intracellular biochemical pathways. The sound of stealthy footsteps at night can set our heart to pound, sweat to be released, and all our senses to be at a maximum level of alertness, all actions that are caused by second messengers. They underlie the difference in sleep-wake behavior, in affective moods, and in arousal, and they mediate the induction of long-term memories” (p. 95).\n\n\n[348.](https://www.openphilanthropy.org/brain-computation-report#footnoteref348_4hfs29h)[Marder (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3482119/): “Because neuromodulators can transform the intrinsic firing properties of circuit neurons and alter effective synaptic strength, neuromodulatory substances reconfigure neuronal circuits, often massively altering their output… the neuromodulatory environment constructs and specifies the functional circuits that give rise to behavior” (abstract).\n\n\n[349.](https://www.openphilanthropy.org/brain-computation-report#footnoteref349_u8wndiz)[Smith et al. (2019)](https://elifesciences.org/articles/47889): “secreted neuropeptides are thought to persist long enough (e.g., minutes) in brain interstitial spaces for diffusion to very-high-affinity NP-GPCRs hundreds of micrometers distant from release sites… Though present information is limited, eventual degradation by interstitial peptidases nonetheless probably restricts diffusion of most neuropeptides to sub-millimeter, local circuit distance scales.”\n\n\n[350.](https://www.openphilanthropy.org/brain-computation-report#footnoteref350_ap9aami)This is a point suggested by Dr. Dario Amodei. See also [Siegelbaum et al. (2013b)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138635): “whereas the action of ionotropic receptors is fast and brief, metabotropic receptors produce effects that begin slowly and persist for long periods, ranging from hundreds of milliseconds to many minutes” (p. 236). [Koch (1999)](https://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999) says something similar, attributing the difference at least in part to the time it takes for a second messenger to diffuse through a cell: “An important distinction between ionotropic and metabotropic receptors is their time scale. While members of the former class act rapidly, terminating within a very small fraction of a second, the speed of the latter class is limited by diffusion.
Biochemical reactions can happen nearly instantaneously at the neuronal time scale. However, if a synaptic input to a metabotropic receptor induces the release of some messenger, such as calcium ions, which have to diffuse to the cell body in order to ‘do their thing,’ the time scale is extended to seconds or longer” (p. 95). [Russo (2017)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5424629/pdf/nihms860267.pdf): “All neuropeptides act as signal transducers via cell-surface receptors. Nearly all neuropeptides act at G-protein coupled receptors ([Figure 2](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5424629/figure/F2/)). This is an important distinction from ion channel-coupled receptors, since G-protein coupled signaling is consistent with neuropeptides inducing a slower and modulatory response compared to neurotransmitters” (p. 5).\n\n\n[351.](https://www.openphilanthropy.org/brain-computation-report#footnoteref351_z03yi0w)See the abstract.\n\n\n[352.](https://www.openphilanthropy.org/brain-computation-report#footnoteref352_bbu6j2n)[Leng and Ludwig (2008)](https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/18845614/): “These arguments suggest that, in the neural lobe, exocytosis of a large dense-core vesicle is a surprisingly rare event; at any given nerve terminal, it may take about 400 spikes to release a single vesicle. As these endings contain far more vesicles than are found at any synapse, synaptic release of peptides generally in the CNS seems likely to occur with a much lower probability of release. Release of oxytocin within the brain from the dendrites of magnocellular neurones is also infrequent, likely to occur at rates of only about 1 vesicle per cell every few seconds. This seems incompatible with the notion of peptides being effective and faithful mediators of information flow at short time scales and with spatial precision…There is clearly a massive qualitative discrepancy between the rates of release of synaptic vesicles and of peptide-containing vesicles … release of a peptide-containing vesicle is a comparatively rare event for any neurone” (p. 5629-5630).\n\n\n[353.](https://www.openphilanthropy.org/brain-computation-report#footnoteref353_azwuwh9)[Leng and Ludwig (2008)](https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/18845614/): “Peptide-containing vesicles may contain more than 10 times as much cargo (in terms of the number of messenger molecules)…There are no known reuptake mechanisms for the peptides and the vesicles cannot be re-used. Thus release of a peptide-containing vesicle is a comparatively rare event for any neurone, but one with potentially widespread and profound consequences (cf. volume transmission Fuxe et al. 2007)” (p. 5630).\n\n\n[354.](https://www.openphilanthropy.org/brain-computation-report#footnoteref354_1py17w1)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/): “Prof. Zador believes that neuromodulation is the dominant form of global signaling in the brain. However, while global signals may be very important to a model’s function, they won’t add much computational burden (the same goes for processes that proceed on longer timescales). It takes fewer bits to specify a global signal, almost by definition” (p. 4). Dr. Dario Amodei also took the slow timescales of such signals as evidence that they would not introduce substantial additional FLOP/s.
See also [Moravec (1988)](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2), who writes that “broadcast chemical messages are slow and contain only a relatively small amount of information. In a program their effect can probably be mimicked by a modest number of global variables that are referenced by other computations” (p. 163).\n\n\n[355.](https://www.openphilanthropy.org/brain-computation-report#footnoteref355_800w93j)[Araque and Navarrete (2010)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2894949/pdf/rstb20090313.pdf): “The nervous system is formed by two major cell types, neurons and glial cells. Glial cells are subdivided into different types with different functions: oligodendroglia, microglia, ependimoglia and astroglia… Glial cells, and particularly astrocytes—the most abundant glial cell type in the central nervous system—were considered to play simple supportive roles for neurons, probably because they lack long processes connecting sensory and effector organs” (p. 2375). [Bullock et al. (2005)](http://utw10020.utweb.utexas.edu/djlab/pdfs/Bullocketal2005.pdf): “Astrocytes are now known to communicate among themselves by means of glial transmitters and neuromodulators as well as by gap junctions (18). Moreover, astrocytes can detect neurotransmitters that are released from neuronal chemical synapses (21). These transmitters are delivered via synaptic vesicles into the synaptic cleft and diffuse to perisynaptic astrocytes. Additionally, neurotransmitters can be released outside the synapse and detected by perisynaptic glia (22, 23). In response, astrocytes can regulate communication between neurons by modifying synaptic transmission through the release of neurotransmitters and neuromodulators (18). Thus, there may be a parallel system of information processing that interacts with neuronal communication but propagates over much slower time scales through a functionally reticular network of non-neuronal cells” (p. 792). [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf): “Glia cells have traditionally been regarded as merely supporting actors to the neurons, but recent results suggest that they may play a fairly active role in neural activity” (p. 36).\n\n\n[356.](https://www.openphilanthropy.org/brain-computation-report#footnoteref356_h27j4re)See abstract.\n\n\n[357.](https://www.openphilanthropy.org/brain-computation-report#footnoteref357_cisc9ey)[Min et al. (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3485583/pdf/fncom-06-00093.pdf): “astrocytes can sense a wide variety of neurotransmitters and signaling molecules, and respond with increased Ca2+ signaling” (p. 3). More detail: “when stimulated with specific metabotropic receptor agonists, astrocytes display prominent and extremely slow (up to 10 s of seconds) whole-cell Ca2+ responses…. astrocytes can modulate neurons by releasing transmitters themselves. These so-called gliotransmitters are very diverse, including conventional transmitters like GABA and glutamate, as well as signaling molecules like purines, D-serine, taurine, cytokines, peptides, and metabolites like lactate ([Volterra and Meldolesi (2005)](https://www.nature.com/articles/nrn1722)). Astrocytes can release transmitters through two mechanisms. Firstly, they can release transmitter containing vesicles through SNARE mediated exocytosis. 
Astrocytes contain the necessary proteins for SNARE mediated exocytosis ([Araque et al. (2000)](https://www.jneurosci.org/content/20/2/666); [Bezzi et al. (2004)](https://www.nature.com/articles/nn1246); [Parpura and Zorec (2010)](http://www.sciencedirect.com/science/article/pii/S0165017309001283); [Schubert et al. (2011)](https://onlinelibrary.wiley.com/doi/abs/10.1002/glia.21190)), and genetic or pharmacological interference with proteins of the SNARE-complex in astrocytes inhibits numerous forms of astrocyte-neuron signaling ([Pascual et al. (2005)](https://pubmed.ncbi.nlm.nih.gov/16210541/); [Jourdain et al. (2007)](https://www.nature.com/articles/nn1849); [Halassa et al. (2009)](https://www.pnas.org/content/106/35/15037); [Henneberger et al. (2010)](https://www.nature.com/articles/nature08673); [Min and Nevian (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3485583/)). Secondly, transmitter can be released through reverse transport ([Héja et al. (2009)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2744931/)), or through membrane channels ([Kozlov et al. (2006)](https://www.pnas.org/content/103/26/10058); [Lee et al. (2010)](https://science.sciencemag.org/content/330/6005/790))… (p. 2-3). See [Porter and McCarthy (1997)](https://www.ncbi.nlm.nih.gov/pubmed/9106901) for more discussion of astrocyte receptors.\n\n\n[358.](https://www.openphilanthropy.org/brain-computation-report#footnoteref358_tmxk4kb)[Min et al. (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3485583/pdf/fncom-06-00093.pdf): “When stimulated with specific metabotropic receptor agonists, astrocytes display prominent and extremely slow (up to 10 s of seconds) whole-cell Ca2+ responses. This is also true for *in vivo* experiments, where sensory stimulation reliably induces astroglial slow Ca2+ transients ([Wang et al. (2006)](https://www.nature.com/articles/nn1703#:~:text=Astrocytic%20Ca2%2B%20signaling%20was%20a,in%20response%20to%20sensory%20stimulation.&text=Thus%2C%20astrocytes%20are%20activated%20by,responses%20are%20reduced%20or%20absent.)) sometimes related to vascular responses (Petzold et al., 2008). The recorded Ca2+ signal can remain restricted to a single or few astrocytes responding to specific sensory stimuli ([Wang et al. (2006)](https://www.nature.com/articles/nn1703#:~:text=Astrocytic%20Ca2%2B%20signaling%20was%20a,in%20response%20to%20sensory%20stimulation.&text=Thus%2C%20astrocytes%20are%20activated%20by,responses%20are%20reduced%20or%20absent.); [Schummers et al. (2008)](https://science.sciencemag.org/content/320/5883/1638)). Additionally, since astrocytes form complex networks through gap-junctional coupling with neighboring astrocytes (for review see [Giaume (2010)](https://www.frontiersin.org/articles/10.3389/fnene.2010.00129/full); [Giaume et al. (2010)](https://www.nature.com/articles/nrn2757/)) Ca2+ signals can spread like a wave through the astrocyte network ([Nimmerjahn et al. (2009)](http://www.sciencedirect.com/science/article/pii/S089662730900244X); [Kuga et al. (2011)](https://www.jneurosci.org/content/31/7/2607)). Although the mechanisms underlying the propagation of such Ca2+ waves are not fully understood, transport of either IP3 or Ca2+ itself through gap-junctions may play an important role ([Venance et al. (1997)](https://www.jneurosci.org/content/17/6/1981)). Furthermore, regenerative activity through astrocytic release of signaling molecules like ATP, which in turn activate Ca2+ signals in neighboring astrocytes, can be involved in Ca2+ wave propagation ([Guthrie et al. 
(1999)](https://www.jneurosci.org/content/19/2/520))” (p. 2).\n\n\n[359.](https://www.openphilanthropy.org/brain-computation-report#footnoteref359_z83w73i)[Kirischuk et al. (2012)](https://www.cell.com/trends/neurosciences/fulltext/S0166-2236(12)00054-9?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0166223612000549%3Fshowall%3Dtrue): “In addition to generally acknowledged Ca2+ excitability of astroglia, recent studies have demonstrated that neuronal activity triggers transient increases in the cytosolic Na+ concentration ([Na+]i) in perisynaptic astrocytes. These [Na+]i transients are controlled by multiple Na+-permeable channels and Na+-dependent transporters; spatiotemporally organized [Na+]i dynamics in turn regulate diverse astroglial homeostatic responses such as metabolic/signaling utilization of lactate and glutamate, transmembrane transport of neurotransmitters and K+ buffering. In particular, near-membrane [Na+]i transients determine the rate and the direction of the transmembrane transport of GABA and Ca2+” (abstract). [Bernardinell et al. (2004)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC522032/): “Glutamate-evoked Na+ increase in astrocytes has been identified as a signal coupling synaptic activity to glucose consumption. Astrocytes participate in multicellular signaling by transmitting intercellular Ca2+ waves. Here we show that intercellular Na+ waves are also evoked by activation of single cultured cortical mouse astrocytes in parallel with Ca2+ waves; however, there are spatial and temporal differences. Indeed, maneuvers that inhibit Ca2+ waves also inhibit Na+ waves; however, inhibition of the Na+/glutamate cotransporters or enzymatic degradation of extracellular glutamate selectively inhibit the Na+ wave. Thus, glutamate released by a Ca2+ wave-dependent mechanism is taken up by the Na+/glutamate cotransporters, resulting in a regenerative propagation of cytosolic Na+ increases. The Na+ wave gives rise to a spatially correlated increase in glucose uptake, which is prevented by glutamate transporter inhibition. Therefore, astrocytes appear to function as a network for concerted neurometabolic coupling through the generation of intercellular Na+ and metabolic waves” (abstract).\n\n\n[360.](https://www.openphilanthropy.org/brain-computation-report#footnoteref360_6tsclpm)[Min et al. (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3485583/pdf/fncom-06-00093.pdf): “astrocytes can sense a wide variety of neurotransmitters and signaling molecules, and respond with increased Ca2+ signaling. But how do astrocytes signal back to neurons? Broadly speaking, astrocytes can do this through three separate mechanisms. Firstly, because astrocytes are crucial for ion homeostasis, they can influence neurons by dynamically altering the ionic balance. Secondly, astrocytes can alter neuronal functioning by modulating the uptake of neurotransmitter molecules from the extracellular space ([Theodosis et al. (2008)](https://pubmed.ncbi.nlm.nih.gov/18626065/)). Thirdly, astrocytes can release transmitters themselves ([Araque et al. (2001)](https://pubmed.ncbi.nlm.nih.gov/11181976/#:~:text=Astrocytes%2C%20a%20sub%2Dtype%20of,and%20can%20modulate%20neighboring%20neurons.))” (p. 3).\n\n\n[361.](https://www.openphilanthropy.org/brain-computation-report#footnoteref361_f8rhgak)[Min et al. (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3485583/pdf/fncom-06-00093.pdf): “Several studies have shown that astrocytes can regulate neuronal excitability. 
Astrocytes can achieve this through several mechanisms: by regulation of the extracellular ionic composition, by maintaining a tonic extracellular transmitter concentration, by regulation of basal synaptic transmission, and by the induction of phasic events in neighboring neurons” (p. 4). [Min et al. (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3485583/pdf/fncom-06-00093.pdf): “In addition to modulating neuronal excitability and basal synaptic transmission, astrocytes play a role in the specific strengthening or weakening of synaptic connections, either transiently (short-term plasticity), or long-lasting (long-term plasticity)” (p. 5). See p. 5-9 for more details on astrocyte involvement in short-term and long-term plasticity. [Baldwin and Eroglu (2017)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5573249/pdf/nihms880422.pdf): “astrocytes are key players in circuit formation, instructing the formation of synapses between distinct classes of neurons” (p. 1).\n\n\n[362.](https://www.openphilanthropy.org/brain-computation-report#footnoteref362_ef0zrw0)[Oberheim et al. (2006)](https://www.cell.com/trends/neurosciences/fulltext/S0166-2236(06)00175-5?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0166223606001755%3Fshowall%3Dtrue): “Human protoplasmic astrocytes manifest a threefold larger diameter and have tenfold more primary processes than those of rodents” (p. 547). On these grounds, [Oberheim et al. (2006)](https://www.cell.com/trends/neurosciences/fulltext/S0166-2236(06)00175-5?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0166223606001755%3Fshowall%3Dtrue) propose that the human brain’s astrocytes may play a role in explaining its unique computational power: “By integrating the activity of a larger contiguous set of synapses, the astrocytic domain might extend the processing power of human brain beyond that of other species” (p. 552).\n\n\n[363.](https://www.openphilanthropy.org/brain-computation-report#footnoteref363_l7aa53x)[Sakry et al. (2014)](https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1001993): “Oligodendrocyte precursor cells (OPC) characteristically express the transmembrane proteoglycan nerve-glia antigen 2 (NG2) and are unique glial cells receiving synaptic input from neurons. The development of NG2+ OPC into myelinating oligodendrocytes has been well studied, yet the retention of a large population of synapse-bearing OPC in the adult brain poses the question as to additional functional roles of OPC in the neuronal network. Here we report that activity-dependent processing of NG2 by OPC-expressed secretases functionally regulates the neuronal network” (p. 1). [Káradóttir et al. (2008)](https://www.ncbi.nlm.nih.gov/pubmed/18311136): “We show here that there are two distinct types of morphologically identical oligodendrocyte precursor glial cells (OPCs) in situ in rat CNS white matter. One type expresses voltage-gated sodium and potassium channels, generates action potentials when depolarized and senses its environment by receiving excitatory and inhibitory synaptic input from axons” (p. 1).\n\n\n[364.](https://www.openphilanthropy.org/brain-computation-report#footnoteref364_1u6wenz)[Bullock et al. (2005)](http://utw10020.utweb.utexas.edu/djlab/pdfs/Bullocketal2005.pdf): “Myelinating glia do not fire action potentials, but they can detect impulses in axons through membrane receptors that bind signaling molecules. 
These include ATP (16) and adenosine (17) that are released along the axon and also potassium that is released during intense neural activity” (p. 792). [de Faria, Jr. et al. (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6587454/): “Alternatively, active axons can also signal OPCs [oligodendrocyte precursor cells] via non‐synaptic vascular release of growth factors [e.g. platelet‐derived growth factor (PDGF) AA and neurotrophins] and neurotransmitters (e.g. glutamate, GABA or ATP). OPCs express not only ion channels including glutamate‐activated ion channels, the sodium and potassium channels, but also receptors of growth factors. These cellular properties make OPCs equipped to respond to neuronal activity” (p. 450).\n\n\n[365.](https://www.openphilanthropy.org/brain-computation-report#footnoteref365_q5bw2s5)[Stobart et al. (2018b)](https://www.cell.com/action/showPdf?pii=S0896-6273%2818%2930284-8): “We identified calcium responses in both astrocyte processes and endfeet that rapidly followed neuronal events (∼120 ms after). These fast astrocyte responses were largely independent of IP3R2-mediated signaling and known neuromodulator activity (acetylcholine, serotonin, and norepinephrine), suggesting that they are evoked by local synaptic activity. The existence of such rapid signals implies that astrocytes are fast enough to play a role in synaptic modulation and neurovascular coupling” (p. 726). See also [Agarwal et al. (2017)](http://www.sciencedirect.com/science/article/pii/S0896627316310078); [Bindocci et al. (2017)](https://science.sciencemag.org/content/356/6339/eaai8185); [Lind et al. (2018)](https://onlinelibrary.wiley.com/doi/abs/10.1002/glia.23246); [Otsu et al. (2015)](https://www.nature.com/articles/nn.3906); [Srinivasan et al. (2015)](https://pubmed.ncbi.nlm.nih.gov/28213444/); [Stobart et al. (2018a)](https://academic.oup.com/cercor/article/28/1/184/2572087). [Winship et al. (2007)](https://www.jneurosci.org/content/jneuro/27/23/6268.full.pdf): “These *in vivo* findings suggest that astrocytes can respond to sensory activity in a selective manner and process information on a subsecond time scale, enabling them to potentially form an active partnership with neurons for rapid regulation of microvascular tone and neuron–astrocyte network properties” (p. 6268). [Min et al. (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3485583/pdf/fncom-06-00093.pdf): “Two parallel studies have indeed identified small and relatively fast Ca2+ signals that are restricted to the astrocyte process ([Di Castro et al. (2011)](https://pubmed.ncbi.nlm.nih.gov/21909085/); [Panatier et al. (2011)](https://www.cell.com/action/showPdf?pii=S0092-8674%2811%2900820-8)). Two main classes of local calcium events have been identified: focal highly confined transients (about 4 μm) and more robust regional events (about 12 μm; Figure 1; [Di Castro et al. (2011)](https://pubmed.ncbi.nlm.nih.gov/21909085/)). The more local events have been proposed to be generated by spontaneous single vesicle release at individual synapses whereas the expanded events seem to be generated by single action potentials activating several neighboring synapses in the astrocyte domain” (p. 2-3).\n\n\n[366.](https://www.openphilanthropy.org/brain-computation-report#footnoteref366_8tfbze2)[Panatier et al.
(2011)](https://www.cell.com/action/showPdf?pii=S0092-8674%2811%2900820-8): “we show that astrocytes in the hippocampal CA1 region detect synaptic activity induced by single-synaptic stimulation… single pulse stimulation of neuronal presynaptic elements evoked local Ca2+ events in an astrocytic process” (p. 785, p. 787).\n\n\n[367.](https://www.openphilanthropy.org/brain-computation-report#footnoteref367_h21sy2t)[Wang et al. (2009)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3638986/): “Astrocytes are electrically non-excitable cells that, on a slow time scale of seconds, integrate synaptic transmission by dynamic increases in cytosolic Ca2+.” [Panatier et al. (2011)](https://www.cell.com/action/showPdf?pii=S0092-8674%2811%2900820-8): “the detection and modulation mechanisms in astrocytes are deemed too slow to be involved in local modulation of rapid, basal synaptic transmission. Indeed, although Ca2+ activities have been reported in glial processes ([Nett et al. (2002)](https://pubmed.ncbi.nlm.nih.gov/11784768/#:~:text=Hippocampal%20astrocytes%20in%20situ%20exhibit%20calcium,occur%20independent%20of%20neuronal%20activity.&text=Results%20presented%20in%20this%20study,the%20absence%20of%20neuronal%20activity.), [Perea and Araque (2005)](https://pubmed.ncbi.nlm.nih.gov/15745945/), [Santello et al. (2011)](https://pubmed.ncbi.nlm.nih.gov/21382557/), [Wang et al. (2006)](https://www.nature.com/articles/nn1703#:~:text=Astrocytic%20Ca2%2B%20signaling%20was%20a,in%20response%20to%20sensory%20stimulation.&text=Thus%2C%20astrocytes%20are%20activated%20by,responses%20are%20reduced%20or%20absent.)), Ca2+ signaling has been generally studied globally in the whole astrocyte, where the slow timescale of Ca2+ changes precludes any spatial and temporal match with fast and localized synaptic transmission. Moreover, trains of sustained stimulation of afferents were necessary to induce this type of glial Ca2+ activity” (p. 785).\n\n\n[368.](https://www.openphilanthropy.org/brain-computation-report#footnoteref368_rtgiec5)[Min et al. (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3485583/pdf/fncom-06-00093.pdf): “The temporal characteristics of astrocytic Ca2+ transients have led to the idea that unlike neurons, astrocytes display exclusively particularly slow responses, and that their signals are not suited to be restricted to small cellular compartments, as happens for example, in dendritic spines” (p. 2).\n\n\n[369.](https://www.openphilanthropy.org/brain-computation-report#footnoteref369_jqs1ktm)[von Bartheld et al. (2016)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5063692/pdf/nihms799882.pdf): “The recently validated isotropic fractionator demonstrates a glia:neuron ratio of less than 1:1 and a total number of less than 100 billion glial cells in the human brain. A survey of original evidence shows that histological data always supported a 1:1 ratio of glia to neurons in the entire human brain, and a range of 40-130 billion glial cells. We review how the claim of one trillion glial cells originated, was perpetuated, and eventually refuted” (p. 1).\n\n\n[370.](https://www.openphilanthropy.org/brain-computation-report#footnoteref370_bdg0xla)[von Bartheld et al. 
(2016)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5063692/pdf/nihms799882.pdf): “All three methods: histology, DNA extraction, and the IF method support numbers of about 10–20 billion neurons and at most a 2-fold larger number of glial cells (20–40 billion) in the human cerebral cortical grey matter, thus supporting an average GNR of approximately 1.5. Inclusion of the white matter (that underlies the grey matter of cerebral cortex) increases the GNR to about 3.0” (p. 11).\n\n\n[371.](https://www.openphilanthropy.org/brain-computation-report#footnoteref371_mq2uhrd)[Verkhratsky and Butt, eds. (2013)](https://onlinelibrary.wiley.com/doi/book/10.1002/9781118402061): “The authors tried to calculate the relative numbers of glial cell types, and they found that astrocytes accounted for ~20 per cent, oligodendrocytes for 75 per cent and microglia for 5 per cent of the total glial cell population. The identifying criteria, however, were rather doubtful, since no specific staining was employed… In the earlier morphological studies, based on 2d counting, the distribution of glial cell types was found to be: astrocytes 40 per cent, oligodendrocytes 50 per cent and microglia 5-10 per cent ([Blinkow and Glezer (1968)](https://onlinelibrary.wiley.com/doi/abs/10.1002/ajpa.1330290327))” (p. 95-96).\n\n\n[372.](https://www.openphilanthropy.org/brain-computation-report#footnoteref372_6iyn45w)[Verkhratsky and Butt, eds. (2013)](https://onlinelibrary.wiley.com/doi/book/10.1002/9781118402061): “NG2-glia constitute 8-9 per cent of total cells in white matter and 2-3 per cent of total cells in the gray matter, with an estimated density of 10-140 mm2 in the adult CNS (Nishyama et al., 2009)” (p. 326).\n\n\n[373.](https://www.openphilanthropy.org/brain-computation-report#footnoteref373_88pgn08)This was a point suggested by Dr. Dario Amodei. See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/research/professor-konrad-kording-penn-integrated-knowledge-professor-university-of-pennsylvania/): “Glial cells would imply a factor of two in required compute, but we are likely to be so many orders of magnitude wrong already that incorporating glia will not make the difference” (p. 3).\n\n\n[374.](https://www.openphilanthropy.org/brain-computation-report#footnoteref374_ysj1xpz)Oberheim et al. (2006): “Taking into account the increase in size of protoplasmic astrocytes that accompanies this increased synaptic density, we can estimate that each astrocyte supports and modulates the function of roughly two million synapses” (p. 549). [Verkhratsky and Butt, eds. (2013)](https://onlinelibrary.wiley.com/doi/book/10.1002/9781118402061): “A single protoplasmic astrocyte in rodent cortex contacts 4-8 neurones, surrounds ~300-600 neuronal dendrites and provides cover for up to 20,000-120,000 synapses residing within its domain ([Bushong et al. (2002)](https://www.jneurosci.org/content/22/1/183?ijkey=12959cc2fb497700c703bcaefeaf254f4a8ec157&keytype2=tf_ipsecsha); [Halassa et al. (2007b)](https://www.jneurosci.org/content/27/24/6473))… Human protoplasmic astrocytes are 2-3 times larger and exceedingly more complex; the processes of a single human protoplasmic astrocyte cover approximately 2 million synapses” (p. 114). [Winship et al. (2007)](https://www.jneurosci.org/content/jneuro/27/23/6268.full.pdf): “It is worth noting that astrocyte processes can contact up to 100,000 synapses ([Bushong et al.
(2002)](https://www.jneurosci.org/content/22/1/183?ijkey=12959cc2fb497700c703bcaefeaf254f4a8ec157&keytype2=tf_ipsecsha))” (p. 6271).\n\n\n[375.](https://www.openphilanthropy.org/brain-computation-report#footnoteref375_k31xjrh)Their methodology assumes that “the same type of neuron or non-neuronal cells is assumed to approximately have a similar energy expenditure no matter where they located (in GM or WM)” (p. 14). Given roughly equal numbers of neurons and non-neuronal cells in the brain as a whole (see [Azevedo et al. (2009)](https://www.ncbi.nlm.nih.gov/pubmed/19226510), (p. 536)), this would naively suggest that neurons account for roughly 97% of the brain’s overall energy consumption. However, I’m not sure that such a naive application of their estimate is appropriate.\n\n\n[376.](https://www.openphilanthropy.org/brain-computation-report#footnoteref376_q0lmi01)This is a point made by [AI Impacts](https://aiimpacts.org/glial-signaling/), who also add that “although we can imagine many possible designs on which glia would perform most of the information transfer in the brain while neurons provided particular kinds of special-purpose communication at great expense, this does not seem likely given our current understanding.”\n\n\n[377.](https://www.openphilanthropy.org/brain-computation-report#footnoteref377_b5l023g)“**FIG. 3. (A)** Distribution of signaling-related ATP usage among different cellular mechanisms when the mean firing rate of neurons is 4 Hz. The percentages of the expenditure maintaining resting potentials, propagating action potentials through a neuron, and driving presynaptic Ca2+ entry, glutamate recycling, and postsynaptic ion fluxes, are shown (100% = 3.29 × 10⁹ ATP/neuron/s). **(B)** Comparison of our predicted distribution of signaling-related energy consumption with the distribution of mitochondria observed by [Wong-Riley (1989)](http://www.sciencedirect.com/science/article/pii/0166223689901653). For the dendrites + soma column, Wong-Riley’s data are the percentage of mitochondria in dendrites, whereas our prediction is the percentage of energy expended on postsynaptic currents, dendritic and somatic action potentials, and the neuronal resting potential. For the axons + terminals column, Wong-Riley’s data are the percentage of mitochondria in axons and presynaptic terminals, and our prediction is for the percentage of energy expended on axonal action potentials, presynaptic Ca2+ entry, accumulating glutamate into vesicles, and recycling vesicles. The close spacing of terminals along axons (5 µm, implying a diffusion time of only 25 milliseconds ([Braitenberg and Schüz (1998)](https://link.springer.com/book/10.1007/978-3-662-03733-1)) will make terminal and axonal mitochondria functionally indistinguishable. For the glia column, Wong-Riley’s data are the percentage of mitochondria in glia, whereas our prediction is for the energy expended on the glial resting potential, glutamate uptake, and its conversion to glutamine. This comparison ignores the 25% of energy expenditure not related to signaling (see Discussion), and the possibility that some processes (for example, in glia) may be driven mainly by glycolysis” (p. 1140).\n\n\n[378.](https://www.openphilanthropy.org/brain-computation-report#footnoteref378_e2d4kms)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof.
Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/): “Glia are very important to understanding disease, but Prof. Zador does not believe that they are important to computing in the brain” (p. 4).\n\n\n[379.](https://www.openphilanthropy.org/brain-computation-report#footnoteref379_84hhw86)See [Siegelbaum and Koester (2013d)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138632), (p. 178).\n\n\n[380.](https://www.openphilanthropy.org/brain-computation-report#footnoteref380_6odmoo1)See [Siegelbaum and Koester (2013d)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138632), (p. 178).\n\n\n[381.](https://www.openphilanthropy.org/brain-computation-report#footnoteref381_xopzi3z)See [Siegelbaum and Koester (2013d)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138632), (p. 178).\n\n\n[382.](https://www.openphilanthropy.org/brain-computation-report#footnoteref382_au6fl09)[Siegelbaum and Koester (2013d)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138632): “Most synapses in the brain are chemical” (p. 177). [Lodish et al. (2000)](https://scholar.google.com/scholar?cluster=198058569078716943&hl=en&as_sdt=2005&sciodt=0,5): “We also briefly discuss electric synapses, which are much rarer, but simpler in function, than chemical synapses.” [Purves et al. (2001)](https://www.ncbi.nlm.nih.gov/books/NBK11164/): “Although they are a distinct minority, [electrical synapses](https://www.ncbi.nlm.nih.gov/books/n/neurosci/A2251/def-item/A2436/) are found in all nervous systems, including the human brain.” [Wang et al. (2010)](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0010253) suggest probabilities of 0.5% and 1.4% of coupling between pyramidal cells in different brain regions. From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Chris Eliasmith](https://www.openphilanthropy.org/research/professor-chris-eliasmith-professor-of-philosophy-and-systems-design-engineering-canada-research-chair-in-theoretical-neuroscience-and-director-of-the-centre-for-theoretical-neuroscience-at-the-uni/): “Adding gap junctions probably would not substantially increase the overall compute budget, because they are not very common” (p. 4). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Barak Pearlmutter](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Prof.%20Barak%20Pearlmutter.pdf): “Prof. Pearlmutter characterized the comparatively minimal number of gap junctions as the “bottom line” with respect to their computational role” (p. 3).\n\n\n[383.](https://www.openphilanthropy.org/brain-computation-report#footnoteref383_xlrufz4)[Siegelbaum and Koester (2013d)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138632): “Electrical synapses are employed primarily to send rapid and stereotyped depolarizing signals. In contrast, chemical synapses are capable of more variable signaling and thus can produce more complex behaviors. They can mediate either excitatory or inhibitory actions in postsynaptic cells and produce electrical changes in the postsynaptic cell that last from milliseconds to many minutes. Chemical synapses also serve to amplify neuronal signals, so even a small presynaptic nerve terminal can alter the response of large postsynaptic cells. Not surprisingly, most synapses in the brain are chemical” (p. 177).
[Bullock et al. (2005)](http://utw10020.utweb.utexas.edu/djlab/pdfs/Bullocketal2005.pdf) also suggest that “electrical transmission through gap junctions was initially considered primitive and likely incapable of the subtleties of chemical transmission through axon-dendrite synapses” (p. 792). From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/research/dr-jess-riedel-senior-research-scientist-physics-ntt-research/): “From a computational perspective, electrical synapses lack gain – the ability to amplify signals. Dr. Riedel recalls that gain is a key property of computational units like transistors” (p. 5).\n\n\n[384.](https://www.openphilanthropy.org/brain-computation-report#footnoteref384_2xywc20)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/research/dr-adam-marblestone-research-scientist-google-deepmind/): “Sometimes the coupling between neurons created by gap junctions is so fast that they are treated as one neuron for modeling purposes. Gap junctions are also often thought of as supporting some kind of oscillation or globally coherent behavior that might not require a lot of computation. Whether gap junctions could create more computationally-expensive, non-linear interactions between different parts of neurons is an interesting question” (p. 6). [Bennett and Zukin (2004)](https://www.sciencedirect.com/science/article/pii/S0896627304000431): “Gap junctions can synchronize electrical activity and may subserve metabolic coupling and chemical communication as well. They are thought to play an important role in brain development, morphogenesis, and pattern formation ([Bennett et al. (1991)](https://www.cell.com/neuron/abstract/0896-6273(91)90241-Q), [Bruzzone et al. (1996)](https://pubmed.ncbi.nlm.nih.gov/8665925/), [Dermietzel et al. (1989)](https://pubmed.ncbi.nlm.nih.gov/2557621/), [Goodenough et al. (1996)](https://pubmed.ncbi.nlm.nih.gov/8811187/))” (p. 495).\n\n\n[385.](https://www.openphilanthropy.org/brain-computation-report#footnoteref385_qgzeuf5)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Barak Pearlmutter](https://www.openphilanthropy.org/research/professor-barak-pearlmutter-professor-of-computer-science-maynooth-university/): “[Prof. Pearlmutter] took the fact that gap junctions are roughly linear, and that they don’t involve time delays, as evidence they would be easy to model” (p. 3). Though [Bullock et al. (2005)](http://utw10020.utweb.utexas.edu/djlab/pdfs/Bullocketal2005.pdf) seem to suggest some forms of complex behavior: “an electrical impulse in one cell by no means inevitably propagates to the other cells with which it shares gap junctions. In fact, a channel within a gap junction is not necessarily open, and an entire gap junction may not transmit electrical current until it is appropriately modified in response to transmission from chemical synapses of the same, ‘presynaptic’ neuron” (p. 792).\n\n\n[386.](https://www.openphilanthropy.org/brain-computation-report#footnoteref386_59s5so0)[Trenholm et al. 
(2013)](https://www.nature.com/articles/nn.3308?draft=marketing): “We identified a network of electrically coupled motion–coding neurons in mouse retina that act collectively to register the leading edges of moving objects at a nearly constant spatial location, regardless of their velocity” (abstract).\n\n\n[387.](https://www.openphilanthropy.org/brain-computation-report#footnoteref387_6n9r1pd)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Larson](https://www.openphilanthropy.org/research/dr-stephen-larson-ceo-of-metacell-and-co-founder-of-openworm/): “Dr. Larson thinks that gap junctions can contribute to non-linear dynamics and near-chaotic dynamics within neural networks. As a rough rule of thumb: the more non-linear a system is, the more computationally expensive it is to simulate” (p. 3).\n\n\n[388.](https://www.openphilanthropy.org/brain-computation-report#footnoteref388_6f7x4tj)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Chris Eliasmith](https://www.openphilanthropy.org/research/professor-chris-eliasmith-professor-of-philosophy-and-systems-design-engineering-canada-research-chair-in-theoretical-neuroscience-and-director-of-the-centre-for-theoretical-neuroscience-at-the-uni/): “You can model a gap junction as a connection that updates every timestep, rather than every time a spike occurs” (p. 4).\n\n\n[389.](https://www.openphilanthropy.org/brain-computation-report#footnoteref389_6limr9r)They show that a wave of periodic neural activity can propagate across two physically separated pieces of hippocampal tissue (separation that removes the possibility of chemical or electrical synaptic communication), and that this propagation was blocked by a mechanism that cancels the relevant electrical field – results that strongly suggest ephaptic effects as a causal mechanism. [Chiang et al. (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6312416/pdf/TJP-597-249.pdf): “To confirm the absence of any role of synaptic transmission and to eliminate other forms of communication between neurons except for ephaptic coupling, we next examined the possibility that electric fields generated by pyramidal neurons could propagate through a cut in the tissue by activating other cells across a small gap of the tissue, thereby eliminating chemical, electrical synapses (gap junctions), or axonal transmission. Fig. [4](https://physoc.onlinelibrary.wiley.com/doi/10.1113/JP276904#tjp13271-fig-0004)*A* and *B* shows the propagation of the slow hippocampal periodic activity before and after the cut in the tissue. To ensure that the slice was completely cut, the two pieces of tissue were separated and then rejoined while a clear gap was observed under the surgical microscope. The slow hippocampal periodic activity could indeed generate an event on the other side of a complete cut through the whole slice (Fig. [4](https://physoc.onlinelibrary.wiley.com/doi/10.1113/JP276904#tjp13271-fig-0004)*B*). However, the slow hippocampal periodic activity failed to trigger the activity across the gap when the distance of the gap increased (Fig. [4](https://physoc.onlinelibrary.wiley.com/doi/10.1113/JP276904#tjp13271-fig-0004)*C*). The expanded window in Fig. [4](https://physoc.onlinelibrary.wiley.com/doi/10.1113/JP276904#tjp13271-fig-0004)*D* shows that the waveforms of the slow hippocampal periodic activity and the delay between two signals measured in recording electrodes 1 and 2 were similar. 
The speed of the slow hippocampal periodic activity across the tissue was not affected by the presence of the cut in Fig. [4](https://physoc.onlinelibrary.wiley.com/doi/10.1113/JP276904#tjp13271-fig-0004)*E* (*t* test, *n* = 36 events in 3 slices). Therefore, this experiment shows that slow hippocampal periodic activity can propagate along a cut tissue by activating cells on the other side without any chemical and electrical synaptic connections at a similar speed to those observed in the intact tissue” (p. 255).\n\n\n[390.](https://www.openphilanthropy.org/brain-computation-report#footnoteref390_cdl8b62)[Anastassiou et al. (2011)](https://pubmed.ncbi.nlm.nih.gov/21240273/): “We found that extracellular fields induced ephaptically mediated changes in the somatic membrane potential that were less than 0.5 mV under subthreshold conditions. Despite their small size, these fields could strongly entrain action potentials, particularly for slow (<8 Hz) fluctuations of the extracellular field” (abstract). [Chiang et al. (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6312416/pdf/TJP-597-249.pdf): “Ephaptic coupling has been suggested as a mechanism involved in modulating neural activity from different regions of the nervous system ([Jefferys (1995)](https://pubmed.ncbi.nlm.nih.gov/7480159/); [Weiss and Faber (2010)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2876880/); [Anastassiou and Koch (2015)](https://www.sciencedirect.com/science/article/abs/pii/S0959438814001809?via%3Dihub)) especially in the vertebrate retina ([Vroman et al. (2013)](https://pubmed.ncbi.nlm.nih.gov/24068997/)) and in the olfactory circuit ([Su et al. (2012)](https://pubmed.ncbi.nlm.nih.gov/23172146/)). Several studies also indicate that weak electric fields can influence the neural activity at the cortical and hippocampal network level ([Francis et al. (2003)](https://pubmed.ncbi.nlm.nih.gov/12917358/); [Deans et al. (2007)](https://pubmed.ncbi.nlm.nih.gov/17599962/); [Fröhlich and McCormick (2010)](https://pubmed.ncbi.nlm.nih.gov/20624597/)). In hippocampal slices, weak electric fields can affect the excitability of pyramidal cells and the synchronization of the hippocampal network ([Francis et al. (2003)](https://pubmed.ncbi.nlm.nih.gov/12917358/); [Deans et al. (2007)](https://pubmed.ncbi.nlm.nih.gov/17599962/)). In the cortex, weak electric fields have also been shown to modulate slow periodic activity in the *in vitro* preparation ([Fröhlich and McCormick (2010)](https://pubmed.ncbi.nlm.nih.gov/20624597/)). Although endogenous electric fields are thought to be too weak to excite neurons, two recent studies suggest that weak electric fields are involved in the propagation of epileptiform activity at a specific speed of 0.1 m s⁻¹ ([Zhang et al. (2014)](https://pubmed.ncbi.nlm.nih.gov/24453330/); [Qiu et al. (2015)](https://pubmed.ncbi.nlm.nih.gov/26631463/))” (p. 250).\n\n\n[391.](https://www.openphilanthropy.org/brain-computation-report#footnoteref391_tk2oy18)[Chiang et al. (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6312416/pdf/TJP-597-249.pdf): “Slow oscillations have been observed to propagate with speeds around 0.1 m s⁻¹ throughout the cerebral cortex *in vivo*… The mechanism most consistent with the data is ephaptic coupling whereby a group of neurons generates an electric field capable of activating the neighbouring neurons” (p.
250).\n\n\n[392.](https://www.openphilanthropy.org/brain-computation-report#footnoteref392_ofakk78)[Anastassiou and Koch (2015)](https://www.sciencedirect.com/science/article/abs/pii/S0959438814001809?via%3Dihub): “The biggest question about ephaptic coupling to endogenous fields remains its functional role: does such nonsynaptic, electric communication contribute to neural function and computations in the healthy brain (e.g., in the absence of the strong fields generated during epileptic seizures or other pathological brain states)? And, if yes, where, how and under which conditions? While characterizing ephaptic effects at the level of synapses, neurons and circuits in slice remains invaluable, ephaptic coupling must ultimately be studied in behaving animals. This is particularly so as such effects are likely to be small (e.g., compared to spike threshold) and spatially diffuse (in the case of LFPs), suggesting a circuit-wide feedback mechanism, that is, at the level where neural processing relevant to behavior occurs [[62](https://www.sciencedirect.com/science/article/pii/S0896627310007658)]” (see “Outlook”).\n\n\n[393.](https://www.openphilanthropy.org/brain-computation-report#footnoteref393_dqg03pl)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/): “Prof. Zador believes that ephaptic communication is very unlikely to be important to the brain’s information-processing” (p. 4).\n\n\n[394.](https://www.openphilanthropy.org/brain-computation-report#footnoteref394_jn5mqgj)Resting membrane potential is typically around [-70 mV](https://courses.lumenlearning.com/wm-biology2/chapter/resting-membrane-potential/), and the threshold for firing is around [-55 mV](https://courses.lumenlearning.com/wm-biology2/chapter/action-potential/), though these vary somewhat. [Anastassiou and Koch (2015)](https://www.sciencedirect.com/science/article/abs/pii/S0959438814001809?via%3Dihub): “such effects are likely to be small (e.g., compared to spike threshold)” (see “Outlook”).\n\n\n[395.](https://www.openphilanthropy.org/brain-computation-report#footnoteref395_scbcm3r)[Anastassiou and Koch (2015)](https://www.sciencedirect.com/science/article/abs/pii/S0959438814001809?via%3Dihub): “The usefulness of such studies for understanding ephaptic coupling to endogenous fields is limited–chiefly, the cases emulated in slice oversimplify *in vivo* activity where neurons are continuously bombarded by hundreds of postsynaptic currents along their intricate morphology in the presence of a spatially inhomogeneous and temporally dynamic electric field (Figure 1c; compare to fields in Figure 1a,b). Such limitations are present both for fields induced across parallel plates positioned millimeters away from each other (e.g., [[24](https://pubmed.ncbi.nlm.nih.gov/14978199/), [25](https://www.jneurosci.org/content/27/11/3030), [30](https://www.sciencedirect.com/science/article/pii/S08966273100046307)]) as well as fields elicited via stimulation pipettes (e.g., [[1](https://www.sciencedirect.com/science/article/pii/S0896627309005455), [28](https://www.nature.com/articles/nn.2727)]).
To account for the impact of endogenous fields on single neurons, both the intracellular and extracellular voltage would not only need to be monitored along a single cell but also manipulated, and all this in the behaving animal” (see “Neurons (mesoscale)”).\n\n\n[396.](https://www.openphilanthropy.org/brain-computation-report#footnoteref396_brppyq2)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/research/professor-anthony-zador-alle-davis-and-maxine-harrison-professor-of-neurosciences-cold-spring-harbor-laboratory/): “Prof. Zador believes that ephaptic communication is very unlikely to be important to the brain’s information-processing. Even if it was important, though, it would be a form of global signaling, and so comparatively inexpensive to model.” (p. 4). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Barak Pearlmutter](https://www.openphilanthropy.org/research/professor-barak-pearlmutter-professor-of-computer-science-maynooth-university/): “He also suggested that ephaptic effects would be ‘in the noise’ because they are bulk effects, representation of which would involve one number that covers thousands of synapses” (p. 3).\n\n\n[397.](https://www.openphilanthropy.org/brain-computation-report#footnoteref397_0nyd56c)[Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf): “If ephaptic effects were important, the emulation would need to take the locally induced electromagnetic fields into account. This would plausibly involve dividing the extracellular space (possibly also the intracellular space) into finite elements where the field can be assumed to be constant, linear or otherwise easily approximable. The cortical extracellular length constant is on order of ≈100 μm ([Gardner‐Medwin (1983)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1197360/)), which would necessitate on the order of 1.4∙10¹² such compartments if each compartment is 1/10 of the length constant. Each compartment would need at least two vector state variables and 6 components of a conductivity tensor; assuming one byte for each, the total memory requirements would be on the order of 10 terabytes. Compared to estimates of neural simulation complexity, this is relatively manageable. The processing needed to update these compartments would be on the same order as a detailed compartment model of every neuron and glia cell” (p. 36-7).\n\n\n[398.](https://www.openphilanthropy.org/brain-computation-report#footnoteref398_xlo7zii)[Bullock et al. (2005)](http://utw10020.utweb.utexas.edu/djlab/pdfs/Bullocketal2005.pdf), describing the history of early neuroscience: “physiological studies established that conduction of electrical activity along the neuronal axon involved brief, all-or-nothing, propagated changes in membrane potential called action potentials. It was thus often assumed that neuronal activity was correspondingly all-or-nothing and that action potentials spread over all parts of a neuron. The neuron was regarded as a single functional unit: It either was active and “firing” or was not” (p.
791).\n\n\n[399.](https://www.openphilanthropy.org/brain-computation-report#footnoteref399_gos1qt3)[Zbili and Debanne (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6492051/pdf/fncel-13-00160.pdf): “When it invades the presynaptic terminal, the spike provokes the opening of voltage-gated calcium channels (Cav), leading to an increase of Ca2+concentration in the bouton and the release of neurotransmitters. Due to the power law between intra-terminal Ca2+ concentration and neurotransmitter release, small variations in presynaptic calcium entry, occurring through spike shape modifications, can lead to large changes in synaptic transmission ([Sabatini and Regehr (1997)](https://pubmed.ncbi.nlm.nih.gov/9133368/); [Bollmann et al. (2000)](https://pubmed.ncbi.nlm.nih.gov/10937999/); [Bischofberger et al. (2002)](https://pubmed.ncbi.nlm.nih.gov/12486151/); [Fedchyshyn and Wang (2005)](https://www.ncbi.nlm.nih.gov/pubmed/15843616); [Yang and Wang (2006)](https://www.ncbi.nlm.nih.gov/pubmed/16723526); [Bucurenciu et al. (2008)](https://pubmed.ncbi.nlm.nih.gov/18304483/); [Scott et al. (2008)](https://www.ncbi.nlm.nih.gov/pubmed/18667608); [Neishabouri and Faisal (2014)](https://www.ncbi.nlm.nih.gov/pubmed/24809823)). In fact, spike broadening during repetitive firing entails synaptic transmission facilitation in the pituitary nerve ([Jackson et al. (1991)](https://www.ncbi.nlm.nih.gov/pubmed/1988937)), dorsal root ganglion ([Park and Dunlap (1998)](https://www.ncbi.nlm.nih.gov/pubmed/9712647)) and mossy fiber bouton ([Geiger and Jonas (2000)](https://www.ncbi.nlm.nih.gov/pubmed/11163277)). Other studies showed that spike amplitude depression during repetitive firing provokes a decrease in synaptic transmission at hippocampal ([Brody and Yue (2000)](https://www.ncbi.nlm.nih.gov/pubmed/10729328); [Prakriya and Mennerick (2000)](https://www.ncbi.nlm.nih.gov/pubmed/26657943); [He et al. (2002)](https://www.ncbi.nlm.nih.gov/pubmed/11826057)) and cerebellar synapses ([Kawaguchi and Sakaba (2015)](https://www.ncbi.nlm.nih.gov/pubmed/25728570))” (p. 2).\n\n\n[400.](https://www.openphilanthropy.org/brain-computation-report#footnoteref400_5lk652l)[Zbili and Debanne (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6492051/pdf/fncel-13-00160.pdf): “the synaptic strength depends on the subthreshold membrane potential of the presynaptic cell, indicating that the presynaptic spike transmits this analog information to the postsynaptic cell. However, the direction of this modulation of synaptic transmission seems to depend on the type of synapse” (p. 5). [Zbili and Debanne (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6492051/pdf/fncel-13-00160.pdf), reviewing the literature on effects of this broad type, report increases in neurotransmitter release ranging from 10-100%, depending on the study (p. 7). [Shu et al. (2006)](https://www.nature.com/articles/nature04720), for example, caused a 29% median enhancement to the impact of a spike through synapse in ferret pyramidal cells by changing the membrane potential in the soma in a manner that stayed below the threshold for an action potential (abstract).\n\n\n[401.](https://www.openphilanthropy.org/brain-computation-report#footnoteref401_rhcqw2u)[Juusola et al. (1996)](https://www.cell.com/trends/neurosciences/pdf/S0166-2236(96)10028-X.pdf): “Many neurons use graded membrane-potential changes, instead of action potentials, to transmit information. 
Traditional synaptic models feature discontinuous transmitter release by presynaptic action potentials, but this is not true for synapses between graded-potential neurons. In addition to graded and continuous transmitter release, they have multiple active zones, ribbon formations and L-type Ca2+ channels. These differences are probably linked to the high rate of vesicle fusion required for continuous transmitter release. Early stages of sensory systems provide some of the best characterized graded-potential neurons, and recent work on these systems suggests that modification of synaptic transmission by adaptation is a powerful feature of graded synapses” (abstract).\n\n\n[402.](https://www.openphilanthropy.org/brain-computation-report#footnoteref402_pwxc9c0)[Graubard et al. (1980)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC349693/pdf/pnas00493-0675.pdf): “Graded synaptic transmission occurs between spiking neurons of the lobster stomatogastric ganglion. In addition to eliciting spike-evoked inhibitory potentials in postsynaptic cells, these neurons also release functionally significant amounts of transmitter below the threshold for action potentials. The spikeless postsynaptic potentials grade in amplitude with presynaptic voltage and can be maintained for long periods. Graded synaptic transmission can be modulated by synaptic input to the presynaptic neuron” (p. 3733).\n\n\n[403.](https://www.openphilanthropy.org/brain-computation-report#footnoteref403_xki512t)Graded synaptic transmission is distinct from the spontaneous release of neurotransmitter associated with what are called “[miniature postsynaptic currents](https://en.wikipedia.org/wiki/Excitatory_postsynaptic_potential#Miniature_EPSPs_and_quantal_analysis).” From [Faisal et al. (2008)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2631351/): “The classic manifestation of synaptic noise is the spontaneous miniature postsynaptic current (mPSC) that can be recorded in the absence of presynaptic input. Katz and collaborators interpreted mPSCs as being the result of spontaneously released neurotransmitter vesicles, thus establishing the quantal nature of synaptic transmission” (p. 7).\n\n\n[404.](https://www.openphilanthropy.org/brain-computation-report#footnoteref404_hkmbxy8)See [Dugladze et al. (2012)](https://www.ncbi.nlm.nih.gov/pubmed/22700932): “We found that during *in vitro* gamma oscillations, ectopic action potentials are generated at high frequency in the distal axon of pyramidal cells (PCs) but do not invade the soma. At the same time, axo-axonic cells (AACs) discharged at a high rate and tonically inhibited the axon initial segment, which can be instrumental in preventing ectopic action potential back-propagation. We found that activation of a single AAC substantially lowered soma invasion by antidromic action potential in postsynaptic PCs. In contrast, activation of soma-inhibiting basket cells had no significant impact. These results demonstrate that AACs can separate axonal from somatic activity and maintain the functional polarization of cortical PCs during network oscillations” (abstract). See also [Sheffield (2011)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3030701/): “In a subset of rodent hippocampal and neocortical interneurons, hundreds of spikes, evoked over minutes, resulted in persistent firing that lasted for a similar duration. Although axonal action potential firing was required to trigger persistent firing, somatic depolarization was not. 
In paired recordings, persistent firing was not restricted to the stimulated neuron – it could also be produced in the unstimulated cell. Thus, these interneurons can slowly integrate spiking, share the output across a coupled network of axons, and respond with persistent firing even in the absence of input to the soma or dendrites” (abstract).\n\n\n[405.](https://www.openphilanthropy.org/brain-computation-report#footnoteref405_roiaxlr)Pre-synaptic [hyperpolarization](https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/depolarization-hyperpolarization-and-action-potentials) (decreasing the membrane potential) can have effects within 15-50 ms. [Zbili and Debanne (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6492051/pdf/fncel-13-00160.pdf): “ADFs present various time constants which determine their potential roles in network physiology. In fact, in most of the studies, d-ADF needs 100 ms to several seconds of presynaptic depolarization to occur. On the contrary, h-ADF can be produced by fast presynaptic hyperpolarization (15–50 ms; [Rama et al. (2015a)](https://www.nature.com/articles/ncomms10163)). This difference is well explained by the underlying mechanism of d-ADF and h-ADF: slow accumulation of basal Ca2+ ([Bouhours et al. (2011)](https://www.jneurosci.org/content/31/15/5804); [Christie et al. (2011)](https://www.nature.com/articles/nn.2718)) or slow Kv inactivation for d-ADF ([Shu et al. (2006)](https://www.nature.com/articles/nature04720), [Shu et al. (2007)](https://www.pnas.org/content/104/27/11453); [Kole et al. (2007)](https://pubmed.ncbi.nlm.nih.gov/17698015/); [Bialowas et al. (2015)](https://pubmed.ncbi.nlm.nih.gov/25394682/)), fast recovery from inactivation of Nav for h-ADF ([Rama et al. (2015a)](https://www.nature.com/articles/ncomms10163); [Zbili et al. (2016)](https://www.frontiersin.org/articles/10.3389/fncel.2016.00278/full)). Therefore, d-ADF and h-ADF should have different consequences on information transfer in neuronal networks” (p. 8).\n\n\n[406.](https://www.openphilanthropy.org/brain-computation-report#footnoteref406_icczzz2)[Sheffield (2011)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3030701/): “In a subset of rodent hippocampal and neocortical interneurons, hundreds of spikes, evoked over minutes, resulted in persistent firing that lasted for a similar duration” (abstract).\n\n\n[407.](https://www.openphilanthropy.org/brain-computation-report#footnoteref407_x22kc0f)[Zbili and Debanne (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6492051/pdf/fncel-13-00160.pdf) report that in most studies, it takes “100 ms to several seconds of presynaptic [depolarization](https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/depolarization-hyperpolarization-and-action-potentials)” (p. 8).\n\n\n[408.](https://www.openphilanthropy.org/brain-computation-report#footnoteref408_421accc)My understanding is that the applicability of this consideration depends on the “length” or “space” constant associated with different axons in the brain, where the relevant issue is that the influence of pre-synaptic membrane potential changes decays exponentially along the axon in the absence of active participation from ion channels. Here’s [Backyard Brains](https://backyardbrains.com/experiments/comparingnervespeed) on the length/space constant: “let’s talk about the length constant (this is sometimes also called the “space constant”). The length constant (λ, or lambda) is a measure of how far the voltage travels down the axon before it decays to zero. 
If you have a length constant of 1 mm, that means at 1 mm away from the cell body in an axon, 37% of the voltage magnitude remains. At 2 mm away from the cell body in an axon, 14% of the magnitude remains, and at 3 mm away, 5% remains. This is representative of an ‘[exponential decay](http://en.wikipedia.org/wiki/Exponential_decay)’ function.” Here’s [Zbili and Debanne (2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6492051/pdf/fncel-13-00160.pdf) on how this applies to analog-digital signaling along the axon: “One of the main issues concerning Analog-Digital Facilitations is the spatial extent of these phenomena along the axon. In fact, ADFs are produced by subthreshold modifications of the somatic potential that spreads to the presynaptic terminal and modifies presynaptic spike shape or basal Ca2+ [(Debanne et al. (2013)](https://pubmed.ncbi.nlm.nih.gov/23187813/); [Rama et al. (2015b)](https://pubmed.ncbi.nlm.nih.gov/25461842/)). Therefore, the axonal space constant is a major determinant of the spatial extent of ADF. The axonal space constant varies among neuronal types, depending on the axonal diameter, the density of axonal branching and the axonal membrane resistance ([Sasaki et al. (2012)](https://pubmed.ncbi.nlm.nih.gov/22357869/)). In CA3 hippocampal neurons, the axonal space constant has been evaluated around 200–500 μm ([Sasaki et al. (2012)](https://pubmed.ncbi.nlm.nih.gov/22357869/); [Bialowas et al. (2015)](https://pubmed.ncbi.nlm.nih.gov/25394682/); [Rama et al. (2015a)](https://www.nature.com/articles/ncomms10163)). In L5 pyramidal neurons, the value estimated ranges between 500 μm ([Shu et al. (2006)](https://www.nature.com/articles/nature04720); [Kole et al. (2007)](https://pubmed.ncbi.nlm.nih.gov/17698015/)) and 1,000 μm ([Christie and Jahr (2009)](https://pubmed.ncbi.nlm.nih.gov/19759293/)). In CA1 pyramidal neurons, the axonal space constant was found to be around 700 μm ([Kim (2014)](https://pubmed.ncbi.nlm.nih.gov/25409299/)). Therefore, ADFs seem to be restricted to local brain circuits. For example, d-ADF has been found between CA3 neurons but not at the synapses between CA3 and CA1 neurons ([Sasaki et al. (2012)](https://pubmed.ncbi.nlm.nih.gov/22357869/)). However, several lines of evidence suggest that ADFs could also occur between more distant neurons…” (p. 160).\n\n\n[409.](https://www.openphilanthropy.org/brain-computation-report#footnoteref409_120hgih)[Moore and Cao (2008)](https://journals.physiology.org/doi/pdf/10.1152/jn.01366.2006): “The standard modern view of blood flow is that it serves a physiological function unrelated to information processing, such as bringing oxygen to active neurons, eliminating “waste” generated by neural activity, or regulating temperature” (p. 2035).\n\n\n[410.](https://www.openphilanthropy.org/brain-computation-report#footnoteref410_4u1cx32)See [Moore and Cao (2008)](https://journals.physiology.org/doi/pdf/10.1152/jn.01366.2006), (p. 2037-2040).\n\n\n[411.](https://www.openphilanthropy.org/brain-computation-report#footnoteref411_mrs0jjj)[Moore and Cao (2008)](https://journals.physiology.org/doi/pdf/10.1152/jn.01366.2006): “the somatosensory neocortex, blood flow increases measured using laser Doppler have been observed <200 ms after the onset of sensory-evoked neural responses ([Matsuura et al. (1999)](https://pubmed.ncbi.nlm.nih.gov/10529490/); [Norup Nielsen and Lauritzen (2001)](https://pubmed.ncbi.nlm.nih.gov/11410634/)). 
Similarly, optical imaging techniques that integrate over local volumes at somewhat slower temporal resolution typically record a significant increase in flow within ≤500 ms of sensory stimulus presentation ([Dunn et al. (2005)](https://pubmed.ncbi.nlm.nih.gov/15925522/); [Malonek et al. (1997)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC25122/); [Martin et al. (2006)](https://pubmed.ncbi.nlm.nih.gov/16725349/)). The subsequent duration of these increases is often viewed as “poorly correlated” with neural activity, because functional hyperemia can sustain for seconds after the onset and offset of a stimulus. As discussed in a later section, this sustained temporal pattern may not be a mismatch between activity and flow, but rather may be consistent with the information processing role of blood flow” (p. 2037).\n\n\n[412.](https://www.openphilanthropy.org/brain-computation-report#footnoteref412_mipfaff)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Larson](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Stephen%20Larson,%202019-2020.pdf): “It’s generally thought that blood flow is more of an epiphenomenon/a sign that other forms of information processing are occurring (akin to the heat generated by a CPU), than a mechanism of information-processing in itself” (p. 4).\n\n\n[413.](https://www.openphilanthropy.org/brain-computation-report#footnoteref413_cih30ha)The exact number, along with the definition of a column, appears to be the subject of some debate (see [Rakic (2008)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2527871/) for complaints). [Krueger (2008)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2586424/): “In humans, each column contains 1000 to 10,000 cells.”\n\n\n[414.](https://www.openphilanthropy.org/brain-computation-report#footnoteref414_fqi0b9b)[Moore and Cao (2008)](https://journals.physiology.org/doi/pdf/10.1152/jn.01366.2006): “In the somatosensory and visual neocortex, a general consensus exists that the pattern of increased blood flow is similar to that of subthreshold neural activity, with a peak in signal that is localized to a cortical column (400 μm) and an extent spanning several columns ([Dunn et al. (2005)](https://pubmed.ncbi.nlm.nih.gov/15925522/); [Hess et al. (2000)](https://www.jneurosci.org/content/20/9/3328.short); [Lauritzen (2001)](https://pubmed.ncbi.nlm.nih.gov/11740198/); [Sheth et al. (2004)](https://pubmed.ncbi.nlm.nih.gov/14736849/); [Vanzetta et al. (2004)](https://pubmed.ncbi.nlm.nih.gov/15182722/); [Yang et al. (1998)](https://www.pnas.org/content/95/13/7715/)) … In other brain areas, evidence for more precise delivery has also been observed, because flow can be localized to a single glomerulus in the olfactory bulb during stimulus presentation (i.e., 100 μm) ([Chaigneau et al. (2003)](https://www.pnas.org/content/100/22/13081); [Yang et al. (1998)](https://www.pnas.org/content/95/13/7715/))” (p. 2037).\n\n\n[415.](https://www.openphilanthropy.org/brain-computation-report#footnoteref415_y14aila)Other possibilities include the perineuronal net (see [Tsien (2013)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3725115/pdf/pnas.201310158.pdf) for discussion), and classical dynamics in microtubules (see [Cantero et al. (2018)](https://www.nature.com/articles/s41598-018-30453-2)). 
I leave out the other two mechanisms partly because of time constraints, and partly because my impression is that they do not feature very prominently in the discourse on this topic.\n\n\n[416.](https://www.openphilanthropy.org/brain-computation-report#footnoteref416_suk651a)Though see the non-verbatim notes from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Anthony Zador](https://www.openphilanthropy.org/files/Conversations/Conversation%20with%20Professor%20Anthony%20Zador,%20September%2012,%202019.pdf): “Prof. Zador is skeptical that there are major unknown unknowns in the parts list in the brain, given how much effort has gone into studying nervous systems. Biology is complicated, and there is still more to understand, but Prof. Zador does not think that what we are missing is a breakthrough in biology. Rather, what’s missing is an understanding of the brain’s organizing principles” (p. 4).\n\n\n[417.](https://www.openphilanthropy.org/brain-computation-report#footnoteref417_56g8doz)A number of experts we engaged with indicated that many computational neuroscientists would not emphasize other mechanisms very much (though their comments in this respect are not publicly documented); and the experts I interviewed didn’t tend to emphasize such mechanisms either.\n\n\n[418.](https://www.openphilanthropy.org/brain-computation-report#footnoteref418_cwzic8i)Technically, this would be ~3e13-3e17 FLOP/s, if we were really adding up synaptic transmission, firing decisions, and learning. But these ranges are sufficiently made-up and arbitrary that this sort of calculation seems to me misleadingly precise.\n\n\n[419.](https://www.openphilanthropy.org/brain-computation-report#footnoteref419_d7j3xc4)That is, I did not do fully independent analyses of each of these areas and then combine them (this is why the ranges are so similar). Rather, I started with a baseline, default model of 1 FLOP per spike through synapse, and then noted that budgeting 10-100x of cushion on top of that would cover various salient complexities and expert estimates across various of these categories.\n\n\n[420.](https://www.openphilanthropy.org/brain-computation-report#footnoteref420_6o4o0dj)[Funabiki et al. (2011)](https://www.jneurosci.org/content/31/43/15245): “In owls, NL neurons change their firing rates with changes in ITD of <10 μs ([Carr and Konishi (1990)](https://www.jneurosci.org/content/10/10/3227?ijkey=d4987df0788fd215557034462d162ed702c3cf78&keytype2=tf_ipsecsha); [Peña et al. (1996)](https://www.jneurosci.org/content/16/21/7046?ijkey=91b4d4043c5c1546894b9dbfeb713c140c6eded0&keytype2=tf_ipsecsha)), far below the spike duration of the neurons (e.g., ∼1 ms). The data used for modeling these coincidence detection processes have so far come from *in vitro* studies in the chick’s NL ([Reyes et al. (1996)](https://www.jneurosci.org/content/16/3/993.short); [Funabiki et al. (1998)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2230923/); [Kuba et al. (2005)](https://www.jneurosci.org/content/25/8/1924?ijkey=4a29f33e8283ea454996ffc0a173434343810eb6&keytype2=tf_ipsecsha), [(2006)](https://pubmed.ncbi.nlm.nih.gov/17136099/); [Slee et al. (2010)](https://journals.physiology.org/doi/full/10.1152/jn.00678.2009)), extracellular studies of the barn owl’s NL neurons ([Carr and Konishi (1990)](https://www.jneurosci.org/content/10/10/3227?ijkey=d4987df0788fd215557034462d162ed702c3cf78&keytype2=tf_ipsecsha); [Peña et al. 
(1996)](https://www.jneurosci.org/content/16/21/7046?ijkey=91b4d4043c5c1546894b9dbfeb713c140c6eded0&keytype2=tf_ipsecsha); [Fischer et al. (2008)](https://www.jneurosci.org/content/28/32/8107?ijkey=2efd2f61d0209fa1aa537f664eec72bfdd4028bc&keytype2=tf_ipsecsha)), and the owl’s behavioral performance ([Knudsen et al. (1979)](https://link.springer.com/article/10.1007%2FBF00663105)). Specialized cellular mechanisms, including extraordinary fast glutamate receptors ([Reyes et al. (1996)](https://www.jneurosci.org/content/16/3/993.short); [Trussell (1999)](https://pubmed.ncbi.nlm.nih.gov/10099698/); [Kuba et al. (2005)](https://www.jneurosci.org/content/25/8/1924?ijkey=4a29f33e8283ea454996ffc0a173434343810eb6&keytype2=tf_ipsecsha)), low threshold-activated potassium conductance (KLVA) ([Reyes et al. (1996)](https://www.jneurosci.org/content/16/3/993.short)), and remote spike initiation ([Carr and Boudreau (1993b)](https://pubmed.ncbi.nlm.nih.gov/8313166/); [Kuba et al. (2006)](https://pubmed.ncbi.nlm.nih.gov/17136099/); [Ashida et al. (2007)](https://journals.physiology.org/doi/full/10.1152/jn.00399.2006)), have been discussed as important elements of this extraordinary precise coincidence detection” (p. 15245).\n\n\n[421.](https://www.openphilanthropy.org/brain-computation-report#footnoteref421_55kzzqw)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Eric%20Jonas,%20September%2017,%202019.pdf): “Active dendritic computation could conceivably imply something like 1-5 orders of magnitude more compute than a simple linear summation model of a neuron. And if dendritic morphology is evolving over time, you also need to be thinking about the space of all possible dendrites that could have formed, in addition to the current dendritic tree” (p. 3). He also added, though, “it’s reasonable to think that at the end of the day, simplified dendritic models are available. For example, Prof. Jonas has heard arguments suggesting that post-synapse, there is very little plasticity in dendrites, and that dendritic computation mostly involves applying random features to inputs” (p. 3).\n\n\n[422.](https://www.openphilanthropy.org/brain-computation-report#footnoteref422_46l7rbe)See e.g. [Bhalla (2014)](https://www.sciencedirect.com/science/article/abs/pii/S0959438813002171).\n\n\n[423.](https://www.openphilanthropy.org/brain-computation-report#footnoteref423_u0my6xq)[Kaplanis et al. (2018)](https://arxiv.org/pdf/1802.07239.pdf): “we show that by equipping tabular and deep reinforcement learning agents with a synaptic model that incorporates this biological complexity ([Benna and Fusi (2016)](https://www.nature.com/articles/nn.4401)), catastrophic forgetting can be mitigated at multiple timescales. In particular, we find that as well as enabling continual learning across sequential training of two simple tasks, it can also be used to overcome within-task forgetting by reducing the need for an experience replay database” (p. 1). [Zenke et al. (2017)](https://arxiv.org/pdf/1703.04200.pdf): “In this study, we introduce intelligent synapses that bring some of this biological complexity into artificial neural networks. Each synapse accumulates task relevant information over time, and exploits this information to rapidly store new memories without forgetting old ones. 
We evaluate our approach on continual learning of classification tasks, and show that it dramatically reduces forgetting while maintaining computational efficiency” (abstract).\n\n\n[424.](https://www.openphilanthropy.org/brain-computation-report#footnoteref424_r1jned3)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eve Marder](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Eve%20Marder,%20May_June%202020.pdf): “In reality, the nervous system has an incredible ability to move seamlessly between timescales ranging from milliseconds to years, and the relevant processes interact. That is, short time-scale processes influence long time-scale processes, and vice versa. And unlike digital computers, the brain integrates over very long timescales at very fast speeds easily and seamlessly” (p. 2).\n\n\n[425.](https://www.openphilanthropy.org/brain-computation-report#footnoteref425_ehc9bmy)See [von Bartheld et al. (2016)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5063692/pdf/nihms799882.pdf): “The recently validated isotropic fractionator demonstrates a glia:neuron ratio of less than 1:1… We review how the claim of one trillion glial cells originated, was perpetuated, and eventually refuted.” (p. 1).\n\n\n[426.](https://www.openphilanthropy.org/brain-computation-report#footnoteref426_24ktzxp)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Erik%20De%20Schutter,%20September%2017,%202019%20%20.pdf): “The brain was not engineered. Rather, it evolved, and evolution works by adding complexity, rather than by simplification… Indeed, in general, many scientists who approach the brain from an engineering perspective end up on the wrong footing. Engineering is an appropriate paradigm for building AI systems, but if you want to understand the brain, you need to embrace the fact that it works because it is so complicated. Otherwise, it will be impossible to understand the system” (p. 4).\n\n\n[427.](https://www.openphilanthropy.org/brain-computation-report#footnoteref427_gyu2y9p)See e.g. [Kempes et al. (2017)](https://arxiv.org/pdf/1706.05043.pdf): “Here we show that the computational efficiency of translation, defined as free energy expended per amino acid operation, outperforms the best supercomputers by several orders of magnitude, and is only about an order of magnitude worse than the Landauer bound” (p. 1). Rahul Sarpeshkar, in a [2018 TED talk](https://youtu.be/ZycidN_GYo0?t=207), suggests that cells are the most energy efficient computers that we know, and that they are already computing at an efficiency near the fundamental laws of physics (3:30-4:04). See also [Laughlin et al. (1998)](https://pubmed.ncbi.nlm.nih.gov/10195106/): “Freed from heavy mechanical work, ion channels change conformation in roughly 100 μs. In principle, therefore, a single protein molecule, switching at the rate of an ion channel with the stoichiometry of kinesin, could code at least 10^3 bit per second at a cost of 1 ATP per bit” (p. 39). 
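As a rough check on how these figures fit together, here is a back-of-the-envelope sketch using standard physical constants (the ~20 *k*T figure for ATP hydrolysis is a common approximation rather than a number taken from these papers):

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 310.0                   # approximate body temperature, K

# Common approximation: hydrolysis of one ATP yields roughly 20 kT of free energy.
atp_energy_J = 20 * k_B * T

# Landauer bound: minimum energy dissipation to erase one bit.
landauer_J = k_B * T * math.log(2)

# Laughlin et al. (1998): ~1 ATP per bit signalled.
print(f"1 ATP/bit      ~ {atp_energy_J:.1e} J")
print(f"Landauer bound ~ {landauer_J:.1e} J")
print(f"ratio          ~ {atp_energy_J / landauer_J:.0f}x")
```

On these assumptions, signaling at ~1 ATP per bit comes out roughly 30× the Landauer bound – i.e., about an order of magnitude away from it, the same ballpark Kempes et al. (2017) report for translation.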
See [Sarpeshkar (2013)](https://www.nature.com/articles/nature12148?proof=true&platform=oscar&draft=collection) for more on computation in cells, and [Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) for more on the energy-efficiency of biological systems more generally: “A single cell in the body performs ~10 million energy-consuming biochemical operations per second on its noisy molecular inputs with ~1 pW of average power. Every cell implements a ~30,000 node gene-protein molecular interaction network within its confines. All the ~100 trillion cells of the human body consume ~80 W of power at rest. The average energy for an elementary energy-consuming operation in a cell is about 20 *k*T, where *k*T is a unit of thermal energy. In deep submicron processes today, switching energies are nearly 10^4–10^5 *k*T for just an elementary 0->1 digital switching operation. Even at 10 nm, the likely end of business-as-usual transistor scaling in the future, it is unlikely that we will be able to match such energy efficiency. Unlike traditional digital computation, biological computation is tolerant to error in elementary devices and signals. Nature illustrates that it is significantly more energy efficient to compute with error-prone devices and signals and then correct for these errors through feedback-and-learning architectures than to make every device and every signal in a system robust, as in traditional digital paradigms thus far” (p. 18-19). [Bennett (1989)](https://epubs.siam.org/doi/abs/10.1137/0218053?casa_token=vnD0zJclKZQAAAAA%3AK7-WmLzZs0hMB9f0RLP4QxScEYJ1S5lPtVdmT6QeFfF8ND24mDbadlMU5KzhivkC372qCMTHUw&journalCode=smjcat) also suggests that “a few thermodynamically efficient data processing systems do exist, notably genetic enzymes such as RNA polymerase, which, under appropriate reactant concentrations, can transcribe information from DNA to RNA at a thermodynamic cost considerably less than *k*T per step” (p. 766).\n\n\n[428.](https://www.openphilanthropy.org/brain-computation-report#footnoteref428_su7blgp)See e.g. from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Eric%20Jonas,%20September%2017,%202019.pdf): “Various discoveries in biology have altered Prof. Jonas’s sense of the complexity of what biological systems can be doing. Examples in this respect include non-coding RNA, the complexity present in the three-dimensional structure of the cell, histone regulatory frameworks, and complex binding events involving different chaperone proteins. The class of computation that Prof. Jonas can imagine a single cell doing now seems multiple orders of magnitude more complex than it did 20 years ago” (p. 4).\n\n\n[429.](https://www.openphilanthropy.org/brain-computation-report#footnoteref429_pkitrks)[Sarpeshkar (1998)](https://ieeexplore.ieee.org/document/6790538): “Items 1 through 3 show that analog computation can be far more efficient than digital computation because of analog computation’s repertoire of rich primitives. For example, addition of two parallel 8-bit numbers takes one wire in analog circuits (using Kirchhoff’s current law), whereas it takes about 240 transistors in static CMOS digital circuits. The latter number is for a cascade of 8 full adders. 
Similarly an 8-bit multiplication of two currents in analog computation takes 4 to 8 transistors, whereas a parallel 8-bit multiply in digital computation takes approximately 3000 transistors. Although other digital implementations could make the comparisons seem less stark, the point here is simply that exploiting physics to do computation can be powerful” (p. 1605). See also [Daniel et al. (2013)](https://www.nature.com/articles/nature12148): “Because analog computation exploits powerful biochemical mathematical basis functions that are naturally present over the entire continuous range of input operation, they are an advantageous alternative to digital logic when resources of device count, space, time or energy are constrained” (p. 619).\n\n\n[430.](https://www.openphilanthropy.org/brain-computation-report#footnoteref430_tygkqr6)See e.g. [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eve Marder](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Eve%20Marder,%20May_June%202020.pdf): “Unlike digital computers, the brain integrates over very long timescales at very fast speeds easily and seamlessly” (p. 3).\n\n\n[431.](https://www.openphilanthropy.org/brain-computation-report#footnoteref431_fekm37d)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Rosa Cao](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Rosa%20Cao,%20August%207,%202019.pdf): “Digital computers achieve speed and reliability by ignoring many dimensions of what is happening in the system. In such a context, you only care about whether the voltage in the transistors is above or below a certain threshold, and designers try hard to shield this variable from disruptive physical fluctuations. The brain is built on fairly different principles. Its functional processes are not shielded from the dynamics of the brain’s biochemistry. Rather, the brain exploits this biochemistry to perform efficient computation. This makes the brain difficult to simulate. In nature, biochemical processes like protein-protein interactions just happen, so they are “free” for the brain to run. Simulating them, however, can be quite computationally expensive” (p. 1-2).\n\n\n[432.](https://www.openphilanthropy.org/brain-computation-report#footnoteref432_p5t0pqb)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Adam%20Marblestone.pdf): “Neuroscience is extremely limited by available tools. For example, we have the concept of a post-synaptic potential because we can patch-clamp the post-synaptic neuron and see a change in voltage. When we become able to see every individual dendritic spine, we might see that each has a different response; or when we become able to see molecules, we might see faster state transitions, more interesting spatial organization, or more complicated logic at the synapses. We don’t really know, because we haven’t been able to measure. It’s also possible that some theories in neuroscience emerge and persist primarily because (a) they are the type of simple ideas that humans are able to come up with, and (b) these theories explain some amount of data (though it’s unclear how much). It’s hard to formulate complicated ideas about how the brain works that can then be made testable. “ (p. 9). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. 
Erik De Schutter](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Erik%20De%20Schutter,%20September%2017,%202019%20%20.pdf): “with improvements in imaging and cell biology techniques, we discover all sorts of new complexities that we didn’t know were there” (p. 1).\n\n\n[433.](https://www.openphilanthropy.org/brain-computation-report#footnoteref433_2xrg822)Thanks to Luke Muehlhauser for suggesting this possibility.\n\n\n[434.](https://www.openphilanthropy.org/brain-computation-report#footnoteref434_pcj01pk)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Eric%20Jonas,%20September%2017,%202019.pdf): “There is a history of over-optimism about scientific progress in neuroscience and related fields. Prof. Jonas grew up in an era of hype about progress in science (e.g., “all of biology will yield its secrets in the next 20 years”), and has watched the envisioned future fail to arrive. Indeed, many problems have been multiple orders of magnitude more complicated than expected, to such a degree that some people are now arguing that science is slowing down, and must rely increasingly on breadth-first search through possible research paths. In biology, for example, there was a lot of faith that the human genome project would lead to more completeness and understanding than it did” (p. 4-5). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Rosa Cao](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Rosa%20Cao,%20August%207,%202019.pdf): “*E. Coli*, a comparatively simple, one-celled organism, exhibits fairly sophisticated behavior on the basis of carefully-tuned biochemical chains (for example, various rhythms at different timescales that allow the cell to survive in a range of environments). We have not yet been successfully able to capture this behavior in a computational model, despite throwing a lot of effort and computational power at the project. Indeed, there was a lot of excitement about projects like this a few decades ago, but it seems to Prof. Cao that this energy has since died down, partly due to greater appreciation of their difficulty. Similarly, efforts to build an artificial cell have proven very difficult. At some level, cells are simple, and we basically know what the components are. However, all of the biochemical processes are poised in a delicate balance with each other – a balance that represents a vanishingly smaller percentage of all possible arrangements, and which is correspondingly difficult to replicate. Efforts to create functional brain simulations might run into similar problems. For example, it may be that the brain’s function depends on a particular type of relationship to the environment, which allows it to adjust and fine-tune its internal features in the right way” (p. 2).\n\n\n[435.](https://www.openphilanthropy.org/brain-computation-report#footnoteref435_h8fmcq1)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. 
Eric Jonas](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Eric%20Jonas,%20September%2017,%202019.pdf): “many in the neuroscience community feel that some neuroscientists made overly aggressive claims in the past about what amount of progress in neuroscience to expect (for example, from simulating networks of neurons at a particular level of resolution)” (p. 5).\n\n\n[436.](https://www.openphilanthropy.org/brain-computation-report#footnoteref436_5m5kgnk)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Eric%20Jonas,%20September%2017,%202019.pdf): “[Prof. Jonas] also has a long-term prior that researchers are too quick to believe that the brain is doing whatever is currently popular in machine learning, and he doesn’t think we’ve found the right paradigm yet” (p. 3). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Erik%20De%20Schutter,%20September%2017,%202019%20%20.pdf): “He is also wary of the history of comparing the brain to the latest engineering technology (e.g., a steam engine, a classical computer, now maybe a quantum computer)” (p. 4).\n\n\n[437.](https://www.openphilanthropy.org/brain-computation-report#footnoteref437_x5mnr6u)Two experts thought this unlikely. From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Adam%20Marblestone.pdf): “Dr. Marblestone thinks that the probability that the field of neuroscience rests on some very fundamental paradigm mistake is very low. We’re missing a unified explanation of behavior and intelligence, but the basic picture of neurons as modular elements with some sort of transfer function and some sort of (possibly complicated) learning rule, without some extreme amount of internal computation taking place inside the cell, seems fairly solid to Dr. Marblestone” (p. 7).\n\n\n[438.](https://www.openphilanthropy.org/brain-computation-report#footnoteref438_owl6yu5)Thanks to Dr. Dario Amodei and Dr. Owain Evans for suggesting that I consider correlations between different routes to higher numbers.\n\n\n[439.](https://www.openphilanthropy.org/brain-computation-report#footnoteref439_nbeu0j9)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Markus Meister](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Markus%20Meister,%20September%2024,%202019.pdf): “Synapses are noisy, and silicon isn’t; and the brain uses huge numbers of neurons to represent the same variable, probably because a single neuron can’t do it robustly. Prof. Meister expects that human-level AI systems will use methods more naturally suited to silicon devices. This would suggest compute estimates lower than what scaling up from the retina would suggest” (p. 4). See [Miller (2018)](https://www.amazon.com/Introductory-Course-Computational-Neuroscience/dp/0262038250): “The key variables of a firing-rate model are the firing rates, which correspond to the average number of spikes per unit time of a subset of similarly responsive cells. This is in contrast to spiking models in which the key variables are the membrane potentials of individual cells” (p. 211). 
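The redundancy point in these quotes can be made concrete with a small simulation (a minimal sketch with arbitrary rate, window, and population sizes, not a claim about how the brain actually reads out populations): averaging the spike counts of N similarly tuned, independently noisy neurons shrinks the noise in the estimated rate roughly as 1/√N, which is one intuition for why a redundant population might be summarized by a single firing-rate variable at lower computational cost.

```python
import numpy as np

rng = np.random.default_rng(0)

rate_hz = 5.0       # "true" rate shared by a population of similarly tuned neurons
window_s = 0.1      # 100 ms read-out window
trials = 10_000

for n_neurons in (1, 10, 100):
    # Treat each neuron's spike count in the window as an independent Poisson draw.
    counts = rng.poisson(rate_hz * window_s, size=(trials, n_neurons))
    rate_est = counts.mean(axis=1) / window_s
    print(f"{n_neurons:4d} neurons: estimated rate = "
          f"{rate_est.mean():.2f} +/- {rate_est.std():.2f} Hz")
```

The standard deviation of the estimate falls from ~7 Hz for a single neuron to ~0.7 Hz for 100 neurons, i.e., by roughly a factor of √100.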
[Eliasmith (2013)](https://www.amazon.com/How-Build-Brain-Architecture-Architectures/dp/0190262125): “Consequently, we can think of the 2D state space as a standard Cartesian space, where two values (x and y co-ordinates) uniquely specify a single object as compactly as possible. In contrast, the 100D vector specifies the same underlying 2D object, but it takes many more resources (i.e., values) to do so. If there was no uncertainty in any of these 100 values, then this would simply be a waste of resources. However, in the much more realistic situation where there is uncertainty (resulting from noise of receptors, noise in the channels sending the signals, etc.), this redundancy can make specifying an underlying point much more reliable. And, interestingly, it can make the system much more flexible in how well it represents different parts of that space. For example, we could use 10 of those neurons to represent the first dimension, or we could use 50 neurons to do so. The second option would give a much more accurate representation of that dimension than the first. Being able to redistribute these resources to respond to task demands is one of the foundations of learning (see Section 6.4)” (p. 75). From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Adam%20Marblestone.pdf): “One way you might need less than 1 FLOP per spike through synapse is if you don’t need to model all of the neurons in the brain. For example, it might be that all of the neurons and synapses in the brain are there in order to make the brain more likely to converge on a solution while learning, but that once learning has taken place, the brain implements a function that can be adequately approximated using much less compute. A large amount of neuroscience treats populations of neurons as redundant representations of high-level variables relevant to information-processing” (p. 7).\n\n\n[440.](https://www.openphilanthropy.org/brain-computation-report#footnoteref440_b2z7dsh)From the [author summary](https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1007074&type=printable): “A network in the brain consists of thousands of neurons. A priori, we expect that the network will have as many degrees of freedom as its number of neurons. Surprisingly, experimental evidence suggests that local brain activity is confined to a subspace spanned by ~10 variables” (p. 1). See also [Gallego et al. (2017)](https://pubmed.ncbi.nlm.nih.gov/28595054/): “Here we argue that the underlying network connectivity constrains these possible patterns of population activity ([Okun et al. (2015)](https://www.nature.com/articles/nature14273), [Sadtler et al. (2014)](https://www.nature.com/articles/nature13665), [Tsodyks et al. (1999)](https://science.sciencemag.org/content/286/5446/1943.abstract)) and that the possible patterns are confined to a low-dimensional manifold ([Stopfer et al. (2003)](https://www.sciencedirect.com/science/article/pii/S089662730300535X), [Yu et al. (2009)](https://journals.physiology.org/doi/full/10.1152/jn.90941.2008)) spanned by a few independent patterns that we call ‘neural modes.’ These neural modes capture a significant fraction of population covariance. It is the activation of these neural modes, rather than the activity of single neurons, that provides the basic building blocks of neural dynamics and function ([Luczak et al. 
(2015)](https://www.nature.com/articles/nrn4026), [Sadtler et al. (2014)](https://www.nature.com/articles/nature13665), [Shenoy et al. (2013)](https://www.annualreviews.org/doi/full/10.1146/annurev-neuro-062111-150509))” (p. 2).\n\n\n[441.](https://www.openphilanthropy.org/brain-computation-report#footnoteref441_k8s837l)My thanks to the expert who suggested I consider this.\n\n\n[442.](https://www.openphilanthropy.org/brain-computation-report#footnoteref442_j27znit)[Faisal et al. (2008)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2631351/): “Averaging is used in many neural systems in which information is encoded as patterns of activity across a population of neurons that all subserve a similar function (for example, see REFS [142](https://pubmed.ncbi.nlm.nih.gov/3749885/),[143](https://pubmed.ncbi.nlm.nih.gov/3352733/)): these are termed neural population codes. A distributed representation of information of this type is more robust to the effects of noise. Many sensory systems form a spatially-ordered population — that is, a map — in which neighbouring neurons encode stimuli that share closely related features. Such spatially ordered populations support two basic goals of neural computation: first, a transformation between different maps (such as the direction of sounds into neck rotation) and, second, the combination of information from multiple sources (such as visual- and auditory-cue combination)[144](https://pubmed.ncbi.nlm.nih.gov/11477429/). The information capacity of a population of neurons is greatest when the noise sources across the population are not correlated. Noise correlations, which are often observed in populations of higher-order neurons, limit information capacity and have led to the development of population-coding strategies that account for the effects of correlations” (p. 10).\n\n\n[443.](https://www.openphilanthropy.org/brain-computation-report#footnoteref443_b5op7sb)See p. 10 [here](https://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf).\n\n\n[444.](https://www.openphilanthropy.org/brain-computation-report#footnoteref444_74a8tz5)From [here](http://visual6502.org/wiki/index.php?title=6502_-_simulating_in_real_time_on_an_FPGA&oldid=608): “[Michael Steil](http://www.pagetable.com/?p=517) and some collaborators had ported the code to C and were able to run at about 1kHz… This was only a thousand times slower than the original, running on a computer that was perhaps two million times faster.” Other emulations may be more efficient.\n\n\n[445.](https://www.openphilanthropy.org/brain-computation-report#footnoteref445_yic9gw7)Dr. Dario Amodei suggests considering whether we can leave out the cerebellum for certain types of tasks.\n\n\n[446.](https://www.openphilanthropy.org/brain-computation-report#footnoteref446_9rhhie7)From the National Organization for Rare Disorders: “Additional reports have noted individuals with cerebellar agenesis whose mental capacities were unaffected and who did not exhibit any symptoms of cerebellar agenesis (asymptomatic cases). However, other researchers have disputed these claims, stating that in virtually all of cases of cerebellar agenesis there have been observable symptoms including profound abnormalities in motor skills…. Intelligence may be unaffected. However, some affected individuals may display mild to moderate cognitive impairment. Some individuals with cerebellar agenesis have exhibited intellectual disability, but normal or near-normal motor skills. 
In addition to affecting motor skills, damage to the cerebellum has also been associated with abnormalities of non-motor functions. Cerebellar dysfunction may also be associated with abnormalities of visuospatial abilities, expressive language, working memory and affective behavior.” Cases of cerebellar agenesis are described in a popular article by [Hamilton (2015)](https://www.npr.org/sections/health-shots/2015/03/16/392789753/a-man-s-incomplete-brain-reveals-cerebellum-s-role-in-thought-and-emotion) and in [Gelal et al. (2016)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4888693/). The case described in [Hamilton (2015)](https://www.npr.org/sections/health-shots/2015/03/16/392789753/a-man-s-incomplete-brain-reveals-cerebellum-s-role-in-thought-and-emotion) seems to involve at least mild cognitive impairment: the subject described has trouble coordinating different sources of information, and he “needed to be taught a lot of things that people with a cerebellum learn automatically, Sarah [his sister] says: how to speak clearly, how to behave in social situations and how to show emotion.” The cases in [Gelal et al. (2016)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4888693/) also appear to involve substantive cognitive impairment: “The 61-year-old man had ataxia, dysarthria, abnormalities in cerebellar tests, severe cognitive impairment, and moderate mental retardation. The 26-year-old woman had dysmetria, dysdiadochokinesia, and dysarthria as well as mild cognitive impairment and mild mental retardation” (abstract).\n\n\n[447.](https://www.openphilanthropy.org/brain-computation-report#footnoteref447_wnqyh6e)[Swanson (1995)](https://www.sciencedirect.com/science/article/abs/pii/016622369592766J?via%3Dihub) (p. 473).\n\n\n[448.](https://www.openphilanthropy.org/brain-computation-report#footnoteref448_eofahg9)[Azevedo et al. (2009)](https://www.ncbi.nlm.nih.gov/pubmed/19226510) (p. 536) suggests that the cerebellum weighs ~154.02 g (10.3% of the brain’s mass), whereas the cerebral cortex weighs 1232.93 g (81.8% of the brain’s mass).\n\n\n[449.](https://www.openphilanthropy.org/brain-computation-report#footnoteref449_374qe6o)I’m basing this on the fact that the cerebellum is ~10% of the brain’s weight, relative to ~80% for the cortex, and Howarth et al.’s (2012) suggestion that energy consumption per gram is higher in the cerebral cortex than in the cerebellar cortex: “Including this range of values would result in a range of estimates for total energy use for the cerebral cortex of 27.2 to 40.7 μmol ATP/g/min, compared with the measured total energy use of 33 to 50 μmol ATP/g/min in different cortical regions ([Sokoloff et al. (1977)](https://pubmed.ncbi.nlm.nih.gov/864466/)), and for the cerebellar cortex of 17.1 to 25.6 μmol ATP/g/min, compared with the measured value of 20.5 μmol ATP/g/min ([Sokoloff et al. (1977)](https://pubmed.ncbi.nlm.nih.gov/864466/)). Further work is needed to accurately define these parameters” (p. 1232). [Sarpeshkar (1997)](https://thesis.library.caltech.edu/3063/1/Sarpeshkar_R_1997.pdf): “Most of the power in the brain is consumed in the cortex” (p. 204). Thanks to Carl Shulman for suggesting that I consider cerebellar energy consumption, and for pointing me to references.\n\n\n[450.](https://www.openphilanthropy.org/brain-computation-report#footnoteref450_kqli0tq)Most of the neurons in the cerebellum (specifically, about 50 billion, at least according to [Llinás et al. 
(2004)](https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780195159561.001.1/acprof-9780195159561-chapter-7) (p. 277)) are [cerebellar granule cells](https://en.wikipedia.org/wiki/Cerebellar_granule_cell), which appear to have a comparatively small number of synapses each: “[Granule] cells are the most numerous in the CNS; there are about 5 × 10^10 cerebellar granule cells in the human brain. Each cell has four or five short dendrites (each less than 30 μm long) that end in an expansion called a dendritic claw (Fig. [7.4C](https://ezproxy-prd.bodleian.ox.ac.uk:2169/view/10.1093/acprof:oso/9780195159561.001.1/acprof-9780195159561-chapter-7#acprof-9780195159561-figureGroup-108))” ([Llinás et al. (2004)](https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780195159561.001.1/acprof-9780195159561-chapter-7), p. 277). [Wikipedia](https://en.wikipedia.org/wiki/Cerebellar_granule_cell) cites [Llinás et al. (2004)](https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780195159561.001.1/acprof-9780195159561-chapter-7) as grounds for attributing 80-100 synaptic connections to granule cells, but I haven’t been able to find the relevant number. The cerebellum also contains Purkinje cells (up to 1.5e7, according to [Llinás et al. (2004)](https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780195159561.001.1/acprof-9780195159561-chapter-7), p. 276), which can have over 100,000 synapses each, though I’m not sure about the average number (see [Napper and Harvey (1988)](https://onlinelibrary.wiley.com/doi/abs/10.1002/cne.902740204?sid=nlm%3Apubmed): “We conclude that there are some 175,000 parallel fiber synapses on an individual Purkinje cell dendritic tree in the cerebellar cortex of the rat” (abstract), though this is an old estimate). I have not attempted to estimate the synapses in the cerebellum in particular, and I am not sure the extent to which synapse counts for granule cells and Purkinje cells overlap (a possibility that could lead to double counting). Energy use in the cerebellum appears to be dominated by granule cells: “This work predicts that the principal neurons in the cerebellum, the Purkinje cells, use only a small fraction of the energy consumed by the cerebellar cortex, while the granule cells dominate the signaling energy use” ([Howarth et al. (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3390818/pdf/jcbfm201235a.pdf), p. 1230-1231). Many estimates for total synapses in the brain focus on the cerebral cortex, and in particular the neocortex (see citations in [Section 2.1.1.1](https://www.openphilanthropy.org/brain-computation-report#SpikesThroughSynapsesPerSecond)), and [AI Impacts](https://aiimpacts.org/scale-of-the-human-brain/#Number_of_synapses_in_the_brain) reports the impression, which I share, that neocortical synapses are often treated as representing the bulk of the synapses in the brain. Indeed, [Kandel et al. (2013)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138631) suggests that “10^14 to 10^15 synaptic connections are formed in the brain” (p. 175) – a number comparable to the neocortical estimates from [Tang et al. (2001)](https://www.ncbi.nlm.nih.gov/pubmed/11418939) (“The average total number of synapses in the neocortex of five young male brains was 164 × 10^12 (CV = 0.17)” (p. 258)) and [Pakkenberg et al. 
(2003)](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.332.5850&rep=rep1&type=pdf) (“The total number of synapses in the human neocortex is approximately 0.15 × 10^15 (0.15 quadrillion)” (p. 95)).\n\n\n[451.](https://www.openphilanthropy.org/brain-computation-report#footnoteref451_hzn5c5s)For example, [Pulsifer et al. (2004)](https://onlinelibrary.wiley.com/doi/full/10.1111/j.0013-9580.2004.15303.x?sid=nlm%3Apubmed) report that in a study of 71 patients who underwent [hemispherectomy](https://en.wikipedia.org/wiki/Hemispherectomy) for severe and intractable seizures, “Cognitive measures typically changed little between surgery and follow-up, with IQ change <15 points for 34 of 53 patients” (abstract) (though absolute levels of cognitive ability may still have been low), and [Pavone et al. (2013)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3564735/pdf/1824-7288-39-3.pdf) suggest that “The results obtained from the literature show that relative preservation of cognitive performance suggests that a single cerebral cortical hemisphere connected to an apparently intact brainstem is sufficient for the development of higher cognitive function” (p. 2). See also [this](https://www.newscientist.com/article/mg24532693-800-teen-born-without-half-her-brain-has-above-average-reading-skills/) article in the New Scientist, which reports that “a teenager who was born without the entire left hemisphere of her brain has above-average reading skills – despite missing the part of the brain that is typically specialised for language…The 18-year-old also has an average-to-high IQ and plans to go to university.”\n\n\n[452.](https://www.openphilanthropy.org/brain-computation-report#footnoteref452_kzxod4p)Glancing at one study, asymptomatic Alzheimer’s disease does not appear to be associated with neuron loss. See [Andrade-Moraes et al. (2013)](https://academic.oup.com/brain/article/136/12/3738/442715): “We found a great reduction of neuronal numbers in the hippocampus and cerebral cortex of demented patients with Alzheimer’s disease, but not in asymptomatic subjects with Alzheimer’s disease” (abstract).\n\n\n[453.](https://www.openphilanthropy.org/brain-computation-report#footnoteref453_l0ne7gu)Dr. Dario Amodei suggested considering these constraints. See also the citations throughout the rest of the section.\n\n\n[454.](https://www.openphilanthropy.org/brain-computation-report#footnoteref454_bcalslu)[Sandberg (2016)](https://arxiv.org/pdf/1602.04019.pdf): “Biology has many advantages in robustness and versatility, not to mention energy efficiency. Nevertheless, it is also fundamentally limited by what can be built out of cells with a particular kind of metabolism, the fact that organisms need to build themselves from the inside, and the need of solving problems that exist in a particular biospheric environment” (p. 7).\n\n\n[455.](https://www.openphilanthropy.org/brain-computation-report#footnoteref455_r6c1tpu)See [Moravec (1988)](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2): “There is insufficient information in the 10^10 bits of the human genome to custom-wire many of the 10^14 synapses in the brain” (p. 166). 
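The arithmetic behind this claim, which the Zador (2019) passage quoted below spells out, can be reproduced with round numbers. This is a rough sketch that treats the genome as ~2 bits per nucleotide and ignores compression and developmental rules that could specify wiring more cheaply:

```python
import math

# Round-number version of the genome-vs-wiring comparison made by Moravec and Zador.
nucleotides = 3e9
genome_bits = 2 * nucleotides              # ~6e9 bits (~0.75 GB)

neurons = 1e11
synapses = neurons * 1e3                   # ~1e14 connections
bits_per_synapse = math.log2(neurons)      # ~37 bits to name a target neuron

wiring_bits = synapses * bits_per_synapse  # ~3.7e15 bits

print(f"genome capacity:      ~{genome_bits:.1e} bits")
print(f"explicit wiring spec: ~{wiring_bits:.1e} bits")
print(f"shortfall:            ~{wiring_bits / genome_bits:.0e}x")
```

On these round numbers the genome falls short by roughly six orders of magnitude, matching Zador’s conclusion.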
See also [Zador (2019)](http://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2019/08/A-critique-of-pure-learning-and-what-artificial-neuralnetworks-can-learn-from-animal-brains.pdf): “The human genome has about 3 × 10^9 nucleotides, so it can encode no more than about 1 GB of information—an hour or so of streaming video. But the human brain has about 10^11 neurons, and more than 10^3 synapses per neuron. Since specifying a connection target requires about log₂(10^11) = 37 bits/synapse, it would take about 3.7 × 10^15 bits to specify all 10^14 connections. (This may represent an underestimate because it considers only the presence or absence of a connection; a few extra bits/synapse would be required to specify graded synaptic strengths. But because of synaptic noise and for other reasons, synaptic strength may not be specified very precisely. So, in large and sparsely connected brains, most of the information is probably needed to specify the locations [of] the nonzero elements of the connection matrix rather than their precise value.). Thus, even if every nucleotide of the human genome were devoted to efficiently specifying brain connections, the information capacity would still be at least six orders of magnitude too small” (p. 5).\n\n\n[456.](https://www.openphilanthropy.org/brain-computation-report#footnoteref456_xje802m)[Moravec (1988)](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2): “The slow switching speed and limited signaling accuracy of neurons rules out certain solutions for neural circuitry that are easy for computers” (p. 165). Dmitri Strukov’s comments [here](https://www.nature.com/articles/s41467-019-12521-x): “we should also keep in mind that over millions of years the evolution of biological brains has been constrained to biomaterials optimized for specific tasks, while we have a much wider range of material choices now in the context of neuromorphic engineering. Therefore, there could exist profound differences in designing rules. For example, the brains have to rely on poor conductors offered by biomaterials, which have presumably affected the principles of brain structure and operation in some ways that are not necessarily to be applicable to neuromorphic computing based on high conducting materials.”\n\n\n[457.](https://www.openphilanthropy.org/brain-computation-report#footnoteref457_cu3oufs)[Moravec (1988)](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2): “The neuron’s basic information-passing mechanism – the release of chemicals that affect the outer membranes of other cells – seems to be a very primitive one that can be observed in even the simplest free-swimming bacteria. Animals seem to be stuck with this arrangement because of limitations in their design process. Darwinian evolution is a relentless optimizer of a given design, nudging the parameters this way and that, adding a step here, removing one there, in a plodding, tinkering way. It’s not much of a redesigner, however. Fundamental changes at the foundation of its creations are out of reach, because too many things would have to change correctly all at once” (p. 
168).\n\n\n[458.](https://www.openphilanthropy.org/brain-computation-report#footnoteref458_qmeroe5)Here, the distinction between “finding ways to do it the way the brain does it, but with a high-level of simplification/increased efficiency” and “doing it some other way entirely” is blurry. I have the former vaguely in mind, but see the appendix for more detailed discussion. See also [Sandberg (2016)](https://arxiv.org/pdf/1602.04019.pdf) for more discussion of possible constraints: “While we have reason to admire brains, they are also unable to perform certain very useful computations. In artificial neural networks we often employ non-local matrix operations like inversion to calculate optimal weights ([Toutounian and Ataei (2009)](http://www.sciencedirect.com/science/article/pii/S0377042708005062)): these computations are not possible to perform locally in a distributed manner. Gradient descent algorithms such as backpropagation are unrealistic in a biological sense, but clearly very successful in deep learning. There is no shortage of papers describing various clever approximations that would allow a more biologically realistic system to perform similar operations — in fact, the brains may well be doing it — but artificial systems can perform them directly, and by using low-level hardware intended for it, very efficiently” (p. 7).\n\n\n[459.](https://www.openphilanthropy.org/brain-computation-report#footnoteref459_9ppplrg)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Markus Meister](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Markus%20Meister,%20September%2024,%202019.pdf): “The computations performed in the retina are fairly well-understood. There is more to learn, of course, but the core framework is in place. We have a standard model of the retina that can account for a lot of retinal processing, as well as predict new observations… The retina is probably the best understood part of the brain” (p. 1-2).\n\n\n[460.](https://www.openphilanthropy.org/brain-computation-report#footnoteref460_683rh7p)See [Yue et al. (2016)](https://ezproxy-prd.bodleian.ox.ac.uk:2056/science/article/pii/S1350946216300271) for a review of progress in retinal implant development as of 2016. From the [Stanford Artificial Retina Project](https://med.stanford.edu/artificial-retina/research/competition.html): “The current state of the art of retinal prostheses can be summed up as such: no blind patient today would trade their cane or guide dog for a retinal implant.” From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Markus Meister](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Markus%20Meister,%20September%2024,%202019.pdf): “Despite 30 years of effort, attempts to create functional artificial retinas have met with very little success. Recent performance tests show that people implanted with the devices are functionally blind – e.g., they cannot read, and they cannot distinguish between letters unless the letters occupy the entire visual field” (p. 3). [Nirenberg and Pandarinath (2012)](https://www.pnas.org/content/pnas/early/2012/08/08/1207035109.full.pdf) say: “Current devices still provide only very limited vision. For example, they allow patients to see spots of light and high-contrast edges, which provide some ability for navigation and gross feature detection, but they are far from providing patients with normal representations of faces, landscapes, etc. (4–6). 
[With respect to navigation, the devices enable the detection of light sources, such as doorways and lamps, and, with respect to feature detection, they allow discrimination of objects or letters if they span ∼7° of visual angle (5); this corresponds to about 20/1,400 vision; for comparison, 20/200 is the acuity-based legal definition of blindness in the United States (7)]” (p. 15012), though their paper aims to improve the situation.\n\n\n[461.](https://www.openphilanthropy.org/brain-computation-report#footnoteref461_gy3jiuz)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Markus Meister](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Markus%20Meister,%20September%2024,%202019.pdf): “However, this lack of success is not about computation. People in the field generally agree that if you could make the right kind of one-to-one connection to the optic nerve fibers, you could compute spike trains that would allow the brain to see. The obstacle is actually making the interface between an electrical device and the retina. Electrodes on top of the retina stimulate many nerve fibers at once; you don’t know ahead of time which fiber you’ll be stimulating or what type of retinal ganglion cell you’re connected to, and you can’t get data into the eye at the right rate” (p. 3).\n\n\n[462.](https://www.openphilanthropy.org/brain-computation-report#footnoteref462_52ikdm4)See [Moravec (1988)](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2), Chapter 2 (p. 51-74). See also [Moravec (1988)](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2) and [Moravec (2008)](https://www.scientificamerican.com/article/rise-of-the-robots-2008-02/). [Merkle (1989)](https://www.merkle.com/brainLimits.html) uses a broadly similar methodology.\n\n\n[463.](https://www.openphilanthropy.org/brain-computation-report#footnoteref463_rqxq8c8)See [Moravec (1988)](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2) (p. 57-60). For discussion of what a center-surround and a motion-detection operation in the retina consist in, see [Meister et al. (2013)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138654): “A typical ganglion cell is sensitive to light in a compact region of the retina near the cell body, called the cell’s *receptive* field. Within that area one can often distinguish a *center* region and *surround* region in which light produces opposite responses. An ON cell, for example, fires faster when a bright spot shines on the receptive field’s center but decreases its firing when the spot shines on the surround. If light covers both the center and the surround, the response is much weaker than for center-only illumination. A bright spot on the center combined with a dark annulus on the surround elicits very strong firing. For an OFF cell these relationships are reversed; the cell is strongly excited by a dark spot in a bright annulus (Figure 26-10). The output produced by a population of retinal ganglion cells thus enhances regions of spatial contrast in the input, such as an edge between two different areas of different intensity, and gives less emphasis to regions of homogeneous illumination” (p. 587). See [Meister et al. 
(2013)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138654) (p. 588-589), and [this graphic](https://commons.wikimedia.org/wiki/File:Receptive_field.png), for visual depictions of center-surround type responses. With respect to retinal representation of moving objects, [Meister et al. (2013)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138654) write: “When an effective light stimulus appears, a ganglion cell’s firing typically increases sharply from the resting level to a peak and then relaxes to an intermediate rate. When the stimulus turns off, the firing rate drops sharply then gradually recovers to the resting level… a moving object elicits strong firing in the ganglion cell population near the edges of the object’s image because these are the only regions of spatial contrast and the only regions where the light intensity changes over time” (p. 587, see p. 588-589 for more on motion-detection).\n\n\n[464.](https://www.openphilanthropy.org/brain-computation-report#footnoteref464_iudes7x)See [Moravec (1988)](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2) (p. 58-59). That said, he also acknowledges that “though separate frames cannot be distinguished faster than 10 per second, if the light flickers at the frame rate, the flicker itself is detectable until it reaches a frequency of about 50 flashes per second” (p. 59).\n\n\n[465.](https://www.openphilanthropy.org/brain-computation-report#footnoteref465_23c9p6h)See [Gollisch and Meister (2010)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3717333/pdf/nihms488912.pdf): “When the image of an object moves on the retina, it creates a wave of neural activity among the ganglion cells. One should expect that this wave lags behind the object image because of the delay in phototransduction. Instead, experiments show that the activity in the ganglion cell layer moves at the true location of the object or even along its leading edge ([Berry et al. (1999)](https://www.nature.com/articles/18678.pdf?origin=ppub)). Effectively, the retinal network computes the anticipated object location and thereby cancels the phototransduction delay” (p. 7-8).\n\n\n[466.](https://www.openphilanthropy.org/brain-computation-report#footnoteref466_zp8shf4)See [Gollisch and Meister (2010)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3717333/pdf/nihms488912.pdf): “A somewhat different form of anticipation can be observed when the visual system is exposed to a periodic stimulus, such as a regular series of flashes. The activated visual neurons typically become entrained into a periodic response. If the stimulus sequence is interrupted, for example by omitting just one of the flashes, some neurons generate a pulse of activity at the time corresponding to the missing stimulus ([Bullock et al. (1990)](https://pubmed.ncbi.nlm.nih.gov/2230933/); [Bullock et al. (1994)](https://pubmed.ncbi.nlm.nih.gov/7517843/)). This phenomenon, termed the “omitted stimulus response”, is quite widespread, and has been noted in the brains of many species, including humans ([McAnany and Alexander (2009)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2682626/)). Qualitatively it suggests the build-up of an anticipation for the next stimulus, and the large response reflects surprise at the missing element in the sequence” (p. 
7-8).\n\n\n[467.](https://www.openphilanthropy.org/brain-computation-report#footnoteref467_krw40jj)[Gollisch and Meister (2010)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3717333/pdf/nihms488912.pdf): “Because the ambient light level varies over ~9 orders of magnitude in the course of a day, while spiking neurons have a dynamic range of only ~2 log units, the early visual system must adjust its sensitivity to the prevailing intensities. This adaptation to light level is accomplished by the retina, beginning already in the photoreceptors, and the process is complete before spiking neurons get involved. Over a wide range of intensities, the sensitivity of the retina declines inversely with the average light level. As a result, the ganglion cell signals are more or less independent of the illuminating intensity, but encode the reflectances of objects within the scene, which are the ethologically important variables. The perceptual effects of light adaptation and its basis in the circuitry and cellular mechanisms of the retina have been studied extensively and covered in several excellent reviews ([Shapley and Enroth-Cugell (1984)](https://linkinghub.elsevier.com/retrieve/pii/0278432784900117); [Hood (1998)](https://europepmc.org/article/med/9496631); [Fain et al. (2001)](https://pubmed.ncbi.nlm.nih.gov/11152756/); [Rieke and Rudd (2009)](https://pubmed.ncbi.nlm.nih.gov/20005818/))” (p. 11).\n\n\n[468.](https://www.openphilanthropy.org/brain-computation-report#footnoteref468_7mgf8kj)[Gollisch and Meister (2010)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3717333/pdf/nihms488912.pdf): “During a saccade, the image sweeps across the retina violently for tens of milliseconds, precluding any useful visual processing. In humans, visual perception is largely suppressed during this period ([Volkmann (1986)](https://www.sciencedirect.com/science/article/abs/pii/0042698986901641?via%3Dihub); [Burr et al. (1994)](https://pubmed.ncbi.nlm.nih.gov/7935763/); [Castet and Masson (2000)](https://pubmed.ncbi.nlm.nih.gov/10649574/)). The circuits of the retina are at least partly responsible for this suppression: Many types of retinal ganglion cell are strongly inhibited during sweeps of the visual image ([Roska and Werblin (2003)](https://pubmed.ncbi.nlm.nih.gov/12740583/)). This effect is mediated by spiking, inhibitory amacrine cells, which are themselves excited by the global motion signal. Conceivably, the underlying circuitry resembles the one identified for OMS ganglion cells (Figure 2C). In fact, the OMS cells may be distinct simply by an enhanced sensitivity to the global inhibition, so they are suppressed even by the much smaller eye movements during a fixation” (p. 9).\n\n\n[469.](https://www.openphilanthropy.org/brain-computation-report#footnoteref469_qg8h90f)[Gollisch and Meister (2010)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3717333/pdf/nihms488912.pdf): “The anatomical diversity suggests that there is much function left to be discovered and that we probably still have a good distance to go before understanding all the computations performed by the retina” (p. 14).\n\n\n[470.](https://www.openphilanthropy.org/brain-computation-report#footnoteref470_o6ndlkx)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. 
Markus Meister](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Markus%20Meister,%20September%2024,%202019.pdf): “It has taken more effort to simulate retinal responses to natural scenes than to artificial stimuli used in labs (e.g. spots, flashes, moving bars)” (p. 1). Heitman et al. (2016): “This paper tests how accurately one pseudo-linear model, the generalized linear model (GLM), explains the responses of primate RGCs to naturalistic visual stimuli … The GLM accurately reproduced RGC responses to white noise stimuli, as observed previously, but did not generalize to predict RGC responses to naturalistic stimuli. It also failed to capture RGC responses when fitted and tested with naturalistic stimuli alone. Fitted scalar nonlinearities before and after the linear filtering stage were insufficient to correct the failures. These findings suggest that retinal signaling under natural conditions cannot be captured by models that begin with linear filtering, and emphasize the importance of additional spatial nonlinearities, gain control, and/or peripheral effects in the first stage of visual processing” (p. 1).\n\n\n[471.](https://www.openphilanthropy.org/brain-computation-report#footnoteref471_spq49dx)See Figure 1C in [Maheswaranathan et al. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf), and [Batty et al. (2017)](https://openreview.net/pdf?id=HkEI22jeg): “RNNs of varying architectures consistently outperformed LNs and GLMs in predicting neural spiking responses to a novel natural scene movie for both OFF and ON parasol retinal ganglion cells in both experiments (Figure 2)” (p. 6).\n\n\n[472.](https://www.openphilanthropy.org/brain-computation-report#footnoteref472_8m75zda)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. E.J. Chichilnisky](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20E.J.%20Chichilnisky,%20January%2023,%202020.pdf): “It’s hard to know when to stop fine-tuning the details of your model. A given model may be inaccurate to some extent, but we don’t know whether a given inaccuracy matters, or whether a human wouldn’t be able to tell the difference (though focusing on creating usable retinal prostheses can help with this)” (p. 3).\n\n\n[473.](https://www.openphilanthropy.org/brain-computation-report#footnoteref473_xokduhk)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Baccus](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Stephen%20Baccus,%20January%2022,%202020.pdf): “The visual system works under a wide range of conditions – for example, varying light levels and varying contrast levels. Experiments focused on a set of natural scenes only cover some subset of these conditions. For example, Prof. Baccus’s lab has not really tested dim light, or rapid transitions between bright and dim light” (p. 2). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. E.J. Chichilnisky](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20E.J.%20Chichilnisky,%20January%2023,%202020.pdf): “One of the biggest challenges is the world of possible stimuli. It would take lifetimes to present all possible stimuli, so we don’t know if we’re missing something. Prof. Chichilnsky’s lab has the biggest trove of data in the world from retinal ganglion cells. 
They’ve recorded from something like 500,000 retinal ganglion cells (roughly half the retina), and they have about 50 billion spikes. But even this may not be enough data” (p. 3).\n\n\n[474.](https://www.openphilanthropy.org/brain-computation-report#footnoteref474_tgq0c5d)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Markus Meister](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Markus%20Meister,%20September%2024,%202019.pdf): “The biochemistry involved in retinal light adaptation is well-understood, and it can be captured using a simplified computational model. Specifically, you can write down a three-variable dynamical model that gets it about 80% correct. The compute required to run a functional model of the retina would probably be dominated by the feedforward processing in the circuit, rather than by capturing adaptation” (p. 2).\n\n\n[475.](https://www.openphilanthropy.org/brain-computation-report#footnoteref475_ne17ug1)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Baccus](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Stephen%20Baccus,%20January%2022,%202020.pdf): “These models focus on replicating the response of an individual retinal ganglion cell to a stimulus. However, it may also be necessary to replicate correlations between the responses of different cells in the retina, as these may carry important information. Some people think that replicating the firing patterns of individual cells is enough, but most people think that correlations are important. Prof. Baccus’s lab has not yet assessed their model’s accuracy with respect to these between-cell correlations, though it is on their agenda” (p. 2).\n\n\n[476.](https://www.openphilanthropy.org/brain-computation-report#footnoteref476_sjmqwjl)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. E.J. Chichilnisky](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20E.J.%20Chichilnisky,%20January%2023,%202020.pdf): “There is variability in retinal function both across species and between individuals of the same species. Mouse retinas are very different from human retinas (a difference that is often ignored), and there is variability amongst monkey retinas as well” (p. 3).\n\n\n[477.](https://www.openphilanthropy.org/brain-computation-report#footnoteref477_lnl03xc)For example, there are about 20 different types of retinal ganglion cells in humans (see [Open Philanthropy’s non-verbatim notes from a conversation with Prof. E.J. Chichilnisky](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20E.J.%20Chichilnisky,%20January%2023,%202020.pdf) (p. 3)), which could vary in complexity. However, Prof. Stephen Baccus seemed to think that the data gathered for [Maheswaranathan et al. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf) captures this complication. From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Baccus](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Stephen%20Baccus,%20January%2022,%202020.pdf): “There is no special selection involved in choosing which cells to test, and Prof. Baccus would expect similar success with arbitrary sets of retinal ganglion cells, though one cannot account for every cell under every condition without testing it” (p. 1). 
Another possibility is that these CNNs/RNNs might be vulnerable to [adversarial examples](https://arxiv.org/pdf/1312.6199.pdf), in a manner analogous to the vulnerabilities exhibited by image recognition systems (see discussion in [Section 3.2](https://www.openphilanthropy.org/brain-computation-report#VisualCortex)). And the results were obtained using isolated retinas (I believe this means that the animal’s eyes were removed from the body), which could introduce differences as well.\n\n\n[478.](https://www.openphilanthropy.org/brain-computation-report#footnoteref478_m5n8b57)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Baccus](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Stephen%20Baccus,%20January%2022,%202020.pdf): “Prof. Baccus and his colleagues have calculated that their CNN requires ~20 billion floating point operations to predict the output of one ganglion cell over one second (these numbers treat multiply and addition as separate operations – if we instead counted multiply-add operations (MACCs), the numbers would drop by a factor of roughly 2). The input size is 50 × 50 (pixels) × 40 time points (10 ms bins). Layer 1 has 8 channels and 36 × 36 units with 15 × 15 filters each. Layer 2 has 8 channels and 26 × 26 units with 11 × 11 filters each. Layer 3 (to the ganglion cell) is a dense layer with an 8 × 26 × 26 filter from layer 2. This leads to the following calculation for one ganglion cell:\n\n\nLayer 1: (40 × 15 × 15 × 2 + 1 (for the ReLU)) × 36 × 36 units × 8 channels = 1.87e8\n\n\nLayer 2: (8 × 11 × 11 × 2 + 1) × 26 × 26 units × 8 channels = 1.05e7\n\n\nLayer 3: 8 × 26 × 26 × 2 = 10,816.\n\n\nTotal: 1.97e8 FLOP per 10 ms bin. Multiplied by 100, this equals 1.97e10 FLOP/s” (p. 6).\n\n\n[479.](https://www.openphilanthropy.org/brain-computation-report#footnoteref479_dy0q4zn)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Baccus](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Stephen%20Baccus,%20January%2022,%202020.pdf): “Simulating more ganglion cells simultaneously only alters the last layer of the network, and so results in only a relatively small increase in computation. A typical experiment involves around 5-15 cells, but Prof. Baccus can easily imagine scaling up to 676 cells (26 × 26 — the size of the last layer), or to 2500 (50×50 — the size of the input). 676 cells would require 20.4 billion FLOPs per second. 2500 would require 22.4 billion.” (p. 6). 22.4 billion/2500 is ~9e6, which I’ve rounded to 1e7.\n\n\n[480.](https://www.openphilanthropy.org/brain-computation-report#footnoteref480_n6d8fpo)My estimate is as follows. 1st layer: (31 × 31 (image patch) + 50 (inputs from previous time-step)) × 50 = 48,050 MACCs. Second layer: (50 feedforward inputs from layer 1 + 50 inputs from previous time-step) × 50 = 5,000 MACCs. Total MACCs per timestep: ~ 53,000. Multiplied by two for FLOPs vs. MACCs (see “It’s dot products all the way down” [here](https://machinethink.net/blog/how-fast-is-my-model/)) = 106,000 FLOPs per time-step. Timesteps per second: 1200 (0.83 ms time bins). Total FLOPs per cell per second: ~1.2e8 FLOP/s. I have discussed this estimate with two people with ML expertise, but it has not been confirmed by any of the paper’s authors.
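As a cross-check on the layer-by-layer tally quoted in footnotes 478 and 479, the quoted architecture can be turned into a few lines of arithmetic. This is a minimal sketch; the function and variable names are mine, and the convention of 2 FLOPs per multiply-add plus 1 FLOP per ReLU follows the quoted calculation:

```python
# Reproduces the per-ganglion-cell FLOP tally quoted in footnote 478
# (50 x 50 pixel input, 40 time points per 10 ms bin), and the multi-cell
# scaling in footnote 479. Multiplies and adds are counted separately
# (2 FLOPs per MACC), plus 1 FLOP per ReLU, as in the quoted calculation.

def conv_layer_flops(maccs_per_unit, units, channels):
    """FLOPs for one layer: (2 FLOPs per MACC + 1 for the ReLU) per unit, per channel."""
    return (maccs_per_unit * 2 + 1) * units * channels

layer1 = conv_layer_flops(40 * 15 * 15, 36 * 36, 8)   # ~1.87e8 FLOP per 10 ms bin
layer2 = conv_layer_flops(8 * 11 * 11, 26 * 26, 8)    # ~1.05e7
layer3 = 8 * 26 * 26 * 2                              # dense readout for one cell, 10,816

per_bin = layer1 + layer2 + layer3                    # ~1.97e8 FLOP per 10 ms bin
print(f"{per_bin * 100:.3g}")                         # ~1.97e+10 FLOP/s for one cell

# Footnote 479: adding ganglion cells only duplicates the dense readout layer.
def flops_per_second(n_cells):
    return (layer1 + layer2 + n_cells * layer3) * 100

print(f"{flops_per_second(676):.3g}")                 # ~2.04e+10 (quoted as ~20.4 billion)
print(f"{flops_per_second(2500):.3g}")                # ~2.24e+10 (quoted as ~22.4 billion)
print(f"{flops_per_second(2500) / 2500:.1g}")         # ~9e+06 per cell, rounded to 1e7 in the text
```

Because the convolutional front end is shared across cells, the per-cell figure falls as more cells are read out, which is why the scaled-up case works out to roughly 1e7 FLOP/s per cell.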
\n\n\n[481.](https://www.openphilanthropy.org/brain-computation-report#footnoteref481_311ddcd)[Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) estimates at least 1e10 FLOP/s for the retina, based on budgeting at least one floating-point multiplication operation per synapse, and a 12 Hz rate of computation (p. 749). However, he doesn’t (at least in that paragraph) say much to justify this assumption; and estimates that assume 1 FLOP per event at synapses have been covered, to some extent, under the mechanistic method section already. So I’ll focus elsewhere. For what it’s worth, though, [Sarpeshkar’s (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) estimate would imply at least ~1e13-1e16 FLOP/s for the brain as a whole, using the scaling factors discussed below.\n\n\n[482.](https://www.openphilanthropy.org/brain-computation-report#footnoteref482_6zfittf)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Baccus](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Stephen%20Baccus,%20January%2022,%202020.pdf): “The largest amount of computation takes place in the first layer of the network. If the input size was larger, these numbers would scale up” (p. 6).\n\n\n[483.](https://www.openphilanthropy.org/brain-computation-report#footnoteref483_uobzdaf)[Moravec (2008)](https://www.scientificamerican.com/article/rise-of-the-robots-2008-02/) reports that the brain is about 75,000 times heavier than the retina, which he cites as weighing 0.02 g (though [Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) estimates 0.4 g, substantially more). Moravec rounds this factor to 100,000, which in combination with his 1e9 calculations per second estimate for replicating the retina, yields a whole brain estimate of 1e14 calculations per second (this would be ~4e12 if we used Sarpeshkar’s weight estimate). See [Moravec (2008)](https://www.scientificamerican.com/article/rise-of-the-robots-2008-02/), “Nervous Tissue and Computation.” [Azevedo et al. (2009)](https://www.ncbi.nlm.nih.gov/pubmed/19226510) (p. 536), report that the whole brain is ~1508.91 g, which is in line with what Moravec’s estimate implies (1500 g). However, [Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) (p. 748), estimates retinal weight at 0.4 g, which would result in a weight-based scale-up of 3750 – considerably less than Moravec’s rounded 100,000.\n\n\n[484.](https://www.openphilanthropy.org/brain-computation-report#footnoteref484_71tkwxi)[Moravec (1988)](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2): “The 1,500 cubic centimeter human brain is about 100,000 times as large as the retina” (p. 2). [Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) (p. 748), reports that the area of the human retina is 2500 mm², and the average thickness is 160 µm, for a total of 400 mm³ (0.4 cm³). The brain [appears to be around 1400 cm³](https://hypertextbook.com/facts/2001/ViktoriyaShchupak.shtml), which suggests a scale-up, on Sarpeshkar’s numbers, of ~3500.
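The mass- and volume-based scale-up factors in footnotes 483 and 484 are simple ratios of the quoted figures; a minimal sketch (the variable names are mine, and all inputs are the numbers cited in those footnotes):

```python
# Retina-to-brain scale-up factors from the figures quoted in footnotes 483-484,
# and the whole-brain estimates they imply when applied to Moravec's ~1e9
# calculations-per-second figure for the retina.

brain_mass_g = 1500.0            # ~1508.91 g per Azevedo et al. (2009), rounded
retina_mass_moravec_g = 0.02     # Moravec (2008)
retina_mass_sarpeshkar_g = 0.4   # Sarpeshkar (2010)

brain_volume_cm3 = 1400.0        # rough whole-brain volume cited in footnote 484
retina_volume_cm3 = 0.4          # 2500 mm^2 x 160 um (Sarpeshkar 2010)

print(brain_mass_g / retina_mass_moravec_g)      # 75000.0 (Moravec rounds to 100,000)
print(brain_mass_g / retina_mass_sarpeshkar_g)   # 3750.0
print(brain_volume_cm3 / retina_volume_cm3)      # ~3500

print(f"{1e9 * 1e5:.0e}")                        # 1e+14 calc/s with Moravec's rounded factor
print(f"{1e9 * 3750:.0e}")                       # ~4e+12 calc/s with the Sarpeshkar-based factor
```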
\n\n\n[485.](https://www.openphilanthropy.org/brain-computation-report#footnoteref485_2dxs4mx)The retina has about 1e8 signaling cells if you include all the photoreceptors (though Stephen Baccus indicated that for bright light, it might make more sense to focus on the roughly 5e6 cones), and tens of millions of other non-photoreceptor neurons. These numbers are roughly a factor of 1000 and 10,000 less, respectively, than the brain’s neuron count (1e11). From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Baccus](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Stephen%20Baccus,%20January%2022,%202020.pdf): “We can think of the retina as receiving a 100 megapixel input and outputting a 1 megapixel output (though in bright light, it’s more like 5 million inputs, because there are 5 million cones and 95 million rods). And there are something like 10 million other cells in the retina” (p. 3).\n\n\n[486.](https://www.openphilanthropy.org/brain-computation-report#footnoteref486_8p13f7a)[Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) (p. 698), lists ~1 billion synapses in the retina, though I’m not sure where he got this number. I am assuming the synapse estimates of 1e14-1e15, discussed in [Section 2.1.1.1](https://www.openphilanthropy.org/brain-computation-report#SpikesThroughSynapsesPerSecond).\n\n\n[487.](https://www.openphilanthropy.org/brain-computation-report#footnoteref487_zpfk6h0)See [Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA): “The weight of the human retina is 2500 mm² (area) × 160 µm (avg. thickness) × 1000 kg/m³ (density in SI units) = 0.4 grams. Thus, the power consumption of human rods in the dark may be estimated to be 0.2 grams × 13 µmol ATP/g/min × 20 *k*T/ATP = 2.1 mW. If we assume that outer retina power consumption is dominated by the rods, and that the inner and outer retina consume at the same rate in humans, then the total power consumption of the retina in the dark may be estimated to be 2.1 mW × 2 = 4.2 mW. We list the average of (2.6 + 4.2)/2 = 3.4 mW as our estimate for the total power consumption of the retina in Table 23.2. We thank Simon Laughlin for his generous assistance in helping us estimate the number of synapses in the retina and the power consumption of the eye” (p. 748). Following Sarpeshkar, I am here using [Aiello’s (1997)](http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0100-84551997000100023) estimate of 14.6 W for the brain as a whole.\n\n\n[488.](https://www.openphilanthropy.org/brain-computation-report#footnoteref488_9ie69yf)[Moravec (1988)](https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/ref=sr_1_2?dchild=1&keywords=mind+children+moravec&qid=1586128538&sr=8-2): “The retina’s evolutionarily pressed neurons are smaller and more tightly packed than average” (p. 59). See also [Moravec’s (3/18/98) replies](https://jetpress.org/volume1/commentary.htm) to Anders Sandberg’s comment in the Journal of Evolution and Technology: “Evolution can just as easily choose two small neurons as one twice as large. The cost in metabolism and materials is the same. 
So I would expect brain structures to maximize for effective computation per volume, not per neuron. After all, one neuron with ten thousand synapses might be the computational match of 50 neurons with 50 synapses each.”\n\n\n[489.](https://www.openphilanthropy.org/brain-computation-report#footnoteref489_tlnkoa5)See his reply to Moravec [here](https://jetpress.org/volume1/commentary.htm): “volume cannot be compared due to the differences in tissue structure and constraints.”\n\n\n[490.](https://www.openphilanthropy.org/brain-computation-report#footnoteref490_inqf5g9)See his reply to Moravec [here](https://jetpress.org/volume1/commentary.htm). Though his high-end estimate of whole brain neuron count (1e12) is, I think, too large.\n\n\n[491.](https://www.openphilanthropy.org/brain-computation-report#footnoteref491_0c0zse0)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. E.J. Chichilnisky](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20E.J.%20Chichilnisky,%20January%2023,%202020.pdf): “The brain is probably a lot more plastic than the retina, though this is likely a quantitative rather than a qualitative difference” (p. 4).\n\n\n[492.](https://www.openphilanthropy.org/brain-computation-report#footnoteref492_nm9mudh)See Anders Sandberg’s [1998 comments on Moravec](https://jetpress.org/volume1/commentary.htm): “The retina is a highly optimized and fairly stereotypal neural structure, this can introduce a significant bias.”\n\n\n[493.](https://www.openphilanthropy.org/brain-computation-report#footnoteref493_t3ndd2o)For example, it needs to be packed into the eye, and to be transparent enough for light signals to pass through layers of cells to reach the photoreceptors. Anders Sandberg, in [his 1998 comments on Moravec](https://jetpress.org/volume1/commentary.htm), also suggests that it needs to be two dimensional, which might preclude more interesting and complex computational possibilities implicated by 3D structures. I have not investigated this.\n\n\n[494.](https://www.openphilanthropy.org/brain-computation-report#footnoteref494_s3i2g18)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Baccus](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Stephen%20Baccus,%20January%2022,%202020.pdf): “There is higher connectivity in the cortex than in the retina… Recurrence might be the trickiest difference. The retina can be largely approximated as a feedforward structure (there is some feedback, but a feedforward model does pretty well), but in the cortex there is a lot of feedback between different brain regions. This might introduce oscillations and feedback signals that make precise details about spike timings (e.g., at a 1 ms level of precision) more important, and therefore make firing rate models, which blur over 10 ms, inadequate” (p. 5).\n\n\n[495.](https://www.openphilanthropy.org/brain-computation-report#footnoteref495_a76a05z)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. E.J. Chichilnisky](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20E.J.%20Chichilnisky,%20January%2023,%202020.pdf): “We are much further along in mapping all of the cell types in the retina than we are in the brain as a whole. Differences between cell types matter a lot in the retina. We don’t know how much these differences matter in the rest of the brain. 
Some people think that they don’t matter very much, but Prof. Chichilnisky disagrees, and certainly the field has been moving in the direction of emphasizing the cell type differences in the brain. However, there’s no reason to think that some neuron types in the brain/retina will be radically simple and some will be radically complicated. There will be some variations, but perhaps not a big gulf” (p. 4).\n\n\n[496.](https://www.openphilanthropy.org/brain-computation-report#footnoteref496_s0z8lxj)The retina engages in certain forms of dendritic computation (see e.g. [Taylor et al. (2000)](https://science.sciencemag.org/content/289/5488/2347) and [Hanson et al. (2019)](https://elifesciences.org/articles/42392)), but various dendritic computation results focus on cortical pyramidal cells, and in particular on the apical dendrite of such cells (see [London and Häusser (2005)](https://pdfs.semanticscholar.org/81cf/8182d5725d5fdd9a33b65843f2f9fdb6a9f6.pdf) for examples). [Glia](https://www.ncbi.nlm.nih.gov/pubmed/6771013), [electrical synapses](https://webvision.med.utah.edu/book/part-iii-retinal-circuits/myriad-roles-for-gap-junctions-in-retinal-circuits/), and [neuropeptide signaling](https://www.ncbi.nlm.nih.gov/pubmed/25797468) are all present in the retina; I’m less sure about ephaptic effects (to the extent that they’re present/task-relevant anywhere).\n\n\n[497.](https://www.openphilanthropy.org/brain-computation-report#footnoteref497_81rqr6u)See his reply to Anders Sandberg [here](https://jetpress.org/volume1/commentary.htm). [Drexler (2019)](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf) assumes something similar: “In the brain, however, typical INA [immediate neural activity] per unit volume is presumably less than that of activated retina” (p. 188).\n\n\n[498.](https://www.openphilanthropy.org/brain-computation-report#footnoteref498_wx0eoi7)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Markus Meister](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Markus%20Meister,%20September%2024,%202019.pdf) (p. 4):\n\n\nThere is nothing particularly simplistic about the retina, relative to other neural circuits. It probably has a hundred different cell types, it probably uses almost every neurotransmitter we know of, and it has very intricate microcircuitry. Prof. Meister would be sympathetic to scaling up from the retina as a way of putting an upper limit on the difficulty of simulating the brain as a whole. Prof. Meister has not actually done this back-of-the-envelope calculation, but budgeting based on the rate at which action potentials arrive at synapses, multiplied by the number of synapses, seems like roughly the right approach.\n\n\nThough see later in that section for some small increases (2×) for dendritic computation. From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. E.J. Chichilnisky](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20E.J.%20Chichilnisky,%20January%2023,%202020.pdf) (p. 4):\n\n\nThe level of modeling detail necessary in the retina provides a good test of the level of modeling detail necessary in the brain as a whole. However, the data on the retina aren’t in, and they won’t be in for a while.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Dr. 
Stephen Baccus](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Stephen%20Baccus,%20January%2022,%202020.pdf) (p. 5):\n\n\nProf. Baccus thinks the answer is ‘maybe’ to the question of whether the compute necessary to model neurons in the retina will be similar to the compute necessary to model neurons in the cortex. You might expect a volume by volume comparison to work as a method of scaling up from the retina to the cortex.\n\n\n[499.](https://www.openphilanthropy.org/brain-computation-report#footnoteref499_w94x6yx)See [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Barak Pearlmutter](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Prof.%20Barak%20Pearlmutter.pdf): “Prof. Hans Moravec attempted to derive estimates of the computational capacity of the brain from examination of the retina. Prof. Pearlmutter thought that Moravec’s estimates for the computational costs of robotic vision were likely accurate, given Moravec’s expertise in vision” (p. 3).\n\n\n[500.](https://www.openphilanthropy.org/brain-computation-report#footnoteref500_6smnrbl)See [here](https://machinethink.net/blog/how-fast-is-my-model/): “Let’s say the input shape for a convolutional layer is 224×224×3, a typical size for an image classifier.” Other input sizes listed [here](https://github.com/albanie/convnet-burden).\n\n\n[501.](https://www.openphilanthropy.org/brain-computation-report#footnoteref501_5zu2pdi)This section is inspired by some arguments suggested by Dr. Dario Amodei, to the effect that ML vision models might be put into productive comparison with parts of the visual cortex (and in particular, conservatively, V1). See also [Drexler (2019)](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf), who inspired some of Dr. Amodei’s analysis.\n\n\n[502.](https://www.openphilanthropy.org/brain-computation-report#footnoteref502_894tu4y)Some datasets have larger numbers of categories. For example, the full ImageNet dataset has [21k classes](http://www.image-net.org/about-stats), and [JFT-300M](https://arxiv.org/pdf/1707.02968.pdf) has 18,291 classes. However, many results focus on the benchmark set by the [ILSVRC competition](https://www.kaggle.com/getting-started/149448), which uses 1000 classes. I’ll focus there as well.\n\n\n[503.](https://www.openphilanthropy.org/brain-computation-report#footnoteref503_actwami)When asked to provide five labels for a given image, at least one human has managed to include the true label 94.9% of the time, [Russakovsky et al. (2014)](https://arxiv.org/pdf/1409.0575.pdf): “Annotator A1 evaluated a total of 1500 test set images. The GoogLeNet classification error on this sample was estimated to be 6.8% (recall that the error on full test set of 100,000 images is 6.7%, as shown in Table 7). The human error was estimated to be 5.1%.” You can try out the task for yourself [here](https://cs.stanford.edu/people/karpathy/ilsvrc/). [Karpathy (2014b)](https://karpathy.github.io/2014/09/02/what-i-learned-from-competing-against-a-convnet-on-imagenet/), who appears to have served as Annotator A1 for [Russakovsky et al. (2014)](https://arxiv.org/pdf/1409.0575.pdf), writes in a blog post: “There have now been several reported results that surpass my 5.1% error on ImageNet. I’m astonished to see such rapid progress. At the same time, I think we should keep in mind the following: *Human accuracy is not a point. 
It lives on a tradeoff curve.*We trade off human effort and expertise with the error rate: I am one point on that curve with 5.1%. My labmates with almost no training and less patience are another point, with even up to 15% error. And based on some calculations that consider my exact error types and hypothesizing which ones may be easier to fix than others, it’s not unreasonable to suggest that an ensemble of very dedicated expert human labelers might push this down to 3%, with about 2% being an optimistic error rate lower bound.” DNNs are worse on top 1 labeling, but my understanding is that this is partly because images contain multiple possible labels (see [Kostyaev (2016)](https://blog.kostyaev.me/computer%20vision/2016/03/01/Why-top-5-error-is-more-fair-metric-than-top-1-for-ImageNet-classification-task.html)).\n\n\n[504.](https://www.openphilanthropy.org/brain-computation-report#footnoteref504_7b72tjf)See [Brownlee (2019b)](https://machinelearningmastery.com/object-recognition-with-deep-learning/) for a breakdown of different types of object-recognition tasks, and [here](https://github.com/albanie/convnet-burden) for example models. [Hossain et al. (2018)](https://arxiv.org/pdf/1810.04020.pdf) review different image captioning models.\n\n\n[505.](https://www.openphilanthropy.org/brain-computation-report#footnoteref505_clr7ady)[Cadena et al. (2019)](https://journals.plos.org/ploscompbiol/article?rev=2&id=10.1371/journal.pcbi.1006897#pcbi-1006897-g008): “Despite great efforts over several decades, our best models of primary visual cortex (V1) still predict spiking activity quite poorly when probed with natural stimuli, highlighting our limited understanding of the nonlinear computations in V1” (abstract). See also [Zhang et al. (2019)](https://www.biorxiv.org/content/10.1101/296301v1): “While CNN models, especially those goal-driven ones pre-trained on computer vision tasks, performed very well in our study and some other studies ([Cadena et al. (2017)](https://www.biorxiv.org/content/10.1101/201764v1)) for V1 neuron modeling, we should point out that even the best-performing CNN in our study only explained about 50% of the explainable variance in our neural data, consistent with [Cadena et al. (2017)](https://www.biorxiv.org/content/10.1101/201764v1). The failure of CNN models for explaining the other half of the variance in V1 data can be due to a number of reasons. First, V1 neurons are subject to network interaction and their neural responses are known to be mediated by strong long-range contextual modulation. Second, it is possible that there are some basic structural components missing in the current deep CNN methodology for fully capturing V1 neural code” (p. 51-52 in the published version).\n\n\n[506.](https://www.openphilanthropy.org/brain-computation-report#footnoteref506_pg7ne18)See [Zhang et al. (2019)](https://www.biorxiv.org/content/10.1101/296301v1)[Kiregeskorte (2015)](https://www.annualreviews.org/doi/full/10.1146/annurev-vision-082114-035447?url_ver=Z39.88-2003&rfr_id=ori%3Arid%3Acrossref.org&rfr_dat=cr_pub%3Dpubmed), [Yamins and DiCarlo (2016)](https://www.nature.com/articles/nn.4244) and [Lindsay (2020)](https://arxiv.org/abs/2001.07092) for reviews.\n\n\n[507.](https://www.openphilanthropy.org/brain-computation-report#footnoteref507_ztwzwgl)[Cadena et al. 
(2019)](https://journals.plos.org/ploscompbiol/article?rev=2&id=10.1371/journal.pcbi.1006897): “We both trained CNNs directly to fit the data, and used CNNs trained to solve a high-level task (object categorization). With these approaches, we are able to outperform previous models and improve the state of the art in predicting the responses of early visual neurons to natural images” (see “Author summary”) … “We compared the models for a number of cells selected randomly ([Fig 8A](https://journals.plos.org/ploscompbiol/article?rev=2&id=10.1371/journal.pcbi.1006897#pcbi-1006897-g008)). There was a diversity of cells, both in terms of how much variance could be explained in principle (dark gray bars) and how well the individual models performed (colored bars). Overall, the deep learning models consistently outperformed the two simpler models of V1. This trend was consistent across the entire dataset ([Fig 8B and 8D](https://journals.plos.org/ploscompbiol/article?rev=2&id=10.1371/journal.pcbi.1006897#pcbi-1006897-g008)). The LNP model achieved 16.3% FEV [Fraction of explainable variance explained], the GFB model 45.6% FEV. The performance of the CNN trained directly on the data was comparable to that of the VGG-based model ([Fig 8C and 8D](https://journals.plos.org/ploscompbiol/article?rev=2&id=10.1371/journal.pcbi.1006897#pcbi-1006897-g008)); they predicted 49.8% and 51.6% FEV, respectively, on average” (p. 11). See also [Zhang et al. (2019)](https://www.biorxiv.org/content/10.1101/296301v1) for comparable results, and [Klindt et al. (2017)](https://papers.nips.cc/paper/6942-neural-system-identification-for-large-populations-separating-what-and-where.pdf) and [Antolík et al. (2016)](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004927) for earlier results. [Kindel et al. (2019)](https://jov.arvojournals.org/article.aspx?articleid=2732380) report that “ we trained deep convolutional neural networks to predict the firing rates of V1 neurons in response to natural image stimuli, and we find that the predicted firing rates are highly correlated (CC norm = 0.556 ± 0.01) with the neurons’ actual firing rates over a population of 355 neurons. This performance value is quoted for all neurons, with no selection filter. Performance is better for more active neurons: When evaluated only on neurons with mean firing rates above 5 Hz, our predictors achieve correlations of CCnorm = 0.69 ± 0.01 with the neurons’ true firing rates” (see abstract). I’m not sure how this fits with the characterization of the state of the art in [Cadena et al. (2019)](https://journals.plos.org/ploscompbiol/article?rev=2&id=10.1371/journal.pcbi.1006897).\n\n\n[508.](https://www.openphilanthropy.org/brain-computation-report#footnoteref508_wstsuqk)[Yamins et al. (2014)](https://www.pnas.org/content/111/23/8619): “We found that the top layer of the high-performing HMO model achieves high predictivity for individual IT neural sites, predicting 48.5±1.3% of the explainable IT neuronal variance ([Fig. 3 B and C](https://www.pnas.org/content/111/23/8619#F3)). This represents a nearly 100% improvement over the best comparison models and is comparable to the prediction accuracy of state-of-the-art models of lower-level ventral areas such as V1 on complex stimuli ([10](https://www.jneurosci.org/content/25/46/10577?ijkey=b1aeab6b756dc871b809f168b632df61554970d5&keytype2=tf_ipsecsha)). 
In comparison, although the HMAX model was better at predicting IT responses than baseline V1 or SIFT, it was not significantly different from the V2-like model” …. [Schrimpf et al. (2018)](https://www.biorxiv.org/content/10.1101/407007v1.full.pdf): “The models from this early work outlined above outperformed all other neuroscience models at the time and yielded reasonable scores on predicting response patterns from both single unit activity and fMRI.” And [Yamins and DiCarlo (2016)](https://www.nature.com/articles/nn.4244): “It turned out that the top hidden layers of these models were the first quantitatively accurate image-computable model of spiking responses in IT cortex, the highest-level area in the ventral hierarchy (Fig. 2b,c). Similar models have also been shown to predict population aggregate responses in functional MRI data from human IT (Fig. 2d)” (p. 359). [Yamins and DiCarlo (2016)](https://www.nature.com/articles/nn.4244) also note that “These results are not trivially explained merely by any signal reflecting object category identity being able to predict IT responses. In fact, at the single neuron level, IT neural responses are largely not categorical, and ideal-observer models with perfect access to category and identity information are far less accurate IT models than goal-driven HCNNs (Fig. 2a,c). Being a true image-computable neural network model appears critical for obtaining high levels of neural predictivity. In other words: combining two general biological constraints—the behavioral constraint of the object recognition task and the architectural constraint imposed by the HCNN model class—leads to greatly improved models of multiple layers of the visual sensory cascade” (p. 359). [Schrimpf et al. (2018)](https://www.biorxiv.org/content/10.1101/407007v1.full.pdf): “Current models still fall short of reaching benchmark ceilings: The best ANN model V4 predictivity score is 0.663, which is below the internal consistency ceiling of these V4 data (0.892). The best ANN model IT predictivity score is 0.604, which is below the internal consistency ceiling of these IT data (0.817). And the best ANN model behavioral predictivity score is 0.378, which is below the internal consistency ceiling of these behavioral data (0.497)” (p. 7). That said, I am not sure exactly what the relevant benchmark is in the context of this paper. See [here](http://www.brain-score.org/#leaderboard) for ongoing evaluation of the “brain-score” of different models – evaluation which incorporates the degree to which they predict neuron responses in IT.\n\n\n[509.](https://www.openphilanthropy.org/brain-computation-report#footnoteref509_e1qgb9d)[Yamins et al. (2014)](https://www.pnas.org/content/111/23/8619): “We found that the HMO model’s penultimate layer is highly predictive of V4 neural responses (51.7±2.3% explained V4 variance), providing a significantly better match to V4 than either the model’s top or bottom layers. These results are strong evidence for the hypothesis that V4 corresponds to an intermediate layer in a hierarchical model whose top layer is an effective model of IT” (p. 8623). See also [Bashivan et al. (2019)](https://www.gwern.net/docs/ai/2019-bashivan.pdf): “We found that the neural predictor models correctly predicted 89% of the explainable (i.e., image-driven) variance in the V4 neural responses” (p. 
1).\n\n\n[510.](https://www.openphilanthropy.org/brain-computation-report#footnoteref510_cb2shy8)[Khaligh-Razavi and Kiregeskorte (2014)](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003915): “The models include well-known neuroscientific object-recognition models (e.g. HMAX, VisNet) along with several models from computer vision (e.g. SIFT, GIST, self-similarity features, and a deep convolutional neural network). We compared the representational dissimilarity matrices (RDMs) of the model representations with the RDMs obtained from human IT (measured with fMRI) and monkey IT (measured with cell recording) for the same set of stimuli (not used in training the models). Better performing models were more similar to IT in that they showed greater clustering of representational patterns by category. In addition, better performing models also more strongly resembled IT in terms of their within-category representational dissimilarities” (abstract). [Yamins and DiCarlo (2016)](https://www.nature.com/articles/nn.4244): “… Similar models have also been shown to predict population aggregate responses in functional MRI data from human IT (Fig. 2d)” (p. 359). See also [Storrs et al. (2020)](https://www.biorxiv.org/content/10.1101/2020.05.07.082743v1.full.pdf).\n\n\n[511.](https://www.openphilanthropy.org/brain-computation-report#footnoteref511_lo5b9q2)See [Yamins and DiCarlo (2016)](https://www.nature.com/articles/nn.4244): “HCNN models that are better optimized to solve object categorization produce hidden layer representations that are better able to predict IT neural response variance” (Figure 2a, p. 360); and [Schrimpf et al. (2018)](https://www.biorxiv.org/content/10.1101/407007v1.full.pdf): “Extending prior work, we found that gains in ANN ImageNet performance led to gains on Brain-Score. However, correlation weakened at ≥ 70% top-1 ImageNet performance, suggesting that additional guidance from neuroscience is needed to make further advances in capturing brain mechanisms” (p. 1). See also  for more data.\n\n\n[512.](https://www.openphilanthropy.org/brain-computation-report#footnoteref512_txcggx1)[Yamins et al. (2014)](https://www.pnas.org/content/111/23/8619): “For example, neurons in the lowest area, V1, are well described by Gabor-like edge detectors that extract rough object outlines.” [Olah et al. (2020b)](https://distill.pub/2020/circuits/early-vision/): “Gabor filters are a simple edge detector, highly sensitive to the alignment of the edge. They’re almost universally found in the fist [sic] layer of vision models.” They report that 44% of the units in the first conv layer of InceptionV1 are gabor filters, and that 14% of the units in conv2d1 are “complex gabor filters, which are “like Gabor Filters, but fairly invariant to the exact position, formed by adding together multiple Gabor detectors in the same orientation but different phases. We call these ‘Complex’ after complex cells in neuroscience” (see section “conv2d1”).\n\n\n[513.](https://www.openphilanthropy.org/brain-computation-report#footnoteref513_2ea75qd)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Konrad%20Kording,%20September%2011,%202019.pdf): “There is a traditional view in systems neuroscience that each brain area does something pre-assigned and simple. E.g., V1 detects edges, V4 pulls out colors and curvature, etc. But this view is dying at the moment” (p. 3). 
See also [Roe et al. (2020)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4912377/): “One advanced shape property represented in V4 is curvature. Curvature, which can be considered an integration of oriented line segments, is a prominent feature of object boundaries. V4 cells (receptive fields typically 2–10 deg in size) can be strongly selective for curvature of contours ([Pasupathy and Connor (1999)](https://pubmed.ncbi.nlm.nih.gov/10561421/), [2001](https://pubmed.ncbi.nlm.nih.gov/11698538/)) as well as curved (i.e., non-Cartesian) gratings ([Gallant et al. (1993)](https://pubmed.ncbi.nlm.nih.gov/8418487/), [1996](https://pubmed.ncbi.nlm.nih.gov/8899641/)).” (abstract); and [Walsh (1999)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC33934/) for more on color in the visual cortex\n\n\n[514.](https://www.openphilanthropy.org/brain-computation-report#footnoteref514_7og0p9r)See [Olah et al. (2020a)](https://distill.pub/2020/circuits/zoom-in/): “Curve detecting neurons can be found in every non-trivial vision model we’ve carefully examined” (see Example 1: Curve Detectors). See also the corners in conv2d2 described in [Olah et al. (2020b)](https://distill.pub/2020/circuits/early-vision/), and the color detectors described in conv2d0-2.\n\n\n[515.](https://www.openphilanthropy.org/brain-computation-report#footnoteref515_ywo0bjb)[Bashivan et al. (2019)](https://www.gwern.net/docs/ai/2019-bashivan.pdf): “Using an ANN-driven image synthesis method, we found that luminous power patterns (i.e., images) can be applied to primate retinae to predictably push the spiking activity of targeted V4 neural sites beyond naturally occurring levels. This method, although not yet perfect, achieves unprecedented independent control of the activity state of entire populations of V4 neural sites, even those with overlapping receptive fields. These results show how the knowledge embedded in today’s ANN models might be used to noninvasively set desired internal brain states at neuron-level resolution, and suggest that more accurate ANN models would produce even more accurate control” (p. 1).\n\n\n[516.](https://www.openphilanthropy.org/brain-computation-report#footnoteref516_014tn72)[Yamins and DiCarlo (2016)](https://www.nature.com/articles/nn.4244): “within the class of HCNNs [e.g., Hierarchical Convolutional Neural Networks], there appear to be comparatively few qualitatively distinct, efficiently learnable solutions to high-variation object categorization tasks, and perhaps the brain is forced over evolutionary and developmental timescales to pick such a solution” (p. 356).\n\n\n[517.](https://www.openphilanthropy.org/brain-computation-report#footnoteref517_ttqlyqh)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Konrad%20Kording,%20September%2011,%202019.pdf): “It’s true that simple models of V1 can describe 30 percent of the variance in V1’s activity. But you can describe half of the variance in the activity of your transistors just by realizing that your computer is turned off at night” (p. 3).\n\n\n[518.](https://www.openphilanthropy.org/brain-computation-report#footnoteref518_s9komqo)See [Funke et al. (2020)](https://arxiv.org/pdf/2004.09406.pdf) for some discussion.\n\n\n[519.](https://www.openphilanthropy.org/brain-computation-report#footnoteref519_4o7qblq)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. 
Konrad Kording](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Konrad%20Kording,%20September%2011,%202019.pdf): “There is a traditional view in systems neuroscience that each brain area does something pre-assigned and simple. E.g., V1 detects edges, V4 pulls out colors and curvature, etc. But this view is dying at the moment. It was always suspicious on theoretical grounds. The fact that you know so much, about so many types of things, is in conflict with the view that each specific brain area is simple, as this view does not explain where all of the information available to you comes from. But it’s also empirically wrong. If you look at the literature, when you take a type of signal that matters to animals and looks for it in the brain, you find it everywhere. For example, you can find movement signals and expectations in the primary visual cortex, and rewards explain more of the variance in the primary motor cortex (the “movement area”) than movement. Basically, it’s all a complete mess. … Of course, there’s some specialization. Sound explains more of the variance in auditory cortex than in visual cortex. But the specialization isn’t simple. It’s just easier to publish papers saying e.g. ‘X is the brain area for romantic love,’ than e.g. ‘here are another ten variables X region is tuned to.’” (p. 3). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Markus Meister](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Markus%20Meister,%20September%2024,%202019.pdf): “There is a long history, in neuroscience, of attempting to assign understandable computational roles to little chunks of brain matter (e.g., “the anterior cingulate cortex is for X”). Prof. Meister believes that this program is not going to be very successful, because these regions are massively interconnected, and we now know that if you inject signals into one part of the brain, you find them in many other parts of the brain” (p. 3).\n\n\n[520.](https://www.openphilanthropy.org/brain-computation-report#footnoteref520_cwscllx)[Stringer et al. (2018)](https://www.biorxiv.org/content/biorxiv/early/2018/04/22/306019.full.pdf) showed mice pictures from Imagenet (“stimuli”) while the mice also engaged in spontaneous motor behavior (“behavior”): “Stimuli and behavior were represented together in V1 as a mixed representation: there were not separate sets of neurons encoding stimuli and behavioral variables, but each neuron multiplexed a unique combination of sensory and behavioral information” (p. 11).\n\n\n[521.](https://www.openphilanthropy.org/brain-computation-report#footnoteref521_1hnj25u)[Saleem et al. (2017)](https://www.biorxiv.org/content/biorxiv/early/2017/12/18/235648.full.pdf): “To establish the nature of these signals we recorded in primary visual cortex (V1) and in the CA1 region of the hippocampus while mice traversed a corridor in virtual reality. The corridor contained identical visual landmarks in two positions, so that a purely visual neuron would respond similarly in those positions. Most V1 neurons, however, responded solely or more strongly to the landmarks in one position…. The presence of such navigational signals as early as in a primary sensory area suggests that these signals permeate sensory processing in the cortex” (p. 1).\n\n\n[522.](https://www.openphilanthropy.org/brain-computation-report#footnoteref522_pl49jdn)See [Cadena et al. 
(2019)](https://journals.plos.org/ploscompbiol/article?rev=2&id=10.1371/journal.pcbi.1006897#pcbi-1006897-g008), “Dataset and inclusion criteria”: “We recorded a total of 307 neurons in 23 recording sessions…We discarded neurons with a ratio of explainable-to-total variance (see Eq 3) smaller than 0.15, yielding 166 isolated neurons (monkey A: 51, monkey B: 115) recorded in 17 sessions with an average explainable variance of 0.285.”\n\n\n[523.](https://www.openphilanthropy.org/brain-computation-report#footnoteref523_wt3p0z8)[Chong et al. (2016)](https://www.pnas.org/content/113/5/1453): “Using fMRI and encoding methods, we found that the ‘intermediate’ orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM [apparent motion], is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path” (p. 1453). See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Won Mok Shim](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Won%20Mok%20Shim,%20August%207,%202019.pdf): “There is a traditional view of V1, on which it is the front end of a hierarchical information-processing pipeline, and is responsible for processing simple, low-level features of bottom-up visual input from the retina/LGN. However, many feedback processes and connections have been discovered in V1 over the last decade, and most vision scientists would agree that V1’s information-processing cannot be entirely explained using bottom-up inputs….The anatomy of the visual system also suggests an important role for feedback. For example, there are more feedback connections from V1 to the LGN, than there are feedforward connections from the LGN to V1. V1 receives a large number of connections from other brain areas, like V2, and there are also many lateral connections between cells within V1. The direction of these connections can be identified using neuroanatomical trace studies, mostly from monkeys or cats… On an alternative to the traditional view, V1 is receiving top-down, high-level predictions, which it then compares with the bottom-up input. The difference between the two is an error signal, which is then conveyed from the low-level areas to the high-level areas. The origins of this idea are in computational theory (predictive coding). There is some empirical support as well, but the evidence is not completely clear.” (p. 1-2).\n\n\n[524.](https://www.openphilanthropy.org/brain-computation-report#footnoteref524_baodw8g)See e.g. [Schecter et al. (2017)](https://www.jneurosci.org/content/37/44/10541), [Cooke and Bear (2014)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3843896/), and [Cooke et al.
(2015)](https://www.nature.com/articles/nn.3920).\n\n\n[525.](https://www.openphilanthropy.org/brain-computation-report#footnoteref525_9j4zrg0)For example, in addition to detecting features of a visual stimulus like the orientation of lines and the spatial frequency of different patterns (features at least somewhat akin to the features detected by the early layers of an ImageNet model), neurons in V1 can also detect the direction that a stimulus is moving, as well as other features of how a stimulus changes over time (see [Carandini (2012)](http://www.scholarpedia.org/article/Area_V1#Stimulus_selectivity): “Cells in area V1 are commonly selective for direction of stimulus motion” and “The slant of receptive fields in space-time confers V1 neurons with some selectivity for stimulus speed, but this selectivity depends on the spatial pattern of a stimulus ([Movshon et al. (1978a)](https://pubmed.ncbi.nlm.nih.gov/722570/)). Rather than speed, V1 neurons are typically thought to be selective for temporal frequency, which is the inverse of the period between temporal [oscillations](http://www.scholarpedia.org/article/Periodic_Orbit) between dark and light” (in the “Stimulus selectivity” section)). Indeed, visual processing requires a changing stimulus (see [Gilbert (2013)](https://neurology.mhmedical.com/content.aspx?bookid=1049&sectionid=59138653): “Visual perception requires eye movement. Visual cortex neurons do not respond to an image that is stabilized on the retina because they require moving or flashing stimuli to be activated: they fire in response to transient stimulation” (p. 606)). The images processed by e.g. a ResNet-101, by contrast, are static (though there are computer-vision systems that operate in dynamic environments as well). V1 is also involved in integrating the different visual inputs from different eyes (see [Carandini (2012)](http://www.scholarpedia.org/article/Area_V1#Stimulus_selectivity): “The signals from corresponding regions in the two eyes are kept separate in the LGN, and are combined in V1” (in the “Stimulus selectivity” section)), whereas a ResNet receives only one image.\n\n\n[526.](https://www.openphilanthropy.org/brain-computation-report#footnoteref526_jbuzmnx)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Adam Marblestone](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Adam%20Marblestone.pdf): “Dr. Marblestone does not think it obvious that the visual cortex should be thought of as doing something like object-detection. It could be, for example, making a more complicated transition model based on all of its multi-modal inputs, predicting future inputs and rewards, or doing some kind of iterative inference procedure. We just don’t know quite how high-dimensional or complicated the task the visual system performs is. So any compute estimates based on comparisons between the visual system and current deep neural networks are highly uncertain” (p. 8).\n\n\n[527.](https://www.openphilanthropy.org/brain-computation-report#footnoteref527_wqaib4y)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Kate Storrs](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Dr.%20Kate%20Storrs,%20June%2011,%202020.pdf): “Returning the name of the main object in an image is a tiny portion of what the visual system can do.
Core vision involves understanding the visual world as a navigable 3D space of objects, equipped with orientations, materials, depth, properties, and behavioral affordances. Dr. Storrs would guess that object-recognition only occurs on top of that kind of description of the world. Models analogous to the visual system would need to perform a wider range of the tasks that the visual system performs, which suggests that they would need to be more powerful” (p. 2). From the non-verbatim notes from my conversations with Prof. Konrad Kording: “‘What things are’ isn’t the only question at stake in vision. You want answers to questions like “can I grasp this water bottle? Can I hold it there?”. Indeed, there are a vast number of questions that we want to be able to ask and answer with vision systems, and the “solution” to vision will depend on the exact thing that other parts of the brain need from the visual system. It’s not an easily definable space, and the only way to figure it out is to build a system that fully learns all of the relevant pieces” (p. 4). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Eric%20Jonas,%20September%2017,%202019.pdf): “Prof. Jonas is fairly confident that the visual system is not classifying objects into one of k categories” (p. 1).\n\n\n[528.](https://www.openphilanthropy.org/brain-computation-report#footnoteref528_arwcsaf)See [Serre (2019)](https://www.annualreviews.org/doi/abs/10.1146/annurev-vision-091718-014951), section 5.2, for a review.\n\n\n[529.](https://www.openphilanthropy.org/brain-computation-report#footnoteref529_f9mehf4)[Hendrycks et al. (2020)](https://arxiv.org/pdf/1907.07174.pdf): “We introduce natural adversarial examples–real-world, unmodified, and naturally occurring examples that cause machine learning model performance to substantially degrade. We introduce two new datasets of natural adversarial examples. The first dataset contains 7,500 natural adversarial examples for ImageNet classifiers and serves as a hard ImageNet classifier test set called IMAGENET-A. We also curate an adversarial out-of-distribution detection dataset called IMAGENET-O, which to our knowledge is the first out-of-distribution detection dataset created for ImageNet models. These two datasets provide new ways to measure model robustness and uncertainty. Like ℓp adversarial examples, our natural adversarial examples transfer to unseen black-box models. For example, on IMAGENET-A a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%, and its out-of-distribution detection performance on IMAGENET-O is near random chance levels. Popular training techniques for improving robustness have little effect, but some architectural changes provide mild improvements. Future research is required to enable generalization to natural adversarial examples” (p. 1).\n\n\n[530.](https://www.openphilanthropy.org/brain-computation-report#footnoteref530_1m30nmy)[Elsayed et al. (2018)](http://papers.nips.cc/paper/7647-adversarial-examples-that-fool-both-computer-vision-and-time-limited-humans.pdf): “Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes.
Here, we address this question by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by matching the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers” (p. 1). A full test of whether humans are comparably vulnerable to adversarial examples, though, might require the ability to access and manipulate the parameters of the human brain in the same manner that one can with an artificial neural network.\n\n\n[531.](https://www.openphilanthropy.org/brain-computation-report#footnoteref531_11klhlt)[Barbu et al. (2019)](https://objectnet.dev/objectnet-a-large-scale-bias-controlled-dataset-for-pushing-the-limits-of-object-recognition-models.pdf): “When tested on ObjectNet, object detectors show a 40-45% drop in performance, with respect to their performance on other benchmarks, due to the controls for biases. Controls make ObjectNet robust to fine-tuning showing only small performance increases” (p. 1).\n\n\n[532.](https://www.openphilanthropy.org/brain-computation-report#footnoteref532_ieijx5l)[Geirhos et al. (2020)](https://arxiv.org/pdf/2004.07780.pdf) discusses a number of examples. [Serre (2019)](https://www.annualreviews.org/doi/abs/10.1146/annurev-vision-091718-014951), section 5.2, discusses various generalization failures. See also [Recht et al. (2019)](https://arxiv.org/abs/1902.10811): “We build new test sets for the CIFAR-10 and ImageNet datasets. Both benchmarks have been the focus of intense research for almost a decade, raising the danger of overfitting to excessively reused test sets. By closely following the original dataset creation processes, we test to what extent current classification models generalize to new data. We evaluate a broad range of models and find accuracy drops of 3% – 15% on CIFAR-10 and 11% – 14% on ImageNet. However, accuracy gains on the original test sets translate to larger gains on the new test sets. Our results suggest that the accuracy drops are not caused by adaptivity, but by the models’ inability to generalize to slightly “harder” images than those found in the original test sets” (p. 1); [Lamb et al. (2019)](https://arxiv.org/pdf/1912.11570.pdf): “humans are able to watch cartoons, which are missing many visual details, without being explicitly trained to do so…We propose a dataset that will make it easier to study the detail-invariance problem concretely. We produce a concrete task for this: SketchTransfer, and we show that state-of-the-art domain transfer algorithms still struggle with this task. The state-of-the-art technique which achieves over 95% on MNIST −→ SVHN transfer only achieves 59% accuracy on the SketchTransfer task, which is much better than random (11% accuracy) but falls short of the 87% accuracy of a classifier trained directly on labeled sketches. This indicates that this task is approachable with today’s best methods but has substantial room for improvement” (p. 1); and [Rosenfeld et al. (2018)](https://arxiv.org/pdf/1808.03305.pdf): “We showcase a family of common failures of state-of-the art object detectors. These are obtained by replacing image sub-regions by another sub-image that contains a trained object. We call this ‘object transplanting’. Modifying an image in this manner is shown to have a non-local impact on object detection. 
Slight changes in object position can affect its identity according to an object detector as well as that of other objects in the image. We provide some analysis and suggest possible reasons for the reported phenomena” (p. 1).\n\n\n[533.](https://www.openphilanthropy.org/brain-computation-report#footnoteref533_f6rk56p)[Jenkins et al. (2018)](https://royalsocietypublishing.org/doi/full/10.1098/rspb.2018.1319) for example, found that “people know about 5000 faces on average” (p. 1) and [Biederman (1987)](https://psycnet.apa.org/record/1987-20898-001) estimates that people know 30,000 distinguishable object categories, though he treats this as “liberal” (e.g., on the high end). I have not attempted to evaluate his methodology, but at a glance it looks both loose and based on fairly substantive assumptions. Here is a relevant quote: “How many readily distinguishable objects do people know? How might one arrive at a liberal estimate for this value? One estimate can be obtained from the lexicon. There are less than 1,500 relatively common basic-level object categories, such as chairs and elephants. If we assume that this estimate is too small by a factor of 2, allowing for idiosyncratic categories and errors in the estimate, then we can assume potential classification into approximately 3,000 basic-level categories. RBC assumes that perception is based on a particular componential configuration rather than the basic-level category, so we need to estimate the mean number of readily distinguishable componential configurations per basic-level category. Almost all natural categories, such as elephants or giraffes, have one or only a few instances with differing componential descriptions. Dogs represent a rare exception for natural categories in that they have been bred to have considerable variation in their descriptions. Categories created by people vary in the number of allowable types, but this number often tends to be greater than the natural categories. Cups, typewriters, and lamps have just a few (in the case of cups) to perhaps 15 or more (in the case of lamps) readily discernible exemplars. Let us assume (liberally) that the mean number of types is 10. This would yield an estimate of 30,000 readily discriminable objects (3,000 categories × 10 types/category)” (p. 127). See also [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Kate Storrs](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Dr.%20Kate%20Storrs,%20June%2011,%202020.pdf): “The question of how many categories humans can recognize is sort of impossible, because the concept of a category is fairly fuzzy, and it isn’t rich enough to capture what human visual recognition involves. For example, you’ve probably seen tens of thousands of chairs over the course of your life. You were able to immediately recognize them as chairs, but you were also able to immediately see a large number of individuating properties. Indeed, one of the great powers of the visual system is that it arrives at a description that is flexible enough that you can then carve it up in whatever ways are behaviorally relevant. Looking at common nouns, and budgeting a certain number of instances of each (maybe 100 or 1000) as individually recognizable, might be one way to put a very rough number on the categories that humans can recognize.” (p. 
4).\n\n\n[534.](https://www.openphilanthropy.org/brain-computation-report#footnoteref534_cai3nb2)Another example might be an image-classification task that involves classifying images into “funny” and “not funny” – a task hardly limited in difficulty by the number of basic objects humans can identify. See [Karpathy (2012)](https://karpathy.github.io/2012/10/22/state-of-computer-vision/) for discussion of all of the complex understanding that goes into appreciating a humorous picture: “the point here is that you’ve used a HUGE amount of information in that half second when you look at the picture and laugh. Information about the 3D structure of the scene, confounding visual elements like mirrors, identities of people, affordances and how people interact with objects, physics (how a particular instrument works, leaning and what that does), people, their tendency to be insecure about weight, you’ve reasoned about the situation from the point of view of the person on the scale, what he is aware of, what his intents are and what information is available to him, and you’ve reasoned about people reasoning about people. You’ve also thought about the dynamics of the scene and made guesses about how the situation will unfold in the next few seconds visually, how it will unfold in the thoughts of people involved, and you reasoned about how likely or unlikely it is for people of particular identity/status to carry out some action. Somehow all these things come together to ‘make sense’ of the scene.”\n\n\n[535.](https://www.openphilanthropy.org/brain-computation-report#footnoteref535_2oprf14)Dr. Dario Amodei suggested this consideration. [Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) treats the retina as receiving 36Gb/s, and outputing 20 Mb/s (p. 749, he cites [Koch et al. (2004)](https://www.cell.com/fulltext/S0960-9822(04)00656-6)).\n\n\n[536.](https://www.openphilanthropy.org/brain-computation-report#footnoteref536_mcnwhx7)See [here](https://machinethink.net/blog/how-fast-is-my-model/): “224×224×3, a typical size for an image classifier.” See [here](https://github.com/albanie/convnet-burden) for some example input sizes.\n\n\n[537.](https://www.openphilanthropy.org/brain-computation-report#footnoteref537_5ri0n2k)[Geirhos et al. (2018)](https://arxiv.org/pdf/1706.06969.pdf): “Here we proposed a fair and psychophysically accurate way of comparing network and human performance on a number of object recognition tasks: measuring categorization accuracy for single-fixation, briefly presented (200 ms) and backward-masked images as a function of colour, contrast, uniform noise, and eidolon-type distortions. We find that DNNs outperform human observers by a significant margin for non-distorted, coloured images—the images the DNNs were specifically trained on… In comparison to human observers, we find the classification performance of three currently well-known DNNs trained on ImageNet—AlexNet, GoogLeNet and VGG-16—to decline rapidly with decreasing signal-to-noise ratio under image degradations like additive noise or eidolon-type distortions” (p. 14-17). See also Figures 2 and 3.\n\n\n[538.](https://www.openphilanthropy.org/brain-computation-report#footnoteref538_hw1jw0b)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. 
Kate Storrs](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Dr.%20Kate%20Storrs,%20June%2011,%202020.pdf): “On the other hand, a lot of our impression of the richness of human vision is illusory. For example, we don’t see crisply, or in color, in the periphery of our visual field. So perhaps biological vision uses its own shortcuts” (p. 2).\n\n\n[539.](https://www.openphilanthropy.org/brain-computation-report#footnoteref539_ab42pii)This is a point suggested by Dr. Dario Amodei. The [Cerebras whitepaper](https://www.cerebras.net/wp-content/uploads/2019/08/Cerebras-Wafer-Scale-Engine-Whitepaper.pdf) suggests that “50 to 98% of your multiplications are wasted” on non-sparse hardware (p. 5).\n\n\n[540.](https://www.openphilanthropy.org/brain-computation-report#footnoteref540_w0c0bqu)[Ravi (2018)](https://ai.googleblog.com/2018/05/custom-on-device-ml-models.html): “For example, on [ImageNet](http://image-net.org/) task, Learn2Compress achieves a model 22× smaller than Inception v3 baseline and 4× smaller than MobileNet v1 baseline with just 4.6-7% drop in accuracy. On [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html), jointly training multiple Learn2Compress models with shared parameters, takes only 10% more time than training a single Learn2Compress large model, but yields 3 compressed models that are upto 94× smaller in size and upto 27× faster with up to 36× lower cost and good prediction quality (90-95% top-1 accuracy).” See also [Frankle and Carbin (2018)](https://arxiv.org/abs/1803.03635): “Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy” (p. 1); and [Lillicrap and Kording (2019)](https://arxiv.org/pdf/1907.06374v1.pdf): “From distillation techniques we know that networks trained on ImageNet, a popular 2012 machine learning benchmark that requires the classification of natural images, cannot readily be compressed to fewer than about 100k free parameters [[13](https://arxiv.org/abs/1503.02531), [20](https://arxiv.org/abs/1804.08838), [32](https://arxiv.org/abs/1604.03058#:~:text=With%20a%20moderate%20size%20network,and%2069.1%25%20on%20binarized%20GoogleNET.)] (though see [[35](https://arxiv.org/abs/1804.05862#:~:text=version%2C%20v3)%5D-,Non%2DVacuous%20Generalization%20Bounds%20at%20the%20ImageNet%20Scale,A%20PAC%2DBayesian%20Compression%20Approach&text=Modern%20neural%20networks%20are%20highly,often%20generalize%20well%20in%20practice.)])” (p. 3). Note also that other models are less efficient than EfficientNet-B2. For example, a ResNet-101 requires ~1e10 FLOPs, and models that both identify and localize objects, that assign the pixels in each image to different objects, or that identify points of interest in a scene, can require more than 1e11 FLOPs per forward pass. See [here](https://github.com/albanie/convnet-burden) for examples..\n\n\n[541.](https://www.openphilanthropy.org/brain-computation-report#footnoteref541_idrdw1c)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Won Mok Shim](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Won%20Mok%20Shim,%20August%207,%202019.pdf): “There is a fair amount of consensus in the field that the human visual system can recognize about ten images per second (e.g., one image per 100 ms). However, this doesn’t mean that it takes 100 ms to recognize an image. 
For example, you might be able to recognize an image shown very briefly (e.g., for less than 100 ms), but without sequences of other images before and afterwards” (p. 3). [Trafton’s (2014)](https://news.mit.edu/2014/in-the-blink-of-an-eye-0116) MIT news article suggests that 10 images per second has been suggested by previous studies. [Potter et al. (2013)](https://link.springer.com/article/10.3758%2Fs13414-013-0605-z), however, suggests that humans can at least do better than chance at images presented for only 13 ms: “The results of both experiments show that conceptual understanding can be achieved when a novel picture is presented as briefly as 13 ms and masked by other pictures” (p. 275, see also further discussion on p. 276); and [Keysers et al. (2001)](https://www.ncbi.nlm.nih.gov/pubmed/11224911) report that “macaque monkeys were presented with continuous rapid serial visual presentation (RSVP) sequences of unrelated naturalistic images at rates of 14–222 msec/image, while neurons that responded selectively to complex patterns (e.g., faces) were recorded in temporal cortex. Stimulus selectivity was preserved for 65% of these neurons even at surprisingly fast presentation rates (14 msec/image or 72 images/sec). Five human subjects were asked to detect or remember images under equivalent conditions. Their performance in both tasks was above chance at all rates (14–111 msec/image)”. That said, “better than chance” is too low a standard. [Potter et al. (2013)](https://link.springer.com/article/10.3758%2Fs13414-013-0605-z) also report that “a picture as brief as 20 ms is easy to see if it is followed by a blank visual field (e.g., [Thorpe, Fize, and Marlot (1996)](https://www.nature.com/articles/381520a0))” (p. 270).\n\n\n[542.](https://www.openphilanthropy.org/brain-computation-report#footnoteref542_0qngcep)[Carandini (2012)](http://www.scholarpedia.org/article/Area_V1#Stimulus_selectivity): “Thanks to high neuronal density and large area, V1 contains a vast number of neurons. In humans, it contains about 140 million neurons per hemisphere (Wandell, 1995), i.e. about 40 V1 neurons per LGN neuron” (from the introduction).\n\n\n[543.](https://www.openphilanthropy.org/brain-computation-report#footnoteref543_zdqzfkq)For example, one recent estimate by [Miller et al. (2014)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4032965/), using better methods, finds 675 million neurons for chimpanzee V1 as a whole. Another – [Collins et al. (2016)](https://www.pnas.org/content/113/3/740) – finds 737 million neurons in just onechimpanzee V1 hemisphere, suggesting ~1.4 billion in V1 as a whole. The human cortex has ~2× the neurons of the chimpanzee cortex, suggesting something like 1-3 billion for human V1. [Mora-Bermúdez et al. (2016)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5110243/#:~:text=The%20human%20brain%20is%20about,the%20same%20region%20in%20chimpanzees.): “The human brain is about three times as big as the brain of our closest living relative, the chimpanzee. Moreover, a part of the brain called the cerebral cortex – which plays a key role in memory, attention, awareness and thought – contains twice as many cells in humans as the same region in chimpanzees.”\n\n\n[544.](https://www.openphilanthropy.org/brain-computation-report#footnoteref544_wszhr3g)Though [Collins et al. 
(2016)](https://www.pnas.org/content/113/3/740) find ~400 million in one hemisphere on chimpanzee V2, suggesting 800 million for chimp V2 as a whole, and 1.6 billion for human V2, if we assume [similar ratios in the cortex](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5110243/#:~:text=The%20human%20brain%20is%20about,the%20same%20region%20in%20chimpanzees.).\n\n\n[545.](https://www.openphilanthropy.org/brain-computation-report#footnoteref545_ti34lk7)The high-end here is more than half of the neurons in the cortex as a whole (~16 billion neurons, according to Azevedo et al. (2016) (p. 536)), which seems high to me, based on [eyeballing pictures of the visual cortex](https://www.getbodysmart.com/the-brain/visual-cortex-areas). That said, neuron density in primate visual cortex appears to be unusually high (see [Collins et al. (2016)](https://www.pnas.org/content/113/3/740): “the packing densities of neurons in V1 were 1.2, 2.1, 3.3, and 3.5 times greater than neuron densities in secondary visual cortex (V2) and somatosensory, motor, and premotor cortices, respectively” (“Visual areas of the cortex”)); numbers in this range do seem to fall out of extrapolation from the chimpanzee data; and ~50% of the cortex is compatible with comments from Prof. Konrad Kording to the effect that ~half of the brain’s hardware is involved in processing vision in some way. From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Konrad%20Kording,%20September%2011,%202019.pdf): “The human brain dedicates roughly half of its hardware to processing vision (this can be seen by looking at diagrams created by David Van Essen). And we can solve a lot of the vision problem (e.g., detecting objects, segmenting scenes, storing information) using very modest compute” (p. 1).\n\n\n[546.](https://www.openphilanthropy.org/brain-computation-report#footnoteref546_8a08rmd)See my discussion of the cerebellum in [Section 2.4.2.3](https://www.openphilanthropy.org/brain-computation-report#DoWeNeedTheWholeBrain). Though note that neuron densities in V1 are especially high. See [Collins et al. (2016)](https://www.pnas.org/content/113/3/740): “the packing densities of neurons in V1 were 1.2, 2.1, 3.3, and 3.5 times greater than neuron densities in secondary visual cortex (V2) and somatosensory, motor, and premotor cortices, respectively” (“Visual areas of the cortex”).\n\n\n[547.](https://www.openphilanthropy.org/brain-computation-report#footnoteref547_urskua0)One could also ask questions like: “how many fewer neurons could this region have/how much less energy could it use, if evolution got to rebuild it from scratch, without needing to do task X, but still needing to do everything else it does?” But these are hard to answer.\n\n\n[548.](https://www.openphilanthropy.org/brain-computation-report#footnoteref548_2kqm5fu)[Drexler (2019)](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf) appears to have something like this in mind: “A key concept in the following will be “immediate neural activity” (INA), an informal measure of potentially task-applicable brain activity.
As a measure of current neural activity potentially applicable to task performance, INA is to be interpreted in an abstract, information-processing sense that conceptually excludes the formation of long-term memories (as discussed below, human and machine learning are currently organized in fundamentally different ways)” (p. 183-184).\n\n\n[549.](https://www.openphilanthropy.org/brain-computation-report#footnoteref549_m937gu1)My thanks to Dr. Eric Drexler for discussion.\n\n\n[550.](https://www.openphilanthropy.org/brain-computation-report#footnoteref550_s6wq7wn)Here’s one loose attempt to estimate (1). Following the data in [Cadena et al. (2019)](https://journals.plos.org/ploscompbiol/article?rev=2&id=10.1371/journal.pcbi.1006897#pcbi-1006897-g008), suppose that for half of the neurons in V1, ~28% of the variance is explained by the visual stimulus, and ~50% of *that* can be explained by networks trained on object recognition. To be conservative, let’s assume that none of the variance in the activity of the other half of V1 neurons is explained by visual stimuli at all. This would suggest that at least 7% of variance in V1 neural activity overall can be explained by such models (here I’m following a version of the methodology in [Olshausen and Field (2005)](http://ling.umd.edu/~ellenlau/courses/nacs642/Olshausen_2005.pdf), who suggest that “If we consider that roughly 40% of the population of neurons in V1 has actually been recorded from and characterized, together with our conjecture that 30% to 40% of the response variance of these neurons can be explained under natural conditions using the currently established models, then we are left to conclude that we can currently account for 12% to 16% of V1 function. Thus, approximately 85% of V1 function has yet to be explained”). Higher estimates could incorporate all the data listed on Brain-Score, which I haven’t tried to interpret, but which appears to suggest a substantial amount of variance explained. From [Schrimpf et al. (2018)](https://www.biorxiv.org/content/10.1101/407007v1.full.pdf): “The best ANN model V4 predictivity score is 0.663, which is below the internal consistency ceiling of these V4 data (0.892). The best ANN model IT predictivity score is 0.604, which is below the internal consistency ceiling of these IT data (0.817). And the best ANN model behavioral predictivity score is 0.378, which is below the internal consistency ceiling of these behavioral data (0.497)” (p. 7). See also [Storrs et al. (2020)](https://www.biorxiv.org/content/10.1101/2020.05.07.082743v1.full.pdf): “We find that trained models significantly outperform untrained models (accounting for 57% more of the explainable variance), suggesting that features representing natural images are important for explaining hIT. Model fitting further improves the alignment of DNN and hIT representations (by 124%), suggesting that the relative prevalence of different features in hIT does not readily emerge from the particular ImageNet object-recognition task used to train the networks” (abstract).\n\n\n[551.](https://www.openphilanthropy.org/brain-computation-report#footnoteref551_n9q9pai)See e.g. [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Kate Storrs](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Dr.%20Kate%20Storrs,%20June%2011,%202020.pdf): “In Dr. Storrs’ area of neuroscience, there can be a narrative to the effect that: “the early visual system is basically done.
We understand the canonical computations: e.g., edge, orientation and color selection. You link them up with local excitation and inhibition, and you have feedback that probably has some kind of predictive function (e.g., you get less and less response from V1 neurons to a predictable stimulus, suggesting that feedback is creating some kind of short-term memory). Once you’ve got all of this, you can explain most of V1 activity.” (This is not necessarily Dr. Storrs’ view; it’s just a summary of a common narrative.)” (p. 3).\n\n\n[552.](https://www.openphilanthropy.org/brain-computation-report#footnoteref552_bbi4mzf)Open Philanthropy’s technical advisor, Dr. Dario Amodei, suggests that V1 might be a helpful point of focus (ImageNet models plausibly cover functions in other parts of the visual cortex, but he suggests that basing estimates on V1 is conservative).\n\n\n[553.](https://www.openphilanthropy.org/brain-computation-report#footnoteref553_g62s4sp)This is a variant on an analogy suggested by Nick Beckstead.\n\n\n[554.](https://www.openphilanthropy.org/brain-computation-report#footnoteref554_adfs930)For example, FLOPs scaling for bigger inputs appears to be roughly linear: see e.g. [here](https://github.com/albanie/convnet-burden/blob/master/reports/resnet-101.md). Dr. Dario Amodei also suggested linear scaling for bigger inputs as a conservative adjustment.\n\n\n[555.](https://www.openphilanthropy.org/brain-computation-report#footnoteref555_5nnim6g)[Kolesnikov et al. (2020)](https://arxiv.org/pdf/1912.11370.pdf): “All of our BiT models use a vanilla ResNet-v2 architecture [[16](https://arxiv.org/abs/1603.05027)], except that we replace all Batch Normalization [[21](https://arxiv.org/abs/1502.03167)] layers with Group Normalization [[60](https://openaccess.thecvf.com/content_ECCV_2018/html/Yuxin_Wu_Group_Normalization_ECCV_2018_paper.html)] and use Weight Standardization [[43](https://arxiv.org/abs/1903.10520)] in all convolutional layers. See Section 4.3 for analysis. We train ResNet-152 architectures in all datasets, with every hidden layer widened by a factor of four (ResNet152×4).” A ResNet-152 is [1e10 FLOPs for a forward pass](https://github.com/albanie/convnet-burden/blob/master/reports/SE-ResNet-152.md), and my understanding is that widening every hidden layer by a factor of four results in a ~16× increase in overall FLOPs, suggesting ~2e11 FLOPs.\n\n\n[556.](https://www.openphilanthropy.org/brain-computation-report#footnoteref556_46p67zy)[Tan et al. (2020)](https://arxiv.org/pdf/1911.09070v6.pdf): “In particular, with single-model and single test-time scale, our EfficientDet-D7 achieves state-of-the-art 53.7 AP with 52M parameters and 325B FLOPs, outperforming previous best detector [44] with 1.5 AP while being 4× smaller and using 13× fewer FLOPs” (p. 2).\n\n\n[557.](https://www.openphilanthropy.org/brain-computation-report#footnoteref557_in5pju0)Others not included in the chart include Kurzweil’s (2012) estimate for “pattern recognition”: “emulating one cycle in a single pattern recognizer in the biological brain’s neocortex would require about 3,000 calculations. Most simulations run at a fraction of this estimate. With the brain running at about 10^2 (100) cycles per second, that comes to 3 × 10^5 (300,000) calculations per second per pattern recognizer. Using my estimate of 3 × 10^8 (300 million) pattern recognizers, we get about 10^14 (100 trillion) calculations per second” (p. 195).
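As a quick check of the arithmetic in this quote, here is a minimal sketch that simply multiplies the three numbers Kurzweil cites; the variable names are mine, for illustration only:

```python
# Minimal check of Kurzweil's (2012) arithmetic as quoted above.
# The three inputs are taken directly from the quote; the variable
# names are illustrative, not Kurzweil's own terminology.
calcs_per_cycle_per_recognizer = 3e3   # "about 3,000 calculations" per cycle
cycles_per_second = 1e2                # "about 10^2 (100) cycles per second"
num_pattern_recognizers = 3e8          # "3 x 10^8 (300 million) pattern recognizers"

calcs_per_second_per_recognizer = calcs_per_cycle_per_recognizer * cycles_per_second
total_calcs_per_second = calcs_per_second_per_recognizer * num_pattern_recognizers

print(f"{calcs_per_second_per_recognizer:.0e} calculations/second per recognizer")  # 3e+05
print(f"{total_calcs_per_second:.0e} calculations/second overall")                  # 9e+13, i.e. ~1e14
```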
[Kurzweil (2005)](https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook/dp/B000QCSA7C) also suggests that “Yet another estimate comes from a simulation at the University of Texas that represents the functionality of a cerebellum region containing 10^4 neurons; this required about 10^8 cps, or about 10^4 cps per neuron. Extrapolating this over an estimated 10^11 neurons results in a figure of about 10^15 cps for the entire brain” (p. 123).\n\n\n[558.](https://www.openphilanthropy.org/brain-computation-report#footnoteref558_ddodguw)[Drexler (2019)](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf): “Baidu’s Deep Speech 2 system can approach or exceed human accuracy in recognizing and transcribing spoken English and Mandarin, and would require approximately 1 GFLOP/s per real-time speech stream (Amodei et al. 2015). For this roughly human-level throughput, f_PFLOP = 10^−6 [f_PFLOP is the fraction of a petaFLOP that a given number of FLOPs represents]. Turning to neural function again, consider that task-relevant auditory/semantic cortex probably comprises >1% of the human brain. If the equivalent of the Deep Speech 2 speech-recognition task were to require 10% of that cortex, then f_INA = 10^−3, and R_PFLOP = 1000 [R_PFLOP is the ratio of the fraction of the brain’s activity that a task represents, to the fraction of a petaFLOP that the compute to perform that task represents]” (p. 187). Dr. Dario Amodei also suggested an estimate in this vein.\n\n\n[559.](https://www.openphilanthropy.org/brain-computation-report#footnoteref559_fmwa1ai)[Drexler (2019)](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf): “Google’s neural machine translation (NMT) systems have reportedly approached human quality (Wu et al. 2016). A multi-lingual version of the Google NMT model (which operates with the same resources) bridges language pairs through a seemingly language-independent representation of sentence meaning (Johnson et al. 2016), suggesting substantial (though unquantifiable) semantic depth in the intermediate processing. Performing translation at a human-like rate of one sentence per second would require approximately 100 GFLOP/s, and f_PFLOP = 10^−4. It is plausible that (to the extent that such things can be distinguished) human beings mobilize as much as 1% of global INA at an “NMT task-level”— involving vocabulary, syntax, and idiom, but not broader understanding— when performing language translation. If so, then for “NMT-equivalent translation,” we can propose f_INA = 10^−2, implying R_PFLOP = 100” (p. 187-188).\n\n\n[560.](https://www.openphilanthropy.org/brain-computation-report#footnoteref560_d6p7272)[Kurzweil (2005)](https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook/dp/B000QCSA7C): “Another estimate comes from the work of Lloyd Watts and his colleagues on creating functional simulations of regions of the human auditory system, which I discuss further in chapter 4. One of the functions of the software Watts has developed is a task called “stream separation,” which is used in teleconferencing and other applications to achieve telepresence (the localization of each participant in a remote audio teleconference). To accomplish this, Watts explains, means ‘precisely measuring the time delay between sound sensors that are separated in space and that both receive the sound.’ The process involves pitch analysis, spatial position, and speech cues, including language-specific cues.
‘One of the important cues used by humans for localizing the position of a sound source is the Interaural Time Difference (ITD), that is, the difference in time of arrival of sounds at the two ears.’ Watts’s own group has created functionally equivalent re-creations of these brain regions derived from reverse engineering. He estimates that 10^11 cps are required to achieve human-level localization of sounds. The auditory cortex regions responsible for this processing comprise at least 0.1 percent of the brain’s neurons. So we again arrive at a ballpark estimate of around 10^14 cps (10^11 cps × 10^3)” (p. 123).\n\n\n[561.](https://www.openphilanthropy.org/brain-computation-report#footnoteref561_yuo70im)[Kell et al. (2018)](http://mcdermottlab.mit.edu/papers/Kell_etal_2018_DNN_auditory_cortex.pdf): “…we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy—primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems” (p. 630).\n\n\n[562.](https://www.openphilanthropy.org/brain-computation-report#footnoteref562_i5lm81p)[Banino et al. (2018)](https://www.nature.com/articles/s41586-018-0102-6): “Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space[7](https://www.jneurosci.org/content/28/27/6858),[8](https://www.mitpressjournals.org/doi/abs/10.1162/NECO_a_00319) and is critical for integrating self-motion (path integration)[6](https://www.nature.com/articles/nature03721),[7](https://www.jneurosci.org/content/28/27/6858),[9](https://www.nature.com/articles/nrn1932) and planning direct trajectories to goals (vector-based navigation)[7](https://www.jneurosci.org/content/28/27/6858),[10](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1460-9568.2012.08015.x),[11](https://www.cell.com/neuron/fulltext/S0896-6273(15)00628-5?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0896627315006285%3Fshowall%3Dtrue). Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities… Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation [7](https://www.jneurosci.org/content/28/27/6858),[10](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1460-9568.2012.08015.x),[11](https://www.cell.com/neuron/fulltext/S0896-6273(15)00628-5?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0896627315006285%3Fshowall%3Dtrue), demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments” (abstract).
[Cueva and Wei (2018)](https://arxiv.org/pdf/1803.07770.pdf): “we trained recurrent neural networks (RNNs) to perform navigation tasks in 2D arenas based on velocity inputs. Surprisingly, we find that grid-like spatial response patterns emerge in trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells. All these different functional types of neurons have been observed experimentally. The order of the emergence of grid-like and border cells is also consistent with observations from developmental studies. Together, our results suggest that grid cells, border cells and others as observed in EC may be a natural solution for representing space efficiently given the predominant recurrent connections in the neural circuits” (p. 1).\n\n\n[563.](https://www.openphilanthropy.org/brain-computation-report#footnoteref563_elckyzk)[Merel et al. (2020)](https://openreview.net/forum?id=SyxrxR4KPS): “In this work we develop a virtual rodent that learns to flexibly apply a broad motor repertoire, including righting, running, leaping and rearing, to solve multiple tasks in a simulated world. We analyze the artificial neural mechanisms underlying the virtual rodent’s motor capabilities using a neuroethological approach, where we characterize neural activity patterns relative to the rodent’s behavior and goals. We show that the rodent solves tasks by using a shared set of force patterns that are orchestrated into task-specific behaviors over longer timescales. Through methods familiar to neuroscientists, including representational similarity analysis, dimensionality reduction techniques, and targeted perturbations, we show that the networks produce these behaviors using at least two classes of behavioral representations, one that explicitly encodes behavioral kinematics in a task-invariant manner, and a second that encodes task-specific behavioral strategies. Overall, the virtual rat promises to facilitate grounded collaborations between deep reinforcement learning and motor neuroscience” (p. 1).\n\n\n[564.](https://www.openphilanthropy.org/brain-computation-report#footnoteref564_ozsru9o)[Lloyd (2000)](https://arxiv.org/pdf/quant-ph/9908043.pdf): “The amount of information that can be stored by the ultimate laptop, ≈ 10^31 bits, is much higher than the ≈ 10^10 bits stored on current laptops. This is because conventional laptops use many degrees of freedom to store a bit where the ultimate laptop uses just one. There are considerable advantages to using many degrees of freedom to store information, stability and controllability being perhaps the most important. Indeed, as the above calculation indicates, in order to take full advantage of the memory space available, the ultimate laptop must turn all its matter into energy. A typical state of the ultimate laptop’s memory looks like a plasma at a billion degrees Kelvin: the laptop’s memory looks like a thermonuclear explosion or a little piece of the Big Bang! Clearly, packaging issues alone make it unlikely that this limit can be obtained, even setting aside the difficulties of stability and control” (p. 11).\n\n\n[565.](https://www.openphilanthropy.org/brain-computation-report#footnoteref565_l5f0pu7)See calculations in [Section 4.2](https://www.openphilanthropy.org/brain-computation-report#FromBitErasuresToFlops).\n\n\n[566.](https://www.openphilanthropy.org/brain-computation-report#footnoteref566_o5iyyua)My thanks to Prof.
David Wallace for discussion.\n\n\n[567.](https://www.openphilanthropy.org/brain-computation-report#footnoteref567_wlq2xsb)My thanks to Prof. David Wallace for suggesting this example.\n\n\n[568.](https://www.openphilanthropy.org/brain-computation-report#footnoteref568_lil196i)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “The algorithmic overhead involved in reversible computing (specifically, the overhead involved in un-computing what you have already computed) is not that bad. Most of the difficulty lies in designing such efficient hardware. Partly for this reason, Dr. Christiano does not think that you can get an upper bound on the FLOP/s required to do what the brain does, purely by appealing to the energy required to erase bits. We believe that you can perform extremely complex computations with almost no bit erasures using good enough hardware” (p. 4). For discussion of some ongoing controversy related to the bit-erasures involved in reading/writing inputs and outputs repeatedly, see [Wolpert (2019)](https://arxiv.org/pdf/1901.00386.pdf), [Open Philanthropy’s non-verbatim notes from a conversation with Prof. David Wolpert](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20David%20Wolpert,%20January%2023,%202020.pdf) (p. 2), and [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf) (p. 5).\n\n\n[569.](https://www.openphilanthropy.org/brain-computation-report#footnoteref569_s5x6y45)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Michael Frank](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Dr.%20Michael%20Frank,%20January%2022,%202020.docx.pdf) (p. 2):\n\n\nDr. Frank thinks that it is possible that there are processes in the brain that are close to thermodynamically reversible, and that play a role in computation. We don’t know enough about the brain to answer confidently either way…We don’t have positive evidence that such reversible effects exist and are important to cognition, but we also don’t have positive evidence that rules this out. However, Dr. Frank thinks that it’s a reasonable first-order assumption to assume that those effects, if they exist, would only have a small, second-order effect on the amount of computational work required to simulate the system. If these effects are there, they may be fairly subtle and gradual, acting in a long-term way on the brain, in a manner we are not close to understanding…Overall, Dr. Frank would lean weakly towards the view that you could make a digital model of cognition without including any subtle reversible processes, but because he is not an expert on the neural computation, he would not bet confidently one way or another.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Stephen Larson](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Stephen%20Larson,%202019-2020.pdf) (p. 4):\n\n\nDr. Larson is not persuaded that Landauer’s limit can be used to upper-bound the FLOP/s necessary to replicate the brain’s task-performance, as it seems possible to him that there could be computational processes occurring in the brain that do not require bit-erasures.\n\n\nProf. 
David Wallace was also skeptical that Landauer’s principle could be used to generate an informative upper bound on required FLOP/s.\n\n\n[570.](https://www.openphilanthropy.org/brain-computation-report#footnoteref570_eo2siag)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Jared Kaplan](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Jared%20Kaplan,%20January%2023,%202020.pdf) (p. 2):\n\n\nMr. Carlsmith asked Prof. Kaplan’s opinion of the following type of upper bound on the compute required to replicate the brain’s task-performance. According to Landauer’s principle, the brain, given its energy budget (~20 W), can be performing no more than ~1e22 bit-erasures per second. And if the brain is performing less than 1e22 bit-erasures per second, the number of FLOP/s required to replicate its task-performance is unlikely to exceed 1e22. Prof. Kaplan thinks that this type of calculation provides a very reasonable loose upper bound on the computation performed by the brain, and that the actual amount of computation performed by the brain is almost certainly many orders of magnitude below this bound. Indeed, he thinks the true number is so obviously much lower than this that Landauer’s principle does not initially seem particularly germane to questions about brain computation. One analogy might be attempting to upper bound the number of fraudulent votes in a US presidential election via the total population of the world. However, he thinks that upper bounds based on Landauer’s principle are a helpful counter to views on which ‘we really just don’t know’ how much computation the brain performs, or on which doing what the brain does requires the type of compute that would be implicated by very detailed biophysical simulations.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf) (p. 2-3):\n\n\nDr. Riedel is very convinced by the claim that because of Landauer’s principle, the brain can be implementing no more than ~1e22 bit-erasures per second. And he also thinks it very reasonable to infer from this that the brain’s task performance can be replicated using less than 1e22 FLOP/s, conditional on the assumption that the brain’s computation is well-characterized as digital and/or analog computation that can be simulated on a digital computer with modest overhead (he assigns some small probability to this assumption being false, though he would find its falsehood fairly shocking). Indeed, Dr. Riedel expects the amount of computation performed by the brain to be much lower than the upper bound implied by Landauer’s principle. This is partly because, from a basic physics perspective, the vast majority of what’s going on in the brain (e.g., cell maintenance, other thermodynamic processes inside cells) generates entropy but has nothing to do with the computations that are happening.
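As a rough sanity-check of the ~1e22 figure referenced in these notes, here is a minimal sketch of the underlying back-of-the-envelope calculation, assuming a ~20 W power budget and a temperature of ~310 K (roughly body temperature); the output should be read as an order of magnitude only:

```python
import math

# Back-of-the-envelope Landauer bound on the brain's bit erasures, as
# discussed above. Assumes a ~20 W power budget and T ~ 310 K (roughly
# body temperature); read the result as an order of magnitude only.
k_B = 1.380649e-23          # Boltzmann's constant, J/K
T = 310.0                   # temperature in kelvin (~body temperature)
power_budget = 20.0         # watts available to the brain, per the estimate above

energy_per_erasure = k_B * T * math.log(2)          # Landauer's minimum, ~3e-21 J per bit erased
max_erasures_per_second = power_budget / energy_per_erasure

print(f"Minimum energy per bit erasure: {energy_per_erasure:.1e} J")    # ~3.0e-21 J
print(f"Upper bound on bit erasures/s:  {max_erasures_per_second:.1e}") # ~6.7e+21, i.e. ~1e22
```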
From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf) (p. 5):\n\n\nDr. Christiano expects that experts in physics, chemistry, and computer engineering would generally think it extremely unlikely that the brain is erasing less than one bit per computationally useful FLOP it performs. If the brain were doing this, Dr. Christiano believes that this would mean that the brain is qualitatively much more impressive than any other biological machinery we are aware of…Dr. Christiano would be extremely surprised if the brain got more computational mileage out of a single ATP than human engineers can get out of a FLOP, and he would be very willing to bet that it takes at least 10 ATPs to get the equivalent of a FLOP. Mr. Carlsmith estimates that the brain can be using no more than ~1e20 ATPs/second. If this estimate is right, then Dr. Christiano is very confident that you do not need more than 1e20 FLOP/s to replicate the brain’s task-performance.\n\n\nSee also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. David Wolpert](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20David%20Wolpert,%20January%2023,%202020.pdf) (p. 3-4) for more discussion, though with less of an obvious upshot:\n\n\nMr. Carlsmith asked Prof. Wolpert whether one can use Landauer’s principle to upper bound the FLOP/s required to replicate the human brain’s task-performance… In Prof. Wolpert’s view, it is a subtle and interesting question how to do this type of calculation correctly. A rigorous version would require a large research project… Prof. Wolpert thinks that this calculation is legitimate as a first-pass, back-of-the-envelope upper bound on the bit-erasures that the brain could be implementing. It couldn’t get published in a physics journal, but it might get published in a popular science journal, and it helps get the conversation started. At a minimum, it’s a strong concern that advocates of extreme amounts of computational complexity in the brain (for example, advocates of the view that you need much more than 1e22 FLOP/s to replicate the brain’s computation) would need to address.\n\n\n[571.](https://www.openphilanthropy.org/brain-computation-report#footnoteref571_8uht8yk)This deference is not merely the result of tallying up the amount of expert support for different perspectives: it incorporates many more subjective factors involved in my evaluation of the overall evidence provided by the expert opinions I was exposed to.\n\n\n[572.](https://www.openphilanthropy.org/brain-computation-report#footnoteref572_thl6qyw)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Konrad Kording](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Konrad%20Kording,%20September%2011,%202019.pdf): “Examination of neurons reveals that they are actually very non-linear, and the computations involved in plasticity probably include a large number of factors distributed across the cell. In this sense, a neuron might be equivalent to a three-layer neural network, internally trained using backpropagation. In that case, you’d need to add another factor of roughly 10^5 to your compute estimate, for a total of 10^20 multiplications per second. This would be much less manageable. … The difference between the estimates generated by these different approaches is very large – something like ten orders of magnitude. It’s unclear where the brain is on that spectrum” (p. 2). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eric Jonas](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Eric%20Jonas,%20September%2017,%202019.pdf): “Attempting to estimate the compute sufficient to replicate the brain’s task performance is an extremely challenging project.
It’s worthwhile (indeed, it’s a common thought experiment amongst neuroscientists), but the error bars will be huge (e.g., something like ten orders of magnitude) … Active dendritic computation could conceivably imply something like 1-5 orders of magnitude more compute than a simple linear summation model of a neuron” (p. 3). If a simple linear summation model implies ~1e13-1e15 FLOP/s – e.g., ~1 FLOP per spike through synapse – this would suggest a range of 1e13-1e20 FLOP/s. From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Erik De Schutter](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Erik%20De%20Schutter,%20September%2017,%202019%20%20.pdf): “Prof. De Schutter thinks that at this point, we simply are not in a position to place any limits on the level of biological detail that might be relevant to replicating the brain’s task-performance” (p. 1). [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) (p. 13), report that in an informal poll of attendees at a conference about the required level of resolution for whole brain emulation, the consensus appeared to be one of the following three levels: “Spiking neural network,” which Sandberg and Bostrom estimate would require 1e18 FLOP/s; “Electrophysiology,” which Sandberg and Bostrom estimate would require 1e22 FLOP/s; and “Metabolome,” which Sandberg and Bostrom estimate would require 1e25 FLOP/s; Henry Markram, in a [2018 video (18:28)](https://youtu.be/DvE-nphgswY?t=1112), estimates the FLOP/s burdens of running a “real-time molecular simulation of the human-brain” at 4e29 FLOP/s (and see [here](https://www.nextbigfuture.com/2009/11/henry-markram-calls-ibm-cat-scale-brain.html) for some arguments in which he seems to suggest that levels of detail in this vein are central to counting as a simulation of the brain); and [Bell (1999)](https://redwood.berkeley.edu/wp-content/uploads/2018/08/bell-levels-loops.pdf) appears to suggest that we cannot be confident that even a molecular level simulation of the brain would be adequate (p. 2018).\n\n\n[573.](https://www.openphilanthropy.org/brain-computation-report#footnoteref573_os59mr5)I’ve mostly relied on [Frank (2018)](https://arxiv.org/pdf/1901.10327.pdf), [Sagawa (2014)](https://arxiv.org/pdf/1311.1886.pdf), [Wolpert (2019)](https://iopscience.iop.org/article/10.1088/1751-8121/ab0850/pdf), and [Wolpert (2019a)](https://arxiv.org/pdf/1905.05669.pdf) for my understanding of the principle, together (centrally) with discussion with experts. [Feynman (1996)](https://www.amazon.com/Feynman-Lectures-Computation-Frontiers-Physics/dp/0738202967), Chapter 5, also contains a fairly accessible introduction. See [Landauer (1961)](http://worrydream.com/refs/Landauer%20-%20Irreversibility%20and%20Heat%20Generation%20in%20the%20Computing%20Process.pdf) for the original statement of the argument: “It is argued that computing machines inevitably involve devices which perform logical functions that do not have a single-valued inverse. This logical irreversibility is associated with physical irreversibility and requires a minimal heat generation, per machine cycle, typically of the order of *k*T for each irreversible function. This dissipation serves the purpose of standardizing signals and making them independent of their exact logical history” (p. 
183).\n\n\n[574.](https://www.openphilanthropy.org/brain-computation-report#footnoteref574_blzf1pz)Here I am following [Frank (2018)](https://arxiv.org/pdf/1901.10327.pdf): “Let there be a countable (usually finite) set C = {ci} of distinct entities ci called computational states. Then a general definition of a (possibly stochastic) (computational) operation O is a function O : C → P(C), where P(C) denotes the set of probability distributions over C. That is, O(ci) for any given ci ∈ C is some corresponding probability distribution Pi : C → [0, 1]. The intent of this definition is that, when applied to an initial computational state ci, the computational operation transforms it into a final computational state ci, but in general, this process could be stochastic, meaning that, for whatever reason, having complete knowledge of the initial state does not imply having complete knowledge of the final state” (p. 11). See [Maroney (2005)](https://arxiv.org/abs/physics/0406137) for more discussion of stochastic computation in the context of Landauer’s principle.\n\n\n[575.](https://www.openphilanthropy.org/brain-computation-report#footnoteref575_0q78bdq)[Schroeder (2000)](https://www.amazon.com/Introduction-Thermal-Physics-Daniel-Schroeder/dp/0201380277): “Entropy is just the logarithm of the number of ways of arranging things in the system (times Boltzmann’s constant)” (p. 75). See also Wikipedia on [Boltzmann’s principle](https://en.wikipedia.org/wiki/Entropy_(statistical_thermodynamics)#Boltzmann's_principle). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Jared Kaplan](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Jared%20Kaplan,%20January%2023,%202020.pdf): “Landauer’s principle states that erasing a bit of information requires a minimum energy expenditure – specifically, *k*T ln2, where *k* is Boltzmann’s constant, and T is the absolute temperature. This principle is grounded in the relationship between entropy and energy – the same relationship that grounds the fact that heat doesn’t flow from cold things to hot things, and the fact that you can’t create a perpetual motion machine or an arbitrarily efficient engine. For physicists, entropy is the logarithm of the number of accessible states. When a system changes, either this entropy stays the same, or it increases…” (p. 1).\n\n\n[576.](https://www.openphilanthropy.org/brain-computation-report#footnoteref576_032sado)I am using the term “logical bit-erasures” to quantify logical entropy drops of the kind to which Landauer’s principle, as I understand it, is relevant, even in a stochastic context. Discussions of Landauer’s principle sometimes assume a deterministic context, in which the relationship between decreases in logical entropy and logical irreversibility (e.g., the inability to reconstruct inputs on the basis of outputs) is more straightforward (e.g., logically irreversible operations necessarily decrease logical entropy). Stochastic contexts introduce more complexities (see e.g. [Frank (2018)](https://arxiv.org/pdf/1901.10327.pdf) and [Maroney (2018)](https://arxiv.org/abs/physics/0406137) for some discussion), but as I understand it, the basic fact that decreasing logical entropy implicates Landauer costs remains unaltered. See also [Kempes et al. (2017)](https://arxiv.org/pdf/1706.05043.pdf), who use a similar way of measuring Landauer costs in articulating what they call the “generalized Landauer bound” (p. 
7), e.g.: “to focus on the specifically computation-based thermodynamic cost of a process, suppose that at any given time t all states x have the same energy. It is now known that in this situation the minimal work required to transform a distribution P0(x) at time 0 to a distribution P1(x) at time 1 is exactly *k*T[S(P0) − S(P1)] where S(.) is Shannon entropy and x lives in a countable space X” (p. 6). See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. David Wolpert](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20David%20Wolpert,%20January%2023,%202020.pdf): “The generalized Landauer bound tells you the energy costs of performing a computation in a thermodynamically reversible way – energy that you could in principle get back. In particular: if you’re connected to a single heat bath, then regardless of whether your computation is deterministic or noisy, the generalized Landauer’s bound says that the minimum free energy you need to expend (assuming you perform the computation in a thermodynamically reversible way) is *k*T multiplied by the drop in the entropy. The total energy costs of a computation will then be the Landauer cost, plus the extra energy dissipated via the thermodynamically irreversible aspects of the physical process. This extra energy cannot be recovered” (p. 2).\n\n\n[577.](https://www.openphilanthropy.org/brain-computation-report#footnoteref577_r3a8l67)My (non-expert) understanding is that one way to loosely and informally express the basic idea here (without attempting to actually justify it technically) is that because the computer and the environment are assumed to be independent (at least with respect to the types of correlations we will realistically be able to keep track of), total entropy (call this *S*tot) is simply the entropy of the computer (*S*comp) plus the entropy of the environment (*S*env). And because the logical states are simply sets of computer microstates, the overall entropy of the computer (*S*comp) is just the logical entropy (*S*log), plus the entropy of the computer [conditioned on](https://en.wikipedia.org/wiki/Conditional_entropy) the logical state (call this *S*comp | log). So *S*tot = *S*log + *S*comp | log + *S*env. This means that according to the second law, if *S*log goes down, then *S*comp | log and/or *S*env have to go up by an amount sufficient to render the total change in entropy non-negative (see [Sagawa (2014)](https://arxiv.org/pdf/1311.1886.pdf) (p. 15-17), for a more formal description of this basic framework. See also [Frank (2018)](https://arxiv.org/pdf/1901.10327.pdf), section 3.2, and especially p. 19; as well as his verbal description in [this lecture](https://youtu.be/IQZ_bQbxSXk?t=1304) (21:44)). And because the brain is a finite system with a finite capacity to absorb entropy, increasing *S*comp | log can only go so far if your computer is continuously processing. Eventually, if *S*log goes down, *S*env must go up by a corresponding amount (see [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “A system like a brain or a computer contains non-information-bearing degrees of freedom that can absorb a finite amount of entropy. However, because the brain/computer is continuously processing and using energy, you can’t keep dumping entropy into those degrees of freedom indefinitely. 
Eventually, you need to start pushing entropy into the environment. If we assume that the states of the computer and the environment are not correlated (or at least, not in a way that we can realistically keep track of), then the total entropy will be the entropy of the computer plus the entropy of the environment. If the entropy of the computer goes down, the entropy of the environment must go up” (p. 2)).\n\n\n
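In symbols, the decomposition just described can be written compactly as follows (my own restatement of the argument above, using the same symbols as the prose; not a quotation from the sources):

```latex
% Compact restatement of the entropy decomposition sketched above (my own summary).
S_{\mathrm{tot}} = S_{\mathrm{log}} + S_{\mathrm{comp|log}} + S_{\mathrm{env}},
\qquad
\Delta S_{\mathrm{tot}} \ge 0
\;\Longrightarrow\;
-\,\Delta S_{\mathrm{log}} \le \Delta S_{\mathrm{comp|log}} + \Delta S_{\mathrm{env}}.
```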
[578.](https://www.openphilanthropy.org/brain-computation-report#footnoteref578_zn11zo7)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “In certain rare environments, you can decrease entropy by paying costs in conserved quantities other than energy (for example, you can pay costs in angular momentum). But this is not relevant in the context of the brain.” See [Vaccaro and Barnett (2011)](https://arxiv.org/pdf/1004.5330.pdf) for more discussion.\n\n\n[579.](https://www.openphilanthropy.org/brain-computation-report#footnoteref579_p6uolql)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “Landauer’s principle follows almost trivially from basic principles of thermodynamics. Indeed, it can be understood simply as a rewriting of the definition of temperature. At a fundamental level, temperature is defined via the change in energy per unit change in entropy (up to a proportionality constant, Boltzmann’s constant). The practical and folk definitions of temperature, which focus on the amount of energy in a system (e.g., the kinetic energy of vibrating atoms), can be recovered from this more fundamental definition in all but a small number of exceptional cases. As the energy in a non-exceptional system increases, the number of states it can be in (and hence its maximum possible entropy) increases as well. If you have a system with a certain amount of energy, and you want to decrease its entropy, you need to put that entropy somewhere else, because total entropy is non-decreasing. Temperature gives us the exchange rate between energy and entropy. If you want to put some unit of entropy into a heat bath, you have to pay an energy cost, and the temperature of the bath is that cost” (p. 2). See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Jared Kaplan](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Jared%20Kaplan,%20January%2023,%202020.pdf): “Almost all fixed systems have more accessible states as the energy goes up. Temperature just is how the energy changes as the entropy changes (textbooks will often state this as: the reciprocal of the temperature is the derivative of the entropy with respect to the energy). As an intuitive example: if your system (e.g., a set of gas molecules) has no energy at all, then all your molecules are just lying on the floor. As you add energy, they can bounce around, and there are many more configurations they can be in. The energy of a single moving particle is another example. Its kinetic energy is ½ × mass × velocity^2. The velocity is a vector, which in a three dimensional space will live on some sphere. As you make the energy bigger, the surface area of this sphere increases. This corresponds to a larger number of accessible states (at the quantum mechanical level, these states are discrete, so you can literally count them)” (p. 1-2).\n\n\n[580.](https://www.openphilanthropy.org/brain-computation-report#footnoteref580_nu4mnu7)[Schroeder (2000)](https://www.amazon.com/Introduction-Thermal-Physics-Daniel-Schroeder/dp/0201380277): “The temperature of a system is the reciprocal of the slope of its entropy vs. energy graph. The partial derivative is to be taken with the system’s volume and number of particles held fixed; more explicitly: 1/T = (∂S/∂U)N,V (3.5). From now on I will take equation 3.5 to be the definition of temperature. You may be wondering why I do not turn the derivative upside down, and write equation 3.5 as T = (∂U/∂S)N,V (3.6). The answer is that there is nothing wrong with this, but it’s less convenient in practice, because rarely do you ever have a formula for energy in terms of entropy” (p. 88). See also [Jared Kaplan’s notes on Statistical Mechanics & Thermodynamics](https://sites.krieger.jhu.edu/jared-kaplan/files/2018/11/StatisticalMechanicsNotes.pdf), p. 24; [Wikipedia](https://en.wikipedia.org/wiki/Thermodynamic_temperature#Definition_of_thermodynamic_temperature), “Definition of thermodynamic temperature”; and the quotes in the previous endnote.\n\n\n
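A minimal derivation sketch tying this definition of temperature to the per-bit energy figure used in the endnotes below (my own gloss, not a quotation or calculation from the sources):

```latex
% From the definition of temperature to the per-bit Landauer cost (my own sketch).
\frac{1}{T} \equiv \left(\frac{\partial S}{\partial U}\right)_{N,V}
\;\Longrightarrow\;
\Delta U_{\min} = T\,\Delta S,
\qquad
\Delta S_{\mathrm{bit}} = k_{B}\ln 2
\;\Longrightarrow\;
E_{\min} = k_{B} T \ln 2 \approx 3\times10^{-21}\ \mathrm{J}\quad (T \approx 310\ \mathrm{K}).
```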
[581.](https://www.openphilanthropy.org/brain-computation-report#footnoteref581_oyk7ql4)See [Bennett (2003)](https://www.cs.princeton.edu/courses/archive/fall06/cos576/papers/bennett03.pdf), section 2 (“Objections to Landauer’s principle”), for a description of the various objections, together with his replies (p. 502-508). Some aspects of the controversy, such as whether Landauer’s principle can exorcise Maxwell’s Demon without first assuming the second law (see e.g. [Earman and Norton (1998)](https://www.sciencedirect.com/science/article/pii/S1355219898000239) and [Norton (2004)](http://philsci-archive.pitt.edu/1729/2/Norton.pdf)) are not relevant for our purposes, as assuming the truth of the second law is not a dialectical problem in this context.\n\n\nThe objection that logical irreversibility does not imply thermodynamic irreversibility (see e.g. [Maroney (2018)](https://arxiv.org/abs/physics/0406137)) might seem to have more force, as Landauer’s principle is indeed often understood as claiming or implying the contrary (see [Maroney (2018)](https://arxiv.org/abs/physics/0406137) for description of these interpretations; see also [Bub (2002)](https://arxiv.org/pdf/quant-ph/0203017.pdf) (p. 10):\n\n\na logically irreversible operation must be implemented by a physically irreversible device, which dissipates heat into the environment.\n\n\nMy own impression, from [Open Philanthropy’s non-verbatim notes from a conversation with Prof. David Wolpert](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20David%20Wolpert,%20January%2023,%202020.pdf) and from [Sagawa (2014)](https://arxiv.org/pdf/1311.1886.pdf), is that this objection, applied to interpretations of Landauer’s principle inconsistent with it, is in fact correct, but that it does not alter the fact that bit-erasure requires transferring energy to the environment – it merely notes that such a transfer can, in principle, be performed in a thermodynamically reversible way. See e.g. [Kempes et al. (2017)](https://arxiv.org/pdf/1706.05043.pdf) (p. 6-7); [Wolpert (2019a)](https://arxiv.org/pdf/1905.05669.pdf) (p. 3); [Sagawa (2014)](https://arxiv.org/pdf/1311.1886.pdf) (p. 12):\n\n\nThe logically irreversible erasure can be performed in a thermodynamically reversible manner in the quasi-static limit.\n\n\nSee also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. David Wolpert](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20David%20Wolpert,%20January%2023,%202020.pdf) (p. 2). [Maroney (2018)](https://arxiv.org/abs/physics/0406137), after arguing that “logical reversibility neither implies, nor is implied by, thermodynamic reversibility” (p. 1), nevertheless acknowledges on page 14 that:\n\n\nThis does not contradict [Landauer (1961)](http://worrydream.com/refs/Landauer%20-%20Irreversibility%20and%20Heat%20Generation%20in%20the%20Computing%20Process.pdf) in the least. All that Landauer can be said to have shown was that a resetting operation required a generation of heat in the environment. However, a confusion then appears to arise through the incorrect use of the term ‘dissipation’. In [Landauer (1961)](http://worrydream.com/refs/Landauer%20-%20Irreversibility%20and%20Heat%20Generation%20in%20the%20Computing%20Process.pdf) and in much of the surrounding literature ‘dissipation’ is used more or less interchangeably with ‘heat generation’. Strictly, dissipation should be used only when the conversion of work to heat arises through dissipative forces (such as those involving friction) which are thermodynamically irreversible. Forces which are thermodynamically reversible are non-dissipative.\n\n\nThat said, I have not attempted to evaluate this debate in detail, and I try, in the section, to remain neutral about it where possible (for example, I try to avoid the suggestion that bit erasure requires *dissipating* energy, as opposed to simply transferring it, though I don’t think I will have entirely avoided controversy: see e.g. [Frank (2018)](https://arxiv.org/pdf/1901.10327.pdf) (p. 1), who argues that:\n\n\nLandauer’s Principle is not about general entropy transfers; rather, it more specifically concerns the ejection of (all or part of) some correlated information from a controlled, digital form (e.g., a computed bit) to an uncontrolled, non-computational form, i.e., as part of a thermal environment.\n\n\nI’m aware of at least one empirical result that presents itself as in tension with some versions of Landauer’s principle: [López-Suárez et al. (2016)](https://www.nature.com/articles/ncomms12068) (though [Kish (2016)](https://arxiv.org/pdf/1606.09493.pdf) (p. 1) suggests that their argument:\n\n\nneglects the dominant source of energy dissipation, namely, the charging energy of the capacitance of the input electrode, which totally dissipates during the full (0-1-0) cycle of logic values.\n\n\n[López-Suárez et al. (2016)](https://www.nature.com/articles/ncomms12068) (p. 3) also note that:\n\n\nWe stress here that our experiment does not question the so-called Landauer-reset interpretation, where a net decrease of physical entropy requires a minimum energy expenditure. What we have here is a logically irreversible computation, that is a generic process where a decrease in the amount of information between the output and the input is realized with an arbitrarily small energy dissipation; this shows that logical reversibility and physical reversibility have to be treated on independent bases.\n\n\n[Frank (2018)](https://arxiv.org/pdf/1901.10327.pdf) (p. 
36-37) claims that:\n\n\nthe only experiments that have claimed to demonstrate violations of Landauer’s limit have been ones in which the experimenters misunderstood some basic aspect of the Principle, such as the need to properly generalize the definition of logical reversibility, which was the subject of [[11](https://link.springer.com/book/10.1007%2F978-3-319-59936-6), [12](https://cfwebprod.sandia.gov/cfdocs/CompResearch/docs/grc-rc17-preprint2.pdf), [13](https://arxiv.org/abs/1806.10183)], or the role of correlations that we explained in §3.3 above.\n\n\nHowever, he does not give more details, in his 2018 paper, as to the experiments he has in mind or the misunderstandings he takes to be involved.\n\n\n[582.](https://www.openphilanthropy.org/brain-computation-report#footnoteref582_3gzquoh)[Wolpert (2019a)](https://arxiv.org/pdf/1905.05669.pdf): “This early work [by Landauer and Bennett] was grounded in the tools of equilibrium statistical physics. However, computers are highly nonequilbrium systems. As a result, this early work was necessarily semiformal, and there were many questions it could not address. On the other hand, in the last few decades there have been major breakthroughs in non-equilibrium statistical physics. Some of the most important of these breakthroughs now allow us to analyze the thermodynamic behavior of any system that can be modeled with a time-inhomogeneous continuous-time Markov chain (CTMC), even if it is open, arbitrarily far from equilibrium, and undergoing arbitrary external driving. In particular, we can now decompose the time-derivative of the (Shannon) entropy of such a system into an ‘entropy production rate’, quantifying the rate of change of the total entropy of the system and its environment, minus a ‘entropy flow rate’, quantifying the rate of entropy exiting the system into its environment. Crucially, the entropy production rate is non-negative, regardless of the CTMC. So if it ever adds a nonzero amount to system entropy, its subsequent evolution cannot undo that increase in entropy. (For this reason it is sometimes referred to as irreversible entropy production.) This is the modern understanding of the second law of thermodynamics, for systems undergoing Markovian dynamics. In contrast to entropy production, entropy flow can be negative or positive. So even if entropy flow increases system entropy during one time interval (i.e. entropy flows into the system), often its subsequent evolution can undo that increase” (see p. 2-3).\n\n\n[583.](https://www.openphilanthropy.org/brain-computation-report#footnoteref583_j5d1zze)Prof. David Wallace indicated that most physicists accept Landauer’s principle. Though see [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “Landauer’s principle follows almost trivially from basic principles of thermodynamics… There is some dispute over Landauer’s limit in the literature. Whether the basic assumptions it follows from apply in the real world is somewhat subtle” (p. 2).\n\n\n[584.](https://www.openphilanthropy.org/brain-computation-report#footnoteref584_bgqo2j3)See the review in [Frank (2018)](https://arxiv.org/pdf/1901.10327.pdf): “In 2012, Berut et al. tested Landauer’s Principle in the context of a colloidal particle trapped in a modulated double-well potential, an experimental setup designed to mimic the conceptual picture that we reviewed in Fig. 12. 
Their experimental results showed that the heat dissipated in the erasure operation indeed approached the Landauer value of *k*T ln 2 in the adiabatic limit. Also in 2012, Orlov et al. tested Landauer’s Principle in the context of an adiabatic charge transfer across a resistor, and verified that, in cases where the charge transfer is carried out in a way that does not erase known computational information, the energy dissipated can be much less than *k*T ln 2, which validates the theoretical rationale for doing reversible computing. In 2014, Jun et al. [[7](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.113.190601)] carried out an even more high-precision version of the Berut experiment, verifying again the Landauer limit, and that similar, logically-reversible operations can, in contrast, be done in a way that approaches thermodynamic reversibility. Finally, in 2018, Yan et al. [8] carried out a quantum-mechanical experiment demonstrating that Landauer’s Principle holds at the single-atom level” (p. 36-37).\n\n\n[585.](https://www.openphilanthropy.org/brain-computation-report#footnoteref585_f5yts0q)[Aiello (1997)](http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0100-84551997000100023): “On the basis of *in vivo* determinations, the mass-specific metabolic rate of the brain is approximately 11.2 W/kg (watts per kilogram). This is over 22 times the mass-specific metabolic rate of skeletal muscle (0.4 W/kg) ([Aschoff et al. (1971)](https://books.google.com/books/about/Energiehaushalt_und_Temperaturregulation.html?id=00dWGwAACAAJ)). A large brain would, therefore, be a considerable energetic investment. For example, an average human has a brain that is about 1 kg larger than would be expected for an average mammal of our body size (65 kg) and the metabolic cost of this brain would be just under 5 times that of the brain of the average mammal (humans = 14.6 watts, average mammal = 3.0 watts) ([Aiello and Wheeler (1995)](https://www.jstor.org/stable/2744104))” (see the section “The expensive brain”). Aiello and Wheeler (1995) contains the same estimate, citing [Aschoff et al. (1971)](https://books.google.com/books/about/Energiehaushalt_und_Temperaturregulation.html?id=00dWGwAACAAJ), which I have not attempted to access (and which appears to be in German). [Sarpeshkar (1997)](https://thesis.library.caltech.edu/3063/1/Sarpeshkar_R_1997.pdf): “The global power consumption of the brain has been measured numerous times by the Kety-Schmidt technique, and the measurements have generally been fairly consistent, even over 40 years. A recent measurement [38] yielded an oxygen uptake of 144 μmol/100 g/min. The glucose reaction yields, in in-vitro reactions, about 60 kJ/mol × 38 ATP/6 = 380 kJ/mol of oxygen consumed. The 60 kJ/mol value was obtained from [29]. The weight of the brain is about 1.3 kg [10]. Thus, the power consumption in watts is computed to be 11.8 W, a value that we shall round off to 12 W” (p. 204, though in [Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) (p. 748), he uses the [Aiello (1997)](http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0100-84551997000100023) estimate above). [Jabr (2012a)](https://www.scientificamerican.com/article/thinking-hard-calories/), writing for *Scientific American*, estimates 12.6W. [Merkle (1989)](https://www.merkle.com/brainLimits.html) cites Kandel et al. 
(1985) (though without a page number) for a 25W estimate, though he assumes that only 10W is actually used for computation. [Watts et al. (2018)](https://www.frontiersin.org/articles/10.3389/fnmol.2018.00216/full) write that “While making up only a small fraction of our total body mass, the brain represents the largest source of energy consumption—accounting for over 20% of total oxygen metabolism,” which would suggest ~16W if we used the ~80W estimate for the whole body cited in [Aiello (1997)](http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0100-84551997000100023). Various citations listed [here](https://hypertextbook.com/facts/2001/JacquelineLing.shtml) say that 20% of body energy consumption goes to the brain, which the website’s author uses to generate an estimate of 20W for the brain, based on [100W](https://hypertextbook.com/facts/2003/WeiLiangMok.shtml) consumption by the human body as a whole. My impression is that the 20% number is used in numerous other contexts (see e.g. [Engl and Attwell (2015)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4560575/pdf/tjp0593-3417.pdf), who cite [Kety (1957)](https://www.sciencedirect.com/science/article/pii/B9780080090627500266?via%3Dihub); [Sokoloff (1960)](https://www.semanticscholar.org/paper/The-metabolism-of-the-central-nervous-system-in-Sokoloff/afb75236457912a504a1ed3bb3a2270ede3b2113), and [Rolfe and Brown (1997)](https://journals.physiology.org/doi/pdf/10.1152/physrev.1997.77.3.731) – though I haven’t followed up on these citations).\n\n\n[586.](https://www.openphilanthropy.org/brain-computation-report#footnoteref586_0kj3r0j)[Engl and Attwell (2015)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4560575/pdf/tjp0593-3417.pdf): “Current theoretical estimates and experimental data assessing the contribution of each ‘housekeeping’ process to the brain’s total energy budget are inconclusive for many processes, varying widely in some cases. Further research is needed to fill these gaps, and the 40% value shown (right), for the whole brain according to [Astrup et al. (1981a)](https://www.ahajournals.org/doi/10.1161/01.STR.12.6.726), as opposed to the 25% assumed for grey matter in Fig. 1, is quite uncertain” (p. 3424, Figure 5).\n\n\n[587.](https://www.openphilanthropy.org/brain-computation-report#footnoteref587_coqmiuc)See [Howarth et al. (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3390818/pdf/jcbfm201235a.pdf): “As panel A, but including non-signaling energy use, assumed to be 6.81 × 1022 ATP/s/m3, that is, 1/3 of the neuronal signaling energy, so that housekeeping tasks are assumed to account for 25% of the total energy use. On this basis, resting potentials use 15%, action potentials 16%, and synaptic processes 44% of the total energy use” (p. 1224, Figure 1).\n\n\n[588.](https://www.openphilanthropy.org/brain-computation-report#footnoteref588_9jh0qzo)See [Engl and Attwell (2015)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4560575/pdf/tjp0593-3417.pdf) for some description of these tasks: “Perhaps surprisingly, a significant fraction of brain energy use (25–50%) in previous energy budgets has been assigned to non-signalling (so-called ‘housekeeping’) tasks, which include protein and lipid synthesis, proton leak across the mitochondrial membrane, and cytoskeletal rearrangements, the rate of ATP consumption on all of which is poorly understood” (p. 
3418), though Engl and Attwell emphasize that the methodology used to generate these estimates is quite uncertain.\n\n\n[589.](https://www.openphilanthropy.org/brain-computation-report#footnoteref589_i9hn469)See Figure 1.\n\n\n[590.](https://www.openphilanthropy.org/brain-computation-report#footnoteref590_l5olfze)[Wang et al. (2014)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4189373/pdf/fnins-08-00307.pdf): “On average, deep brain temperature is less than 1°C higher than body temperature in humans, unless cerebral injury is severe enough to significantly disrupt the brain-body temperature regulation (Soukup et al., 2002)” (p. 6). Thanks to Asya Bergal for this citation. See also [Nelson and Nunneley (1998)](https://www.ncbi.nlm.nih.gov/pubmed/9754976): “Cerebral temperatures were generally insensitive to surface conditions (air temperature and evaporation rate), which affected only the most superficial level of the cerebrum” (abstract). Human body temperature is about [37 °C](https://en.wikipedia.org/wiki/Human_body_temperature), [310 Kelvin](https://www.metric-conversions.org/temperature/celsius-to-kelvin.htm).\n\n\n[591.](https://www.openphilanthropy.org/brain-computation-report#footnoteref591_xynzc5c)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “The temperature relevant to applying Landauer’s limit to the brain is essentially that of the skull and blood. Even if the temperature outside the body is at a lower temperature, the brain will have to push entropy into its environment via those conduits. If there were some other cold reservoir inside the brain absorbing entropy (there isn’t), it would quickly be expended” (p. 3). [Sandberg (2016)](https://arxiv.org/pdf/1602.04019.pdf), in his attempt to apply Landauer’s limit to the brain, uses body temperature as well (see p. 5).\n\n\n[592.](https://www.openphilanthropy.org/brain-computation-report#footnoteref592_lpfk1mk)See calculation [here](https://www.wolframalpha.com/input/?i=310+kelvin+*+boltzmann%27s+constant+*+ln2).\n\n\n[593.](https://www.openphilanthropy.org/brain-computation-report#footnoteref593_gb1oa83)See calculation [here](https://www.wolframalpha.com/input/?i=20watts%2F3e-21joules). [Sandberg’s (2016)](https://arxiv.org/pdf/1602.04019.pdf) estimate is slightly higher: “20 W divided by 1.3 × 10^-21 J (the Landauer limit at body temperature) suggests a limit of no more than 1.6 × 10^22 irreversible operations per second” (p. 5). This is because his estimate of the Landauer limit at body temperature differs from mine by about a factor of two – I’m not sure why.\n\n\n
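The two linked calculations amount to simple arithmetic; a minimal sketch in Python (my own illustration, using the ~20 W power budget and ~310 K temperature cited in the endnotes above):

```python
import math

# Reproduces the back-of-the-envelope bound discussed above (my own illustration,
# using the ~20 W brain energy budget and ~310 K body temperature cited in these endnotes).
k_B = 1.380649e-23                      # Boltzmann's constant, J/K
T = 310                                 # approximate brain/body temperature, K
brain_power_w = 20                      # approximate brain power budget, W

landauer_j_per_bit = k_B * T * math.log(2)              # ~3e-21 J per bit-erasure
max_erasures_per_s = brain_power_w / landauer_j_per_bit

print(f"kT ln 2 at 310 K ≈ {landauer_j_per_bit:.2e} J")            # ≈ 2.97e-21 J
print(f"bit-erasure bound ≈ {max_erasures_per_s:.1e} per second")  # ≈ 6.7e21, i.e. ~1e22
```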
[594.](https://www.openphilanthropy.org/brain-computation-report#footnoteref594_431fxfq)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. David Wolpert](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20David%20Wolpert,%20January%2023,%202020.pdf): “In Prof. Wolpert’s view, it is a subtle and interesting question how to do this type of calculation correctly. A rigorous version would require a large research project. One complexity is that the brain is an open system, in what would be formally called a non-equilibrium steady state, which continually receives new inputs and performs many computations at the same time, even though its entropy does not change that much over time. Landauer’s principle, though, applies to drops in entropy that occur in each step of a calculation. Various other caveats would also be necessary. For example, there are long-range correlations between bits, and there are multiple heat baths in the brain. As a simplified toy model, however, we can imagine that the brain computes in a serial fashion. It gets new inputs for each computation (thereby reinflating the entropy), and each computation causes a drop in entropy. In this case, the upper bound on bit-erasures suggested by Mr. Carlsmith would apply. Prof. Wolpert thinks that this calculation is legitimate as a first-pass, back-of-the-envelope upper bound on the bit-erasures that the brain could be implementing. It couldn’t get published in a physics journal, but it might get published in a popular science journal, and it helps get the conversation started” (p. 3). I expect that further investigation would reveal other complexities as well.\n\n\n[595.](https://www.openphilanthropy.org/brain-computation-report#footnoteref595_35wpr0f)[Jared Kaplan’s notes on Statistical Mechanics & Thermodynamics](https://sites.krieger.jhu.edu/jared-kaplan/files/2018/11/StatisticalMechanicsNotes.pdf): “Say we add two numbers, eg 58 + 23 = 81. We started out with information representing both 58 and 23. Typically this would be stored as an integer, and for example a 16 bit integer has information, or entropy, 16 log 2. But at the end of the computation, we don’t remember what we started with, rather we just know the answer. Thus we have created an entropy S = 2 × (16 log 2) − (16 log 2) = 16 log 2 through the process of erasure!” (p. 59). See also [Hänninen and Takala (2010)](https://ieeexplore.ieee.org/document/5697744): “The binary addition operation performs an unbalanced compression between the input and output state spaces, since the mapping between the values is not bijective. Medium-sized result values can originate from the largest set of possible input operand pairs. The addition of two n-bit binary operands results in at most an (n + 1)-bit result, and the result value 2^n − 1 compresses the largest group of input pairs, 2^n distinct cases, into the single output. Thus, the logical reversal of the addition requires the result word and n extra bits, which could be chosen simply to represent one of the input operands. The number of bits required to reverse the binary addition, as one indivisible logical operation, can be interpreted as the minimum amount of information lost in any irreversible adder structure at best. This loss determines the minimum achievable energy cost per operation” (p. 224). 
See also [Hänninen and Takala (2010)](https://ieeexplore.ieee.org/document/5697744) (p. 2370), for comparable discussion re: multiplication. [Hänninen et al. (2011)](https://www3.nd.edu/~lent/pdf/nd/IrreversibleBitErasuresHanninenLent2011.pdf) discuss the possibility of less-than-n bit erasures for word-length n operations in the context of “non-trivial multiplication,” which, at a glance, seems to involve excluding multiplications that take zero as an operand (see p. 2371).\n\n\n[596.](https://www.openphilanthropy.org/brain-computation-report#footnoteref596_u5h8jmh)[Hänninen et al. (2011)](https://www3.nd.edu/~lent/pdf/nd/IrreversibleBitErasuresHanninenLent2011.pdf) estimate the bit-erasures implicated by various proposed multiplier implementations. The array multiplier is the most efficient, at 8n^2 for n-bit words (see Table II, p. 2372). 8 × 4^2 = 128; 8^3 = 512.\n\n\n
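Combined with the per-erasure Landauer value from the earlier endnotes, these erasure counts imply rough energy floors per arithmetic operation; a minimal sketch (my own illustration of the figures quoted above, not a calculation from the sources):

```python
import math

# Rough energy floors implied by the erasure counts quoted above (my own illustration):
# ~n bits lost per n-bit addition (Hänninen and Takala 2010), and ~8*n^2 bit-erasures
# per n-bit array multiply (Hänninen et al. 2011), each costing at least kT*ln(2).
k_B, T = 1.380649e-23, 310                 # J/K; approximate body temperature in K
landauer_j = k_B * T * math.log(2)         # ~3e-21 J per bit-erasure

def erasures_add(n_bits: int) -> int:
    return n_bits                          # minimum information lost by an n-bit add

def erasures_array_multiply(n_bits: int) -> int:
    return 8 * n_bits ** 2                 # 8n^2, e.g. 128 erasures for n = 4

for n in (4, 8, 32):
    e = erasures_array_multiply(n)
    print(f"{n}-bit multiply: ~{e} erasures, energy floor ~{e * landauer_j:.1e} J")
```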
[597.](https://www.openphilanthropy.org/brain-computation-report#footnoteref597_01sf3it)[Sarpeshkar (1998)](https://ieeexplore.ieee.org/document/6790538) discusses more efficient, analog implementations: “Items 1 through 3 show that analog computation can be far more efficient than digital computation because of analog computation’s repertoire of rich primitives. For example, addition of two parallel 8-bit numbers takes one wire in analog circuits (using Kirchoff’s current law), whereas it takes about 240 transistors in static CMOS digital circuits. The latter number is for a cascade of 8 full adders. Similarly an 8-bit multiplication of two currents in analog computation takes 4 to 8 transistors, whereas a parallel 8-bit multiply in digital computation takes approximately 3000 transistors” (p. 1605).\n\n\n[598.](https://www.openphilanthropy.org/brain-computation-report#footnoteref598_3fmu9d2)See also [Hänninen et al. (2011)](https://www3.nd.edu/~lent/pdf/nd/IrreversibleBitErasuresHanninenLent2011.pdf): “Present CMOS effectively performs an erasure every time a transistor switches states—generating hugely unnecessary levels of heat” (p. 2370).\n\n\n[599.](https://www.openphilanthropy.org/brain-computation-report#footnoteref599_uiinc54)[Sarpeshkar (1998)](https://ieeexplore.ieee.org/document/6790538): “an 8-bit multiplication of two currents in analog computation takes 4 to 8 transistors, whereas a parallel 8-bit multiply in digital computation takes approximately 3000 transistors” (p. 1605).\n\n\n[600.](https://www.openphilanthropy.org/brain-computation-report#footnoteref600_u40fxgd)[Asadi and Navi (2007)](https://www.idosi.org/wasj/wasj2(4)/12.pdf): “Table 3: comparison between 32 × 32 bit multipliers … Transistor counts: 21579.00, 25258.00, 32369.00” (Table 3, p. 346).\n\n\n[601.](https://www.openphilanthropy.org/brain-computation-report#footnoteref601_8tk7j4c)Given the probability distribution over inputs to which the brain is in fact exposed, that is.\n\n\n[602.](https://www.openphilanthropy.org/brain-computation-report#footnoteref602_cbw5h9p)My thanks to Prof. David Wallace for discussion.\n\n\n[603.](https://www.openphilanthropy.org/brain-computation-report#footnoteref603_fbg2jof)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “There is a simple algorithm for converting a computation that uses logically irreversible operations into an equivalent computation that uses logically reversible operations. This allows you to avoid almost all of the relevant logical bit-erasures” (p. 4). And from [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “We believe that you can perform extremely complex computations with almost no bit erasures using good enough hardware” (p. 4). See also [Bennett (1989)](https://epubs.siam.org/doi/abs/10.1137/0218053?casa_token=vnD0zJclKZQAAAAA%3AK7-WmLzZs0hMB9f0RLP4QxScEYJ1S5lPtVdmT6QeFfF8ND24mDbadlMU5KzhivkC372qCMTHUw&journalCode=smjcat): “Reversible computers of various kinds (Turing machines, cellular automata, combinational logic) have been considered [1], [11], [12], [13], [6], [2], [14] especially in connection with the physical question of the thermodynamic cost of computation; and it has been known for some time that they can simulate the corresponding species of irreversible computers in linear time [1] (or linear circuit complexity [13]), provided they are allowed to leave behind at the end of the computation a copy of the input (thereby rendering the mapping between initial and final states 1:1 even though the input-output mapping may be many-to-one)” (p. 766). See also [Sagawa (2014)](https://arxiv.org/pdf/1311.1886.pdf) (p. 8 in the arxiv version), and Bennett (1973). For disagreement/controversy, see [Wolpert (2019a)](https://arxiv.org/pdf/1905.05669.pdf): “Summarizing, it is not clear that there is a way to implement a logically irreversible function with an extended circuit built out of logically reversible gates that reduces the Landauer cost below the Landauer cost of an equivalent AO [“all at once”] device. The effect on the mismatch cost of using such a circuit rather than an AO device is more nuanced, varying with the priors, the actual distribution, etc.” (p. 33 of the arxiv paper). My understanding is that the crux of this objection hinges on the fact that the reversible circuit will need to be reused, which means that its inputs and outputs will need to be reinitialized: “In general, the Landauer cost and mismatch cost of answer-reinitialization of an extended circuit will be greater than the corresponding answer-reinitialization costs of an equivalent AO device. This is for the simple reason that the answer-reinitialization of the extended circuit must reinitialize the bits containing copies of x and m, which do not even exist in the AO device” (p. 30 of the arxiv paper). See also [Open Philanthropy’s non-verbatim notes from a conversation with Prof. David Wolpert](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20David%20Wolpert,%20January%2023,%202020.pdf) (p. 2). Dr. Jess Riedel was skeptical of this sort of objection. From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “Dr. Riedel is skeptical of objections to the viability of reversible computing that appeal to the bit-erasures involved in receiving new inputs and writing new final outputs. 
It’s true that reversible computing paradigms require bit-erasures for this, but for most interesting computations, the intermediate memory usage is much (often exponentially) larger than the input and output data” (p. 5). I have not attempted to evaluate this debate in detail. If Prof. Wolpert is correct, then algorithmic arguments look stronger.\n\n\n[604.](https://www.openphilanthropy.org/brain-computation-report#footnoteref604_mj08c9d)[Sagawa (2014)](https://arxiv.org/pdf/1311.1886.pdf): “A computational process C is logically reversible if and only if it is an [injection](https://en.wikipedia.org/wiki/Injective_function). In other words, C is logically reversible if and only if, for any output logical state, there is a unique input logical state. Otherwise, C is logically irreversible” (p. 7 in the arxiv version).\n\n\n[605.](https://www.openphilanthropy.org/brain-computation-report#footnoteref605_1egxzqt)[Hänninen and Takala (2010)](https://ieeexplore.ieee.org/document/5697744): “the logical reversal of the addition requires the result word and n extra bits, which could be chosen simply to represent one of the input operands” (p. 224). And see also [Jared Kaplan’s notes on Statistical Mechanics & Thermodynamics](https://sites.krieger.jhu.edu/jared-kaplan/files/2018/11/StatisticalMechanicsNotes.pdf): “In principle we can do even better through reversible computation. After all, there’s no reason to make erasures. For example, when adding we could perform an operation mapping (x, y) → (x, x + y), for example (58, 23) → (58, 81), so that no information is erased. In this case, we could in principle perform any computation we like without producing any waste heat at all. But we need to keep all of the input information around to avoid creating entropy and using up energy” (p. 60).\n\n\n
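The mapping Kaplan describes is straightforward to make concrete; a minimal sketch (my own illustration, not code from the sources):

```python
# Illustration of the point in the two endnotes above (my own example): the in-place map
# (x, y) -> (x, x + y) is injective, so it can be undone and loses no information, whereas
# the ordinary map (x, y) -> x + y is many-to-one and necessarily discards bits.

def reversible_add(x: int, y: int) -> tuple[int, int]:
    return (x, x + y)            # keeps one operand alongside the sum

def reversible_add_inverse(x: int, s: int) -> tuple[int, int]:
    return (x, s - x)            # exactly recovers the original (x, y)

def irreversible_add(x: int, y: int) -> int:
    return x + y                 # 58 + 23 and 60 + 21 both give 81: the inputs are lost

assert reversible_add(58, 23) == (58, 81)
assert reversible_add_inverse(*reversible_add(58, 23)) == (58, 23)
assert irreversible_add(58, 23) == irreversible_add(60, 21) == 81
```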
[606.](https://www.openphilanthropy.org/brain-computation-report#footnoteref606_0kwbmh6)[Johnson (1999)](https://www.nytimes.com/1999/06/15/science/a-radical-computer-learns-to-think-in-reverse.html): “Efficient as such a system would be, there would still be drawbacks. In a complex calculation, the extra memory needed to save all the intermediary “garbage bits” can grow wildly. As a compromise, Dr. Bennett devised a memory-saving method in which a computer would carry out a few steps of the calculation, copy the result and rewind. Then, starting with the copied result, it would take a few more steps. He likened the method to crossing a river using just a few stepping stones: one must backtrack to pick up the stones left behind, placing them in the path ahead. While the procedure would consume less memory, it would require more computational steps, slowing down the calculation. To computer scientists, this was a classic tradeoff: pay the computational cost with either memory space or processing time.” [Wolpert (2019b)](https://arxiv.org/pdf/1901.00386.pdf): “One of the properties of logically reversible gates that initially caused problems in designing circuits out of them is that running those gates typically produces “garbage” bits, to go with the bits that provide the output of the conventional gate that they emulate. The problem is that these garbage bits need to be reinitialized after the gate is used, so that the gate can be used again. Recognizing this problem, [[50](https://link.springer.com/article/10.1007/BF01857727)] shows how to avoid the costs of reinitializing any garbage bits produced by using a reversible gate in a reversible circuit C′, by extending C′ with yet more reversible gates (e.g., Fredkin gates). The result is an extended circuit that takes as input a binary string of input data x, along with a binary string of “control signals” m ∈ M, whose role is to control the operation of the reversible gates in the circuit. The output of the extended circuit is a binary string of the desired output for input xIN, xOUT = f(xIN), together with a copy of m, and a copy of xIN, which I will write as xINcopy. So in particular, none of the output garbage bits produced by the individual gates in the original, unextended circuit of reversible gates still exists by the time we get to the output bits of the extended circuit. While it removes the problem of erasing the garbage bits, this extension of the original circuit with more gates does not come for free. In general it requires doubling the total number of gates (i.e., the circuit’s size), doubling the running time of the circuit (i.e., the circuit’s depth), and increasing the number of edges coming out of each gate, by up to a factor of 3. (In special cases though, these extra cost can be reduced, sometimes substantially.)” (p. 28). See also Michael Frank’s comments [here](https://intelligence.org/2014/01/31/mike-frank-on-reversible-computing/): “It is probably the case that general reversible computations do require some amount of overhead in either space or time complexity; indeed, [Ammer and I proved rigorously](http://www.eng.fsu.edu/~mpf/revsep.pdf) that this is true in a certain limited technical context. But, the overheads of reversible algorithms can theoretically be overwhelmed by their energy-efficiency benefits, to improve overall cost-performance for large-scale computations.”\n\n\n[607.](https://www.openphilanthropy.org/brain-computation-report#footnoteref607_ugl1isr)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “For large computations, this conversion adds only a modest overhead in required time and memory. For example, the algorithm described in Charles Bennett’s 1989 paper ‘Time/Space Trade-Offs for Reversible Computation’ involves slow-downs of at worst a multiplicative factor, around 2-3× as slow” (p. 4). See also [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “The algorithmic overhead involved in reversible computing (specifically, the overhead involved in un-computing what you have already computed) is not that bad. Most of the difficulty lies in designing such efficient hardware” (p. 4). 
[Bennett (1989)](https://epubs.siam.org/doi/abs/10.1137/0218053?casa_token=vnD0zJclKZQAAAAA%3AK7-WmLzZs0hMB9f0RLP4QxScEYJ1S5lPtVdmT6QeFfF8ND24mDbadlMU5KzhivkC372qCMTHUw&journalCode=smjcat): “Using a pebbling argument, this paper shows that, for any ε > 0, ordinary multitape Turing machines using time T and space S can be simulated by reversible ones using time O(T^(1+ε)) and space O(S log T) or in linear time and space O(ST^ε)… The time/space cost of computing a 1:1 function on such a machine is equal within a small polynomial to the cost of computing the function and its inverse on an ordinary Turing machine” (p. 766). See also [Wolpert’s (2019a)](https://arxiv.org/pdf/1905.05669.pdf) overhead estimates, e.g.: “In general it requires doubling the total number of gates (i.e., the circuit’s size), doubling the running time of the circuit (i.e., the circuit’s depth), and increasing the number of edges coming out of each gate, by up to a factor of 3” (p. 28).\n\n\n[608.](https://www.openphilanthropy.org/brain-computation-report#footnoteref608_s8x6hw4)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “When humans write software to accomplish human objectives, they use a lot of irreversible steps (though there are some non-atomic reversible intermediate computations, like Fourier transforms)” (p. 4).\n\n\n[609.](https://www.openphilanthropy.org/brain-computation-report#footnoteref609_jt7azcy)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “When the world has some simple feature (e.g., the position and velocity of a rock heading towards your head), this feature is encoded in very complicated intermediate systems (e.g., the trillions of photons scattering from the rock and heading towards your eye). The brain has to distill an answer to a high-level question (e.g., “do I dodge left or right?”) from the complicated intermediate system, and this involves throwing out a lot of entropy” (p. 4).\n\n\n[610.](https://www.openphilanthropy.org/brain-computation-report#footnoteref610_nthn64o)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Jared Kaplan](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Jared%20Kaplan,%20January%2023,%202020.pdf): “FLOPs in actual computers erase bits, and Prof. Kaplan expects that you generally have order one bit-erasures per operation in computational systems. That is, you don’t do a lot of complicated things with a bit, and then erase it, and then do another set of very complicated things with another bit, and then erase it, etc. Prof. Kaplan’s intuition in this respect comes from his understanding of certain basic operations you can do with small amounts of information. In principle you can perform a very complicated set of transformations on a piece of information, like an image, without erasing bits. Prof. Kaplan can imagine some kind of order one factor increase in required compute from this type of thing” (p. 4).\n\n\n[611.](https://www.openphilanthropy.org/brain-computation-report#footnoteref611_hbk4ixs)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. 
Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “if (as in current conventional computers) you’re dissipating thousands of *k*T per operation, it isn’t worth transitioning to logically reversible operations, because other forms of energy dissipation dominate the Landauer-mandated energy costs of logical irreversibility” (p. 4).\n\n\n[612.](https://www.openphilanthropy.org/brain-computation-report#footnoteref612_rfaady7)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “Dr. Christiano does not think that logically irreversible operations are a more natural or default computational unit than reversible ones. And once we’re engaging with models of brain computation that invoke computations performed by low-level, reversible elements, then we are assuming that the brain is able to make use of such elements, in which case it may well have evolved a reliance on them from the start. For example, if it were possible to use proteins to directly perform large tunable matrix multiplications, Landauer’s principle implies that those matrix multiplications would necessarily be invertible or even unitary. But unitary matrix multiplications are just as useful for deep learning as general matrix multiplications, so Landauer’s principle per se doesn’t tell us anything about the feasibility of the scenario. Instead the focus should be on other arguments (e.g. regarding consistency and flexibility)” (p. 4).\n\n\n[613.](https://www.openphilanthropy.org/brain-computation-report#footnoteref613_kj1u5wu)My thanks to Prof. David Wallace for discussion.\n\n\n[614.](https://www.openphilanthropy.org/brain-computation-report#footnoteref614_szi8ipl)Michael Frank gives a summary of the development of the literature on reversible computing [here](https://intelligence.org/2014/01/31/mike-frank-on-reversible-computing/) (see paragraphs starting with “I’ll summarize a few of the major historical developments…”).\n\n\n[615.](https://www.openphilanthropy.org/brain-computation-report#footnoteref615_dj1llbe)See [this](https://intelligence.org/2014/01/31/mike-frank-on-reversible-computing/) 2014 interview with the Machine Intelligence Research Institute.\n\n\n[616.](https://www.openphilanthropy.org/brain-computation-report#footnoteref616_tzyc5zc)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Michael Frank](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Dr.%20Michael%20Frank,%20January%2022,%202020.docx.pdf): “The biggest challenge is figuring out the fundamental physics involved in improving the trade-offs between energy dissipation and speed in reversible processes. We don’t know of any fundamental limits in this respect at the moment, but there may be some, and we need to understand them if so. One question is whether exploiting quantum phenomena can help. Dr. Frank is working on this at the moment. There are also practical issues involved in improving the degree of reversibility of mechanisms that we know how to design in principle, but which require a lot of advanced, high-precision engineering to get the level of efficiency we want. And there is a lot of engineering and design work to do at the level of circuits, architectures, design tools, and hardware description languages” (p. 2). 
See also page 1: “A lot of advanced physics and engineering is necessary for figuring out how to do reversible computing well. The goal is to create very fast, very energy-efficient systems. Currently, the closest examples are fairly rudimentary systems like simple oscillators. The transition to reversible computing won’t happen overnight, and it may take decades, even once fundamental problems are solved.”\n\n\n[617.](https://www.openphilanthropy.org/brain-computation-report#footnoteref617_8m0hxz4)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “In irreversible computers, you do not need to keep track of and take into account what happens to each degree of freedom, because you are able to expend energy to reset the system to a state it needs to be in for your computation to proceed successfully. With reversible computers, however, you aren’t able to expend such energy, so what happens to any degree of freedom that could influence your computation starts to matter a lot; you can’t simply force the relevant physical variables into a particular state, so your computation needs to work for the particular state that those variables happen to be in. Given the reversibility of physics, this is a very difficult engineering challenge” (p. 5).\n\n\n[618.](https://www.openphilanthropy.org/brain-computation-report#footnoteref618_rlnaax9)This is based primarily on eyeballing the chart presented at [4:17](https://youtu.be/IQZ_bQbxSXk?t=257) in Michael Frank’s 2017 YouTube talk (Frank cites the [International Technology Roadmap for Semiconductors 2015](https://www.semiconductors.org/resources/2015-international-technology-roadmap-for-semiconductors-itrs/), though I’m not sure where the specific information he’s pointing to comes from). According to Frank’s description of this chart, if you include various overhead factors that Frank suggests are extremely difficult to eliminate, we are currently dissipating around 10,000-50,000 *k*T per grounding of a circuit node at T=300K. The minimum energy used to switch the state of a minimum-sized transistor is smaller, between 100-1000 *k*T, but Frank suggests that using minimum-sized transistors is not always optimal for performance, and other overheads are in play as well. See also [Frank (2018)](https://arxiv.org/pdf/1901.10327.pdf): “As the end of the semiconductor roadmap approaches, there is today a growing realization among industry leaders, researchers, funding agencies and investors that a transition to novel computing paradigms will be required in order for engineers to continue improving the energy efficiency (and thus, cost efficiency) of computing technology beyond the expected final CMOS node, when minimal transistor gate energies are expected to plateau at around the 40-80 *k*T level (∼ 1-2 eV at room temperature), with typical total CV^2 node energies plateauing at a much higher level of around 1-2 *k*eV” (p. 2). [Hänninen et al. (2011)](https://www3.nd.edu/~lent/pdf/nd/IrreversibleBitErasuresHanninenLent2011.pdf) also note that the Landauer limit is “nearly three orders of magnitude lower than end-of-the-roadmap CMOS transistors,” (p. 2370) which is roughly where Frank’s chart forecasts the asymptote for minimum-size transistors (if we include circuit-level overhead factors, it’s another couple orders of magnitude). 
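As a purely illustrative sanity check on the *k*T figures above (my own aside, not part of Frank’s analysis), the conversion from thermal units to electron-volts and joules at an assumed room temperature of 300K takes only a few lines; the sketch below reproduces the “40-80 *k*T ≈ 1-2 eV” parenthetical and expresses the 10,000-50,000 *k*T per-node figure in joules.

```python
# Convert kT-denominated switching energies into eV and joules at room temperature.
# Illustrative sanity check only; the specific kT figures come from Frank's talk/paper.
k_B = 1.380649e-23    # Boltzmann constant, J/K
eV = 1.602176634e-19  # joules per electron-volt
T = 300.0             # assumed room temperature, K

kT_joules = k_B * T        # ~4.1e-21 J per kT
kT_eV = kT_joules / eV     # ~0.026 eV per kT

for n in (40, 80, 10_000, 50_000):
    print(f"{n:>6} kT ~= {n * kT_eV:.2f} eV ~= {n * kT_joules:.1e} J")

# Approximate results: 40 kT ~= 1.0 eV and 80 kT ~= 2.1 eV (the "~1-2 eV" plateau);
# 10,000-50,000 kT ~= 4e-17 to 2e-16 J per grounding of a circuit node.
```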
Jess Riedel notes that humans can, if necessary, create very special-purpose computational devices that get much closer to Landauer’s limit (this, he suggests, is what the “experimental tests” of Landauer’s limit attempt to do), but that these aren’t useful for practical, large-scale computing (see [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf), p. 3). See also [this](https://intelligence.org/2014/04/03/erik-debenedictis/#endnote_0_10946) conversation with Erik DeBenedictis, who predicts 2000 *k*T/logic op by 2030, including interconnect wire.\n\n\n[619.](https://www.openphilanthropy.org/brain-computation-report#footnoteref619_idjmmaz)See calculation [here](https://www.wolframalpha.com/input/?i=%28%28300W%29%2F%28boltzmann%27s+constant+*+293+kelvin+*+ln2%29%29%2F1e14).\n\n\n[620.](https://www.openphilanthropy.org/brain-computation-report#footnoteref620_zc5j2rl)See [Aiello (1997)](http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0100-84551997000100023) for some discussion. From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. David Wolpert](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20David%20Wolpert,%20January%2023,%202020.pdf): “Metabolic constraints are extremely important in evolutionary biology. But the field of evolutionary biology has not adequately incorporated discoveries about the energy costs of the computation. The massive energy costs of the brain ground a presumption that it has been highly optimized for thermodynamic efficiencies. Understanding better how the brain’s architecture balances energy costs with computational performance may lead to important breakthroughs. However, at this point we are basically clueless about how the brain’s computation works, so we can’t even state this problem precisely” (p. 3).\n\n\n[621.](https://www.openphilanthropy.org/brain-computation-report#footnoteref621_whosun2)See e.g. [Kempes et al. (2017)](https://arxiv.org/pdf/1706.05043.pdf): “Here we show that the computational efficiency of translation, defined as free energy expended per amino acid operation, outperforms the best supercomputers by several orders of magnitude, and is only about an order of magnitude worse than the Landauer bound” (p. 1). Rahul Sarpeshkar, in a [2018 TED talk](https://youtu.be/ZycidN_GYo0?t=207), suggests that cells are the most energy efficient computers that we know, and that they are already computing at an efficiency near the fundamental laws of physics (3:30-4:04). See also Laughlin et al. (1998): “Freed from heavy mechanical work, ion channels change conformation in roughly 100 μs. In principle, therefore, a single protein molecule, switching at the rate of an ion channel with the stoichiometry of kinesin, could code at least 10^3 bit per second at a cost of 1 ATP per bit” (p. 39). See [Sarpeshkar (2013)](https://www.nature.com/articles/nature12148?proof=true&platform=oscar&draft=collection) for more on computation in cells, and [Sarpeshkar (2010)](https://www.cambridge.org/core/books/ultra-low-power-bioelectronics/ED8504DA1504856B74E2502EA859FDEA) for more on the energy-efficiency of biological systems more generally: “A single cell in the body performs ~10 million energy-consuming biochemical operations per second on its noisy molecular inputs with ~1 pW of average power. 
Every cell implements a ~30,000 node gene-protein molecular interaction network within its confines. All the ~100 trillion cells of the human body consume ~80 W of power at rest. The average energy for an elementary energy-consuming operation in a cell is about 20*k*T, where *k*T is a unit of thermal energy. In deep submicron processes today, switching energies are nearly 10^4 – 10^5 *k*T for just an elementary 0->1 digital switching operation. Even at 10 nm, the likely end of business-as-usual transistor scaling in the future, it is unlikely that we will be able to match such energy efficiency. Unlike traditional digital computation, biological computation is tolerant to error in elementary devices and signals. Nature illustrates that it is significantly more energy efficient to compute with error-prone devices and signals and then correct for these errors through feedback-and-learning architectures than to make every device and every signal in a system robust, as in traditional digital paradigms thus far” (p. 18-19). [Bennett (1989)](https://epubs.siam.org/doi/abs/10.1137/0218053?casa_token=vnD0zJclKZQAAAAA%3AK7-WmLzZs0hMB9f0RLP4QxScEYJ1S5lPtVdmT6QeFfF8ND24mDbadlMU5KzhivkC372qCMTHUw&journalCode=smjcat) also suggests that “a few thermodynamically efficient data processing systems do exist, notably genetic enzymes such as RNA polymerase, which, under appropriate reactant concentrations, can transcribe information from DNA to RNA at a thermodynamic cost considerably less than *k*T per step” (p. 766); see also [Bennett (1973)](https://www.math.ucsd.edu/~sbuss/CourseWeb/Math268_2013W/Bennett_Reversibiity.pdf): “Tape copying is a logically reversible operation, and RNA polymerase is both thermodynamically and logically reversible” (p. 532). See also [Ouldridge and ten Wolde (2017)](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.118.158103), [Ouldridge (2017)](https://arxiv.org/abs/1702.00360), [Sartori et al. (2014)](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003974), [Mehta and Schwab (2012)](https://www.pnas.org/content/109/44/17978), and [Mehta et al. (2016)](https://link.springer.com/article/10.1007%2Fs10955-015-1431-6). Though see also [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “Biology may be very energy efficient in certain cases, but Dr. Riedel still thinks it very unlikely that the efficiency of the brain’s computation is anywhere near Landauer’s limit. There are also likely to be other examples in which biology is extremely inefficient relative to Landauer’s principle, due to other constraints (for example, cases in which biological systems use chemical gradients involving billions of molecules to communicate ~5 bits of information). Humans can, if necessary, create very special-purpose computational devices that get close to Landauer’s limit (this is what “experimental tests” of Landauer’s limit attempt to do), and our power plants, considered as thermodynamic heat engines, are very efficient (e.g., nearing thermodynamic bounds). However, our useful, scalable computers are not remotely close to the minimal energy dissipation required by Landauer’s principle. This appears to be an extraordinarily hard engineering problem, and it’s reasonable to guess that brains haven’t solved it, even if they are very energy efficient elsewhere.” (p. 
3).\n\n\n[622.](https://www.openphilanthropy.org/brain-computation-report#footnoteref622_idt1o91)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Michael Frank](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Dr.%20Michael%20Frank,%20January%2022,%202020.docx.pdf): “In general, Dr. Frank does not see evidence that biology is attempting to do anything like what human engineers working on reversible computing are trying to do. Reversible computing is an extremely advanced tier of high-precision engineering, which we’re still struggling to figure out. Biology, by contrast, seems perfectly happy with what it can do with simple, irreversible mechanisms. … In general, most signaling mechanisms in biology are highly dissipative. For example, the biophysical processes involved in neural firing (e.g., vesicle release, action potential propagation, ion channels driving the ion concentrations to new states) dissipate lots of energy. Indeed, most of life seems to be based on strongly driven (e.g., irreversible) processes” (p. 4). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. David Wolpert](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20David%20Wolpert,%20January%2023,%202020.pdf): “Prof. Wolpert also expects that using Landauer’s principle to estimate the amount of computation performed by the brain will result in substantial overestimates. A single neuron uses very complicated physical machinery to propagate a single bit along an axon. Prof. Wolpert expects this to be very far away from theoretical limits of efficiency. That said, some computational processes in biology are very energy efficient. For example, Prof. Wolpert recently co-authored a paper on protein synthesis in ribosomes, showing that the energy efficiency of the computation is only around two orders of magnitude worse than Landauer’s bound. Prof. Wolpert expects neurons to be much less efficient than this, but he doesn’t know” (p. 4).\n\n\n[623.](https://www.openphilanthropy.org/brain-computation-report#footnoteref623_ouzyy0b)See [Laughlin et al. (1998)](https://pubmed.ncbi.nlm.nih.gov/10195106/): “Synapses and cells are using 10^5 to 10^8 times more energy than the thermodynamic minimum. Thermal noise sets a lower limit of *k* · T Joules for observing a bit of information (*k*, Boltzmann’s constant; T, absolute temperature, 290K) and the hydrolysis of one ATP molecule to ADP releases about 25 *k*T” (p. 39). [Laughlin et al. (1998)](https://pubmed.ncbi.nlm.nih.gov/10195106/) also note that “At least two biophysical constraints will contribute to these systems’ costs. First, there is the uncertainty associated with molecular interactions. The stochastic nature of receptor activation (photon absorption), of molecular collision, of diffusion, and of vesicle release, degrades information by introducing noise (eqns. 1 and 7), thereby substantially increasing costs. Secondly, energy is required to distribute signals over relatively large distances. We suggest, therefore, that the high metabolic cost of information in systems is dictated by basic molecular and cellular constraints to cell signaling, as independently proposed by Sarpeshkar (see also [Sarpeshkar (1997)](https://thesis.library.caltech.edu/3063/1/Sarpeshkar_R_1997.pdf))” (p. 
37).\n\n\n[624.](https://www.openphilanthropy.org/brain-computation-report#footnoteref624_ba11cu0)[Lennie (2003)](http://www2.bcs.rochester.edu/sites/plennie/pdfs/Lennie03a.pdf#page=3) writes that “The aggregate cost of a spike is 2.4 × 10^9 ATP molecules” (p. 493), and [Laughlin et al. (1998)](https://pubmed.ncbi.nlm.nih.gov/10195106/) write that “the hydrolysis of one ATP molecule to ADP releases about 25 *k*T” (p. 39) (see also discussion [here](http://book.bionumbers.org/how-much-energy-is-released-in-atp-hydrolysis/)). 2.4e9 × 25 = 6e10. See also [Bennett (1981)](https://www.pitt.edu/~jdnorton/lectures/Rotman_Summer_School_2013/thermo_computing_docs/Bennett_1982.pdf): “Macroscopic size also explains the poor efficiency of neurons, which dissipate about 10^11 *k*T per discharge” (p. 907).\n\n\n[625.](https://www.openphilanthropy.org/brain-computation-report#footnoteref625_5ckpkz0)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. David Wolpert](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20David%20Wolpert,%20January%2023,%202020.pdf): “Prof. Wolpert also expects that using Landauer’s principle to estimate the amount of computation performed by the brain will result in substantial overestimates. A single neuron uses very complicated physical machinery to propagate a single bit along an axon. Prof. Wolpert expects this to be very far away from theoretical limits of efficiency” (p. 4).\n\n\n[626.](https://www.openphilanthropy.org/brain-computation-report#footnoteref626_6d1dgk4)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “Presumably, we think we basically understand cases where the brain is sending very simple signals, like the signal to kick your leg. We know that the nerves involved in conveying these signals are operating in an irreversible way, and burning way more energy than the Landauer limit would say is necessary to communicate the number of bits needed to say e.g. how much to move the muscle. It seems this energy is required partly because the nerve is a big and complicated system, with many moving parts, so redundancy is necessary” (p. 3). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Jared Kaplan](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Jared%20Kaplan,%20January%2023,%202020.pdf): “For example, a lot of synapses, not too dissimilar from synapses in the brain, are used to send information to e.g. a muscle. Those synapses are using a lot of energy, and the brain is clearly going through a lot of effort to convey the relevant information confidently” (p. 3).\n\n\n[627.](https://www.openphilanthropy.org/brain-computation-report#footnoteref627_witb5jz)[Laughlin et al. (1998)](https://pubmed.ncbi.nlm.nih.gov/10195106/) write that “the hydrolysis of one ATP molecule to ADP releases about 25 *k*T” (p. 39) (see also discussion [here](http://book.bionumbers.org/how-much-energy-is-released-in-atp-hydrolysis/)). [Sarpeshkar (2014)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3928905/) also mentions “20 *k*T per molecular operation (1 ATP molecule hydrolysed)” (section 1). 
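To make the ATP arithmetic in footnotes 624 and 627 explicit, here is a minimal sketch of my own (an illustrative aside, assuming ~25 *k*T per ATP hydrolysis, Lennie’s ~2.4 × 10^9 ATP per spike, and a body temperature of ~310K):

```python
# Back-of-the-envelope energy-per-spike arithmetic from the cited figures:
# ~2.4e9 ATP per spike (Lennie 2003) and ~25 kT released per ATP hydrolysis
# (Laughlin et al. 1998). Body temperature assumed to be ~310 K.
k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 310.0             # K
kT = k_B * T          # ~4.3e-21 J

atp_per_spike = 2.4e9
kT_per_atp = 25

kT_per_spike = atp_per_spike * kT_per_atp   # 6.0e10 kT per spike
joules_per_atp = kT_per_atp * kT            # ~1.1e-19 J per ATP
joules_per_spike = kT_per_spike * kT        # ~2.6e-10 J per spike

print(f"{kT_per_spike:.1e} kT per spike")
print(f"{joules_per_atp:.1e} J per ATP")
print(f"{joules_per_spike:.1e} J per spike")
```

This reproduces the 6e10 *k*T per spike figure above, and the ~1.1e-19 J per ATP lines up with the ~1e-19 J per molecule value cited in footnote 629.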
[Swaminathan (2008)](https://www.scientificamerican.com/article/why-does-the-brain-need-s/) characterizes ATP as “the primary source of cellular energy” in rat brains, and studies of brain metabolism like [Lennie (2003)](http://www2.bcs.rochester.edu/sites/plennie/pdfs/Lennie03a.pdf) use ATPs as the central basis for measuring the brain’s energy budget.\n\n\n[628.](https://www.openphilanthropy.org/brain-computation-report#footnoteref628_ri2sirj)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “Dr. Christiano would be extremely surprised if the brain got more computational mileage out of a single ATP than human engineers can get out of a FLOP, and he would be very willing to bet that it takes at least 10 ATPs to get the equivalent of a FLOP. Mr. Carlsmith estimates that the brain can be using no more than ~1e20 ATPs/second. If this estimate is right, then Dr. Christiano is very confident that you do not need more than 1e20 FLOP/s to replicate the brain’s task-performance” (p. 5).\n\n\n[629.](https://www.openphilanthropy.org/brain-computation-report#footnoteref629_lciffsw)Calculation [here](https://www.wolframalpha.com/input/?i=%2820W%29%2F%28boltzmann%27s+constant+*+310kelvin+*25+*+ln2%29). [This link](https://hypertextbook.com/facts/2000/AmberIqbal.shtml#:~:text=Hydrolysis%20of%20one%20gram%20mole,about%2010%E2%88%9219%20J.%22&text=All%20of%20the%20biosynthesis%20activities,the%20capacity%20to%20do%20work) also lists 1e-19 J per molecule, and 30-60 kJ per mole. [Lennie (2003)](http://www2.bcs.rochester.edu/sites/plennie/pdfs/Lennie03a.pdf) estimates a “gross consumption of 3.4 × 10^21 molecules of ATP per minute” in the cortex, and that “in the normal awake state, cortex accounts for 44% of whole brain energy consumption,” suggesting [~6e19 ATPs/s in the cortex](https://www.wolframalpha.com/input/?i=3.4e21%2F60), and [~1e20 for the brain overall](https://www.wolframalpha.com/input/?i=5.6e19%2F.44).\n\n\n[630.](https://www.openphilanthropy.org/brain-computation-report#footnoteref630_piulllp)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Jared Kaplan](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Jared%20Kaplan,%20January%2023,%202020.pdf): “In general, Prof. Kaplan thinks it unlikely that big, warm things are performing thermodynamically reversible computations” (p. 3). From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “… It seems this energy is required partly because the nerve is a big and complicated system, with many moving parts, so redundancy is necessary” (p. 3). See also [Bennett (1981)](https://www.pitt.edu/~jdnorton/lectures/Rotman_Summer_School_2013/thermo_computing_docs/Bennett_1982.pdf): “Macroscopic size also explains the poor efficiency of neurons, which dissipate about 10^11 *k*T per discharge” (p. 907).\n\n\n[631.](https://www.openphilanthropy.org/brain-computation-report#footnoteref631_sxie74g)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Jared Kaplan](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Jared%20Kaplan,%20January%2023,%202020.pdf): “In general, Prof. 
Kaplan thinks it unlikely that big, warm things are performing thermodynamically reversible computations” (p. 3).\n\n\n[632.](https://www.openphilanthropy.org/brain-computation-report#footnoteref632_1c7ilx2)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Jared Kaplan](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Jared%20Kaplan,%20January%2023,%202020.pdf): “If you’re in a regime where there is some signal to noise ratio, and you make your signal big to avoid noise, you can’t be doing something thermodynamically reversible: the noise is creating waste heat, and you’re extending your signal to get above that. Prof. Kaplan would have thought that basically all of the processes in the brain have this flavor” (p. 3). [Laughlin et al. (1998)](https://pubmed.ncbi.nlm.nih.gov/10195106/) also note that “At least two biophysical constraints will contribute to these systems’ costs. First, there is the uncertainty associated with molecular interactions. The stochastic nature of receptor activation (photon absorption), of molecular collision, of diffusion, and of vesicle release, degrades information by introducing noise (eqns. 1 and 7), thereby substantially increasing costs. Secondly, energy is required to distribute signals over relatively large distances. We suggest, therefore, that the high metabolic cost of information in systems is dictated by basic molecular and cellular constraints to cell signaling, as independently proposed by Sarpeshkar (see also [Sarpeshkar (1997)](https://thesis.library.caltech.edu/3063/1/Sarpeshkar_R_1997.pdf))” (p. 37).\n\n\n[633.](https://www.openphilanthropy.org/brain-computation-report#footnoteref633_uea7gjc)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Jared Kaplan](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Jared%20Kaplan,%20January%2023,%202020.pdf): “Processes that involve diffusion also cannot be thermodynamically reversible. Diffusion increases entropy. For example, if you take two substances and mix them together, you have increased the entropy of that system” (p. 3). From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Michael Frank](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Dr.%20Michael%20Frank,%20January%2022,%202020.docx.pdf): “One example difference is that reversible computing engineers can use inertia to propagate signals at the speed of light, with very little energy dissipation. They can also achieve similarly efficient, high-speed results by sending magnetic flux quanta through superconducting circuits. The brain, however, relies on diffusion, which cannot take advantage of such inertia” (p. 4).\n\n\n[634.](https://www.openphilanthropy.org/brain-computation-report#footnoteref634_l4y3bjd)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Jared Kaplan](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Jared%20Kaplan,%20January%2023,%202020.pdf) (p. 3):\n\n\nIn general, it’s extremely difficult to build reversible computers. For example, all of the quantum computers we have are very rudimentary (quantum computers are a type of reversible computer), and it’s hard to keep them running for very long without destroying information. 
In order to be performing thermodynamically reversible computations, each neuron would have to have some sort of very specialized component, operating in a specialized environment crafted in order to perform the computation in a thermodynamically reversible way. It would be hard to keep this running for very long, and Prof. Kaplan doesn’t think this is happening.\n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf) (p. 3):\n\n\n\n> Humans can, if necessary, create very special-purpose computational devices that get close to Landauer’s limit (this is what ‘experimental tests’ of Landauer’s limit attempt to do), and our power plants, considered as thermodynamic heat engines, are very efficient (e.g., nearing thermodynamic bounds). However, our useful, scalable computers are not remotely close to the minimal energy dissipation required by Landauer’s principle. This appears to be an extraordinarily hard engineering problem, and it’s reasonable to guess that brains haven’t solved it, even if they are very energy efficient elsewhere.\n> \n> \n\n\nFrom [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Michael Frank](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Dr.%20Michael%20Frank,%20January%2022,%202020.docx.pdf) (p. 3-4):\n\n\n\n> In general, Dr. Frank does not see evidence that biology is attempting to do anything like what human engineers working on reversible computing are trying to do. Reversible computing is an extremely advanced tier of high-precision engineering, which we’re still struggling to figure out. Biology, by contrast, seems perfectly happy with what it can do with simple, irreversible mechanisms.\n> \n> \n\n\nFrom the non-verbatim notes from my conversation with Dr. Paul Christiano (p. 5):\n\n\n\n> Dr. Christiano expects that experts in physics, chemistry, and computer engineering would generally think it extremely unlikely that the brain is erasing less than one bit per computationally useful FLOP it performs. If the brain were doing this, Dr. Christiano believes that this would mean that the brain is qualitatively much more impressive than any other biological machinery we are aware of.\n> \n> \n\n\n[635.](https://www.openphilanthropy.org/brain-computation-report#footnoteref635_nus8h3c)The FLOP/s costs of the models in [Beniaguev et al. (2020)](https://www.biorxiv.org/content/10.1101/613141v2.full.pdf), [Maheswaranathan et al. (2019)](https://www.biorxiv.org/content/10.1101/340943v5.full.pdf), and [Batty et al. (2017)](https://openreview.net/pdf?id=HkEI22jeg) are the most salient exception.\n\n\n[636.](https://www.openphilanthropy.org/brain-computation-report#footnoteref636_eui5yec)I don’t give much weight to the energy costs of current digital multiplier implementations, given that analog implementations may be much more efficient (see [Sarpeshkar (1998)](https://ieeexplore.ieee.org/document/6790538) (p. 1605)).\n\n\n[637.](https://www.openphilanthropy.org/brain-computation-report#footnoteref637_a35u76r)A number of my confusions center on theoretical issues related to identifying the set of the computations that a physical system can be said to implement (see [Piccinini (2017)](https://plato.stanford.edu/entries/computation-physicalsystems/) for an introduction). 
For example, a simulation of a physical system at any level of detail is interpretable as a set of (possibly stochastic) transitions between logical states, and hence as a computation implemented by this system. In this sense, any physical system, dissipating a given amount of energy (a box of gas, a hurricane, etc.), implements an extremely complex computation that describes exactly what it in fact does or would do given different inputs. What’s more, there are broader questions about whether a given physical system can be understood as implementing *any* computation, given a sufficiently unnatural carving of logical states (see e.g. [Aaronson (2011)](https://arxiv.org/abs/1108.1791) (p. 23); [Drescher (2006)](https://www.gwern.net/docs/statistics/decision/2006-drescher-goodandreal.pdf), Chapter 2, and [Hemmo and Shenker (2019)](https://philpapers.org/rec/HEMTPO-7)). I feel very unclear about how both of these theoretical issues interact with constraints imposed by Landauer’s principle, and with estimates of the FLOP/s required to re-implement the computations in question. Indeed, note that if it were possible to move easily from bit-erasures to FLOP/s, then naively applied, the Landauer argument discussed here seems to suggest that you can cap the FLOP/s required to *simulate* a physical system via the energy that system dissipates – a conclusion which fits poorly with the extreme computational costs of simulating low-level physical systems like interacting molecules or proteins in lots of detail. Tom Davidson also suggested that this understanding of Landauer’s principle implies that a system that gives the same output regardless of the input would have the highest Landauer energy costs, which seems somewhat strange to me (especially if we’re allowed to interpret any set of microstates as an output state). Prof. David Wolpert suggested a number of other possible complexities in our conversation (see [Open Philanthropy’s non-verbatim notes from a conversation with Prof. David Wolpert](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20David%20Wolpert,%20January%2023,%202020.pdf) (p. 3)) that I haven’t engaged with, and I expect that further investigation would uncover more.\n\n\n[638.](https://www.openphilanthropy.org/brain-computation-report#footnoteref638_deto8ig)In the context of human-engineered hardware, I’ll use the term to cover both on-chip memory bandwidth and bandwidth between chips, since brain-equivalent systems can use multiple chips; in some contexts, like a [TPU](https://storage.googleapis.com/nexttpu/index.html), we might also include very short-distance communication taking place between ALUs. From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “Across many different models of computation (e.g. Turing Machines, RAM machines, circuits, etc.), computational resources tend to fall into a number of broad categories, including:\n\n\nMemory (e.g., data the computer can store),\n\n\nCommunication (roughly, the amount of information the computer can send from one part to another),\n\n\nCompute/number of operations.\n\n\nThe exact meaning of these concepts varies across models, but they are often useful to work with” (p. 1).\n\n\n[639.](https://www.openphilanthropy.org/brain-computation-report#footnoteref639_0rkmd4k)[Howarth et al. 
(2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3390818/pdf/jcbfm201235a.pdf), Figure 1, estimate that maintaining resting potentials uses 15% of the total energy in the cortex (20% of signaling energy in the cortex), and action potentials use 16% (21% of signaling energy). Synaptic processes account for an additional 44% (see p. 1224). [Schlaepfer et al. (2006)](https://watermark.silverchair.com/9-2-147.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAAlowggJWBgkqhkiG9w0BBwagggJHMIICQwIBADCCAjwGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQM7tVtAcbSAZN_fuqeAgEQgIICDULOOL-A8EtpbxSlz1Tm1g2Mu-ApcOl4SoPlDq8TsxsVkrI942Z4QxqxBrKjCxki2BfoTmBvideuPuNVvSq74jY7R_QWunUUhCESx4ez_DVbIu_pWX3a2XFWimQuY79o9xoA45xFEjtOIKHv04jloN_gI7-80ACxE7LfMM2wQHRwCjT3vfN5fjED6qzr-a1fk9tim3iXXR-88IT_vlyOUURGKXzH2Vj1HoOfQJAfGBvLb76Ay-Tmt7XHveLmx1Vc2TU0em4TvvQ61KOxM_aYT4Egb5K_TRrjkSJ2W0gzJiKZIV2MU80kvtfbVSoQgPXceOYBNC15QcNsXfRMx4TTNNIVUf9UHo5XPUJCMionysPNTRmK83zUUm0isdX1-YasUR501FHuYG6ibf-_FdeGpO_cBp2P4xzqlxwmM-3WmNy8e-6SGHcijS7Y5LNVg96wFs6wX3UxbsCqwUN2i8qzmEcR8x23POg6N2ZtH1dWdmZ03YChoPkjqCUm_n7MGwFbW2p2UAGnNTPckJAkbq2oNlZuTs5u0WWbUcnNkFsCayK_KH3LfpEciOlgkJv6g6pCxvswiJLyvebY8cCpKXsTox78qUkIbZf0CP3hIv0Isrr0Rx9Sgllf6oNKd9yLXbOSavz5aLwYSgpAZahIKk7-039YE4ZxpkEWFeEIdzL8oHdLbQCLr3yAMfCLzMSiTw), Table 1, suggests that [white matter](https://en.wikipedia.org/wiki/White_matter), which largely consists of myelinated axons, is about 30% of brain volume (p. 150). See [Diamond (1996)](https://www.nature.com/articles/382756a0.pdf) for discussion of evolutionary pressures on metabolism and brain volume (p. 757).\n\n\n[640.](https://www.openphilanthropy.org/brain-computation-report#footnoteref640_lumpguk)See [Dayan and Abbott (2001)](https://www.amazon.com/Theoretical-Neuroscience-Computational-Mathematical-Modeling/dp/0262541858), Chapter 4 (p. 123-150); [Zador (1998)](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.125.8765&rep=rep1&type=pdf); [Tsubo et al. (2012)](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002461), [Fuhrmann et al. (2001)](https://lobster.ls.huji.ac.il//idan/files/Fuhrmann_etal_2002.pdf), [Mainen and Sejnowski (1995)](http://www.math.pitt.edu/~bard/classes/compneuro/mainensej.pdf), [van Steveninck et al. (1997)](https://pubmed.ncbi.nlm.nih.gov/9065407/).\n\n\n[641.](https://www.openphilanthropy.org/brain-computation-report#footnoteref641_51m5rgp)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “One can also distinguish between the bandwidth available at different distances. Axons vary in length, shorter-distance communication in neurons occurs via dendrites, and at sufficiently short distances, the distinction between communication and computation becomes blurry. For example, a multiply is in some sense mostly communication, and one can think of different processes taking place within neurons as communication as well. For longer-distance communication, though, axons seems like the brain’s primary mechanism” (p. 2).\n\n\n[642.](https://www.openphilanthropy.org/brain-computation-report#footnoteref642_oc8rgq8)See discussion in [Section 2.3](https://www.openphilanthropy.org/brain-computation-report#OtherSignalingMechanisms). From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. 
Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “There are other communication mechanisms in the brain (e.g., glia, neuromodulation, ephaptic effects), but Dr. Christiano expects that these will be lower-bandwidth than axon communication” (p. 2). This point is fairly similar to ones made in Section 2.3, but the idea here is that speed limits the information these mechanisms can send over different distances, rather than the amount of processing of information they can perform.\n\n\n[643.](https://www.openphilanthropy.org/brain-computation-report#footnoteref643_d1ddinp)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “the brain invests a sizeable portion of its energy and volume into communication via axons, which would be a strange investment if it had some other, superior communication mechanism available” (p. 2).\n\n\n[644.](https://www.openphilanthropy.org/brain-computation-report#footnoteref644_mh2z3de)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “You can roughly estimate the bandwidth of axon communication by dividing the firing rate by the temporal resolution of spiking. Thus, for example, if the temporal precision is 1 ms, and neurons are spiking at roughly 1 Hz, then each spike would communicate ~10 bits of information (e.g., log2(1000)). If you increase the temporal precision to every microsecond, that’s only a factor of two difference (e.g., log2(1,000,000) = ~20 bits)… Roughly 1e8 axons cross the corpus callosum, and these account for a significant fraction of the length of all axons (AI Impacts has some estimates in this regard). Based on estimates Dr. Christiano has seen for the total length of all axons and dendrites, and the estimate that 1 spike/second = 10 bits/second across each, he thinks the following bounds are likely: 1e9 bytes/s of long-distance communication (across the brain), 1e11 bytes/s of short-distance communication (where each neuron could access about 10 million nearby neurons), and larger amounts of very-short distance communication.” (p. 2-3). See also [Zhou et al. (2013)](https://www.pnas.org/content/pnas/110/29/E2714.full.pdf): “The largest commissural tract in the human brain is the corpus callosum (CC), with more than 200 million axons connecting the two cerebral hemispheres” (p. E2714).\n\n\n[645.](https://www.openphilanthropy.org/brain-computation-report#footnoteref645_9zp4o4y)[AI Impacts](https://aiimpacts.org/brain-performance-in-teps/): “[Traversed edges per second](http://en.wikipedia.org/wiki/Traversed_edges_per_second) (TEPS) is a metric that was recently developed to measure communication costs, which were seen as neglected in high performance computing.[8](https://aiimpacts.org/brain-performance-in-teps/#easy-endnote-bottom-8-510) The TEPS benchmark measures the time required to perform a [breadth-first search](http://en.wikipedia.org/wiki/Breadth-first_search) on a large random graph, requiring propagating information across every edge of the graph (either by accessing memory locations associated with different nodes, or communicating between different processors associated with different nodes). 
You can read about the benchmark in more detail at the [Graph 500 site](https://graph500.org/).”\n\n\n[646.](https://www.openphilanthropy.org/brain-computation-report#footnoteref646_80dlca5)Their estimate makes a number of assumptions, including that (1) most relevant communication is between neurons (as opposed to e.g. internal to neurons); (2) traversing an edge is relevantly similar to spiking; (3) the distribution of edges traversed doesn’t make a material difference; and (4) the graph characteristics are relevantly similar. I can imagine objections to (1) that focus on the possibility that important communication is taking place within dendrites (though tree structure arguments might limit the difference this makes); and objections, more generally, that focus on alternative conceptions of how many relevant “vertices” there are in the brain.\n\n\n[647.](https://www.openphilanthropy.org/brain-computation-report#footnoteref647_ir2mh5p)Here I describe a specific version of a general type of argument suggested by Dr. Paul Christiano. From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “Dr. Christiano puts some weight on the following type of a priori argument: if you have two computers that are comparable on one dimension (e.g., communication), but you can’t measure how they compare along any other dimensions, then a priori your median guess should be that they are comparable on these other dimensions as well (e.g., it would be strange to have a strong view about which is better)” (p. 2). The argument described above also incorporates the constraint that the dimension in question be important to task-performance, and appeals to the skill of the engineers in question.\n\n\n[648.](https://www.openphilanthropy.org/brain-computation-report#footnoteref648_pi74pxr)The argument appears in a different light if all you know is that e.g. both computers are green (though even there, it would seem strange to think that e.g. the one on the left is probably better than the one on the right, if you have no information to distinguish them). My thanks to Paul Christiano for discussion.\n\n\n[649.](https://www.openphilanthropy.org/brain-computation-report#footnoteref649_i1xhpcb)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “A [V100 GPU](https://www.nvidia.com/en-us/data-center/v100/) has about 1e12 bytes/s of memory bandwidth on the chip (~10x the brain’s 1e11 bytes of short-distance communication, estimated above), and 3e11 bytes/s of off-chip bandwidth (~300x the brain’s 1e9 bytes/s of long-distance communication, estimated above). Dr. Christiano thinks that these memory access numbers are comparable, based on matching up the memory of a V100 (respectively, cluster of V100s) to the amount of information stored in synapses accessible by the “short-distance” (respectively, “long-distance”) connections described above” (p. 4).\n\n\n[650.](https://www.openphilanthropy.org/brain-computation-report#footnoteref650_66z363e)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf) (p. 
2-3).\n\n\n[651.](https://www.openphilanthropy.org/brain-computation-report#footnoteref651_839fkzs)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf) (p. 2-3).\n\n\n[652.](https://www.openphilanthropy.org/brain-computation-report#footnoteref652_rt99452)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “If we knew nothing else about the brain, then, this might suggest that the brain’s computational capacity will be less than, or at least comparable to, a V100’s computational capacity (~1e14 FLOP/s) as well. And even if our compute estimates for the brain are higher, communication estimates are plausibly more robust, and they provide a different indication of how powerful the brain is relative to our computers” (p. 4).\n\n\n[653.](https://www.openphilanthropy.org/brain-computation-report#footnoteref653_jrisi61)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Kate Storrs](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Dr.%20Kate%20Storrs,%20June%2011,%202020.pdf): “Dr. Storrs’ sense is that, in the parts of the field she engages with most closely (e.g., systems level modeling, visual/cognitive/perceptual modeling, human behavior), and maybe more broadly, a large majority of people treat synaptic weights as the core learned parameters in the brain. That said, she is not a neurophysiologist, and so isn’t the right person to ask about what sort of biophysical complexities could imply larger numbers of parameters. She is peripherally aware of papers suggesting that glia help store knowledge, and there are additional ideas as well. The truth probably involves mechanisms other than synaptic weights, but she believes that the consensus is that such weights hold most of the knowledge” (p. 2). Though see [Trettenbrein (2016)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5112247/) and [Langille and Brown (2018)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6212519/) for some complications. And see [here](https://www.brainpreservation.org/quotes-on-synaptic-encoding-of-memory/) for a long list of quotes attesting to the role of synapses in memory.\n\n\n[654.](https://www.openphilanthropy.org/brain-computation-report#footnoteref654_y82s4pl)See [Section 2.1.1](https://www.openphilanthropy.org/brain-computation-report#SynapticTransmission).\n\n\n[655.](https://www.openphilanthropy.org/brain-computation-report#footnoteref655_yipubo2)[Bartol et al. (2015)](https://elifesciences.org/articles/10778) suggest a minimum of “4.7 bits of information at each synapse” (they don’t estimate a maximum).\n\n\n[656.](https://www.openphilanthropy.org/brain-computation-report#footnoteref656_lej5yi3)See [Section 4.1.2](https://www.openphilanthropy.org/brain-computation-report#OverallBitErasures).\n\n\n[657.](https://www.openphilanthropy.org/brain-computation-report#footnoteref657_qct2u3j)Here I’m treating a synapse weight as ~1 byte.\n\n\n[658.](https://www.openphilanthropy.org/brain-computation-report#footnoteref658_4y0qw2a)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. 
Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “In designing brains, evolution had to make trade-offs in allocating resources (e.g., energy consumption, space) to additional communication mechanisms, vs. additional mechanisms used for computation. Human engineers designing chips also have to make trade-offs in budgeting resources (energy, chip real-estate) to computation vs. communication. Equipped with an estimate of the communication profile of the brain, then, we might be able to use our knowledge of how to balance communication and computation in human computers to estimate what it would take to match the compute power of the brain, or to match its overall performance” (p. 2).\n\n\n[659.](https://www.openphilanthropy.org/brain-computation-report#footnoteref659_ohmcp3y)See [here](https://aiimpacts.org/cost-of-teps/#Relationship_between_TEPS_and_FLOPS): “The [eight] supercomputers measured here consistently achieve around 1-2 GTEPS per scaled TFLOPS (see Figure 3). The median ratio is 1.9 GTEPS/TFLOPS, the mean is 1.7 GTEPS/TFLOP, and the variance 0.14 GTEPS/TFLOP.” However, AI Impacts notes that they only looked at data about the relationship between TEPS and FLOP/s in a small number of computers, and they have not investigated whether it makes sense to extrapolate from this data to the brain.\n\n\n[660.](https://www.openphilanthropy.org/brain-computation-report#footnoteref660_rhp4lzx)See [here](https://aiimpacts.org/brain-performance-in-flops/#:~:text=We%20also%20estimate%20that%20the,%E2%80%93%2033.7%20*%201016%20FLOPS.): “Among a small number of computers we compared[4](https://aiimpacts.org/brain-performance-in-flops/#easy-endnote-bottom-4-596), FLOPS and TEPS seem to vary proportionally, at a rate of around 1.7 GTEPS/TFLOP. We also [estimate](http://aiimpacts.org/brain-performance-in-teps/) that the human brain performs around 0.18 – 6.4 × 10^14 TEPS. Thus if the FLOPS:TEPS ratio in brains is similar to that in computers, a brain would perform around 0.9 – 33.7 × 10^16 FLOPS.[5](https://aiimpacts.org/brain-performance-in-flops/#easy-endnote-bottom-5-596) We have not investigated how similar this ratio is likely to be.” 1e12/1.7e9 = ~600.\n\n\n[661.](https://www.openphilanthropy.org/brain-computation-report#footnoteref661_08pqa0i)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “Dr. Christiano’s approach requires some sort of production function relating the returns from investment in communication to investment in compute. Dr. Christiano’s starting point would be something like logarithmic returns (though there aren’t really two buckets, so a more accurate model would be much messier), and he thinks that when you have two complimentary quantities (say, X and Y), a 50/50 resource split between them is reasonable across a wide range of production functions. After all, a 50% allocation to X will likely give you at least 50% of the maximal value that X can provide, and halving your allocation to X will only allow you to increase your allocation to Y by 50%” (p. 3).\n\n\n[662.](https://www.openphilanthropy.org/brain-computation-report#footnoteref662_u3roqr6)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. 
Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “Such a production function would also allow you to estimate what it would take to match the overall performance of the brain, even without matching its compute capacity. Thus, for example, it’s theoretically possible that biological systems have access to large amounts of very efficient computation. If we assume that the value of additional computation diminishes if communication is held fixed, though, then even if the brain has substantially more computation than human computers can mobilize, we might be able to match its overall performance regardless, by exceeding its communication capacity (and hence increasing the value of our marginal compute to overall performance)” (p. 3).\n\n\n[663.](https://www.openphilanthropy.org/brain-computation-report#footnoteref663_ci7qpit)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “One complication here is that the communication to computation ratio in human computers has changed over time. For example, traditional CPUs had less computation per unit communication than the current hardware used for AI applications, like GPUs (Dr. Christiano says that this is partly because it is easier to write software if you can operate on anything in memory rather than needing to worry about communication and parallelization). If we applied CPU-like ratios to the brain, we would get very low compute estimates. Current supercomputers, though, spend more comparable amounts of energy on communication (including within chips) and compute” (p. 3).\n\n\n[664.](https://www.openphilanthropy.org/brain-computation-report#footnoteref664_7biniyh)See [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Barak Pearlmutter](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Prof.%20Barak%20Pearlmutter.pdf): “Prof. Hans Moravec attempted to derive estimates of the computational capacity of the brain from examination of the retina. Prof. Pearlmutter thought that Moravec’s estimates for the computational costs of robotic vision were likely accurate, given Moravec’s expertise in vision” (p. 3).\n\n\n[665.](https://www.openphilanthropy.org/brain-computation-report#footnoteref665_tn6tkkb)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “If you include a sufficiently broad range of tasks that the human brain can perform, and require similarly useful task-performance across the full range of inputs to which the brain could be exposed, it is likely that for at least one of the tasks in the relevant profile, for some set of inputs, the brain’s method will (a) be close to maximally algorithmically efficient (e.g., within an order of magnitude or two), and (b) use a substantial portion of the computational resources that the brain has available. For example, if you take a computer from the 60s, and you look at all of the tasks it could perform, Dr. Christiano expects that many of the algorithms it was running (for example: sorting), were close to optimally efficient. As another example, there is a very inefficient algorithm for SAT solving, which takes 2^n time. 
For many inputs, we can improve on this algorithm by a huge amount, but we can’t for every input: indeed, there is a rough consensus amongst computer scientists that the very inefficient algorithm is close to the best one can do. Indeed, Dr. Christiano expects that for most algorithms, there will be some family of instances on which it does reasonably well. And given how large the space of possible tasks the brain performs is (we can imagine a very wide set of evaluation metrics and input regimes), the density of roughly-optimal-on-some-inputs algorithms doesn’t need to be that high for them to appear in the brain” (p. 7).\n\n\n[666.](https://www.openphilanthropy.org/brain-computation-report#footnoteref666_lkj7d2k)See [here](https://www.thinkmate.com/product/nvidia/900-2g500-0010-000) for V100 prices (currently ~$8799); and [here](https://www.nytimes.com/2020/06/22/technology/japanese-supercomputer-fugaku-tops-american-chinese-machines.html) for the $1 billion Fugaku price tag: “The six-year budget for the system and related technology development totaled about $1 billion, compared with the $600 million price tags for the biggest planned U.S. systems.” Fugaku FLOP/s performance is listed [here](https://www.top500.org/lists/top500/2020/06/), at around 4e17-5e17 FLOP/s. Google’s TPU supercomputer, which recently broke records in training ML systems, can also do ~4e17 FLOP/s, though I’m not sure of the costs. See [Kumar (2020)](https://cloud.google.com/blog/products/ai-machine-learning/google-breaks-ai-performance-records-in-mlperf-with-worlds-fastest-training-supercomputer): “In total, this system delivers over 430 PFLOPs of peak performance.” The A100, for ~$200,000, can do 5e15 FLOP/s – see [Mehar (2020)](https://www.inceptivemind.com/nvidia-dgx-a100-world-first-5-petaflops-system/13267/#:~:text=NVIDIA%20DGX%20A100%20packs%20record%205%20petaflops%20of%20AI%20performance.&text=NVIDIA%20has%20unveiled%20the%20third,the%20new%20NVIDIA%20DGX%20A100.). NVIDIA’s newest SuperPOD can deliver ~7×10^17 FLOP/s of AI performance – see [Paikeday (2020)](https://blogs.nvidia.com/blog/2020/05/14/dgx-superpod-a100/).\n\n\n[667.](https://www.openphilanthropy.org/brain-computation-report#footnoteref667_tjgxrde)See my colleague Ajeya Cotra’s investigation, which focuses on these issues.\n\n\n[668.](https://www.openphilanthropy.org/brain-computation-report#footnoteref668_9ly5n8i)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Eve Marder](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Eve%20Marder,%20May_June%202020.pdf): “There are also some circuits in leeches, *C. elegans*, flies, and electric fish that are relatively well-characterized” (p. 4).\n\n\n[669.](https://www.openphilanthropy.org/brain-computation-report#footnoteref669_wupg9k2)This is a criterion suggested by Dr. Paul Christiano. From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “In thinking about conceptual standards to use in generating estimates for the FLOP/s necessary to run a task-functional model of a computational system that exhibits some degree of similarity to that system, one constraint is that when you apply your standard to digital systems that actually perform FLOPs, it ought to yield an answer of one FLOP per FLOP (e.g., your estimate for a V100, which performs ~1e14 FLOP/s, should be 1e14 FLOP/s). 
That is, it shouldn’t yield an estimate of the FLOPs necessary to e.g. model every transistor, or to model lower-level physical processes in transistors leading to e.g. specific patterns of mistaken bit-flips” (p. 7-8).\n\n\n[670.](https://www.openphilanthropy.org/brain-computation-report#footnoteref670_jw4u8cr)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “If you include a sufficiently broad range of tasks that the human brain can perform, and require similarly useful task-performance across the full range of inputs to which the brain could be exposed, it is likely that for at least one of the tasks in the relevant profile, for some set of inputs, the brain’s method will (a) be close to maximally algorithmically efficient (e.g., within an order of magnitude or two), and (b) use a substantial portion of the computational resources that the brain has available. For example, if you take a computer from the 60s, and you look at all of the tasks it could perform, Dr. Christiano expects that many of the algorithms it was running (for example: sorting), were close to optimally efficient. As another example, there is a very inefficient algorithm for SAT solving, which takes 2^n time. For many inputs, we can improve on this algorithm by a huge amount, but we can’t for every input: indeed, there is a rough consensus amongst computer scientists that the very inefficient algorithm is close to the best one can do. Indeed, Dr. Christiano expects that for most algorithms, there will be some family of instances on which it does reasonably well. And given how large the space of possible tasks the brain performs is (we can imagine a very wide set of evaluation metrics and input regimes), the density of roughly-optimal-on-some-inputs algorithms doesn’t need to be that high for them to appear in the brain” (p. 7).\n\n\n[671.](https://www.openphilanthropy.org/brain-computation-report#footnoteref671_b54wdai)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Rosa Cao](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Rosa%20Cao,%20August%207,%202019.pdf): “Prof. Cao does not believe that there is a privileged description of the computations that the brain is performing. We can imagine many different possible computational models of the brain, which will replicate different types of behavior, to within a given error-tolerance, in a given circumstance. In order to determine which biophysical processes are important, and what level of precision and detail you need in a model, you first need to specify the particular type of input-output relationship that you care about, and how the relevant outputs need to be produced. More generally, Prof. Cao thinks that the computational paradigm in neuroscience is conceptually underspecified. That is, the field is insufficiently clear about what it means to talk about the computations that the brain is performing” (p. 1).\n\n\n[672.](https://www.openphilanthropy.org/brain-computation-report#footnoteref672_wdmtpbo)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. 
Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “In the case of the brain, for example, a high-level description might be something like ‘it divides the work between these two hemispheres in the following way.’ Thus, to meet the relevant standard, ‘brain-like’ computational models will only need to replicate that hemispheric division. Beyond that, they can just employ the maximally efficient way of performing the task” (p. 8).\n\n\n[673.](https://www.openphilanthropy.org/brain-computation-report#footnoteref673_49xptcx)See [Marr (1982)](https://www.amazon.com/Vision-Computational-Investigation-Representation-Information/dp/0262514621) (p. 25).\n\n\n[674.](https://www.openphilanthropy.org/brain-computation-report#footnoteref674_rno2ahm)From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Chris Eliasmith](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Chris%20Eliasmith,%20September%2025,%202019.pdf): “There is no privileged model of the brain which can claim to be the model of how the brain performs tasks. You can’t answer someone’s question about how the brain works without knowing exactly what the question is. Nor is there a privileged level of biological detail that a model needs to include in order count as a brain model, as all models are wrong to some extent. You can, though, specify a particular set of functions that a model needs to reproduce, with a particular degree of similarity to human behavior and anatomical and physiological data. Prof. Eliasmith’s work is basically oriented towards building a brain model that satisfies constraints of this type” (p. 4). From [Open Philanthropy’s non-verbatim notes from a conversation with Prof. Rosa Cao](https://www.openphilanthropy.org/files/Conversations/A%20conversation%20with%20Professor%20Rosa%20Cao,%20August%207,%202019.pdf): “Prof. Cao does not believe that there is a privileged description of the computations that the brain is performing. We can imagine many different possible computational models of the brain, which will replicate different types of behavior, to within a given error-tolerance, in a given circumstance. In order to determine which biophysical processes are important, and what level of precision and detail you need in a model, you first need to specify the particular type of input-output relationship that you care about, and how the relevant outputs need to be produced. More generally, Prof. Cao thinks that the computational paradigm in neuroscience is conceptually underspecified. That is, the field is insufficiently clear about what it means to talk about the computations that the brain is performing” (p. 
1).\n\n\n[675.](https://www.openphilanthropy.org/brain-computation-report#footnoteref675_9k50exu)See [Bell (1999)](https://royalsocietypublishing.org/doi/pdf/10.1098/rstb.1999.0540), [Hanson (2011)](https://www.overcomingbias.com/2011/01/signal-processors-decouple.html), and [Lee (2011)](http://timothyblee.com/2011/01/13/emulation-simulation-and-the-human-brain/) for some discussion.\n\n\n[676.](https://www.openphilanthropy.org/brain-computation-report#footnoteref676_iy552go)E.g., we can talk about how many FLOP/s it takes to run an EfficientNet-B2 at 10 Hz, given a description of the model.\n\n\n[677.](https://www.openphilanthropy.org/brain-computation-report#footnoteref677_hi4agpq)See [Piccinini (2017)](https://plato.stanford.edu/entries/computation-physicalsystems/#AccConCom) for discussion of related issues.\n\n\n[678.](https://www.openphilanthropy.org/brain-computation-report#footnoteref678_2sei8sa)For an example of the types of debates in this vein that do not seem to me particularly relevant or productive in this context, see [here](https://www.nextbigfuture.com/2009/11/henry-markram-calls-ibm-cat-scale-brain.html).\n\n\n[679.](https://www.openphilanthropy.org/brain-computation-report#footnoteref679_81n6ah5)From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Paul Christiano](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Paul%20Christiano.pdf): “Attempting to use some standard like “the description of the system you would give if you really understood how the system worked” might well result in over-estimates, since it would plausibly result in descriptions at lower levels, like transistors or NAND gates” (p. 8).\n\n\n[680.](https://www.openphilanthropy.org/brain-computation-report#footnoteref680_f6pnsji)This definition is based on the definition of when one computational method represents another offered by [Knuth (1997)](https://doc.lagout.org/science/0_Computer%20Science/2_Algorithms/The%20Art%20of%20Computer%20Programming%20(vol.%201_%20Fundamental%20Algorithms)%20(3rd%20ed.)%20%5BKnuth%201997-07-17%5D.pdf), p. 467, problem 9. See also [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf): “A strict definition of simulation might be that a system S consists of a state x(t) evolving by a particular dynamics f, influenced by inputs and producing outputs: x(t+1) = f(I,x(t)), O(t)=g(x(t)). Another system T simulates S if it produces the same output (within a tolerance) for the same input time series starting with a given state (within a tolerance): X(t+1)=F(I, X(t)), O(t)=G(X(t)) where |x(t)‐X(t)|< ε1 and X(0)=x(0)+ ε2. The simulation is an emulation if F=f (up to a bijective transformation of X(t)), that is, the internal dynamics is identical and similar outputs are not due to the form of G(X(t)).”\n\n\n[681.](https://www.openphilanthropy.org/brain-computation-report#footnoteref681_kqxrzps)See e.g. [Sandberg and Bostrom (2008)](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf), who note that the brain is not strictly simulable on their definition, due to chaotic dynamics, but that “there exists a significant amount of noise in the brain that does not prevent meaningful brain states from evolving despite the indeterminacy of their dynamics. A “softer” form of emulation may be possible to define that has a model or parameter error smaller than the noise level and is hence practically indistinguishable from a possible evolution of the original system” (p. 
7).\n\n\n[682.](https://www.openphilanthropy.org/brain-computation-report#footnoteref682_770xpd7)E.g., whether a given method of transitioning between states in a way that doesn’t map to the brain is OK or not will depend on whether this is construed as part of the “algorithm” or part of its “implementation.” But implementation itself takes place at many levels of abstraction, which can themselves be described in algorithmic terms.\n\n\n[683.](https://www.openphilanthropy.org/brain-computation-report#footnoteref683_mw8jmkf)See [this post](https://aiimpacts.org/how-ai-timelines-are-estimated/) by AI impacts for a framework somewhat reminiscent of this conception, which plots indifference curves for different combinations of hardware and software sophistication. The post treats the brain as the point that combines “human-level hardware” and “evolution level software engineering.” But we can also imagine defining human-level hardware as the amount of hardware that someone with “evolution level software engineering skill” would need in order to create a computational system that matches human-level task performance. My thanks to Paul Christiano, Katja Grace, and Ajeya Cotra for discussion of this approach.\n\n\n[684.](https://www.openphilanthropy.org/brain-computation-report#footnoteref684_msasop7)See discussion [Schneider and Gersting (2018)](https://www.amazon.com/Invitation-Computer-Science-G-Michael-Schneider/dp/1337561916) (p. 96-100): “To measure time efficiency, we identify the fundamental unit (or units) of work of an algorithm and count how many times the work unit is executed” (p. 96). From [Open Philanthropy’s non-verbatim notes from a conversation with Dr. Jess Riedel](https://www.openphilanthropy.org/files/Conversations/Discussions%20with%20Dr.%20Jess%20Riedel,%20Spring%202020.pdf): “In the context of a computational system, you can think of an ‘operation’ as a small computation that can be treated as atomic, at least with respect to a particular architecture” (p. 5).\n\n\n[685.](https://www.openphilanthropy.org/brain-computation-report#footnoteref685_8zfif31)See e.g. [Thagard (2002)](http://cogsci.uwaterloo.ca/Articles/molecules.html), who chooses to count proteins instead of neurons.\n\n\n[686.](https://www.openphilanthropy.org/brain-computation-report#footnoteref686_6ii2fio)If we construe the type of task-performance at stake in the “no constraints” option above as including any task the brain can perform in the sense at stake here, then the two collapse into each other. 
However, my sense is that when people talk about matching human-level task-performance, they generally have in mind the type of task-performance humans do in fact display, rather than the type of task-performance they *could* display in principle if “programmed” with arbitrary skill.\n\n\n[687.](https://www.openphilanthropy.org/brain-computation-report#footnoteref687_umk89hh)My thanks to Ajeya Cotra for discussion.\n\n\n[688.](https://www.openphilanthropy.org/brain-computation-report#footnoteref688_lsmybmg)Strictly, they would need to correspond to the neurons and synapses in a particular human brain; but as I noted in [Section 1.5](https://www.openphilanthropy.org/brain-computation-report#NeuroscienceBasics), at the level of precision relevant to this report, I’m treating normal adult human brains as equivalent.\n\n\n[689.](https://www.openphilanthropy.org/brain-computation-report#footnoteref689_p4dnrjw)This is meant to exclude the possibility of using some other part of the model to do what is intuitively “all of the work,” but in some hyper-efficient manner.\n\n\n[690.](https://www.openphilanthropy.org/brain-computation-report#footnoteref690_ycmtlwk)In particular, despite the amount of evidence discussed in the report, I don’t think of these probabilities as particularly “robust.” Even in the final stages of this project, they’ve continued to vary somewhat as I’ve been exposed to new evidence, and as different considerations have become more or less salient to me (for example, whether 1e15 has fallen above or below my median has varied), and I expect that they will continue to do so, especially in response to more data about expert opinion. The numbers offered here are just a coarse-grained snap-shot. I’ve also erred on the side of round numbers to avoid suggesting too much precision.\n\n\n[691.](https://www.openphilanthropy.org/brain-computation-report#footnoteref691_q5zcgaa)The estimate can be seen as keyed to a concept that combines “just pick a degree of brain-like-ness” with “reasonably brain-like.” It has the disadvantages of both – namely, arbitrariness and vagueness.\n\n\n[692.](https://www.openphilanthropy.org/brain-computation-report#footnoteref692_fihq3uf)See [Izhikevich (2004)](https://www.izhikevich.org/publications/whichmod.pdf) (p. 1066); and the chart in [Section 2.1.2.3](https://www.openphilanthropy.org/brain-computation-report#CrabsLocustsAndOtherConsiderations).\n\n\n[693.](https://www.openphilanthropy.org/brain-computation-report#footnoteref693_48mj2qy)See endnotes in [Section 2.1.2.4](https://www.openphilanthropy.org/brain-computation-report#ExpertOpinionAndPractice) for examples.\n\n\n[694.](https://www.openphilanthropy.org/brain-computation-report#footnoteref694_i1k83xq)See endnotes in [Section 2.1.2.4.](https://www.openphilanthropy.org/brain-computation-report#ExpertOpinionAndPractice)", "url": "https://www.openphilanthropy.org/brain-computation-report", "title": "How Much Computational Power Does It Take to Match the Human Brain?", "source": "html_articles", "source_type": "blogPost", "source_filetype": "pdf", "date_published": "2020-09-10T22:00:00Z", "authors": ["Joseph Carlsmith"], "summary": [], "id": "91554bc682d0f754238efadb8e75f3f5"} {"text": "Published: June 25, 2021 | by [Tom Davidson](/about/team/tom-davidson) \nThis report evaluates the likelihood of ‘explosive growth’, meaning > 30% annual growth of gross world product (GWP), occurring by 2100. 
Although frontier GDP/capita growth has been constant for 150 years, over the last 10,000 years GWP growth has accelerated significantly. Endogenous growth theory, together with the empirical fact of the demographic transition, can explain both trends. Labor, capital and technology were accumulable over the last 10,000 years, meaning that their stocks all increased as a result of rising output. Increasing returns to these accumulable factors accelerated GWP growth. But in the late 19th century, the demographic transition broke the causal link from output to the quantity of labor. There were not increasing returns to capital and technology alone and so growth did not accelerate; instead frontier economies settled into an equilibrium growth path defined by a balance between a growing number of researchers and diminishing returns to research.\n\n\nThis theory implies that explosive growth could occur by 2100. If automation proceeded sufficiently rapidly (e.g. due to progress in AI) there *would* be increasing returns to capital and technology alone. I assess this theory and consider counter-arguments stemming from alternative theories; expert opinion; the fact that 30% annual growth is wholly unprecedented; evidence of diminishing returns to R&D; the possibility that a few non-automated tasks bottleneck growth; and others. Ultimately, I find that explosive growth by 2100 is plausible but far from certain.\n\n\n1. How to read this report\n--------------------------\n\n\nRead the [summary](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#3-summary) (~1 page). Then read the [main report](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#4-main-report) (~30 pages).\n\n\nThe rest of the report contains extended appendices to the main report. Each appendix expands upon specific parts of the main report. Read an appendix if you’re interested in exploring its contents in greater depth.\n\n\nI describe the contents of each appendix [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#Structure). The best appendix to read is probably the first, [Objections to explosive growth](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixA). Readers may also be interested to read [reviews of the report](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixH).\n\n\nThough the report is intended to be accessible to non-economists, readers without an economics background may prefer to read the [accompanying blog post](https://www.openphilanthropy.org/research/report-on-whether-ai-could-drive-explosive-economic-growth/).\n\n\n\n\n---\n\n\n2. Why we are interested in explosive growth\n--------------------------------------------\n\n\nOpen Philanthropy wants to understand how far away we are from developing [transformative artificial intelligence](https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/)(TAI). 
Difficult as it is, a working timeline for TAI helps us prioritize between our cause areas, including [potential risks from advanced AI](https://www.openphilanthropy.org/research/potential-risks-from-advanced-artificial-intelligence/).\n\n\nIn her [draft report](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines), my colleague [Ajeya Cotra](https://www.openphilanthropy.org/about/team/ajeya-cotra/) uses TAI to mean ‘AI which drives Gross World Product (GWP) to grow at ~20-30% per year’ – roughly ten times faster than it is growing currently. She estimates a high probability of TAI by 2100 (~80%), and a substantial probability of TAI by 2050 (~50%). These probabilities are broadly consistent with the results from expert surveys,[1](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote1_ag65872 \" Grace et al. (2017) ‘When Will AI Exceed Human Performance? Evidence from AI Experts.’\") and with plausible priors for when TAI might be developed.[2](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote2_anuh8e5 \"Davidson (2020a). \")\n\n\nNonetheless, intuitively speaking these are high probabilities to assign to an ‘extraordinary claim’. Are there strong reasons to dismiss these estimates as too high? One possibility is economic forecasting. If economic extrapolations gave us strong reasons to think GWP will grow at ~3% a year until 2100, this would rule out explosive growth and so rule out TAI being developed this century.\n\n\nI find that economic considerations don’t provide a good reason to dismiss the possibility of TAI being developed in this century. In fact, there is a plausible economic perspective from which sufficiently advanced AI systems are *expected* to cause explosive growth.\n\n\n\n\n---\n\n\n3. Summary\n----------\n\n\n*If you’re not familiar with growth economics, I recommend you start by reading [this glossary](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI) or my [blog post about the report](https://www.openphilanthropy.org/research/report-on-whether-ai-could-drive-explosive-economic-growth/).*\n\n\nSince 1900, frontier GDP/capita has grown at about 2% annually.[3](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote3_gwy1rm6 \" The ‘frontier’ refers to the country, or group of countries, with the highest levels of technology and GDP/capita. \") There is no sign that growth is speeding up; if anything, recent data suggests that growth is slowing down. So why think that > 30% annual growth of GWP (‘explosive growth’) is plausible this century?\n\n\nI identify three arguments to think that sufficiently advanced AI could drive explosive growth:\n\n\n1. **Idea-based models of very long-run growth imply AI could drive explosive growth.**\n\t* **Growth rates have significantly increased** (super-exponential growth) over the past 10,000 years, and even over the past 300 years.
This is true both for GWP growth, and frontier GDP/capita growth.\n\t* Idea-based models explain increasing growth with an *ideas feedback loop*: **more ideas → more output → more people → more ideas…** Idea-based models seem to have a good fit to the long-run GWP data, and offer a plausible explanation for increasing growth.\n\t* After the [demographic transition](https://en.wikipedia.org/wiki/Demographic_transition) in ~1880, **more output** did *not* lead to **more people**; instead people had fewer children as output increased. This broke the ideas feedback loop, and so idea-based theories expect growth to stop increasing shortly after the time. Indeed, this is what happened. Since ~1900 growth has not increased but has been roughly constant.\n\t* Suppose we develop AI systems that can substitute very effectively for human labor in producing output and in R&D. The following ideas feedback loop could occur: **more ideas → more output → more AI systems → more ideas…** Before 1880, the ideas feedback loop led to super-exponential growth. So our default expectation should be that this new ideas feedback loop will again lead to super-exponential growth.\n2. **A wide range of growth models predict explosive growth if capital can substitute for labor.** Here I draw on models designed to study the recent period of exponential growth. If you alter these models with the assumption that capital can substitute very effectively for labor, e.g. due to the development of advanced AI systems, they typically predict explosive growth. The mechanism is similar to that discussed above. Capital accumulation produces a powerful feedback loop that drives faster growth: **more capital → more output → more capital …**. These first two arguments both reflect an insight of endogenous growth theory: increasing returns to accumulable inputs can drive accelerating growth.\n3. **An ignorance perspective assigns some probability to explosive growth.** We may not trust highly-specific models that attempt to explain why growth has increased over the long-term, or why it has been roughly constant since 1900. But we do know that the pace of growth has increased significantly over the course of history. Absent deeper understanding of the mechanics driving growth, it would be strange to rule out growth increasing again. 120 years of steady growth is not enough evidence to rule out a future increase.\n\n\nI discuss a number of objections to explosive growth:\n\n\n* 30% growth is very far out of the observed range.\n* Models predicting explosive growth have implausible implications – like output going to infinity in finite time.\n* There’s no evidence of explosive growth in any subsector of the economy.\n* Limits to automation are likely to prevent explosive growth.\n* Won’t diminishing marginal returns to R&D prevent explosive growth?\n* And many others.\n\n\nAlthough some of these objections are partially convincing, I ultimately conclude that explosive growth driven by advanced AI is a plausible scenario.\n\n\nIn addition, the report covers themes relating to the possibility of *stagnating* growth; I find that it is a highly plausible scenario. Exponential growth in the number of researchers has been accompanied by merely constant GDP/capita growth over the last 80 years. 
This trend is well explained by semi-endogenous growth models in which ideas are getting harder to find.[4](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote4_hsb5rtg \" More precisely, models in which each successive 1% increase in the level of technology requires more research effort than the last. \") As population growth slows over the century, number of researchers will likely grow more slowly; semi-endogenous growth models predict that GDP/capita growth will slow as a result.\n\n\nThus I conclude that the possibilities for long-run growth are wide open. Both explosive growth and stagnation are plausible.\n\n\n**Acknowledgements:** My thanks to Holden Karnofsky for prompting this investigation; to Ajeya Cotra for extensive guidance and support throughout; to Ben Jones, Dietrich Vollrath, Paul Gaggl, and Chad Jones for helpful comments on the report; to Anton Korinek, Jakub Growiec, Phil Trammel, Ben Garfinkel, David Roodman, and Carl Shulman for reviewing drafts of the report in depth; to Harry Mallinson for reviewing code I wrote for this report and helpful discussion; to Joseph Carlsmith, Nick Beckstead, Alexander Berger, Peter Favaloro, Jacob Trefethen, Zachary Robinson, Luke Muehlhauser, and Luisa Rodriguez for valuable comments and suggestions; and to Eli Nathan for extensive help with citations and the website.\n\n\n\n\n---\n\n\n4. Main report\n--------------\n\n\n*If you’re not familiar with growth economics, I recommend you start by reading [this glossary](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI) or my [blog post about the report](https://www.openphilanthropy.org/research/report-on-whether-ai-could-drive-explosive-economic-growth/).*\n\n\nHow might we assess the plausibility of explosive growth (>30% annual GWP) occurring by 2100? First, I consider the raw empirical data; then I address a number of additional considerations.\n\n\n* What do experts think ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#ExpertOpinion))?\n* How does economic growth theory affect the case of explosive growth ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#TheoreticalModels) and [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AdvancedAI))?\n* How strong are the objections to explosive growth ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#ObjectionsToExplosiveGrowth))?\n* Conclusion ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#Conclusion)).\n\n\n#### 4.1 Empirical data without theoretical interpretation\n\n\nWhen looking at the raw data, two conflicting trends jump out.\n\n\nThe first trend is the **constancy of frontier GDP/capita growth over the last 150 years**.[5](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote5_748imw3 \" The ‘frontier’ refers to the country, or group of countries, with the highest levels of technology and GDP/capita. Why focus on frontier GDP/capita? Many economists separate GWP growth into three components: growth of frontier GDP/capita, catch-up growth and population growth. They forecast that frontier GDP/capita growth will be the main contributor to GWP growth out to 2100. 
This is because population growth is projected to slow down and perhaps stop altogether by 2100 (e.g. by the UN) and the scope for catch-up growth is limited.\") The US is typically used to represent this frontier. The following graph from [Our World in Data](https://ourworldindata.org/economic-growth) shows US GDP/capita since 1870.\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageL-1.png)\n\n\nThe y-axis is logarithmic, so the straight line indicates that growth has happened at a constant exponential rate – ~2% per year on average.[6](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote6_zxpo86a \" The trend of constant exponential growth is fairly striking for the US, with the only real exception being the Great Depression of the 1930s. However, the trend is not as striking for other regions near the frontier. For example, in England (here) and in Western Europe as a whole (here), growth is noticeably higher in the second half of the 20th century than in the first half.\")Extrapolating the trend, frontier GDP/capita will grow at ~2% per year until 2100. GWP growth will be slightly larger, also including a small boost from population growth and catch-up growth. Explosive growth would be a *very large* break from this trend.\n\n\nI refer to forecasts along these lines as the *standard story*. Note, I intend the *standard story* to encompass a wide range of views, including the view that growth will slow down significantly by 2100 and the view that it will rise to (e.g.) 4% per year.\n\n\nThe second trend is the **super-exponential growth of GWP over the last 10,000 years.[7](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote7_30117rz \" Why not focus on GWP per capita? Our focus on GWP, rather than GWP per capita, is natural because we are forecasting GWP, not GWP/capita. In addition, I find that the data series of GWP provides the strongest argument for explosive growth. Although GWP per capita displays clear super-exponential growth (here), the trend is a worse fit for the endogenous growth models discussed below.\")** (Super-exponential means the growth rate increases over time.) Another graph from [Our World in Data](https://ourworldindata.org/economic-growth) shows GWP over the last 2,000 years:\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageS.png)\n\n\nAgain, the y-axis is logarithmic, so the increasing steepness of the slope indicates that the growth rate has increased.\n\n\nIt’s not just GWP – there’s a similar super-exponential trend in long-run GDP/capita in many developed countries – see the graphs of US, English, and French GDP/capita in section 14.3.[8](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote8_n9n1d07 \" Romer (1986) discusses the super-exponential growth in GDP/capita for a number of developed countries. \") (Later I discuss whether we can trust these pre-modern data points.)\n\n\nIt turns out that a simple equation called a ‘power law’ is a good fit to GWP data going all the way back to 10,000 BCE. The following graph (from my colleague [David Roodman](https://www.openphilanthropy.org/about/team/david-roodman/)) shows the fit of a power law (and of exponential growth) to the data. The axes of the graph are chosen so that the power law appears as a straight line.[9](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote9_sstcpum \" The y-axis is logarithmic. 
On the x-axis, years are spaced according to the formula -log(2050 - year). So the following data points are equally spaced: 2000, 1950, 1850, 1650, and 1250. (For each successive data point, 2050 - year doubles and log(2050 - year) increases by a fixed amount.) The power-law implies GWP will go to infinity in 2047; 2050, rather than 2047, is used for convenience.\")\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageI-1.png)\n\n\nIf you extrapolate this power law trend into the future, it implies that the growth rate will continue to increase into the future and that GWP will approach infinity by 2047![10](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote10_slnnzmw \" See David Roodman’s blog post for a longer and more accessible explanation of these ideas.\")\n\n\nMany other simple curves fit to this data also predict explosive (>30%) growth will occur in the next few decades. Why is this? The core reason is that the data shows the growth rate increasing more and more quickly over time. It took thousands of years for growth to increase from 0.03% to 0.3%, but only a few hundred years for it to increase from 0.3% to 3%.[11](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote11_sm96ci8 \" The GWP data used in Roodman’s report shows that GWP growth first exceeded 0.03% in 5000 BCE, 0.3% in 1400, and 3% shortly after 1900. \") If you naively extrapolate this trend, you predict that growth will increase again from 3% to 30% within a few decades.\n\n\nWe can see this pattern more clearly by looking at a graph of how GWP *growth* has changed over time.[12](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote12_f4g2euz \" We again choose the axes so that a power law is a straight line. The y-axis is logarithmic. On the x-axis, years are spaced according to the formula log(2050 - year). A straight line fit indicates that growth increased by the same proportion (e.g. doubling) during each of the following periods: 1250 → 1650, 1650 → 1850, 1850 → 1950, 1950 → 2000. \")\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageR-1.png)\n\n\nThe graph shows that the time needed for the growth rate to double has fallen over time. (Later I discuss whether this data can be trusted.) Naively extrapolating the trend, you’d predict explosive growth within a few decades.\n\n\nI refer to forecasts along these lines, which predict explosive growth by 2100, as the *explosive growth story*.\n\n\nSo we have two conflicting stories. The *standard story* points to the steady ~2% growth in frontier GDP/capita over the last 150 years, and expects growth to follow a similar pattern out to 2100. The *explosive growth story* points to the super-exponential growth in GWP over the last 10,000 years and expects growth to increase further to 30% per year by 2100.\n\n\nWhich story should we trust? Before taking into account further considerations, I think we should put some weight on both. For predictions about the near future I would put more weight on the *standard story* because its data is more recent and higher quality. But for predictions over longer timescales I would place increasing weight on the *explosive growth story* as it draws on a longer data series.\n\n\nBased on the two empirical trends alone, I would neither confidently rule out explosive growth by 2100 nor confidently expect it to happen.
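To see why the raw data alone leaves the question so open, it may help to make the naive extrapolation above concrete. The following is a minimal sketch, not Roodman's fitted model: it takes the power-law form GWP(t) = A·(T − t)^(−β), uses the 2047 singularity year mentioned above for T, and calibrates β to an assumed ~3% GWP growth around 2020 (a rough stand-in chosen purely for illustration).

```python
# Illustrative only: a power law GWP(t) = A * (T - t)**(-beta) implies an
# instantaneous growth rate g(t) = beta / (T - t), which doubles every time
# the remaining time to T halves. T = 2047 is the singularity year mentioned
# above; beta is calibrated to an assumed ~3% GWP growth in 2020.

T = 2047.0
beta = 0.03 * (T - 2020.0)   # so that g(2020) = 3%

def year_when_growth_reaches(target):
    """Year at which the implied growth rate beta / (T - year) hits `target`."""
    return T - beta / target

for target in (0.03, 0.06, 0.12, 0.30):
    print(f"{target:.0%} growth reached around {year_when_growth_reaches(target):.0f}")
```

The doubling pattern falls straight out of the functional form (the implied growth rate β/(T − t) doubles whenever the remaining time to T halves), which is why this kind of curve-fitting, taken at face value, places >30% growth only a couple of decades out. Whether it should be taken at face value is the question the rest of the report examines.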
My attitude would be something like: ‘*Historically, there have been significant increases in growth. Absent a deeper understanding of the mechanisms driving these increases, I shouldn’t rule out growth increasing again in the future.*’ I call this attitude the *ignorance story.[13](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote13_fulntur \" I discuss the ignorance story more in an appendix.\")* The rest of the main report raises considerations that can move us away from this attitude (either towards the *standard story* or towards the *explosive growth story*).\n\n\n#### 4.2 Expert opinion\n\n\nIn the most recent and comprehensive [expert survey](https://www.pnas.org/content/115/21/5409) on growth out to 2100 that I could find, all the experts assigned low probabilities to explosive growth.\n\n\nAll experts thought it 90% likely that the average annual GDP/capita growth out to 2100 would be below 5%.[14](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote14_mq8n729 \" See Figure S7 in the appendix.\") Strictly speaking, the survey data is compatible with experts thinking there is a 9% probability of explosive growth this century, but this seems unlikely in practice. The experts’ quantiles, both individually and in aggregate, were a good fit for normal distributions which would assign ≪ 1% probability to explosive growth.[15](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote15_od7hozl \" See Figure S7 in the appendix.\")\n\n\nExperts’ mean estimate of annual GWP/capita growth was 2.1%, with standard deviation 1.1%.[16](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote16_kwpeuel \"See more detail on the expert survey in this appendix.\") So their views support the *standard story* and are in tension with the *explosive growth story*.\n\n\nThere are three important caveats:\n\n\n1. **Lack of specialization.** My impression is that long-run GWP forecasts are not a major area of specialization, and that the experts surveyed weren’t experts specifically in this activity. Consonant with this, survey participants did not consider themselves to be particularly expert, self-reporting their level of expertise as 6 out of 10 on average.[17](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote17_3s74scs \"From p. 13 of the appendix:A rating of 1 indicates little expertise, a rating of 5 indicates the expertise of someone who has studied the subject but is not a specialist, and a rating of 10 indicates expertise that is among the leading experts. The mean self-reported level of expertise is 5.99 and the median is 6.\")\n2. **Lack of appropriate prompts.** Experts were provided with the data about the growth rates for the period 1900-2000, and primed with a ‘warm up question’ about the recent growth of US GDP/capita. But no information was provided about the longer-run super-exponential trend, or about possible mechanisms for producing explosive growth (like advanced AI). The respondents may have assigned higher probabilities to explosive growth by 2100 if they’d been presented with this information.\n3. 
**No focus on tail outcomes.** Experts were not asked explicitly about explosive growth, and were not given an opportunity to comment on outcomes they thought were < 10% likely to occur.\n\n\n#### 4.3 Theoretical models used to extrapolate GWP out to 2100\n\n\nPerhaps economic growth theory can shed light on whether to extrapolate the exponential trend (*standard story*) or the super-exponential trend (*explosive growth story)*.\n\n\nIn this section I ask:\n\n\n* Do the growth models of the *standard story* give us reason beyond the empirical data to think 21st century growth will be exponential or sub-exponential?\n\t+ They could do this if they point to a mechanism explaining recent exponential growth, and this mechanism will continue to operate in the future.\n* Do the growth models of the *explosive growth story* give us reason beyond the empirical data to think 21st century growth will be super-exponential?\n\t+ They could do this if they point to a mechanism explaining the long-run super-exponential growth, and this mechanism will continue to operate in the future.\n\n\nMy starting point is the models actually used to extrapolate GWP to 2100, although I draw upon economic growth theory more widely in making my final assessment. First, I give a brief explanation of how growth models work.\n\n\n#### 4.3.1 How do growth models work?\n\n\nIn economic growth models, a number of *inputs* are combined to produce *output*. Output is interpreted as GDP (or GWP). Typical inputs include capital (e.g. equipment, factories), labor (human workers), human capital (e.g. skills, work experience), and the current level of technology.[18](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote18_mnpxa3b \" This graph, and the ones that follow, are taken from the blog post of my colleague, David Roodman.\")\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageT.png)\n\n\nSome of these inputs are *endogenous,[19](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote19_hcb7ii4 \" The term ‘endogenous’ can be used to describe individual inputs (as I use it here), or growth theories as a whole.\")* meaning that the model explains how the input changes over time. Capital is typically endogenous; output is invested to sustain or increase the amount of capital.[20](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote20_d9eutyd \" The standard reinvestment equation is dK/dt = s × Y - δ × K. In sophisticated models the fraction s of output that is reinvested may depend on numerous further factors.\") In the following diagram, capital and human capital are endogenous:\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageZ.png)\n\n\nOther inputs may be *exogenous*. This means their values are determined using methods external to the growth model. For example, you might make labor exogenous and choose its future values using UN population projections. The growth model does not (attempt to) explain how the exogenous inputs change over time.\n\n\nWhen a growth model makes more inputs endogenous, it models more of the world. It becomes more ambitious, and so more debatable, but it also gains the potential to have greater explanatory power.\n\n\n#### 4.3.2 Growth models extrapolating the exponential trend to 2100\n\n\nI looked at a number of papers in line with the *standard story* that extrapolate GWP out to 2100. 
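Before turning to those papers, it may help to make the machinery of the previous subsection concrete. The sketch below is a toy model of my own, not any cited paper's: capital is endogenous via the reinvestment rule in footnote 20 (dK/dt = s × Y − δ × K), while labor and technology are exogenous, growing at externally chosen rates. The Cobb-Douglas production function with labor-augmenting technology and all parameter values are illustrative assumptions.

```python
# A toy exogenous growth model in the spirit of Sections 4.3.1-4.3.2 (a sketch,
# not any cited paper's model). Capital K is endogenous via the reinvestment
# rule dK/dt = s*Y - delta*K; labor L and technology A are exogenous, growing
# at externally chosen rates.

alpha, s, delta = 0.3, 0.25, 0.05   # illustrative parameter choices
g_A, g_L = 0.02, 0.01               # assumed exogenous growth of technology and labor

A, L, K = 1.0, 1.0, 1.0
gdp_per_capita = []
for year in range(300):
    Y = K**alpha * (A * L)**(1 - alpha)   # output (labor-augmenting technology)
    gdp_per_capita.append(Y / L)
    K += s * Y - delta * K                # endogenous: more output -> more capital
    A *= 1 + g_A                          # exogenous technology growth
    L *= 1 + g_L                          # exogenous population growth

# Diminishing returns to capital mean long-run GDP/capita growth tracks g_A:
late_growth = gdp_per_capita[-1] / gdp_per_capita[-2] - 1
print(f"long-run GDP/capita growth: {late_growth:.2%} (assumed technology growth: {g_A:.0%})")
```

Because of diminishing returns to capital, GDP/capita growth in this sketch settles at whatever rate is assumed for technology, previewing the point made below about exogenous growth models. The papers themselves make exactly this kind of assumption.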
Most of them treated technology as exogenous, typically assuming that technology will advance at a constant exponential rate.[21](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote21_hfoylil \" The most highly cited papers, and those used in climate change forecasts, tended to be exogenous. For example, the following papers all assume technology grows exponentially: Foure (2012), Johansson (2013), Crespo (2017), Leimbach (2016), and Riahi (2017). The DICE climate change model of Nordhaus and Sztorc (2013) assumes technology follows a logistic curve, growing ever more slowly over time. Kruse-Anderson (2017) fits endogenous models to historical data and projects out to 2100 using endogenous growth models, predicting slowing growth. \")In addition, they all treated labor as exogenous, often using UN projections. These growth models can be represented as follows:\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageY.png)\n\n\nThe blue ‘+’ signs represent that the increases to labor and technology each year are exogenous, determined outside of the model.\n\n\nIn these models, the positive feedback loop between output and capital is not strong enough to produce sustained growth. This is due to *diminishing marginal returns* to capital. This means that each new machine adds less and less value to the economy, holding the other inputs fixed.[22](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote22_j7nrukj \" Imagine adding more and more machines, holding fixed the number of workers and the level of technology. Eventually, all the workers would have their hands full running the machines that already exist, and more machines would increase output by very little. \") Even the feedback loop between output and (capital + human capital) is not strong enough to sustain growth in these models, again due to diminishing returns.\n\n\nInstead, long-run growth is driven by the growth of the exogenous inputs, labor and technology. For this reason, these models are called *exogenous growth models*: the ultimate source of growth lies outside of the model. (This is contrasted with *endogenous growth models*, which try to explain the ultimate source of growth.)\n\n\nIt turns out that long run growth of GDP/capita is determined solely by the growth of technology.[23](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote23_s0ctbmq \" The long-run growth rate of output (GDP) is the sum of the growth rates of the exogenous inputs, labor and technology. The long-run growth rate of GDP/capita is the growth rate of technology, because (in the long-run) growth of labor doesn’t affect GDP/capita. (This is because GDP/capita = (output / labor), and long-run growth of labor increases both the numerator and the denominator by the same amount.)\")These models do not (try to) explain the pattern of technology growth, and so they don’t ultimately explain the pattern of GDP/capita growth.\n\n\n#### 4.3.2.1 Evaluating models extrapolating the exponential trend\n\n\nThe key question of this section is: **Do the growth models of the** **standard story** **give us reason beyond the empirical data to think 21st century growth of frontier GDP/capita will be exponential or sub-exponential?** \n\n\nMy answer is ‘yes’. Although the exogenous models used to extrapolate GWP to 2100 don’t ultimately explain why GDP/capita has grown exponentially, there are endogenous growth models that address this issue. 
Plausible endogenous models explain this pattern and imply that 21st century growth will be sub-exponential. This is consistent with the standard story. Interestingly, I wasn’t convinced by models implying that 21st century growth will be exponential.\n\n\nThe rest of this section explains my reasoning in more detail.\n\n\nEndogenous growth theorists have for many decades sought theories where long-run growth is robustly exponential. However, they have found it strikingly difficult. In endogenous growth models, long-run growth is typically only exponential if some *knife-edge* condition holds. A parameter of the model must be *exactly* equal to some specific value; the smallest disturbance in this parameter leads to completely different long-run behavior, with growth either approaching infinity or falling to 0. Further, these knife-edges are typically *problematic*: there’s no particular reason to expect the parameter to have the precise value needed for exponential growth. This problem is often called the ‘linearity critique’ of endogenous growth models.\n\n\nAppendix B [argues](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixB) that many endogenous growth models contain problematic knife-edges, drawing on discussions in [Jones (1999)](https://web.stanford.edu/~chadj/scaleff10.pdf), [Jones (2005)](https://web.stanford.edu/~chadj/JonesHandbook2005.pdf), [Cesaratto (2008)](https://www.boeckler.de/pdf/v_2008_10_31_cesaratto.pdf), and [Bond-Smith (2019)](https://bcec.edu.au/assets/2019/06/BCEC-Working-Papers-19_02-Steven-Bond-Smith-The-decades-long-dispute-over-scale-effects.pdf).\n\n\n[Growiec (2007)](https://www.researchgate.net/publication/24057379_Beyond_the_Linearity_Critique_The_Knife-edge_Assumption_of_Steady-state_Growth) proves that a wide class of endogenous growth models require a knife-edge condition to achieve constant exponential growth, generalizing the proof of [Christiaans (2004)](https://www.sciencedirect.com/science/article/abs/pii/S0165176503003021). The proof doesn’t show that all such conditions are *problematic*, as there could be mechanisms explaining why knife-edges hold. However, combined with the observation that many popular models contain problematic knife-edges, the proof suggests that it may be generically difficult to explain exponential growth without invoking problematic knife-edge conditions.\n\n\nTwo attempts to address this problem stand out:\n\n\n1. Claim that **exponential population growth has driven exponential GDP/capita growth**. This is an implication of semi-endogenous growth models ([Jones 1995](https://www.jstor.org/stable/2138581?seq=2#metadata_info_tab_contents)). These models are consistent with 20th century data: exponentially growing R&D effort has been accompanied by exponential GDP/capita growth. Appendix B argues that semi-endogenous growth models offer the best framework for explaining the recent period of exponential growth.[24](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote24_hc50owa \" I discuss semi-endogenous models in this subsection of Appendix B.\")However, I do not think their ‘knife-edge’ assumption that population will grow at a constant exponential rate is likely to be accurate until 2100. In fact, the UN [projects](https://population.un.org/wpp/) that population growth will slow significantly over the 21st century. 
With this projection, semi-endogenous growth models imply that GDP/capita growth will slow.[25](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote25_ta5loqb \"Why do semi-endogenous growth models have this implication? They assume that ideas are getting harder to find, where each ‘idea’ is understood as increasing people’s incomes by a fixed %. This assumption is used to explain why exponentially growing research effort has led to a constant flow of ideas. But if research effort stops growing, and is instead constant, then this assumption implies that we will find fewer new ideas each year. As a result growth in GDP/capita will slow. The case for sub-exponential growth is strengthened by noting that the fraction of people doing R&D has grown rapidly over the past 100 years, and this growth cannot be maintained indefinitely. To sustain the historical rate of GDP/capita growth, semi-endogenous models imply we’d have to maintain the historical growth rates of both the population and the fraction of people doing R&D. \") So these models imply 21st century growth will be sub-exponential rather than exponential.[26](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote26_k108lue \" Slower future growth is also suggested by the slowing growth over the past ~20 years, some of the arguments in Vollrath’s recent book Fully Grown, and of course the arguments in Robert Gordon’s book The Rise and Fall of American Growth. \")\n2. **Claim that market equilibrium leads to exponential growth without knife-edge conditions.** \n\t* In a 2020 paper *[Robust Endogenous Growth](http://public.econ.duke.edu/~peretto/Robust%20Endogenous%20Growth.pdf)*, Peretto outlines a fully endogenous growth model that achieves constant exponential growth of GDP/capita without knife-edge conditions. The model displays increasing returns to R&D investment, which would normally lead to super-exponential growth. However, these increasing returns are ‘soaked up’ by the creation of new firms which dilute R&D investment. Market incentives ensure that new firms are created at *exactly* the rate needed to sustain exponential growth.\n\t* The model seems to have some implausible implications. Firstly, it implies that there should be a huge amount of market fragmentation, with the number of firms growing more quickly than the population. This contrasts with the striking pattern of market *concentration* we see in many areas.[27](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote27_1m65cg8 \" See examples of market concentration here and an analysis here. \") Secondly, it implies that if no new firms were introduced – e.g. because this was made illegal – then output would reach infinity in finite time. This seems to imply that there is a huge [market failure](https://en.wikipedia.org/wiki/Market_failure): private incentives to create new firms massively reduce long-run social welfare.\n\t* Despite these problems, the model does raise the possibility that an apparent knife-edge holds in reality due to certain equilibrating pressures. Even if this model isn’t quite right, there may still be equilibrating pressures of some sort.[28](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote28_f7mpr72 \" Galor and Weil (2000) suggest an alternative equilibration mechanism. In their model, faster growth reduces the fertility rate, which in turn slows growth. 
Conversely, slower growth boosts the fertility rate, which in turn speeds up growth. The model implies the population level (or growth rate) will remain constant, holding the growth rate of technology constant. However, I wouldn’t trust the predictions of this model out to 2100, as the UN forecasts population growth to slow. \")\n\t* Overall, this model slightly raises my expectation that long-run growth will be exponential.[29](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote29_mjj02y7 \" I discuss this model in more detail here. \")\n\n\nThis research shifted my beliefs in a few ways:\n\n\n* I put more probability (~75%) on semi-endogenous growth models explaining the recent period of exponential growth.[30](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote30_se9dn2o \" More precisely, I think it’s ~75% likely that the recent exponential growth of GDP/capita is ultimately explained by the exponential growth of human population. Semi-endogenous models embody this claim and highlight the importance of targeted R&D to growth, but other models embody the claim and highlight the importance of learning by doing. \")\n\t+ So I put more weight on 21st century growth being sub-exponential.\n\t+ We’ll see later that these models imply that sufficiently advanced AI could drive explosive growth. So I put more weight on this possibility as well.\n* It was harder than I expected for growth theories to adequately explain why income growth should be exponential in a steady state (rather than sub- or super-exponential). So I put more probability on the recent period of exponential growth being transitory, rather than part of a steady state.\n\t+ For example, the recent period could be a transition between past super-exponential growth and future sub-exponential growth, or a temporary break in a longer pattern of super-exponential growth.\n\t+ This widens the range of future trajectories that I regard as being plausible.\n\n\n#### 4.3.3 Growth models extrapolating the super-exponential trend\n\n\nSome growth models extrapolate the long-run super-exponential trend to predict explosive growth in the future.[31](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote31_4p93heq \" See for example Lee (1988), Kremer (1993) and Roodman (2020). Roodman (2020) reviews other long-run explosive models.\") Let’s call them *long-run explosive models*. The ones I’m aware of are ‘fully endogenous’, meaning *all* inputs are endogenous.[32](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote32_e8ezd9y \" They often have a ‘fixed factor’, land, that is exogenous. They’re called ‘fully endogenous’ because all the non-fixed factors are endogenous. \")\n\n\nCrucially, *long-run explosive* models claim that **more output → more people**. This makes sense (for example) when food is scarce: more output means more food, allowing the population to grow. This assumption is important, so it deserves a name. Let’s say these models make population *accumulable*. More generally, an input is accumulable just if **more output → more input.**[33](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote33_bothkg6 \"More precisely, let X be the amount of an input and Y be the quantity of output. X is accumulable just if dX/dt is an increasing function of Y.
One way to think about this is that accumulable inputs are bottlenecked by the amount of output.A simple example is the equation for capital reinvestment: dK/dt = s × Y - δ × K. Others examples can be found in Lee (1998): dL/dt = L × α × [log(Y/L) - constant], dA/dt = constant × A × log((Y/A)m).\")\n\n\nThe term ‘accumulable’ is from the growth literature; the intuition behind it is that the input can be accumulated by increasing output.\n\n\nIt’s significant for an input to be accumulable as it allows a feedback loop to occur: **more output → more input → more output →**… Population being accumulable is the most distinctive feature of *long-run explosive* models.\n\n\n*Long-run explosive models* also make technology accumulable: **more output → more people → more ideas (technological progress)**.\n\n\nAll growth models, even exogenous ones, imply that capital is accumulable: **more output → more reinvestment → more capital.**[34](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote34_urps57m \" Increases in capital are typically modeled as resulting from the direct investment of a fraction sK of output: dK = sK × Y. In Roodman’s model, the mechanism for increasing population is identical: dP = sP × Y. In Lee (1988) the mechanism is slightly different; we can roughly represent it as dP = sP × ln(Y). In Kremer (1993) Section 1, all output is converted directly into population; we can roughly represent this as dP = (conversion factor) × dY.\") In this sense, *long-run explosive* models are a natural extension of the exogenous growth models discussed above: a similar mechanism typically used to explain capital accumulation is used to explain the accumulation of technology and labor.\n\n\nWe can roughly represent *long-run explosive* models as follows:[35](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote35_ya06i0h \" Note: explosive models may contain many relationships not displayed in the diagram. The diagram is just designed to highlight some of the important features.\")\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageU.png)\n\n\nThe orange arrows show that all the inputs are accumulable: a marginal increase in output leads to an increase in the input. Fully endogenous growth models like these attempt to model more of the world than exogenous growth models, and so are more ambitious and debatable; but they potentially have greater explanatory power.\n\n\nWhy do these models predict super-exponential growth? The intuitive reason is that, with so many accumulable inputs, the feedback loop between the inputs and output is powerful enough that growth becomes faster and faster over time.\n\n\nMore precisely, the key is **increasing returns to scale in accumulable inputs**: when we double the level of every accumulable input, output *more* than doubles.[36](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote36_ei0ct1j \" In Cobb-Douglas models, this assumption corresponds to the claim that the sum of the exponents of accumulable inputs exceeds 1.\")\n\n\nWhy are there increasing returns to scale? The key is the insight, from [Romer (1990)](http://web.stanford.edu/~klenow/Romer_1990.pdf), that technology is non-rival. 
If you use a new solar panel design in your factory, that doesn’t prevent me from using that same design in my factory; whereas if you use a particular machine/worker, that *does* prevent me from using that same machine/worker.\n\n\nImagine doubling the quantity of labor and capital, holding technology fixed. You could literally replicate every factory and the workers inside it, and make everything you currently make a second time. Output would double. Crucially, you wouldn’t need to double the level of technology because ideas are non-rival: twice as many factories could use the same stock of ideas without them ‘running out’.\n\n\nNow imagine *also* doubling the level of technology. We’d still have twice as many factories and twice as many workers, but now each factory would be more productive. Output would *more* than double. This is increasing returns to scale: double the inputs, *more than* double the output.[37](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote37_xio1261 \" For more on this, see the introduction of Jones (2005) or Romer (1990).\")\n\n\n*Long-run explosive models* assume that capital, labor and technology are all accumulable. Even if they include a fixed input like land, there are typically increasing returns to accumulable inputs. This leads to super-exponential growth unless the diminishing returns to technology R&D are very steep.[38](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote38_e8zoph5 \" Why do increasing returns naturally lead to super-exponential growth? Let’s explain the intuition using a simple example where output Y is just produced by capital K. Y = K^α, dK/dt = s × Y. Increasing returns means that α > 1. If so, then by the time K doubles, Y more than doubles, so dK/dt more than doubles. This means the growth rate of K, (dK/dt)/K, increases. In other words, the growth rate of K increases when K doubles. More generally, increasing returns make it possible for inputs’ growth rates to increase when the system doubles in size.\") For a wide range of plausible parameter values, these models predict super-exponential growth.[39](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote39_sbk7wmg \" Appendix C supports this claim by analyzing the precise conditions for growth in many long-run explosive models - see here. \")\n\n\nThe key feedback loop driving increasing returns and super-exponential growth in these models can be summarized as **more ideas (technological progress) → more output → more people → more ideas → …**\n\n\nThese models seem to be a good fit to the long-run GWP data. The model in [Roodman (2020)](https://www.openphilanthropy.org/wp-content/uploads/Modeling-the-human-trajectory.pdf) implies that GWP follows a ‘power-law’, which seems to fit the data well.\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageI-2.png)\n\n\nLong-run explosive models fitted to the long-run GWP data typically predict that explosive growth (>30% per year) is *a few decades away*.
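Footnote 38's toy example can be simulated directly. Here is a short sketch; the functional form is the one in the footnote (Y = K^α with reinvestment dK/dt = s × Y), while the particular values of s, α, and the 25-year reporting interval are illustrative choices of mine.

```python
# Footnote 38's toy model, simulated: output is produced by capital alone,
# Y = K**alpha, and all of it is reinvested at rate s, so dK/dt = s * K**alpha.
# Parameter values (s, alpha, the 25-year reporting interval) are illustrative.

def growth_rates(alpha, s=0.1, K0=1.0, years=150, steps_per_year=100):
    """Euler-simulate dK/dt = s*K**alpha; return K's growth rate every 25 years."""
    dt, K, rates = 1.0 / steps_per_year, K0, []
    for step in range(years * steps_per_year):
        if step % (25 * steps_per_year) == 0:
            rates.append(s * K**(alpha - 1))   # (dK/dt)/K = s * K**(alpha-1)
        K += s * K**alpha * dt
        if K > 1e12:                           # with alpha > 1, K blows up in finite time
            break
    return rates

print("alpha = 0.7 (diminishing returns):", [f"{g:.1%}" for g in growth_rates(0.7)])
print("alpha = 1.1 (increasing returns): ", [f"{g:.1%}" for g in growth_rates(1.1)])
```

With α < 1 the growth rate declines as capital accumulates; with α > 1 it keeps rising and the simulation hits its cutoff in finite time, the stylized counterpart of the power law's 2047 singularity and of why fitted long-run explosive models place >30% growth only a few decades out.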
For example, you can ask the model in Roodman (2020) ‘*What will be the first year of explosive growth?*’ Its median prediction is 2043 and the 80% confidence range is [2034, 2065].\n\n\n#### 4.3.3.1 Evaluating models extrapolating the super-exponential trend\n\n\nThe key question of this section is: **Do the growth models of the** **explosive growth story** **give us reason to think 21st century growth will be super-exponential?** My answer in this section is ‘no’, because the models are not well suited to describing post-1900 growth. In addition, it’s unclear how much we should trust their description of pre-1900 growth. (However, the next section argues these models can be trusted if we develop sufficiently powerful AI systems.)\n\n\n#### 4.3.3.1.1 Problem 1: *Long-run explosive models* are not suitable for describing post-1900 growth\n\n\nThe central problem is that long-run explosive models assume population is accumulable.[40](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote40_ywr9sui \"This statement is an oversimplification in relation to Roodman’s univariate model. That model does not model population explicitly at all - its sole variable refers to GWP. However, the model is the univariate analogue of a model in which all inputs are accumulable, including population. Technically, the univariate model can approximate a multivariate model where population isn’t accumulable if increasing returns to the other accumulable inputs are powerful enough to drive super-exponential growth. However, this doesn't happen for realistic parameter values (more).\") While it is plausible that in pre-modern times **more output → more people**, this hasn’t been true in developed countries over the last ~140 years. In particular, since ~1880 fertility rates have *declined* despite increasing GDP/capita.[41](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote41_zfkcphx \" See data on UK, France, Netherlands and US in this graph from Galor (2012).\") This is known as the [demographic transition](https://en.wikipedia.org/wiki/Demographic_transition). Since then, more output has not led to more people, but to richer and better educated people: **more output → more richer people.** Population is no longer accumulable (in the sense that I’ve defined the term).[42](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote42_zn8063e \" If population were accumulable then, holding all else constant, increasing GDP should increase future population. But since ~1880 increases in GDP, holding population constant, have decreased population growth.\") The feedback loop driving super-exponential growth is broken: **more ideas → more output → more richer people → more ideas**.\n\n\nHow would this problem affect the models’ predictions? If population is not accumulable, then the returns to accumulable inputs are lower, and so growth is slower. We’d expect *long-run explosive models* to predict faster growth than we in fact observe after ~1880; in addition, we wouldn’t expect to see super-exponential growth after ~1880.[43](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote43_mc942n7 \" When labor isn’t accumulable, the returns to accumulable inputs are not large enough to overcome diminishing returns to R&D, with realistic parameter values (see more).\")\n\n\nIndeed, this is what the data shows.
*Long-run explosive models* are surprised at how slow GWP growth has been since 1960 ([more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#RecentGWPGrowth)), and surprised at how slow frontier GDP/capita growth has been since 1900 ([more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#ExpertOpinion)). It is not surprising that a structural change means a growth model is no longer predictively accurate: growth models are typically designed to work in bounded contexts, rather than being universal theories of growth.\n\n\nA natural hypothesis is that **the reason why long-run explosive models are a poor fit to the post-1900 data is that they make an assumption about population that has been inaccurate since ~1880**. The recent data is not evidence against *long-run explosive models* per se, but confirmation that their predictions can only be trusted when population is accumulable.\n\n\nThis explanation is consistent with some prominent idea-based theories of very long-run growth.[44](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote44_0sps1do \" For example, see Jones (2001), Galor and Weil (2000), and Kremer (1993), Part 3.\") These theories use the same mechanism as *long-run explosive models* to explain pre-1900 super-exponential growth: labor and technology are accumulable, so there are increasing returns to accumulable inputs,[45](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote45_d8si24t \" In Galor and Weil (2000), there are strictly speaking only constant returns to accumulable factors. The model, however, is still characterized by increasing returns because once the population has doubled, the growth rates of technology and labor both increase. In addition, increasing human capital driven by education investment plays an important part in generating super-exponential growth around the industrial revolution. \") so there’s super-exponential growth. They feature the same ideas feedback loop: **more ideas → more output → more people → more ideas→…**[46](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote46_riy07ku \" There is a slight difference in emphasis in Jones (2001) and Galor and Weil (2000). Their feedback loop is more naturally described as: more ideas → more output/capita → more people → more ideas... They specify a relationship between output/capita and fertility directly, rather than between output and population increases. As mentioned above, Galor and Weil (2000) emphasizes educational investment boosting growth around the industrial revolution: more ideas → more output/capita → more and better educated people → more ideas...\")\n\n\nThese idea-based theories are made consistent with recent exponential growth by adding an additional mechanism that makes the fertility rate drop once the economy reaches a mature stage of development,[47](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote47_di9mc93 \" What are these mechanisms? In Jones (2001), fertility decreases with GDP/capita and so the demographic transition occurs when people become sufficiently rich. In Galor and Weil (2000), fertility decreases with the growth rate of technology and so the demographic transition occurs once the growth rate becomes sufficiently high.\") mimicking the effect of the demographic transition.
After this point, population isn’t accumulable and the models predict exponential growth by approximating some standard endogenous or semi-endogenous model.[48](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote48_91rxui2 \" In particular, Galor and Weil (2000) approximates the Romer model and Jones (2001) approximates a semi-endogenous growth model. As discussed above, my view is that semi-endogenous models are more plausible and that they imply 21st century growth will be sub-exponential.\")\n\n\nThese idea-based models provide a good explanation of very long-run growth and modern growth. They increase my confidence in the main claim of this section: *long-run explosive models* are a poor fit to the post-1900 data because they (unrealistically) assume population is accumulable. However, idea-based models are fairly complex and were *designed to* explain long-run patterns in GDP/capita and population; this should make us wary of trusting them too much.[49](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote49_bo9fcrf \" I explain the dynamics of Jones (2001) and Galor and Weil (2000) in this technical appendix.\")\n\n\n#### 4.3.3.1.2 Problem 2: It is unclear how much we should trust *long-run explosive models*’ explanation of pre-1900 growth\n\n\nNone of the problems discussed above disputes the *explosive growth story*’s explanation of pre-1900 growth. How much weight should we put on its account?\n\n\nIt emphasizes the non-rivalry of ideas and the mechanism of increasing returns to accumulable factors. This mechanism implies growth increased fairly smoothly over hundreds and thousands of years.[50](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote50_777j3i1 \" Increasing returns leads to a smooth curve of super-exponential growth, where growth increases very slowly at first and then more and more quickly over time. There are no structural breaks. I say 'fairly' smooth because increasing-returns models may allow for random influences on growth, as in Roodman (2020).\") We saw that the increasing-returns mechanism plays a central role in several prominent models of long-run growth.[51](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote51_1pnhp2x \" Galor and Weil (2000), Jones (2001), Kremer (1993), and Lee (1988). \")\n\n\nHowever, most papers on very long-run growth emphasize a different explanation, where a structural transition occurs around the industrial revolution.[52](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote52_c58y278 \" For example, Hansen and Prescott (2002) discuss a model in which a phase transition increases growth. Initially the economy faces diminishing returns to labor due to the fixed factor land. But once exogenously growing technology is high enough, it becomes profitable for firms to use less land-intensive production processes; this phase transition increases growth. Other examples include Goodfriend and McDermott (1995), Lucas (1998), Stokey (2001), Tamura (2002) and Hanson (2000). 
\") Rather than a smooth increase, this suggests a single step-change in growth occurred around the industrial revolution, without growth increasing before or after the step-change.[53](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote53_owigcbw \" Note, Galor and Weil (2000) and Jones (2001) feature both increasing returns to accumulable inputs and a structural change around the industrial revolution that speeds up technological progress. In Jones (2001) there’s an increase in the fraction of the population doing R&D; in Galor and Weil (2000) there’s a shift towards more education.\")\n\n\nThough a ‘step-change’ view of long-run growth rates will have a lesser tendency to predict explosive growth by 2100, it would not rule it out. For this, you would have to explain why step change increases have occurred in the past, but no more will occur in the future.[54](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote54_nubueri \" I discuss the step-change view in more detail here.\")\n\n\nHow much weight should we place in the increasing-returns mechanism versus the step-change view? The ancient data points are highly uncertain, making it difficult to adjudicate empirically.[55](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote55_0i68hil \" I discuss the uncertainty of the ancient data points more here.\") Though GWP growth seems to have increased across the whole period 1500 – 1900, this is compatible with there being one slow step-change.[56](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote56_oykf9mq \" Ben Garfinkel explicitly proposes a slow step-change view here. Such a view should probably allow for another step-change increase in growth around 10,000 BCE; growth seems to have increased in this period, plausibly due to the Neolithic Revolution. This strengthens the case for this view being open to another step-change occurring in the future.\")\n\n\nThere is some informative evidence:\n\n\n* Kremer (1993) gives evidence for the increasing-returns mechanism. He looks at the development of 5 isolated regions and finds that the technology levels of the regions in 1500 are perfectly rank-correlated with their initial populations in 10,000 BCE. This is just what the increasing returns mechanism would predict.[57](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote57_9r49c0f \" There may be other plausible explanations for some of these rankings. For example, Eurasia seems to have started with a better supply of domesticable plants and animals than Australia; this factor alone may have been enough to cause Australia to discover farming later. Early population levels may also correlate with biodiversity, which could help with the early stages of technological development. Thanks to Ben Garfinkel for making the point. \")\n* Roodman (2020) gives evidence for the step-change view. Roodman finds that his own model, which uses the *increasing-returns* mechanism, is surprised by the speed of growth around the industrial revolution (see [more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI)).\n\n\nOverall, I think it’s likely that the increasing-returns mechanism plays an important role in explaining very long-run growth. As such I think we should take *long-run explosive models* seriously (if population is accumulable). 
That said, they are not the whole story; important structural changes happened around the industrial revolution.[58](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote58_kgmbpko \" I was not able to spend much time investigating the relative importance of increasing returns vs other mechanisms in explaining long run growth; we hope to do more work on this in the future. Ben Garfinkel argues that new ideas were not the central driver of growth before the industrial revolution, and suggests that population data doesn’t show much evidence of increasing growth rates in the period 5,000 BCE to 1500 CE. One possibility Ben mentions is that the increasing returns mechanism became the central driver of growth around the time of the industrial revolution, when the population and research effort became large enough for new ideas to become a dominant driver of growth. \")\n\n\n#### 4.3.4 Summary of theoretical models used to extrapolate GWP out to 2100\n\n\nI repeat the questions asked at the start of this section, now with their answers:\n\n\n* Do the growth models of the *standard story* give us reason beyond the empirical data to think 21st century growth will be exponential or sub-exponential?\n\t+ Yes, plausible models imply that growth will be sub-exponential. Interestingly, I didn’t find convincing reasons to expect exponential growth.\n* Do the growth models of the *explosive growth story* give us reason beyond the empirical data to think 21st century growth will be super-exponential?\n\t+ No, *long-run explosive models* assume population is accumulable, which isn’t accurate after ~1880.\n\t+ However, the next section argues that advanced AI could make this assumption accurate once more. So I think these models do give us reason to expect explosive growth *if* sufficiently advanced AI is developed.\n\n\n\n\n| | | |\n| --- | --- | --- |\n| | **STANDARD STORY** | **EXPLOSIVE GROWTH STORY** |\n| Preferred data set | Frontier GDP/capita since 1900 | GWP since 10,000 BCE |\n| Predicted shape of long-run growth | Exponential or sub-exponential | Super-exponential (for a while, and then eventually sub-exponential) |\n| Models used to extrapolate GWP to 2100 | Exogenous growth models | Endogenous growth models, where population and technology are accumulable. |\n| Evaluation | Semi-endogenous growth models are plausible and predict 21st century growth will be sub-exponential. Theories predicting exponential growth rely on problematic knife-edge conditions. | Population is no longer accumulable, so we should not trust these models by default. However, advanced AI systems could make this assumption realistic again, in which case the prediction of super-exponential growth can be trusted. |\n\n\n#### 4.4 Advanced AI could drive explosive growth\n\n\nIt is possible that significant advances in AI could allow capital to much more effectively substitute for labor.[59](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote59_8m5d36u \" Technological advances other than AI could potentially make population accumulable. Examples include whole-brain emulations, artificial wombs, and genetic engineering. Behavioral changes could also make population accumulable, e.g. if everyone tried to have as many kids as biologically possible. 
This report focuses on advanced AI because we believe it is more likely to occur this century than these alternatives, and because it ties in with Open Philanthropy’s focus area of risks from advanced AI.\") Capital is accumulable, so this could lead to increasing returns to accumulable inputs, and so to super-exponential growth.[60](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote60_5208hb3 \" Again, if diminishing marginal returns to technology R&D are steep enough, this could prevent super-exponential growth. Plausible parameter values suggest this would not happen if capital can substitute for labor in all jobs.\") I’ll illustrate this point from two complementary perspectives.\n\n\n#### 4.4.1 AI robots as a form of labor\n\n\nFirst, consider a toy scenario in which Google announces tomorrow that it’s developed AI robots that can perform *any* task that a human laborer can do for a smaller cost. In this (extreme!) fiction, AI robots can *perfectly* substitute for all human labor. We can write (total labor) = (human labor) + (AI labor). We can invest output to build more AI robots,[61](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote61_718krsu \" AI robots are a form of capital, so it’s natural to use the same reinvestment equation as for capital: dR/dt = s × Y - δ × R.\") and so increase the labor supply: **more output → more labor (AI robots)**. In other words, **labor is accumulable again**. When this last happened there was super-exponential growth, so our default expectation should be that this scenario will lead to super-exponential growth.\n\n\nTo look at it another way, AI robots would reverse the effect of the demographic transition. Before that transition, the following feedback loop drove increasing returns to accumulable inputs and super-exponential growth:\n\n\n**More ideas → more output → more labor (people) → more ideas →…**\n\n\nWith AI robots there would be a closely analogous feedback loop:\n\n\n**More ideas → more output → more labor (AI robots) → more ideas →…**\n\n\n\n\n| | | | |\n| --- | --- | --- | --- |\n| **PERIOD** | **FEEDBACK LOOP?** | **IS TOTAL LABOR ACCUMULABLE?** | **PATTERN OF GROWTH** |\n| Pre-1880 | Yes: More ideas → more output → more people → more ideas →… | Yes | GWP grows at an increasing rate. |\n| 1880 – present | No: More ideas → more output → more richer people → more ideas →… | No | GWP grows at a ~constant rate. |\n| AI robot scenario | Yes: More ideas → more output → more AI systems → more ideas →… | Yes | GWP grows at an increasing rate. |\n\n\nIndeed, if you plug the AI robot scenario into a wide variety of growth models, including exogenous growth models, you find that increasing returns to accumulable inputs drive super-exponential growth for plausible parameter values.[62](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote62_jrifbxj \" I discuss these models in Appendix C - see here.\")\n\n\nThis first perspective, analyzing advanced AI as a form of labor, emphasizes the similarity of pre-1900 growth dynamics to those of a possible future world with advanced AI.
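To make the feedback loop in the table above concrete, here is a minimal, uncalibrated simulation sketch. The functional forms and all parameter values (alpha, sK, theta, phi, and so on) are illustrative assumptions of my own, loosely following the reinvestment equations in the footnotes; this is not a model taken from the report or from Roodman (2020).

```python
def simulate(labor_accumulable, years=250, dt=0.01):
    """Toy semi-endogenous growth model; every parameter value here is illustrative.

    Y = A * K**alpha * L**(1 - alpha)     output
    dK/dt = sK * Y - delta * K            capital reinvestment
    dA/dt = theta * (rho * L) * A**phi    ideas from research labor; phi < 1,
                                          so ideas get harder to find
    dL/dt = sL * Y - delta * L            only if labor is accumulable (e.g. AI
                                          robots built out of output); otherwise
                                          L stays fixed, as after the demographic
                                          transition
    """
    alpha, sK, delta = 0.3, 0.2, 0.05
    theta, rho, phi = 0.01, 0.1, 0.8
    sL = 0.05
    A, K, L = 1.0, 1.0, 1.0
    record_every = round(25 / dt)         # record output every 25 'years'
    snapshots = []
    for step in range(round(years / dt)):
        Y = A * K**alpha * L**(1 - alpha)
        if step % record_every == 0:
            snapshots.append(Y)
        K += dt * (sK * Y - delta * K)
        A += dt * (theta * (rho * L) * A**phi)
        if labor_accumulable:
            L += dt * (sL * Y - delta * L)
        if Y > 1e9:                       # stop once growth has clearly exploded
            break
    return snapshots


for accumulable in (False, True):
    path = simulate(accumulable)
    # Annualized growth rate (in %) over each 25-year window.
    rates = [100 * ((b / a) ** (1 / 25) - 1) for a, b in zip(path, path[1:])]
    label = "labor accumulable (AI robots)" if accumulable else "labor fixed"
    print(f"{label}: growth rate by 25-year window:", [round(r, 1) for r in rates])
```

In the labor-fixed run the simulated growth rate declines toward stagnation, consistent with semi-endogenous models once population growth stops; in the labor-accumulable run, after an initial transition, the growth rate climbs over time, i.e. growth is super-exponential.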
If you think that the increasing-returns mechanism increased growth in the past, it’s natural to think that the AI robot scenario would increase growth again.[63](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote63_iyj6gae \" The hardware-software model in Growiec (2020) offers a unified model for explaining pre-modern growth, the industrial revolution, and what he calls the 'digital revolution' that has only just started. Capital and labor are replaced by hardware (‘brawn’) and software (‘brains’) as the fundamental inputs to production. In the digital revolution advanced AI decouples overall software supply from the size of the human population; this makes software accumulable and leads to an increase in growth. \")\n\n\n#### 4.4.2 AI as a form of capital\n\n\nThere are currently diminishing returns to accumulating more capital, holding the amount of labor fixed. For example, imagine creating more and more high-quality laptops and distributing them around the world. At first, economic output would plausibly increase as the laptops made people more productive at work. But eventually additional laptops would make no difference as there’d be no one to use them. The feedback loop ‘**more output → more capital → more output →…**’ peters out.\n\n\nAdvances in AI could potentially change this. By automating wide-ranging cognitive tasks, they could allow capital to substitute more effectively for labor. As a result, there may no longer be diminishing returns to capital accumulation. AI systems could replace both the laptops *and* the human workers, allowing capital accumulation to drive faster growth.[64](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote64_iaatm75 \" Intuitively, human workers are bottlenecking growth; advanced AI would release that bottleneck and increase growth. By analogy, the fixed supply of land may have bottlenecked growth in ancient times; the industrial revolution may have released that bottleneck and increased growth. (During the industrial revolution, we moved over to less land-intensive production processes.)\")\n\n\nEconomic growth models used to explain growth since 1900 back up this point. In particular, if you adjust these models by assuming that capital substitutes more effectively for labor, they predict increases in growth.\n\n\nThe basic story is: capital substitutes more effectively for labor → capital’s share of output increases → larger returns to accumulable inputs → faster growth. In essence, the feedback loop ‘**more output → more capital → more output → …’** becomes more powerful and drives faster growth.\n\n\nWhat level of AI is required for explosive (>30%) growth in these models? The answer varies depending on the particular model:[65](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote65_x0g014m \" The papers I’ve studied most closely are Nordhaus (2021), Aghion et al. (2017), and Hanson (2001), and the AI growth literature review Trammell and Korinek (2021).\")\n\n\n* Often the crucial condition is that the elasticity of substitution between capital and labor rises above 1. This means that some (perhaps very large) amount of capital can completely replace any human worker, though it is a weaker condition than perfect substitutability.[66](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote66_qeu5f04 \" What is the difference between this condition and that of perfect substitutability? 
The key parameter is the elasticity of substitution, σ. σ > 1 is a weaker claim than perfect substitution, which corresponds to σ = ∞. I like to think about the difference as follows. Imagine replacing human workers with capital one by one. When σ = ∞, the amount of capital needed to replace each worker is fixed. It’s like we replace each worker with an AI robot at fixed cost. But when 1 < σ < ∞, the amount of capital needed to replace each worker increases as fewer workers remain. For example, one unit of capital replaces the first worker, two units replace the second worker, three units replace the third, etc. It’s as if each worker does a different role, and the initial roles are cheaper to automate than the latter ones. For both 1 < σ < ∞ and σ = ∞, the growth rate of output ultimately approaches the growth rate of capital. What about σ < 1? In this case output cannot exceed a fixed ceiling no matter how much capital you have, holding labor constant. Intuitively, no amount of capital can fully replace a human worker. \")\n* In the task-based model of Aghion et al. (2017), automating a fixed set of tasks leads to only a temporary boost in growth. A constant stream of automation (or full automation) is needed to maintain faster growth.[67](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote67_m6jaw99 \" Three clarifications. First, the rate of task automation would have to increase from its current value to boost growth. Second, to increase the rate of exponential growth we must automate a constant fraction of non-automated tasks each year (e.g. the total fraction of automated tasks goes 0%, 50%, 75%, 87.5%,... - we automate half the non-automated tasks each year). Third, super-exponential growth is possible if we automate an increasing fraction of non-automated tasks each year (e.g. the total fraction of automated tasks goes 0%, 50%, 80%, 95%,... - we automate 1/2 the tasks in the first year, 2/3 in the second year, 3/4 in the third year). For super-exponential growth there must also be some capital-augmenting technological progress in the background. \")\n* Appendix C discusses the conditions for super-exponential growth in a variety of such models (see [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#StandardGrowthModels)).\n\n\nOverall, what level of AI would be sufficient for explosive growth? Based on a number of models, I think that explosive growth would require AI that substantially accelerates the automation of a very wide range of tasks in the production of goods and services, R&D, and the implementation of new technologies. The more rapid the automation, and the wider the range of tasks, the faster growth could become.[68](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote68_zl227tb \" I explain my thinking about what AI would be sufficient for explosive growth in more detail here.\")\n\n\nIt is worth emphasizing that these models are simple extensions of standard growth models; the only change is to assume that capital can substitute more effectively for labor.
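To make the role of σ concrete, here is a small illustration using a standard CES production function with labor held fixed. The specific functional form, the equal share parameter a = 0.5, and the capital values below are my own illustrative assumptions, not parameters taken from the report or from the models it cites.

```python
def ces_output(K, L, sigma, a=0.5):
    """CES production: Y = (a*K**rho + (1-a)*L**rho)**(1/rho), rho = (sigma-1)/sigma.

    sigma is the elasticity of substitution between capital K and labor L.
    (sigma = 1, the Cobb-Douglas limit, would need a separate formula.)
    """
    rho = (sigma - 1) / sigma
    return (a * K**rho + (1 - a) * L**rho) ** (1 / rho)


L = 1.0  # hold labor fixed and keep piling up capital
for sigma in (0.5, 2.0):  # sigma < 1: capital is a poor substitute; sigma > 1: a good one
    ys = [round(ces_output(K, L, sigma), 2) for K in (1, 10, 100, 1_000, 10_000)]
    print(f"sigma = {sigma}: output as capital grows ->", ys)
```

With σ = 0.5 output approaches a ceiling (here 2.0) however much capital is added, so capital accumulation alone cannot sustain growth; with σ = 2 output keeps rising roughly in proportion to capital, the regime in which the capital-accumulation feedback loop can keep driving growth.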
With this assumption, semi-endogenous models with reasonable parameter values predict explosive growth, as do exogenous growth models with constant returns to labor and capital.[69](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote69_b18lkdr \" I analyze the conditions for super-exponential growth in semi-endogenous models here, and the conditions in exogenous models here.\")\n\n\nA [draft literature review](https://docs.google.com/document/d/1XCn4Pk44evZEjbmPD-zKlj26Z_bhf0ABeWVID_4L5sg/edit) on the possible growth effects of advanced AI includes many models in which AI increases growth via this mechanism (capital substituting more effectively for labor). In addition, it discusses several other mechanisms by which AI could increase growth, e.g. changing the mechanics of idea discovery and changing the savings rate.[70](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote70_ye86yau \" I personally find these mechanisms more speculative than the one I’ve focused on.\")\n\n\n#### 4.4.3 Combining the two perspectives\n\n\nBoth the ‘AI robots’ perspective and the ‘AI as a form of capital’ perspective make a similar point: if advanced AI can substitute very effectively for human workers, it could precipitate explosive growth by increasing the returns to accumulable inputs. In many growth models with plausible parameter values this scenario leads to explosive growth.\n\n\nPreviously, we said we should not trust *long-run explosive models* as they unrealistically assume population is accumulable. We can now qualify this claim. We should not trust these models *unless* AI systems are developed that can replace human workers.\n\n\n#### 4.4.4 Could sufficiently advanced AI be developed in time for explosive growth to occur this century?\n\n\nThis is not a focus of this report, but other evidence suggests that this scenario is plausible:\n\n\n* A survey of AI practitioners asked them about the probability of developing AI that would enable full automation.[71](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote71_hto14ly \" Grace, Katja (2017). \") Averaging their responses, they assigned ~30% or ~60% probability to this possibility by 2080, depending on how the question is framed.[72](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote72_66hmpaz \" I discuss the framing issues more in a footnote here.\")\n* My colleague [Joe Carlsmith’s](https://www.openphilanthropy.org/about/team/joseph-carlsmith/) [report](https://www.openphilanthropy.org/research/new-report-on-how-much-computational-power-it-takes-to-match-the-human-brain/) estimates the computational power needed to match the human brain. Based on this and other evidence, my colleague [Ajeya Cotra](https://www.openphilanthropy.org/about/team/ajeya-cotra/)’s [draft](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines) [report](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines) estimates when we’ll develop human-level AI; she finds we’re ~70% likely to do so by 2080.\n* In a [previous report](https://www.openphilanthropy.org/research/report-on-semi-informative-priors/) I estimated the probability of developing human-level AI based on analogous historical developments. 
My framework finds a ~15% probability of human-level AI by 2080.\n\n\n#### 4.5 Objections to explosive growth\n\n\nMy responses are brief, and I encourage interested readers to read [Appendix A](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixA), which discusses these and other objections in more detail.\n\n\n#### 4.5.1 What about diminishing returns to technological R&D?\n\n\n**Objection:** There is good evidence that [ideas are getting harder to find](https://web.stanford.edu/~chadj/IdeaPF.pdf). In particular, it seems that exponential growth in the number of researchers is needed to sustain constant exponential growth in technology (TFP).\n\n\n**Response:** The models I have been discussing take this dynamic into account. They find that, with realistic parameter values, increasing returns to accumulable inputs are powerful enough to overcome diminishing returns to technological progress if AI systems can replace human workers. This is because the feedback loop ‘**more output → more labor (AI systems) → more output**’ allows research effort to grow *super-exponentially*, leading to super-exponential TFP growth despite ideas becoming harder to find (see [more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixDiminishing)).\n\n\n**Related objection:** You claimed above that the demographic transition caused super-exponential growth to stop. This is why you think advanced AI could restart super-exponential growth. But perhaps the real cause was that we hit more sharply diminishing returns to R&D in the 20th century.\n\n\n**Response:** This could be true. Even if true, though, this wouldn’t rule out explosive growth occurring this century: it would still be possible that returns to R&D will become less steep in the future and the historical pattern of super-exponential growth will resume.[73](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote73_ty4q4pk \" Agrawal et al. (2019) discuss a mechanism where AI assistance in research raises the returns to human research efforts.\")\n\n\nHowever, I investigated this possibility and came away thinking that diminishing returns probably didn’t explain the end of super-exponential growth.\n\n\n* Various endogenous growth models suggest that, had population remained accumulable throughout the 20th century, growth would have been super-exponential *despite* the sharply diminishing returns to R&D that we have observed.\n* Conversely, these models suggest that the demographic transition would have ended super-exponential growth even if diminishing returns to R&D had been much less steep.\n* This all suggests that the demographic transition, not diminishing returns, is the crucial factor in explaining the end of super-exponential growth (see [more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#CanDiminishingReturns)).\n\n\nThat said, I do think it’s reasonable to be uncertain about why super-exponential growth came to an end. The following diagram summarizes some possible explanations for the end of super-exponential growth in the 20th century, and their implications for the plausibility of explosive growth this century.\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/image400.png)\n\n\n#### 4.5.2 30% growth is very far out of the observed range\n\n\n**Objection**: Explosive growth is so far out of the observed range!
Even when China was charging through catch-up growth, it never sustained more than 10% growth. So 30% is out of the question.\n\n\n**Response:** Ultimately, this is not a convincing objection. If you had applied this reasoning in the past, you would have been repeatedly led into error. The 0.3% GWP growth of 1400 was higher than the previously observed range, as was the 3% GWP growth of 1900. There is historical precedent for growth increasing to levels far outside of the previously observed range (see [more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#ExplosiveGrowth)).\n\n\n#### 4.5.3 Models predicting explosive growth have implausible implications\n\n\n**Objection:** Endogenous growth models imply output becomes infinite in a finite time. This is impossible and we shouldn’t trust such unrealistic models.\n\n\n**Response:** First, models are always intended to apply only within bounded regimes; this doesn’t mean they are bad models. Clearly these endogenous growth models will stop applying before we reach infinite output (e.g. when we reach physical limits); they might still be informative before we reach this point. Second, not all models predicting explosive growth have this implication; some models imply that growth will rise without limit but never go to infinity (see [more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#ModelsPredictingExplosiveGrowth)).\n\n\n#### 4.5.4 There’s no evidence of explosive growth in any economic sub-sector\n\n\n**Objection:** If GWP growth rates were soon going to rise to 30%, we’d see signs of this in the current economy. But we don’t – [Nordhaus (2021)](https://www.aeaweb.org/articles?id=10.1257/mac.20170105&&from=f) looks for such signs and doesn’t find them.\n\n\n**Response:** The absence of these signs in macroeconomic data is reason to doubt explosive growth will occur within the next couple of decades. Beyond this time frame, it is hard to draw conclusions. Further, it’s possible that the recent fast growth of machine learning is an early sign of explosive growth (see [more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#NoEvidence)).\n\n\n#### 4.5.5 Why think AI automation will be different to past automation?\n\n\n**Objection:** We have been automating parts of our production processes and our R&D processes for many decades, without growth increasing. Why think AI automation will be different?\n\n\n**Response:** To cause explosive growth, AI would have to drive much faster and more widespread automation than we have seen over the previous century. If AI ultimately enabled *full* automation, models of automation suggest that the consequences for growth would be much more radical than those from the partial automation we have had in the past (see [more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#WhyThinkAIAutomation)).\n\n\n#### 4.5.6 Automation limits\n\n\n**Objection:** [Aghion et al. (2017)](https://web.stanford.edu/~chadj/AJJ-AIandGrowth.pdf) considers a model where growth is bottlenecked by tasks that are essential but hard to improve. If we’re unable to automate just one essential task, this would prevent explosive growth.\n\n\n**Response:** This correctly highlights that AI may lead to very widespread automation without explosive growth occurring.
One possibility is that an essential task isn’t automated because we care intrinsically about having a human perform the task, e.g. a carer.\n\n\nI don’t think this provides a decisive reason to rule out explosive growth. Firstly, it’s possible that we will ultimately automate all essential tasks, or restructure work-flows to do without them. Secondly, there could be a significant boost in growth rates, at least temporarily, even without full automation (see [more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AutomationLimits)).[74](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote74_pfgc4qn \" Appendix A also discusses two other objections from Aghion et al. (2017): 'search limits' and 'Baumol tasks'.\")\n\n\n#### 4.5.7 Limits to how fast a human economy can grow\n\n\n**Objection:** The economic models predicting explosive growth ignore many possible bottlenecks that might slow growth. Examples include regulation of the use of AI systems, extracting and transporting important materials, conducting physical experiments on the world needed to make social and technological progress, delays for humans to adjust to new technological and social innovations, fundamental limits to how advanced technology can become, fundamental limits on how quickly complex systems can grow, and other unanticipated bottlenecks.[75](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote75_mcrntzb \"For an example of an objection in this vein, see Point 9 in this blog post by Bryan Caplan.\")\n\n\n**Response:** I do think that there is some chance that one of these bottlenecks will prevent explosive growth. On the other hand, no individual bottleneck is certain to apply and there are some reasons to think we could grow at 30% per year:\n\n\n* There will be huge incentives to remove bottlenecks to growth, and if just one country does this it would be sufficient.\n* Large human economies have already grown at 10% per year (admittedly via catch-up growth); explosive growth would only be 3X as fast.[76](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote76_jaslj14 \" Between 1979 and 2018, Chinese GDP grew by an average of 9.5% per year (source). \")\n* Humans oversee businesses growing at 30% per year, and individual humans can adjust to 30% annual increases in wealth (and want more).\n* AI workers could run much faster than human workers.[77](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote77_tx6c81k \" In his review of this report, Anton Korinek raises the intriguing possibility that although the human economy does not grow at 30% per year, a virtual AI economy with which the human economy interacts does grow at 30%. \")\n* Biological populations can grow faster than 30% a year, suggesting that it is physically possible for complex systems to grow this quickly.[78](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote78_uxclzxb \" Bacteria populations can double in size once every 10 minutes under ideal conditions; there’s evidence that phytoplankton populations can double once every day. \")\n\n\nThe arguments on both sides are inconclusive and inevitably speculative. I feel deeply uncertain about how fast growth could become before some bottleneck comes into play, but personally place less than 50% probability on a bottleneck preventing 30% GWP growth.
That said, I have spent very little time thinking about this issue, which would be a fascinating research project in its own right.\n\n\n#### 4.5.8 How strong are these objections overall?\n\n\nI find some of the objections unconvincing:\n\n\n* **Diminishing returns.** The models implying that full automation would lead to explosive growth take diminishing returns into account.\n* **30% is far from the observed range**. Ruling out 30% on this basis would have led us astray in the past by ruling out historical increases in growth.\n* **Models predicting explosive growth have implausible implications**. We need not literally believe that output will go to infinity to trust these models, and there are models that predict explosive growth without this implication.\n\n\nI find other objections partially convincing:\n\n\n* **No evidence of explosive growth in any economic sub-sector.** Trends in macroeconomic variables suggest there won’t be explosive growth in the next 20 years.\n* **Automation limits**. A few essential but unautomated tasks might bottleneck growth, even if AI drives widespread automation.\n* **Limits to how fast a human economy can grow.** There are many possible bottlenecks on the growth of a human economy; we have limited evidence on whether any of these would prevent 30% growth in practice.\n\n\nPersonally, I assign substantial probability (> 1/3) that the AI robot scenario would lead to explosive growth despite these objections.\n\n\n#### 4.6 Conclusion\n\n\nThe *standard story* points to the constant exponential growth of frontier GDP/capita over the last 150 years. Theoretical considerations suggest 21st century growth is more likely to be sub-exponential than exponential, as slowing population growth leads to slowing technological progress. I find this version of the standard story highly plausible.\n\n\nThe *explosive growth story* points to the significant increases in GWP growth over the last 10,000 years. It identifies an important mechanism explaining super-exponential growth before 1900: increasing returns to accumulable inputs. If AI allows capital to substitute much more effectively for human labor, a wide variety of models predict that increasing returns to accumulable inputs will again drive super-exponential growth. On this basis, I think that ‘advanced AI drives explosive growth’ is a plausible scenario from the perspective of economics.\n\n\nIt is reasonable to be skeptical of all the growth models discussed in the report. It is hard to get high quality evidence for or against different growth models, and empirical efforts to adjudicate between them often give conflicting results.[79](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote79_9eq8jxz \" For example, see Section 4 of this review.\")  It is possible that we do not understand key drivers of growth. Someone with this view should probably adopt the *ignorance story:* growth has increased significantly in the past, we don’t understand why, and so we should not rule out significant increases in growth occurring in the future. If someone wishes to rule out explosive growth, they must positively reject any theory that implies it is plausible; this is hard to do from a position of ignorance.\n\n\nOverall, I assign > 10% probability to explosive growth occurring this century. 
This is based on a > 30% probability that we develop sufficiently advanced AI in time, and a > 1/3 probability that explosive growth actually occurs conditional on this level of AI being developed.[80](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote80_ge536ur \" I explain my overall probabilities and how I reached them in Appendix G.\") Barring this kind of progress in AI, I’m most inclined to expect sub-exponential growth. In this case, projecting GWP is closely entangled with forecasting the development of advanced AI.\n\n\n#### 4.6.1 Are we claiming ‘this time is different’?\n\n\nIf you extrapolate the returns from R&D efforts over the last century, you will not predict that sustaining these efforts might lead to explosive growth this century. Achieving 3% growth in GDP/capita, let alone 30%, seems like it will be very difficult. When we forecast non-trivial probability of explosive growth, are we essentially claiming ‘this time will be different because AI is special’?\n\n\nIn a certain sense, the answer is ‘yes’. We’re claiming that economic returns to AI R&D will ultimately be much greater than the average R&D returns over the past century.\n\n\nIn another sense, the answer is ‘no’. We’re suggesting that sufficiently powerful AI would, by allowing capital to replace human labor, lead to a return to a dynamic, present throughout much of human history, in which labor was accumulable. With this dynamic reestablished, we’re saying that ‘this time will be *the same*’: this time, as before, the economic consequence of an accumulable labor force will be super-exponential growth.\n\n\n#### 4.7 Further research\n\n\n* **Why do experts rule out explosive growth?** This report argues that one should not confidently rule out explosive growth. In particular, I suggest assigning > 10% to explosive growth this century. Experts seem to assign much lower probabilities to explosive growth. Why is this? What do they make of the arguments of the report?\n* **Investigate evidence on endogenous growth theory.**\n\t+ *Assess Kremer’s rank-correlation argument.* Does the ‘more people → more innovation’ story actually explain the rank correlation, or are there other, better explanations?\n\t+ *Investigate theories of long-run growth.* How important is the increasing returns mechanism compared to other mechanisms in explaining the increase in long-run growth?\n\t+ *Empirical evidence on different growth theories.* What can 20th century empirical evidence tell us about the plausibility of various growth theories? I looked into this briefly and it seemed as if the evidence did not paint a clear picture.\n* **Are we currently seeing the early signs of explosive GDP growth?**\n\t+ How long before explosive growth of GDP would we see signs of it in some sector of the economy?\n\t+ What exactly would these signs look like? What can we learn from the economic signs present in the UK before the onset of the industrial revolution?\n\t+ Does the fast growth of current machine learning resemble these signs?\n* **Do returns to technological R&D change over time?** How uneven has the technological landscape been in the past? Is it common to have long periods where R&D progress is difficult, punctuated by periods where it is easier?
More technically, how much does the ‘fishing out’ parameter change over time?\n* **Are there plausible theories that predict exponential growth?** Is there a satisfactory explanation for the constancy of frontier per capita growth in the 20th century that implies that this trend will continue even if population growth slows? Does this explanation avoid problematic knife-edge conditions?\n* **Is there evidence of super-exponential growth before the industrial revolution?** My [sensitivity analysis](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI) suggested that there is, but Ben Garfinkel did a longer [analysis](https://forum.effectivealtruism.org/posts/CWFn9qAKsRibpCGq8/does-economic-history-point-toward-a-singularity) and reached a different conclusion. Dig into this apparent disagreement.\n\t+ **Length of data series**: How long must the data series be for there to be clear evidence of super-exponential growth?\n\t+ **Type of data:** How much difference does it make if you use population vs GWP data?\n* **How likely is a bottleneck to prevent an AI-driven growth explosion?**\n\n\n\n\n---\n\n\n5. Structure of the rest of the report\n--------------------------------------\n\n\nThe rest of the report is **not designed to be read end to end.** It consists of extended appendices that expand upon specific claims made in the main report. Each appendix is designed so that it can be read end to end.\n\n\nThe appendices are as follows:\n\n\n* **Objections to explosive growth** (see [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixA)).\n\t+ This is a long section, which contains many of the novel contributions of this report.\n\t+ It’s probably the most important section to read after the main report, expanding upon objections to explosive growth in detail.\n* **Exponential growth is a knife-edge condition in many growth models** (see [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixB))**.** \n\t+ I investigate one reason to think long-run growth *won’t* be exponential: exponential growth is a knife-edge condition in many economic growth models.\n\t+ This is not a core part of my argument for explosive growth.\n\t+ The section has three key takeaways:\n\t\t1. Sub-exponential growth is more plausible than exponential growth out to 2100.\n\t\t2. There don’t seem to be especially strong reasons to expect exponential growth, raising the theoretical plausibility of stagnation and of explosive growth.\n\t\t3. Semi-endogenous models offer the best explanation of the exponential trend. When you add to these models the assumption that capital can substitute effectively for human labor, they predict explosive growth.
This raises my probability that advanced AI could drive explosive growth.\n* **Conditions for super-exponential growth** (see [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixC)).\n\t+ I report the conditions for super-exponential growth (and thus for explosive growth) in a variety of economic models.\n\t+ These include models of very long-run historical growth, and models designed to explain modern growth altered by the assumption that capital can substitute for labor.\n\t+ I draw some tentative conclusions about what kinds of AI systems may be necessary for explosive growth to occur.\n\t+ This section is math-heavy.\n* **Ignorance story** (see [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixD)).\n\t+ I briefly explain what I call the ‘ignorance story’, how it might relate to the view that there was a step-change in growth around the industrial revolution, and how much weight I put on this story.\n* **Standard story** (see [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixE)).\n\t+ I explain some of the models used to project long-run GWP by the *standard story*.\n\t+ These models forecast GWP/capita to grow at about 1-2% annually out to 2100.\n\t+ I find that the models typically only use post-1900 data and assume that technology will grow exponentially. However, the models provide no more support for this claim than is found in the uninterpreted empirical data.\n\t\t1. Other endogenous models do provide support for this claim. I explore such models in [Appendix B](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixB).\n\t+ I conclude that these models are suitable for projecting growth to 2100 on the *assumption* that 21st century growth resembles 20th century growth. They are not well equipped to assess the probability of a structural break occurring, after which the pattern of 20th century growth no longer applies.\n* **Explosive growth before 2100 is robust to accounting for today’s slow GWP growth** (see [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixF))\n\t+ *Long-run explosive models* predict explosive growth within a few decades. From an outside view perspective[81](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote81_nyskxuc \" By this I mean ignoring theoretical considerations like 'What explains the rise in growth rates?' and 'Is population accumulable?', and only taking into account the historical growth data. \"), it is reasonable to put some weight on such models. But these models typically imply growth should *already* be at ~7%, which we know is false.\n\t+ I adjust for this problem, developing a ‘growth multiplier’ model. It maintains the core mechanism driving increases in growth in the *explosive growth story*, but anchors its predictions to the fact that GWP growth over the last 20 years has been about 3.5%.
As a result, its prediction of explosive growth is delayed by about 40 years.\n\t+ From an outside view perspective, I personally put more weight on the ‘growth multiplier model’ than Roodman’s *long-run explosive model*.\n\t+ In this section, I explain the growth multiplier model and conduct a sensitivity analysis on its results.\n* **How I decide my probability of explosive growth** (see [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixG)).\n\t+ Currently I put ~30% on explosive growth occurring by 2100. This section explains my reasoning.\n* **Links to reviews of the report** (see [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixH)).\n* **Technical appendices** (see [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI)).\n\t+ These contain a number of short technical analyses that support specific claims in the report.\n\t+ I only expect people to read these if they follow a link from another section.\n\n\n\n\n---\n\n\n6. Appendix A: Objections to explosive growth\n---------------------------------------------\n\n\nCurrently, I don’t find any of these objections entirely convincing. Nonetheless, taken together, the objections shift my confidence away from the *explosive growth* *story* and towards the *ignorance story* instead.\n\n\nI initially discuss general objections to explosive growth, then objections targeted specifically at using long-run growth data to argue for explosive growth.\n\n\nHere are the objections, in the order in which I address them:\n\n\n**General objections to explosive growth**\n\n\n*Partially convincing objections*\n\n\n[No evidence of explosive growth in any subsector of the economy](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#NoEvidence) \n\n[Growth models predicting explosive growth are unconfirmed](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#NoEvidence) \n\n[Why think AI automation will be different to past automation?](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#WhyThinkAIAutomation) \n\n[Automation limits](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AutomationLimits) \n\n[Diminishing returns to R&D](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixDiminishing) (+ ‘search limits’) \n\n[Baumol tasks](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#615-baumol-tasks)\n\n\n*Ultimately unconvincing objections*\n\n\n[Explosive growth is so far out of the observed range](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#ExplosiveGrowth) \n\n[Models predicting explosive growth have unrealistic implications](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#ModelsPredictingExplosiveGrowth)\n\n\n**Objections to using long-run growth to argue for explosive growth**\n\n\n*Partially convincing objections*\n\n\n[The ancient data points are unreliable](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#TheAncientData) \n\n[Recent data shows that super-exponential growth in GWP has come to an 
end](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#RecentGWPGrowth) \n\n[Frontier growth shows a clear slowdown](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#FrontierGrowth)\n\n\n*Slightly convincing objections*\n\n\n[Long-run explosive models don’t anchor predictions to current growth levels](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#LongRunAnchor) \n\n[Long-run explosive models don’t discount pre-modern data](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#LongRunDiscount) \n\n[Long-run explosive models don’t seem to apply to time before the agricultural revolution; why expect them to apply to a new future growth regime?](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#LongRunSeem)\n\n\n#### 6.1 General objections to explosive growth\n\n\n#### 6.1.1 No evidence of explosive growth in any subsector of the economy\n\n\n**Summary of objection:** If GWP growth rates were soon going to rise to 30%, we’d see signs of this in the current economy. We’d see 30% growth in sectors of the economy that have the potential to account for the majority of economic activity. For example, before the industrial revolution noticeably impacted GDP, the manufacturing sector was growing much faster than the rest of the economy. But no sector of the economy shows growth anywhere near 30%; so GWP won’t be growing at 30% any time soon.\n\n\n**Response:** I think this objection might rule out explosive growth in the next few decades, but I’d need to see further investigation to be fully convinced of this.\n\n\nI agree that there should be signs of explosive growth before it registers on any country’s GDP statistics. Currently, this makes me somewhat skeptical that there will be explosive growth in the next two decades. However, I’m very uncertain about this due to being ignorant about several key questions.\n\n\n* How long before explosive growth of GDP would we see signs of it in some sector of the economy?\n* What exactly would these signs look like?\n* Are there early signs of explosive growth in the economy?\n\n\nI’m currently very unsure about all three questions above, and so am unsure how far into the future this objection rules out explosive growth. The next two sections say a little more about the third question.\n\n\n#### 6.1.1.1 Does the fast growth of machine learning resemble the early signs of explosive growth?\n\n\nWith regard to the third question, Open Philanthropy believes that there is a non-negligible chance (> 15%) of very powerful AI systems being developed in the next three decades. The economic impact of machine learning is already growing fast with use in Google’s search algorithm, targeted ads, product recommendations, translation, and voice recognition. One recent [report](https://www.marketsandmarkets.com/Market-Reports/deep-learning-market-107369271.html) forecasts an average of 42% annual growth of the deep learning market between 2017 and 2023.\n\n\nOf course, many small sectors show fast growth for a time and do not end up affecting the overall rate of GWP growth! 
It is the further fact that machine learning seems to be a general purpose technology, whose progress could ultimately lead to the automation of large amounts of cognitive labor, that raises the possibility that its fast growth might be a precursor of explosive growth.\n\n\n#### 6.1.1.2 Are there signs of explosive growth in US macroeconomic variables?\n\n\n[Nordhaus (2021)](https://www.aeaweb.org/articles?id=10.1257/mac.20170105&&from=f) considers the hypothesis that explosive growth will be driven by fast productivity growth in the IT sector. He proposes seven empirical tests of this hypothesis. The tests make predictions about patterns in macroeconomic variables like TFP, real wages, capital’s share of total income, and the price and total amount of capital. He runs these tests with US data. Five of the tests suggest that we’re not moving towards explosive growth; the other two suggest we’re moving towards it only very slowly, such that a naive extrapolation implies explosive growth will happen around 2100.[82](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote82_t11z8hr \" Upchurch (2018) has a similar thesis to Nordhaus (2021), but I haven’t investigated its claims in depth.\")\n\n\nNordhaus runs three of his tests with data specific to the IT sector.[83](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote83_bwy25za \" One of these - Test 6 - specifically relates to the share of information capital as a proportional of total capital. Two of the other tests - Tests 3 and 4 - Norhaus primarily applies to capital stock as a whole, but he also tests with data specific to information capital. \") This data is more fine-grained than macroeconomic variables, but it’s still much broader than machine learning as a whole. The IT data is slightly more optimistic about explosive growth, but still suggests that it won’t happen within the next few decades.[84](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote84_np3n2wm \" Test 6 naively suggests that explosive growth will happen in > 100 years; Test 4 with IT-specific data suggests that explosive growth will happen but Nordhaus doesn’t calculate the expected date; Test 3 with IT-specific data suggests explosive growth won’t happen.\")\n\n\nThese empirical tests suggest that, as of 2014, the patterns in US macroeconomic variables are not what you’d expect if explosive growth driven by AI R&D was happening soon. But how much warning should we expect these tests to give? I’m not sure. Nordhaus himself says that his ‘conclusion is tentative and is based on economic trends to date’. I would expect patterns in macroeconomic variables to give more warning than trends in GWP or GDP, but less warning than trends in the economic value of machine learning. Similarly, I’d expect IT-specific data to give more warning than macroeconomic variables, but less than data specific to machine learning.\n\n\n[Brynjolfsson (2017)](https://www.nber.org/papers/w24001)[85](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote85_9y7bg9w \" Niochoj (2018) has a similar thesis.\") suggests economic effects will lag decades behind the potential of the technology’s cutting edge, and that national statistics could underestimate the longer term economic impact of technologies. 
As a consequence, disappointing historical data should not preclude forward-looking technological optimism.[86](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote86_okrtkd9 \"Namely, there is no inherent inconsistency between forward-looking technological optimism and backward-looking disappointment. Both can simultaneously exist. Indeed, there are good conceptual reasons to expect them to simultaneously exist when the economy undergoes the kind of restructuring associated with transformative technologies. In essence, the forecasters of future company wealth and the measurers of historical economic performance show the greatest disagreement during times of technological change. In this paper we argue and present some evidence that the economy is in such a period now… Implicit or explicit in the pessimistic view of the future is that the recent slowdown in productivity growth portends slower productivity growth in the future. We begin by establishing one of the most basic elements of the story: that slow productivity growth today does not rule out faster productivity growth in the future. In fact, the evidence is clear that it is barely predictive at all.\")\n\n\nOverall, Nordhaus’ analysis reduces my probability that we will see explosive growth by 2040 (three decades after his latest data point) but it doesn’t significantly change my probability that we see it in 2050 – 2100. His analysis leaves open the possibility that we are seeing the early signs of explosive growth in data relating to machine learning specifically.\n\n\n#### 6.1.2 The evidence for endogenous growth theories is weak\n\n\n**Summary of objection:** Explosive growth from sufficiently advanced AI is predicted by certain endogenous growth models, both theories of very long-run growth and semi-endogenous growth models augmented with the assumption that capital can substitute for labor.\n\n\nThe mechanism posited by these models is increasing returns to accumulable inputs.\n\n\nBut these endogenous growth models, and the mechanisms behind them, have not been confirmed. So we shouldn’t pay particular attention to their predictions. In fact, these models falsely predict that larger economies should grow faster.\n\n\n**Response summary**:\n\n\n* There is some evidence for endogenous growth models.\n* Endogenous growth models do *not* imply that larger economies should grow faster than smaller ones.\n* As well as endogenous growth models, some *exogenous* growth models predict that AI could bring about explosive growth by increasing the importance of capital accumulation: **more output → more capital → more output →…** (see [more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#ExogenousGrowthModels)).\n\n\nThe rest of this section goes into the first two points in more detail.\n\n\n#### 6.1.2.1 Evidence for endogenous growth theories\n\n\n#### 6.1.2.1.1 Semi-endogenous growth models\n\n\nThese are simply standard semi-endogenous growth theories. Under realistic parameter values, they predict explosive growth when you add the assumption that capital can substitute for labor (elasticity of substitution > 1).\n\n\nWhat evidence is there for these theories?\n\n\n* Semi-endogenous growth theories are inherently plausible. 
They extend standard exogenous theories with the claim that directed human effort can lead to technological progress.\n* Appendix B argues that semi-endogenous growth theories offer a good explanation of the recent period of exponential growth.\n* However, there have not been increasing returns to accumulable inputs in the recent period of exponential growth because labor has not been accumulable. This might make us doubt the predictions of semi-endogenous models in a situation in which there *are* increasing returns to accumulable inputs, and thus doubt their prediction of explosive growth.\n\n\n#### 6.1.2.1.2 Theories of very long-run growth featuring increasing returns\n\n\nSome theories of very long-run growth feature increasing returns to accumulable inputs, as they make technology accumulable and labor accumulable (in the sense that **more output → more people**). If AI makes labor accumulable again, these theories predict there will be explosive growth under realistic parameter values.\n\n\nWhat evidence is there for these theories?\n\n\n* These ‘increasing returns’ models seem to correctly describe the historical pattern of accelerating growth.[87](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote87_ocerc7s \" Indeed, Romer (1986), the first paper in the 'endogenous growth' wave, starts by looking at Maddison data over centuries. \") However, the data is highly uncertain and it is possible that growth did not accelerate between 5000 BCE and 1500. If so, this would undermine the empirical evidence for these theories.\n* Other evidence comes from [Kremer (1993)](https://www.ssc.wisc.edu/~walker/wp/wp-content/uploads/2012/01/kremer1993.pdf#page=31). He looks at five regions – Flinders Island, Tasmania, Australia, the Americas and the Eurasian continent – that were isolated from one another 10,000 years ago and had significantly varying populations. Initially all regions contained hunter gathers, but by 1500 CE the technology levels of these regions had significantly diverged. Kremer shows that the 1500 technology levels of these regions were perfectly rank-correlated with their initial populations, as predicted by endogenous growth models.\n\n\n#### 6.1.2.2 Endogenous growth models are not falsified by the faster growth of smaller economies.\n\n\nDifferent countries share their technological innovations. Smaller economies can grow using the innovations of larger economies, and so the story motivating endogenous growth models does *not* predict that countries with larger economies should grow faster. As explained by [Jones (1997)](https://www.nber.org/papers/w6285.pdf):\n\n\n\n> The Belgian economy does not grow solely or even primarily because of ideas invented by Belgians… this fact makes it difficult… to test the model with cross-section evidence [of different countries across the same period of time]. Ideally one needs a cross-section of economies that cannot share ideas.\n> \n> \n\n\nIn other words, the standard practice of separating technological progress into catch-up growth and frontier growth is fully consistent with applying endogenous growth theories to the *world* economy. Endogenous growth models are not falsified by the faster growth of smaller economies.\n\n\n#### 6.1.3 Why think AI automation will be different to past automation?\n\n\n**Objection:** Automation is nothing new. Since 1900, there’s been massive automation in both production and R&D (e.g. no more calculations by hand). But growth rates haven’t increased. 
Why should future automation have a different effect?\n\n\n**Response:** If AI merely continues the previous pace of automation, then indeed there’s no particular reason to think it would cause explosive growth. However, if AI allows us to approach *full automation*, then it may well do so.\n\n\nA plausible explanation for why previous automation hasn’t caused explosive growth is that growth ends up being bottlenecked by non-automated tasks. For example, suppose there are three stages in the production process for making a cheese sandwich: make the bread, make the cheese, combine the two together. If the first two stages are automated and can proceed much more quickly, the third stage can still bottleneck the speed of sandwich production if it isn’t automated. Sandwich production as a whole ends up proceeding at the same pace as the third stage, despite the automation of the first two stages.\n\n\nNote, whether this dynamic occurs depends on people’s preferences, as well as on the production possibilities. If people were happy to just consume bread by itself and cheese by itself, all the necessary steps would have been automated and output could have grown more quickly.\n\n\nThe same dynamic as with sandwich production can happen on the scale of the overall economy. For example, hundreds of years ago agriculture was a very large share of GDP. Total GDP growth was closely related to productivity growth in agriculture. But over the last few hundred years, the sector has been increasingly automated and its productivity has risen significantly. People in developed countries now generally have plenty of food. But as a result, GDP in developed countries is now more bottlenecked by things other than agriculture. Agriculture is now only a small share of GDP, and so productivity gains in agriculture have little effect on overall GDP growth.\n\n\nAgain this relates to people’s preferences. Once people have plenty of food, they value further food much less. This reduces the price of food, and reduces agriculture’s share of GDP. If people had wanted to consume more and more food without limit, agriculture’s share of the economy would not have fallen so much.[88](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote88_sofn71p \" This effect is closely related to Baumol’s cost disease. Baumol found that sectors with high productivity growth often have a declining share of GDP. As a result, sectors with lower productivity growth are increasingly important to GDP and the GDP growth rate is dominated by these slow-growing sectors.\")\n\n\nSo, on this account, the reason why automation doesn’t lead to growth increases is because the non-automated sectors bottleneck growth.\n\n\nClearly, this dynamic won’t apply if there is full automation, for example if we develop AI systems that can replace human workers in any task. There would be no non-automated sectors left to bottleneck growth. This insight is consistent with models of automation, for example [Growiec (2020)](https://ideas.repec.org/p/sgh/kaewps/2020048.html) and [Aghion et al. 
(2017)](https://web.stanford.edu/~chadj/AJJ-AIandGrowth.pdf) – they find that the effect of full automation is qualitatively different from that of partial automation and leads to larger increases in growth.\n\n\nThe next section discusses whether full automation is plausible, and whether we could have explosive growth without it.\n\n\n#### 6.1.4 Automation limits\n\n\n**Objection:** [Aghion et al.](https://web.stanford.edu/~chadj/AJJ-AIandGrowth.pdf) [(2017)](https://web.stanford.edu/~chadj/AJJ-AIandGrowth.pdf) considers a growth model that does a good job of explaining the past trends in automation and growth. In particular, their model is consistent with the above explanation for why automation has not increased growth in the past: growth ends up being bottlenecked by non-automated tasks.\n\n\nIn their model, output is produced by a large number of tasks that are *gross complements*. Intuitively, this means that each task is essential. More precisely, if we hold performance on one task fixed, there is a limit to how large output can be no matter how well we perform other tasks.[89](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote89_14rzyj2 \" Technically, this means that the elasticity of substitution between tasks is below one.\")  As a result, ‘output and growth end up being determined not by what we are good at, but by what is essential but hard to improve’.\n\n\nThe model highlights that if there is one essential task that we cannot automate, this will ultimately bottleneck growth. Growth will proceed at the rate at which we can improve performance at this non-automated task.\n\n\n**Response:** There are two questions in assessing this objection:\n\n\n1. Will there be an essential task that we cannot automate?\n2. If there is such a task, would this preclude explosive growth?\n\n\n#### 6.1.4.1 Will there be an essential task that we cannot automate?\n\n\nThe first question cannot be answered without speculation.\n\n\nIt does seem possible that we make very impressive progress in AI, automating wide-ranging cognitive abilities, but that there are some essential tasks that we still cannot automate. It is unclear how stable this situation would be: with many cognitive abilities automated, a huge cognitive effort could be made to automate the remaining tasks. Further, if we can restructure workflows to remove the necessity of an un-automated task, the bottleneck will disappear.\n\n\nOne reason to think full automation is plausible is that humans may ultimately have a finite set of capabilities (including the capability to learn certain types of new tasks quickly). Once we’ve developed machines with the same capabilities across the board, there will be nothing more to automate. When new tasks are created, machines will learn them just as quickly as humans.\n\n\nOne possibility is that some tasks will not be automated because we care intrinsically about having a biological human perform the task (e.g. carers, athletes, priests). I don’t expect this to be the *sole* factor preventing explosive growth:\n\n\n* In this scenario, if just *one* group didn’t have this intrinsic preference for human workers, it could grow explosively and ultimately drive explosive growth of GWP. So this scenario seems undermined by the heterogeneity of human preferences.\n* In this scenario the growth model of [Aghion et al. 
(2017)](https://web.stanford.edu/~chadj/AJJ-AIandGrowth.pdf) implies that the percentage of GDP spent on tasks where we prefer human workers approaches 100%.[90](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote90_mkuuwpz \" As output of automated tasks increases, the percentage of GDP spent on completing them falls (as the % spend on agriculture has fallen). \") But this seems unlikely to happen. Tasks crucial for gaining relative power in society, e.g. control of resources and military technology, can in principle be automated in this scenario. It seems unlikely that all actors would allow their spending on these tasks to approach 0%, essentially giving up relative power and influence.\n\t+ If instead a constant fraction of output is spent on automated tasks, we could model this with a task-based Cobb-Douglas production function. With this model, explosive growth then occurs if a sufficiently large fraction of output is spent on the automated tasks (see [this model](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#EndogenousGrowth)).\n\n\n#### 6.1.4.2 If there’s an essential task we cannot automate, does this preclude explosive growth?\n\n\nSlightly more can be said about the second question.\n\n\nFirstly, there can be super-exponential growth without full automation *ever* occurring. If we automate an increasing fraction of non-automated tasks each year, there can be super-exponential growth.\n\n\nFor example, the total fraction of automated tasks goes 0%, 50%, ~83%, ~96%,… We automate 1/2 the non-automated tasks in the first year, 2/3 in the second year, 3/4 in the third year, and so on. In this scenario, the economy is *asymptotically* automated, but never fully automated.\n\n\n* This situation implies that for any task *i*, that task is eventually automated. But this is also implied by the scenario favored in Aghion et al. (2017), in which a *constant* fraction of non-automated tasks are automated each year.\n* I am not claiming here that we *will* automate an increasing fraction of tasks each year, but just that such a situation is plausible (and perhaps similarly plausible to automating a constant fraction each year).\n* Note, super-exponential growth can only be sustained if there is some capital-augmenting technological progress happening in the background.[91](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote91_ijbq4rq \" In this scenario, the model implies that growth cannot exceed s × A - δ. The reinvestment rate s is bounded below 1 and δ is constant, and so super-exponential growth can only be sustained if A, the level of technology, grows. \")\n\n\nWhat if there’s some fixed fraction of tasks that we cannot automate?\n\n\nThis does rule out growth increasing without limit.[92](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote92_rc5k5y3 \" For growth to permanently increase in this model, we must automate a constant fraction of non-automated tasks each year. If some fixed fraction of tasks can never be automated, this process cannot continue indefinitely.\") However, it doesn’t rule out a significant but temporary increase in growth. There may be a long time before non-automated tasks become a bottleneck in practice, and growth may rise considerably during this time. For example, suppose that the number of human carers ultimately bottlenecks growth. 
In the long-run, most of GDP is spent on humans carers and productivity improvements elsewhere will make little difference to GDP growth. Nonetheless, there can be an interim period where human carers are still only a small share of GDP but the quantities of other goods and services are growing extremely rapidly, driving explosive growth of GDP. This explosive growth would end once spending on human carers is a large fraction of GDP.\n\n\nIndeed, the authors of Aghion et al. (2017) acknowledge that even if there’s a limit to automation, ‘growth rates may still be larger with more automation and capital intensity’. Whether growth gets as high as 30% depends on how quickly the other tasks are automated,[93](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote93_yo6j35o \" If tasks are automated faster, peak growth will be higher.\") how quickly we increase the stock of capital,[94](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote94_sqxcaep \" The speed of capital accumulation depends on the following equation: dK/dt = s × A × F(K, L) - δ × K, where s is the investment rate and A is the level of technology. It’s not possible to sustain faster output growth than s × A - δ. \")how important the non-automated task is to the economy,[95](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote95_wqg1yrf \" In the language of the model, this corresponds to the fraction of tasks that we cannot automate.\") and how well we initially perform the non-automated task.[96](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote96_s60c9gl \" If we are initially very productive at the non-automated task compared to the other tasks, it will be longer before it becomes a bottleneck. \")\n\n\n#### 6.1.4.3 A drawback of the model\n\n\nThe model does not seem well suited for thinking about the introduction of new tasks. In their model, introducing a new task can only ever decrease output.[97](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote97_bf3mlwd \" Thanks to Trammell and Korinek (2021) for this insight.\")\n\n\n#### 6.1.4.4 Conclusion\n\n\nThis objection correctly highlights the possibility that very impressive progress in AI doesn’t lead to explosive growth due a few non-automatable tasks. This is a plausible scenario. Nonetheless, explosive growth could occur if we will eventually automate all tasks, or if we automate an increasing fraction of tasks each year, or if growth increases significantly before bottlenecks kick in.\n\n\n#### 6.1.5 Baumol tasks\n\n\n**Objection:** Even if we automate both goods and ideas production, [Aghion et al. (2017)](https://web.stanford.edu/~chadj/AJJ-AIandGrowth.pdf) raises the possibility that physical limits could constrain growth.[98](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote98_t76sn36 \" See their 'Baumol tasks' objection.\")  In particular, they consider a model where each task has its own productivity. If there’s an absolute limit on the productivity of any essential task, then this ultimately limits overall TFP and can prevent explosive growth.\n\n\n**Response:** This objection is correct: ultimately the growth process will come up against physical limits and TFP will reach an absolute ceiling. 
However, this doesn’t give us much reason to rule out explosive growth.\n\n\nFirstly, even once TFP reaches its ceiling we could have fast exponential growth. If we automate all tasks, output is *Y = Amax × K*; reinvestment is *ΔK = sY – δK*; *Amax* is the ceiling for TFP fixed by physical limits. The growth rate of the system is then *Amax × s – δ*, which could be very high indeed.\n\n\nSecondly, we may be a long way from achieving the maximum possible TFP. Before we reach this point, there could be super-exponential growth. The model raises the possibility that we may be closer to the ceiling than we think: if just one essential task hits a limit then this will limit total TFP. However, we should be wary of placing too much weight on this perspective. TFP has not yet been permanently limited by an essential but hard to improve task, despite the economy containing a huge array of tasks and experiencing lots of TFP growth. This is somewhat surprising to an advocate for Baumol tasks: surely just one of the *many* essential tasks should have hit a limit by now? The evidence to the contrary speaks to our ability to increase productivity in essential tasks despite physical limits, or to replace them with new tasks that avoid these limits.\n\n\n#### 6.1.6 What about diminishing returns to technological R&D?\n\n\n**Objection:** There is good evidence that [ideas are getting harder to find](https://web.stanford.edu/~chadj/IdeaPF.pdf), at least when these ideas are weighted by their effects on economic growth.\n\n\nEconomists often understand ‘ideas’ in units such that a constant flow of ideas leads to constant exponential growth in *A*; each idea raises income by a constant percentage.\n\n\nIt is common to represent this effect using the parameter *φ* in the equation *Ȧ = AφX*, where *X* measures the amount of research effort (e.g. number of scientists) and *A* represents TFP. If ideas are getting harder to find, this means that *φ* < 1. This condition is important; it implies that *X* must increase exponentially to sustain exponential growth in *A*.\n\n\nBloom et al. (2020) observe steeply diminishing returns in 20th century R&D; they estimate *φ* = -2.1. Such steeply diminishing returns will surely prevent explosive growth. Perhaps they also explain the end of super-exponential growth in the 20th century.\n\n\n**Response:** The feedback loop between output and inputs can be powerful enough to overcome these diminishing returns, especially if there are increasing returns to accumulable inputs. This is because the feedback loop can be strong enough for *X* to grow *super-exponentially*, leading to super-exponential growth in *A*.\n\n\nThis happens if increasing returns to accumulable inputs are powerful enough to overcome the diminishing returns to R&D.[99](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote99_mp8irzn \" In these models, there are two main factors determining whether growth is super-exponential. Firstly, the importance of accumulable inputs. By an input’s ‘importance’ I mean its output share; this is given by the input’s exponent in Cobb-Douglas models. This first factor depends on whether there is a fixed factor, and whether capital can substitute for labor. Secondly, the diminishing returns to R&D. \") If labor is accumulable, or capital is substitutable with labor (elasticity of substitution > 1), models with plausible parameter values suggest there will be super-exponential growth *despite* the sharply diminishing returns to R&D observed by Bloom et al. The toy simulation sketched below illustrates this dynamic. 
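To make this concrete, here is a minimal toy simulation (a sketch for illustration only, not Roodman’s model or any model from the appendices). Ideas production faces steeply diminishing returns (*φ* = -2, close to the -2.1 estimate quoted above), output is *Y = A × N*, and the only difference between the two runs is whether the input *N* is accumulable (output can be reinvested in more ‘workers’, as with substitutable AI) or fixed (as with human labor after the demographic transition). The functional forms, parameter values, and time units are illustrative assumptions.

```python
# Toy illustration only: a crude Euler simulation of
#   ideas:  dA/dt = A**phi * N    (phi < 1: ideas are getting harder to find)
#   output: Y = A * N
#   input:  dN/dt = s * Y if N is accumulable, else 0
# With N accumulable, the loop output -> input -> ideas -> output makes the
# growth rate of Y rise over time despite phi = -2; with N fixed, it decays.

def growth_rates(accumulable, phi=-2.0, s=0.05, dt=0.01, steps=3000, report_every=500):
    A, N = 1.0, 1.0
    rates = []
    for step in range(1, steps + 1):
        dA = (A ** phi) * N                     # diminishing returns to research
        dN = s * A * N if accumulable else 0.0  # more output -> more N (or not)
        if step % report_every == 0:
            rates.append(round(dA / A + dN / N, 3))  # growth rate of Y = A*N
        A, N = A + dA * dt, N + dN * dt
    return rates

print("N accumulable:", growth_rates(True))   # reported growth rates keep rising
print("N fixed:      ", growth_rates(False))  # reported growth rates fall toward zero
```

The exact numbers are meaningless; the point is the qualitative contrast described above: whether the research input can itself be accumulated determines whether steeply diminishing returns choke off acceleration.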
[More on the conditions for super-exponential growth in these models.](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixC)\n\n\nConsistent with this, various endogenous growth models suggest that the period of super-exponential growth did not end because the diminishing returns to R&D became too steep. Rather, they suggest that the demographic transition, which meant labor was no longer accumulable (in the sense that **more output → more labor**), was the key factor (see [more)](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#CanDiminishingReturns).\n\n\nLastly, even if 20th century diminishing returns *did* rule out explosive growth, it is possible that returns will diminish less steeply in the future (the value of *φ* could increase).[100](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote100_ssyo3r4 \" Agrawal et al. (2019) discuss a dynamic where AI assistance in research raises φ.\") There could be an uneven technological landscape, where progress is slow for a time and then quicker again.\n\n\n**Further objection:** [Aghion et al.](https://web.stanford.edu/~chadj/AJJ-AIandGrowth.pdf) [(2017)](https://web.stanford.edu/~chadj/AJJ-AIandGrowth.pdf) consider a model in which ideas production is fully automated, *Ȧ = AφK*, but growth still does not increase due to ‘search limits’. Importantly, in their model goods production is bottlenecked by labor, *Y* = *AL*.[101](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote101_nezemj7 \" You get the same qualitative result if Y is a CES production function of labor and capital F(L, K) with the elasticity of substitution is less than 1: Y = A × F(L, K).\") If φ > 0, the growth rate increases without limit, but if φ < 0, the growth rate decreases over time. φ < 0 is plausible. Theoretically, it could be explained by a fishing-out process, in which fewer and fewer good ideas remain to be discovered over time. Empirically, Bloom et al. (2020) estimates φ = -2.1 based on 80 years of US data.\n\n\n**Response:** This correctly highlights the possibility that we fully automate R&D without seeing explosive growth. However, I still expect that full R&D automation would lead to explosive growth.\n\n\nFirstly, in this model there would still be a temporary boost in growth while the ideas production was being automated. The automation process would cause research effort *X* to increase, perhaps very rapidly, leading to much faster growth temporarily.\n\n\nSecondly, full automation of ideas production might facilitate full automation of the *goods* production (e.g. if it allows us to automate the process of automating tasks), *Y* = *AK*. Automating tasks is naturally thought of as a research activity. Full automation of goods production would lead to super-exponential growth, no matter what the value of φ.[102](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote102_46m3ngx \" Aghion et al. (2017) considers a model where goods production is automated and technological progress is exogenous and finds that the growth rate increases without limit. 
Further, if both goods production and ideas production are fully automated -- Y = AK and dA/dt = Aφ × K -- then the growth rate increases without limit regardless of the value of φ.\") This is the response I find most convincing.\n\n\nThirdly, even if φ < 0 in the economy on *aggregate*, it may be that φ > 0 in certain important subsectors of the economy and this is sufficient for explosive growth. Of particular importance may be subsectors relating to how efficiently output can be reinvested to create more AI systems. If φ > 0 in these subsectors then, even if φ < 0 on aggregate, the number of AI systems can grow super-exponentially. This could in turn drive super-exponential growth of technology in *all* sectors, and thus drive explosive growth of output. I describe a toy model along these lines in [this technical appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI).\n\n\nIs φ > 0 in the relevant subsectors? The subsectors relating to how efficiently output can be reinvested to make AI systems are likely to be computer hardware and AI software. Bloom et al. (2020) find φ = 0.8 for a measure of computer hardware performance, and data from [Besiroglu (2020)](https://static1.squarespace.com/static/5fb98ea9a787c521ab066091/t/5fba5c3ddb275d51d91825eb/1606048834827/AreModels.pdf) finds φ = 0.85 for a measure of machine learning software performance. Of course this doesn’t show that this scenario is likely to happen, but reinforces the point that there is no easy inference from ‘φ < 0 in the aggregate’ to ‘AI automation of R&D wouldn’t drive explosive growth’.\n\n\nLastly, some papers find φ > 0. Even if it is currently below 0, it may change over time, and rise above 0.\n\n\n#### 6.1.7 Explosive growth is so far out of the observed range\n\n\n**Summary of objection:** No country has ever grown at *anywhere near* 30%. Even when China was at its peak rate of catch-up growth, benefitting significantly from adopting advanced western technology, it grew at 8%. Never in history has a country grown faster than 10%. Explosive growth is so far out of the observed range that it should be regarded as highly improbable.\n\n\n**Response:** This is a very natural objection, but ultimately I find it unconvincing.\n\n\nThe same kind of reasoning would have led people in 1750, when growth had never been higher than 0.3%, to rule out growth of 3%. And the same reasoning again would have led hypothetical economists alive in 5000 BCE, when the rate of growth had never been higher than 0.03%, to rule out growth of 0.3%. Growth rates have increased by two orders of magnitude throughout history, and so the reasoning ‘growth rates will stay within the historically observed ranges’ would have repeatedly led to false predictions.\n\n\nIt is true that 30% growth *by 2100* would involve a ten-fold increase in growth happening more quickly than any comparable increase in history. The increase from 0.3% to 3% took more than 150 years to occur and there are only 80 years left until 2100. But historically, increases in the growth rate have happened over progressively shorter time periods. For example, the increase from 0.03% to 0.3% took 6000 years. In 1700 it would have been a mistake to say ‘it took thousands of years for growth rates to increase ten-fold from 0.03% to 0.3%, so it will be thousands of years before growth increases ten-fold again to 3%’. This reasoning would ignore the historical pattern whereby growth increases more quickly over time. 
Similarly, it would be a mistake now to reason ‘it took hundreds of years for growth rates to increase from 0.3% to 3%, so it will be hundreds of years before growth could reach 30%’.[103](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote103_bjf4okl \" It could be objected that long before 3% growth we had seen that after plagues or access to new lands human populations could grow rapidly given abundant resources. This could have enabled us to speculate that growth as high as 3% might be possible. But similarly, by looking at the growth of mice and bacteria we can say that growth of a system can in principle be much faster than 30% per year. By a similar token, we could use this observed growth to speculate that 30% growth might be possible.\")\n\n\nSo the fact that growth has never previously been anywhere near as high as 30% is not by itself a good reason to rule out explosive growth.\n\n\nRelatedly, it would be unreasonable to assign an extremely low prior to 30% growth occurring.[104](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote104_e3nwlbd \" As Bryan Caplan seems to do here.\")Priors assigning tiny probabilities to GWP growth increasing well above its observed range would have been hugely surprised by the historical GWP trend. They should be updated to assign more probability to extreme outcomes.\n\n\n#### 6.1.8 Models predicting explosive growth have implausible implications\n\n\n**Summary of objection:** The very same endogenous growth models that predict explosive growth by 2100 also predict that GWP will go to infinity in finite time. This prediction is absurd, and so the models shouldn’t be trusted.\n\n\nThis objection is in the spirit of a comment from economist Robert Solow:\n\n\n\n> It is one thing to say that a quantity will eventually exceed any bound. It is quite another to say that it will exceed any stated bound before Christmas.[105](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote105_aazje1x \" Solow (1994) p. 50.\")\n> \n> \n\n\n**Response:** Ultimately, I find this objection unconvincing.\n\n\nClearly, the economy cannot produce infinite output from a finite input of resources. And indeed this is exactly what certain endogenous growth models predict. But there are two ways to interpret this result.\n\n\n1. These models’ description of super-exponential growth is not realistic in any circumstances.\n2. Endogenous growth models’ description of super-exponential growth is only realistic up to a certain point, after which it ceases to be realistic.\n\n\nI favor the second explanation for two reasons.\n\n\nFirstly, it is very common for scientific theories to be accurate only in certain bounded regimes. This is true of both the hard sciences[106](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote106_9tkk411 \" For example, Newtonian mechanics is accurate only when objects are moving much slower than the speed of light, Newton’s theory of gravity is accurate only when objects’ masses are sufficiently small, and protons and neutrons are not predictively useful concepts in very high energy conditions (under such conditions particle-like objects of this sort do not emerge from quantum field theory). 
\") and the social sciences.[107](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote107_y5yw90f \" There is a large literature on circumstances in which actual human behavior differs from the predictions of economics’ rational agent model. Nonetheless, the rational agent model is fairly accurate in many situations. \") As such, pointing out that a theory breaks down *eventually* only provides a very weak reason to think that it isn’t realistic in any circumstances. So the first explanation seems like an overreaction to the fact that theory breaks down eventually.\n\n\nSecondly, it is independently plausible that the mechanism for super-exponential growth will break down eventually in the face of physical limits.\n\n\nThe mechanism is more output → more capital → better technology → more output →… But this cycle will eventually run up against physical limits. Eventually, we will be using the fixed input of physical resources in the best possible way to produce output, and further increases in output will be capped. At this stage, it won’t be possible to reinvest output in such a way as to significantly increase future output and the cycle will fizzle out.\n\n\nIn other words, we have a specific explanation for why we will never produce infinite output that leaves open the possibility that explosive growth occurs in the medium term.\n\n\nSo the fact that super-exponential growth must approach limits *eventually* – this particular objection – is itself only weak evidence that we have already reached those limits.\n\n\nIn addition to the above, many models predict explosive growth without implying output rises to infinity in finite time. For example, Nordhaus (2021) and Aghion et al. (2017) consider a model in which good production is fully automated but technological progress is still exogenous. This leads to a ‘type 1 singularity’ in which the growth rate increases without limit but never goes to infinity. Similarly, the models in Lee (1993) and [Growiec (2020)](https://econpapers.repec.org/paper/sghkaewps/2019042.htm) both predict significant increases in growth but again the growth rate remains finite.\n\n\n#### 6.2 Objections to using the long-run growth to argue for explosive growth\n\n\n#### 6.2.1 The ancient data points used to estimate long-run explosive models are highly unreliable\n\n\n**Objection:** We have terrible data on GWP before ~1500, so the results of models trained on this ‘data’ are meaningless.\n\n\n**Response:** Data uncertainties don’t significantly affect the predictions of the long-run explosive models. However, they do undermine the empirical support for these models, and the degree of trust we should have in their conclusions.\n\n\n#### 6.2.1.1 Data uncertainties don’t significantly alter the predictions of long-run explosive models\n\n\nDespite very large uncertainties in the long-run GWP data, it is clearly true that growth rates used to be much lower than they are today. This alone implies that, if you fit endogenous growth models to the data, you’ll predict super-exponential growth. Indeed, Roodman fit his model to several different data sets, and did a robustness test where he pushed all the data points to the tops and bottoms of their uncertainty ranges; in all cases the median predicted date of explosive growth was altered by < 5 years. 
This all suggests that data uncertainties, while significant, don’t drive much variation in the predictions of long-run explosive models.\n\n\nUsing alternative data series, like GWP/capita and frontier GDP/capita, changes the expected year of explosive growth by a few decades, but the fitted models still expect it before 2100.[108](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote108_gp0eqyk \" See Roodman (2020) Table 4 - p. 42.\")\n\n\nI did a sensitivity analysis, fitting Roodman’s univariate model to shortened GWP data sets starting in 10,000 BCE, 2000 BCE, 1 CE, 1000 CE, 1300 CE, 1600 CE, and 1800 CE. In every case, the fitted model expects explosive growth to happen eventually. (This is no surprise: as long as growth increases on average across the data set, long-run explosive models will predict explosive growth eventually.) The median predicted date for explosive growth is increasingly delayed for the shorter data sets;[109](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote109_mt8gre9 \" Intuitively, this is because the post-1950 slowdown in GWP growth has more influence over the model’s predictions for the shorter data sets.\") the model still assigns > 50% probability to explosive growth by 2100 if the data starts in 1300 CE or earlier. [Sensitivity analysis on shortened data sets.](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI)\n\n\nSo the predictions of explosive growth can be significantly delayed by completely removing old data points; the obvious drawback is that by removing these old data points you lose information. Apart from this, the predictions of long-run explosive models do not seem to be sensitive to reasonable alterations in the data.\n\n\n#### 6.2.1.2 Data uncertainties undermine the empirical support for long-run explosive models\n\n\nThe long-run explosive models I’ve seen explain very long-run growth using the increasing returns mechanism. This mechanism implies growth should increase smoothly over hundreds and thousands of years.[110](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote110_xzf0phe \" The mechanism is also used by Jones (2001) and Galor and Weil (2000). These theories don’t predict explosive growth because they model the demographic transition (see more).\")\n\n\nThe data seems to show growth increasing fairly smoothly across the entire period 10,000 BCE to 1950 CE; this is a good fit for the increasing returns mechanism. However, I think the uncertainty of pre-modern data is great enough that the true data may show growth in the period 5000 BCE to 1600 CE to be roughly constant. This would undermine the empirical support for the long-run explosive models, even if it wouldn’t substantially change their predictions.\n\n\nDoubts about the goodness of fit are reinforced by the fact that alternative data series, like GWP/capita and frontier GDP/capita, are a worse fit to the increasing returns mechanism than the GWP series.\n\n\nAs an alternative to the increasing returns mechanism, you might instead place weight on a theory where there’s a single slow step-change in growth rates that happens between 1500 and 1900 (Ben Garfinkel proposes such a view [here](https://forum.effectivealtruism.org/posts/CWFn9qAKsRibpCGq8/does-economic-history-point-toward-a-singularity?commentId=3D8hpEFbYmEGA8i5P)). 
Though a ‘slow step-change’ view of long-run growth rates will have a lesser tendency to predict explosive growth by 2100, it would not rule it out. For this, it would have to explain why step change increases in growth rate have occurred in the past, but more could not occur in the future.\n\n\n* [More on the slow step-change view](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#TheStepChange).\n* [Adjudicating between the slow step-change view and the increasing returns mechanism](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI).\n\n\nDespite these concerns, it still seems likely to me that the increasing return mechanism offers an important role in explaining the long-run growth data. This suggests we should place weight on long-run explosive models, as long as population is accumulable.\n\n\n#### 6.2.2 Recent GWP growth shows that super-exponential growth has come to an end\n\n\n**Objection:** Recently, GWP growth has been much lower than long-run explosive models have predicted. This shows that these models are no longer useful for extrapolating GWP\n\n\n**Response**: Roodman (2020) does a careful analysis of how ‘surprised’ his model is by the recent data. His model is somewhat surprised at how slow GWP growth has been since 1970. But the data are not in very sharp conflict with the model and only provide a moderate reason to distrust the model going forward.\n\n\nWe can assess the size of the conflict between the model and the recent data in three ways: eyeballing the data, quantifying the conflict using Roodman’s model, and comparing the recent slowdown to historical slowdowns.\n\n\n(Note, by ‘slowdown’ I mean ‘period where growth either remains at the same level or decreases’. This is a ‘slowdown’ compared to the possibility of super-exponential growth, even if growth remains constant.)\n\n\n#### 6.2.2.1 Eyeballing how much the recent data conflicts with Roodman’s model\n\n\nFirst, here’s the graph we saw earlier of GWP against time. Though the recent points deviate slightly from Roodman’s trend, the difference is not significant. It looks smaller than previous historical deviations after which the trend resumed again.\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageI-3.png)\n\n\nA representation that highlights the deviation from the expected trend more clearly is to plot GWP against its average growth in the following period:\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageA-1.png)\n\n\nThe last five data points indicate the growth after 1970 is surprisingly low. But again they do not seem to be in very sharp conflict with the trend.\n\n\n#### 6.2.2.2 Quantifying how much the recent data conflicts with Roodman’s model\n\n\nIt’s possible to quantify how surprised Roodman’s model is by a data point, given the previous data points ([more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI)). The results are that:\n\n\n* 1980 GWP is between the 40th and 50th percentiles, so isn’t surprising.\n* 1990, 2000, 2010, and 2019 GWP are between the 20th and 30th percentiles, so are surprising but not hugely surprising. 
If Roodman’s model incorporated serial correlation between random deviations from the underlying trend, the surprise would be smaller still.\n\n\n#### 6.2.2.3 The recent slowdown is large compared to other slowdowns in GWP growth\n\n\nGrowth in the period 1970 – 2020 has been slower than previously. During this time the economy has increased in size by a factor of 5.4. We can compare this to previous slowdowns after which the long-run super-exponential trend reasserted itself. If the recent growth slowdown is similar in size or smaller, this weakly suggests that the super-exponential trend will reassert itself once again, by analogy with previous slowdowns.\n\n\nThere are a couple of other slowdowns in GWP growth in the historical data:\n\n\n* Growth in the period 200 BCE – 1000 CE was consistently slower than in the previous thousand years. In this time the economy increased in size by a factor of 1.7.\n* Growth in the period 1200 CE – 1400 CE was slower than the previous period. In this time the economy did not increase in size.\n\n\nSo it seems the recent slowdown is shorter than previous slowdowns in terms of calendar years but *longer* when measured by the fractional increase of GWP.[111](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote111_yz9qab6 \" I feel that both the length of the slowdown in calendar time and the fractional increase in GWP during the slowdown are relevant. The first is relevant because slowdowns are caused by dynamics that play out over roughly fixed amounts of calendar time, like pandemics and human rulers. The second is relevant because (to oversimplify) the endogenous growth models we’ve focused on suggest that when GWP doubles, its growth should increase by some percentage (in Roodman’s model this is about 46%). So if growth stays constant (or decreases) during a period, the model is surprised to the extent that GWP increases over that period. To the extent that slowdowns are caused by unevenness in the technological landscape (see next section), we should measure their length by the amount of technological progress that is made during the slowdown. On this measure, the current slowdown is much longer than past slowdowns.\") This weakly suggests the slowdown is not just random, but rather the result of some systematic factor. The return to super-exponential growth after past slowdowns is not a strong indicator that we’ll return to super-exponential growth after the current one.\n\n\nThe next section aims to strengthen this evidence further, by focusing on the growth of frontier economies (e.g. US, UK, France), rather than just merely GWP growth.\n\n\n#### 6.2.2.4 So what?\n\n\nIf we think the demographic transition explains the recent slowdown, we may not be moved by this objection. I argued in the main report that we can think of highly substitutable AI as reversing the demographic transition, after which we would expect super-exponential growth to resume. The report’s basic thesis that sufficiently advanced AI could lead to explosive growth is consistent with the recent data.\n\n\nAlternatively, we might have a more agnostic approach to the causes of long-run growth and the recent slowdown (i.e. the [ignorance story](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixD)). In this case, the recent data provides a stronger reason to reduce the probability we assign to explosive growth. 
However, it doesn’t provide a decisive reason: the recent data is not *hugely* improbable according to Roodman’s model.\n\n\n#### 6.2.3 Frontier growth shows a clear slowdown\n\n\n#### 6.2.3.1 Summary of objection\n\n\nThe prolonged lack of super-exponential growth of GDP per capita in frontier countries is striking. US per capita income has grown steadily at 1.8% for 150 years ([since 1870](https://ourworldindata.org/economic-growth)), and other frontier countries show similar trends. The only reason GWP data doesn’t show the same pattern is catch-up growth. The lack of super-exponential growth over such a long period is strong evidence against long-run explosive models.\n\n\nEven the trend in frontier GDP/capita may be overly generous to long-run explosive models. Frontier GDP/capita has recently been boosted by a number of one-off changes: e.g. the reallocation of people of color from low-wage professions to high-wage professions, the entry of women into the workforce, and improved educational achievement. [Hsieh et al. (2013)](http://klenow.com/HHJK.pdf) estimates that improvements in the allocation of talent may explain a significant part of U.S. economic growth over the last 60 years.[112](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote112_l2iexki \" It finds that 20 - 40% of growth in output per person can be explained by improved talent allocation.\") If we adjusted for these factors, the trend in frontier GDP/capita would likely be even more at odds with the predictions of long-run explosive models.\n\n\nThis strengthens the objection of the previous section.\n\n\n#### 6.2.3.2 Elaboration of objection\n\n\nThis objection is hard to spell out in a conceptually clean way because *endogenous growth models like Roodman’s are only meant to be applied to the global economy as a whole, and so don’t necessarily make explicit predictions about frontier growth*. The reason for this is that the growth of any part of the global economy will be influenced by the other parts, and so modeling only a part will necessarily omit dynamics relevant to its growth. For example, if you only model the US you ignore R&D efforts in other countries that are relevant to US growth.\n\n\nNonetheless, I do feel that there is something to this objection. GWP cannot grow super-exponentially for long without the frontier growing super-exponentially.\n\n\nIn the rest of this section I:\n\n\n* Suggest the size of the ‘frontier growth slowdown’ is about twice as big as the already-discussed GWP slowdown.\n* Suggest that the most natural application of Roodman’s univariate model to frontier growth allows the objection to go through.\n\n\n(Again, I use ‘slowdown’ to refer to a period of merely exponential growth, which is ‘slower’ than the alternative of super-exponential growth.)\n\n\n#### 6.2.3.2.1 How much bigger is the frontier growth slowdown than the GWP slowdown?\n\n\nI have briefly [investigated](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixD) the timescales over which frontier growth has been exponential, rather than super-exponential, by eyeballing GDP and GDP/capita data for the US, England, and France. My current opinion is that the frontier shows clear super-exponential growth if you look at data from 1700, and still shows super-exponential growth in data from 1800. However, data from about 1900 shows very little sign of super-exponential growth and looks exponential. A simple way to sharpen this kind of eyeball check is sketched below. 
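Under exponential growth, the growth rate is unrelated to the level of GDP, whereas under super-exponential growth it rises with GDP; so one crude diagnostic (my own illustration, not the method used in the report or its appendices) is to regress period growth rates on log GDP and look at the sign of the slope. The two series below are synthetic placeholders, not real GDP data.

```python
import numpy as np

def slope_of_growth_on_log_gdp(gdp):
    """Positive slope suggests super-exponential growth; ~0 suggests exponential."""
    gdp = np.asarray(gdp, dtype=float)
    growth = np.diff(np.log(gdp))   # per-period growth rates
    level = np.log(gdp[:-1])        # log GDP at the start of each period
    slope, _intercept = np.polyfit(level, growth, 1)
    return slope

years = np.arange(60)
exponential = 100 * np.exp(0.02 * years)                             # constant 2% growth
super_exponential = 100 * np.exp(0.02 * years * (1 + years / 100))   # rising growth rate

print(round(slope_of_growth_on_log_gdp(exponential), 4))        # ~0
print(round(slope_of_growth_on_log_gdp(super_exponential), 4))  # > 0
```

Applied to real frontier GDP series, noise, wars, and catch-up effects would complicate the picture, which is one reason the conclusions in this section are stated tentatively.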
So the slowdown in frontier growth is indeed more marked than that for GWP growth. Rather than just 50 years of slowdown during which GWP increased by a factor of 5.4, there’s more like 120 years of slowdown during which GDP increased by about 10-15X.[113](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote113_zqwzo5w \" The ratio of English GDP between 2016 and 1900 is roughly 10. The ratio of per capita US GDP between 1870 and 2016 is about 14.\")\n\n\nMy current view is that considering frontier GDP/capita data increases the size of the deviation from the super-exponential trend by a factor of 2-3 compared to just using GWP data. This is because the deviation’s length in calendar time is 2-3 times bigger (120 years rather than 50 years) and the GDP increase associated with the deviation is 2-3 times bigger (GDP increases 10-15X rather than 5X). Recent frontier growth poses a bigger challenge to the explosive growth theory than recent GWP growth.\n\n\nThis is consistent with the [results](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#graph-of-how-surprised) Roodman got when fitting his model to French per capita GDP. Every observation after 1870 was below the model’s predicted median, and most lay between the 20th and 35th percentiles. The model was consistently surprised at the slow pace of progress.\n\n\n#### 6.2.3.2.2 The simplest way of extending Roodman’s model to frontier countries implies they should grow super-exponentially\n\n\nRoodman’s model implies that GWP should grow super-exponentially but does not say how the extent to which this growth results from frontier vs catch-up growth should change over time.\n\n\nThe simplest answer seems to be that both frontier and catch-up growth is super-exponential. The same story that explains the possibility of super-exponential growth for the total world economy – namely increasing returns to endogenous factors including technology – could also be applied to those countries at the frontier. If frontier countries invested their resources in helping others catch up we might expect something different. But on the realistic assumption that they invest in their own growth, it seems to me like the story motivating Roodman’s model would predict super-exponential growth at the frontier.\n\n\nThe lack of frontier super-exponential growth is especially surprising given that frontier countries have been significantly increasing their proportional spend on R&D.[114](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote114_0r6l4dt \" See data here.\") Roodman’s model assumes that a constant fraction of resources are invested and predicts super-exponential growth. How much more surprising that we see only constant growth at the frontier when the fraction of resources spent on R&D is increasing! The expansion of the size of the frontier (e.g. to include Japan), increasing the resources spent on frontier R&D even further, strengthens this point.\n\n\n**Response: deny the frontier should experience smooth super-exponential growth**\n\n\nA natural response is to posit a more complex relationship between frontier and catch-up growth. You could suggest that while GWP as a whole grows at a fairly smooth super-exponential rate, progress at the frontier comes in spurts. The cause of GWP’s smooth increase alternates between spurts of progress at the frontier and catch-up growth. 
The cause of this uneven progress on the frontier might be an uneven technological landscape, where some advances unlock many others in quick succession but there are periods where progress temporarily slows.\n\n\nI think that accepting this response should increase our skepticism about the precise predictions of Roodman’s model, moving us from the *explosive-growth story* towards the *ignorance story*. It would be a surprising coincidence if GWP follows a predictable super-exponential curve despite frontier growth being the result of a hard-to-anticipate and uneven technological landscape.[115](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote115_ce792co \" For GWP growth to be smooth, we would need the effect of catch-up growth on GWP to exactly cancel the non-smooth progress of the frontier.\") So, for all we know, the next spurt of frontier progress may not happen for a long time, or perhaps ever.\n\n\n#### 6.2.3.3 So what?\n\n\nAgain, this objection may not move you much if you explain the slowdown via the demographic transition. The recent data would not undermine the belief that super-exponential growth will occur *if* we get sufficiently substitutable AI.\n\n\nIf you are more agnostic, this will provide a stronger reason to doubt whether explosive growth will occur. The length of the slowdown suggests a structural break has occurred, and the super-exponential trend has finished (at least temporarily). Still, without an explanation for why growth increased in the past, we should not rule out more increases in the future. 120 years of exponential growth, after centuries of increasing growth rates, suggests agnosticism about whether growth will increase again in the next 80 years.\n\n\n#### 6.2.4 Long-run explosive models don’t anchor predictions to current growth levels\n\n\n**Objection:** The models predicting explosive growth within a few decades typically expect growth to *already* be very high. For example, the median prediction of Roodman’s model for 2020 growth is 7%. Its predictions aren’t anchored sufficiently closely to recent growth. I analyze this problem in more detail in an [appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#roodmans-model-is-overly-surprised).\n\n\n**Response:** I developed a variant of Roodman’s model that is less theoretically principled but models a correlation between growth in adjacent periods. This ‘growth differences’ model anchors its predictions about future growth to the current GWP growth rate of 3%.\n\n\nThe model’s median predicted year for explosive growth is 2082 (Roodman: 2043), a delay of about 40 years; its 80% confidence interval is [2035, 2870] (Roodman: [2034, 2065]). This suggests that adjusting for this problem delays explosive growth but still leaves a significant probability of explosive growth by 2100. [Explanation of the model I developed.](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixF)\n\n\nI find this model most useful as an ‘outside-view’ that projects GWP based solely off past data, without taking into account specific hypotheses like ‘the demographic transition ended the period of super-exponential growth’, or ‘we’d only expect to see super-exponential growth again once advanced AI is developed’. If we embrace specific inside-view stories like these, we’d want to make adjustments to the model’s predictions. 
(For the examples given, we’d want to further delay the predicted dates of explosive growth based on how far we are from AI that’s sufficiently advanced to boost the growth rate.)\n\n\nHow might we adjust the model’s predictions further based on our beliefs about AI timelines?\n\n\nSuppose you think it will be (e.g.) three decades before we have AI systems that allow us to increase the rate of growth (systems before this point might have ‘level effects’ but not noticeably impact growth). You could make a further adjustment by assuming we’ll continue on our current growth trajectory for three decades, and then growth will change as shown in the graph. In other words, you’d delay your median predicted year for explosive growth by another 30 years to about 2110. However, you’ll still assign some probability to explosive growth occurring by the end of the century.\n\n\nI plotted the 10th, 50th, and 90th percentiles over GWP from three methods:[116](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote116_4a4742m \" These plots are generated by the final section of this python notebook.\")\n\n\n* Surveying economists about GWP/capita and combining their answers with UN population projections to forecast GWP (‘ ’).\n* Fitting [David Roodman’s growth model](https://www.openphilanthropy.org/research/modeling-the-human-trajectory/) to long-run historical GWP data (‘ ’).\n* Fitting my variant on Roodman’s model to long-run GWP data (‘ ’). \n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageH-1.png)\n\n\nI am currently inclined to trust the projections somewhere in between growth differences and Roodman’s model if we develop highly substitutable[117](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote117_ww9uc66 \" See my best guess about what would count as ‘highly substitutable’ here.\") AI systems (though I don’t think any model is a reliable guide to growth in this scenario), and the projections of the standard story if we don’t.\n\n\nSee code producing these plots at the bottom of [this notebook](https://colab.research.google.com/drive/11oAdADbcd6GCslV0P5ESubqghaQlpyh2?usp=sharing). (If the link doesn’t work, the colab file can be found in [this folder](https://drive.google.com/drive/folders/1dzO1eZ8xSeePOntXOGNhSK5qqsgteHSp).)\n\n\n#### 6.2.5 Long-run explosive models don’t discount pre-modern data\n\n\n**Objection:** For example, Roodman’s model downweights ancient data points for their uncertainty, but does not additionally downweight them on the basis that they are less relevant to our current growth regime. But more recent data *is* more likely to be relevant because the underlying dynamics of growth may have changed.\n\n\n**Response:** My ‘growth-differences’ model allows the user to specify the rate at which ancient data points are discounted.[118](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote118_7ozusj6 \" A datapoint when GWP was 1/2n times its current value is discounted by a factor dn, d<1. So the discount is not applied at a fixed rate per unit time. 
\") For my preferred discount rate,[119](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote119_lw93wiz \" My preferred discount implies that, compared to a 2000 data point, a 1940 data point has weight 0.73, a 1820 data point has weight 0.53, and a 3000 BCE data point has weight 0.23.\") this delays explosive growth by another 15 years to ~2090; it still assigns a 10% chance of explosive growth by 2040. Adjusting for this problem delays explosive growth further but leaves a significant probability of explosive growth by 2100.[120](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote120_mybj6mk \" This discount rate may be an unhappy compromise. If output cannot easily be reinvested to increase the size of labor supply (as will be true by default unless we develop highly substitutable AI), this approach may still put too much weight on pre-modern data points when labor was accumulable. On the other hand, if AI systems means that output can be easily reinvested to increase the generalized labor supply (= human labor + AI labor), then placing more weight on recent data points may be inappropriate as these are the data points for which labor isn’t accumulable.\") Again, if you think AI won’t start to affect growth for several decades, you would need to delay your median projection further (see [more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#a-reasonable-discount-can-delay)).\n\n\nI also perform a sensitivity analysis on the effects of removing pre-modern data points. I find that the prediction of explosive growth by 2100 is robust to removing data points before 1300, but not to removing data points before 1600 (see [more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#investigation-is-super-exponential-growth)).\n\n\n#### 6.2.6 Long-run explosive models don’t seem to apply to the time before the agricultural revolution; why expect them to apply to a growth regime in the future?\n\n\n**Summary of objection:** Roodman (2020) does the most sophisticated analysis on the fit of his model to data before 10,000 BCE. He finds that if he fits his model to data from 1 million years ago to the modern day, the estimated model is not a good fit to the data series. It confidently predicts that civilization will collapse within the first few 100,000 years, with a 98% chance of eventual collapse. Given that Roodman’s model did not describe a previous era – that of hunter gatherers – we should not trust its predictions about a future era of supposed explosive growth.\n\n\n**Response:** I think this objection might potentially justify agnosticism about explosive growth, but it doesn’t confidence that it will not occur.\n\n\nLet’s distinguish between three attitudes towards explosive growth:\n\n\n1. Confidence that explosive growth will occur (*explosive growth story*).\n2. Ignorance about whether explosive growth will occur (*ignorance story*).\n3. Confidence that explosive growth *won’t* occur (*standard story*).\n\n\nI think that, at most, this objection might move you from Attitude 1 towards Attitude 2. It’s not an argument for Attitude 3. The objection provides a reason to doubt the predictions of Roodman’s model, but doesn’t provide any specific reason to rule out explosive growth.\n\n\nI personally regard this objection as only a weak argument against Attitude 1. 
This is because a key part of technological progress, the driver to super-exponential growth, is the ability for new ideas to spread throughout society. But human societies with natural language only developed 50,000 – 150,000 years ago.[121](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote121_xzwjmfk \" See here. \") So we wouldn’t expect Roodman’s model to be accurate before this point. As Roodman points out:\n\n\n\n> Through language, humans could share ideas more efficiently and flexibly than any organism before. Arguably, it was then that technology took on its modern, alchemical character as a force in economic development. Before, hominins had developed important technologies such as handaxes. But it is not obvious that those intellectual mutations spread or evolved any faster than the descendants of those who wrought them. After, innovations could diffuse through natural language, the first new medium of arbitrary expressiveness on Earth since DNA.\n> \n> \n\n\nIn addition, humans couldn’t accumulate capital until we became sedentary. This happened around the neolithic era, giving another reason to think growth dynamics would be different before 10,000 BCE.\n\n\n\n\n---\n\n\n7. Appendix B: Constant exponential growth is a knife-edge condition in many growth models\n------------------------------------------------------------------------------------------\n\n\nThe growth literature has found it very difficult to find a satisfactory theoretical explanation for why long-term growth would be exponential, despite decades of effort. In many endogenous growth models, long-run growth is only exponential under knife-edge conditions. This means that constant exponential growth only occurs when some parameter is *exactly* equal to some value; the smallest disturbance in this parameter leads to a completely different long-run behavior, with growth either going to infinity or to 0. Further, it seems that these knife-edge conditions are problematic: there’s no particular reason to expect the parameter to have the precise value that leads to constant exponential growth.\n\n\nI argue the best candidates for addressing this problem are semi-endogenous models. Here the ‘knife-edge condition’ is merely that the population grows exponentially.[122](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote122_cyxx4ug \" See data on frontier population growth here.\") For this and other reasons discussed in this section, I place more weight on semi-endogenous models (~75%) than on any other models in explaining the recent trend of exponential growth.\n\n\nThe UN forecast that population growth will slow over the 21st century. When you plug this assumption into semi-endogenous growth models, they predict that GDP/capita growth will slow. This raises my probability that 21st century growth will be sub-exponential. 
The difficulty of finding a non-knife-edge explanation of exponential growth also raises my credence that the pattern of exponential growth is a transitional phase rather than the beginning of a steady-state regime.[123](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote123_rtio41p \" It would be transitional, for example, if it was a temporary deviation from the historical pattern of super-exponential growth, or a transitional period between pre-1900 super-exponential growth and post-2000 sub-exponential growth.\") Nonetheless, I still assign substantial probability (~20%) that there is some mechanism generating exponential growth that will continue to function until 2100, although I’m not sure what it would be.\n\n\nThe rest of this section is as follows:\n\n\n* I explain my intuitive understanding of the claim that constant exponential growth is an unmotivated knife-edge condition ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AnIntuitiveExplanation)).\n* I review the knife-edge conditions in a number of endogenous growth models ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#KnifeEdges)).\n\t+ This section also makes some other objections to certain models, explaining my preference for semi-endogenous models.\n* I briefly review the sub-literature that claims that a very large class of models have knife-edge conditions for exponential growth ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AnEconomic)).\n* I discuss a recent model that claims to produce exponential growth without knife-edge conditions ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#MightMarketDynamics)).\n\n\n#### 7.1 An intuitive explanation for why exponential growth might be a knife-edge condition in endogenous growth models\n\n\nLet’s focus on the endogenous factor of technology. Assume that we invest a constant fraction of output into technology R&D. This investment causes the level of technology to improve by a certain percentage each year. We’re interested in how this percentage changes over time, as technology advances. In other words, we’re interested in how the rate of technological progress changes over time, with this progress measured as a percentage.\n\n\nAs technology improves, there are (at least) two things that might affect the rate of future progress. Firstly, in the future there may be less low-hanging fruit as we have made all the easy technological discoveries and only difficult ones remain. Call this the *fishing out* effect. Secondly, we can use the new technology in our future research, increasing the effectiveness of future R&D efforts (e.g. use of the internet). Call this the *standing on shoulders* effect.\n\n\nThese two effects point in opposite directions but there is no reason to expect them to cancel out exactly. The *fishing out* effect relates to the landscape of technological discoveries, and how quickly the easy discoveries dry up; the *standing on shoulders* effect relates to the extent to which we can harness new technologies to improve the process of R&D. The two effects relate to very different things. So by default, we should expect these factors *not* to cancel out exactly. And so we should expect the rate of technological progress to either speed up or to slow, depending on which effect is more powerful. 
But there’s no reason to think that the rate of progress should stay exactly constant over time. This would be like giving one tennis player a broken arm and their opponent a broken leg, and expecting the two effects to cancel out exactly.\n\n\nMore nuanced models add additional factors that influence the rate of technological progress (e.g. the ‘stepping on toes effect’). But these additional factors don’t make it any more plausible that everything should cancel out and growth should be exponential.\n\n\nThe conclusion of this line of thinking is that, theoretically speaking, we shouldn’t expect technology to grow exponentially.\n\n\nA similar argument can be applied to output as a whole, rather than just technology. Consider a growth model where all inputs are endogenous. The intuition behind the argument is that some factors suggest growth should increase over time, other factors suggest growth should slow over time; further, there’s no particular reason to expect these factors to cancel out exactly. So we should expect growth to either slow down, or speed up over time.\n\n\nMore precisely, we’re interested in the percentage increase in the total output each year. We want to know how this percentage changes over time as total output increases. There are again (at least) two effects relevant to this question.\n\n\nThe first effect is that, as the endogenous inputs to production increase over time, they become harder to increase by a fixed percentage. This is true because i) a fixed percentage is an increasingly large absolute amount, ii) there may be diminishing marginal returns to efforts to improve the factor, and iii) because of other complex factors.[124](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote124_2ngk98s \" For example, when output per capita becomes large people may choose to have fewer children. This would reduce the percentage increase of labor in subsequent years.\") If inputs are harder to increase by a fixed percentage, then output as a whole is also harder to increase by a fixed percentage. Let’s call this effect *percentage improvements become harder;* it roughly corresponds to the *fishing out* effect in the previous section.\n\n\nThe second effect is that, as the endogenous inputs increase, we have more resources to invest in increasing the inputs. This increased investment allows greater absolute increases to be made to the inputs, and so to output as a whole. Call this effect *greater investment;* it corresponds to the *standing on shoulders* effect from the previous section.\n\n\nAgain, these two effects point in opposite directions. The *percentage improvements become harder* effect suggests growth will slow over time, the *greater investment* effect suggests that growth will increase. Again, I know of no reason to think these effects should *exactly* cancel out.[125](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote125_s9hamwa \" One reason they might cancel exactly would be if the production function displayed constant returns to scale. If this were the case, and the difficulty of making absolute improvements to each factor did not change as the factor increased (a fairly natural assumption), then there would be exponential growth. But production functions only express constant returns to scale when technology is excluded; when technology is endogenous there are typically increasing returns to scale in the total stock of factors. 
\") If they don’t cancel, growth won’t be exponential.\n\n\nTo be clear, I do not think that this intuitive argument is itself sufficient to establish that exponential growth is a knife-edge condition and highly surprising. I include because it generalizes the specific argument I make below in the context of specific models.\n\n\n#### 7.2 Knife-edges in popular endogenous growth models\n\n\nMost endogenous growth models can be broadly divided into two camps: accumulation based models and idea-based models.[126](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote126_dlyfe4f \" Thanks to Phil Trammell for suggesting this distinction.\") In the former, the ultimate source of growth in GDP/capita is the accumulation of physical or human capital. In the latter, the ultimate source of growth is targeted R&D leading to technological progress; although there is capital accumulation, it isn’t the ultimate source of growth.[127](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote127_f2q95n8 \" More precisely, if we held the level of technology constant then accumulation alone would not deliver sustained growth.\")\n\n\nI will discuss the knife-edge conditions in popular growth models of both types. I think the knife-edge conditions are more problematic in the idea-based models; although accumulation based models face further objections.\n\n\nVery little of the content here is original; knife-edge critiques of endogenous models are discussed in [Cesaratto (2008)](https://www.boeckler.de/pdf/v_2008_10_31_cesaratto.pdf), [Jones (1999)](https://web.stanford.edu/~chadj/scaleff10.pdf), and [Jones (2005)](https://web.stanford.edu/~chadj/JonesHandbook2005.pdf). The problem is often discussed with different terminology, referring to the difficulty of avoiding ‘scale effects’ or the ‘linearity critique’ of endogenous growth models. I expect all economists familiar with endogenous growth models will be aware that knife-edge assumptions are typically needed for constant exponential growth. I expect most of them won’t draw my conclusion: that the best account that avoids knife-edge conditions implies that 21st century growth will be sub-exponential.\n\n\nOne strong objection to accumulation based models that I don’t discuss in this report on is their tension with growth accounting exercises, e.g. [Fernald and Jones (2014)](https://web.stanford.edu/~chadj/FernaldJones2014.pdf). These empirical exercises decompose growth into its constituent parts, and typically find that TFP growth accounts for the majority of growth rather than the accumulation of physical or human capital. I think this gives us a good reason to prefer idea-based models.\n\n\n#### 7.2.1 Accumulation based models\n\n\nPerhaps the most standard mechanism for growth here is the accumulation of physical capital. This is the strategy of the AK model, and variants thereof. I’ll start by discussing the model of Frankel (1962) and the variant proposed by Arrow (1962). Then I’ll briefly comment on some other capital accumulation models.\n\n\n#### 7.2.1.1 Frankel (1962)\n\n\nThe production function in Frankel (1962) starts out as:\n\n\n \n\n\n\\( Y=AK^α(BL)^{1−α} \\)\nwhere *B* is labor augmenting technology. Technological progress is endogenous and happens as a by-product of capital accumulation. The equation for *B* is:\n\n\n \n\n\n\\( {B}= (\\frac {K}{L})^γ \\)\nFrankel assumes γ = 1. In other words, labor augmenting technology is the capital per worker. 
Twice as much capital per worker makes workers twice as productive. With this assumption production is simply:\n\n\n \n\n\n\\( Y=AK \\)\nHere and in all other models in this section, I assume the standard reinvestment equation for capital: *K̇* = *sY* – *δK*. This implies that growth is exponential.\n\n\nThe knife-edge condition is γ = 1. To simplify the analysis, assume *L* is constant. If γ > 1, there are increasing returns to *K* and *Y* goes to infinity in finite time.[128](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote128_pc6w8a0 \" An alternative version of the AK model might be Y = F(K, BL), where the arguments of F are gross complements (elasticity of substitution less than one). If B = (K/L)^γ, then γ > 1 would lead to super-exponential growth for a while, and then exponential growth. We’d reach exponential growth because the second argument would grow more quickly than the first, so the function would approximate Y = K. At this point however, the capital share would be at 1, so this model is not realistic as a description of the modern regime of exponential growth.\") If γ < 1, there are diminishing returns to *K* and growth tends to 0.\n\n\nIs this knife-edge condition problematic? I think so. It claims that doubling the amount of capital per worker *exactly* doubles the productivity per worker. But why not think it would increase productivity by a factor of 1.9, or 2.1?\n\n\nThe problem becomes more acute when we realize that there are two distinct mechanisms by which capital accumulation increases labor productivity. The first is that each worker has more machinery to work with, increasing their productivity.[129](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote129_cx4g5ad \" This mechanism plausibly faces diminishing returns: if you keep doubling the number of machines overseen by each worker they must spend less time per machine and reduce their output per machine. If this weren’t the case, you could leave one worker in charge of all the machines in a factory (or indeed the world!).\") The second mechanism is that capital accumulation leads to new technologies via the process of ‘learning by doing’. These improvements have spillover effects as new technologies can be adopted by all firms. But it is mysterious why these two very different mechanisms should combine such that γ = 1 exactly. If the spillover effects were ever so slightly bigger or smaller, or if the benefits of having more machinery were ever so slightly bigger or smaller, growth would go to 0 or infinity rather than being constant.\n\n\nRobert Solow comments, on this topic, ‘This version of the endogenous-growth model is very unrobust. It can not survive without exactly constant returns to capital. But you would have to believe in the tooth fairy to expect that kind of luck.’[130](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote130_mfh95ni \" Perspectives on Growth Theory (Journal of Economic Perspectives, 1994).\") In support of this comment, I argue in this [technical appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#if-we-believed-frankels-model) that even the constancy of 20th century growth wouldn’t convince us that long-run growth would be constant if we believed that Frankel’s growth model was literally correct.
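To see just how knife-edge this condition is, here is a minimal simulation of the model above (my own sketch, not taken from the report or from Roodman’s notebooks; the parameter values s = 0.25, δ = 0.05, α = 0.3 are purely illustrative). With γ = 1 the growth rate settles at a constant; nudging γ slightly above or below 1 sends the growth rate towards infinity or towards zero.

```python
import numpy as np

def frankel_growth_path(gamma, years=200, s=0.25, delta=0.05, alpha=0.3, A=1.0, L=1.0, dt=0.01):
    """Frankel (1962): Y = A*K^alpha*(B*L)^(1-alpha), B = (K/L)^gamma, Kdot = s*Y - delta*K.
    Returns the growth rate of Y (per year) recorded at each simulated year."""
    K, rates = 1.0, []
    for step in range(int(years / dt)):
        Y = A * K ** alpha * ((K / L) ** gamma * L) ** (1 - alpha)
        Kdot = s * Y - delta * K
        if step % int(1 / dt) == 0:
            # with L fixed, the growth rate of Y is (alpha + gamma*(1 - alpha)) times that of K
            rates.append((alpha + gamma * (1 - alpha)) * Kdot / K)
        K += Kdot * dt
        if not np.isfinite(K) or K > 1e30:  # gamma > 1: growth explodes in finite time
            break
    return rates

for gamma in [0.95, 1.00, 1.05]:
    r = frankel_growth_path(gamma)
    print(f"gamma = {gamma}: growth ≈ {r[0]:.1%} at the start, ≈ {r[-1]:.1%} after {len(r)} years")
```

With γ = 0.95 the growth rate decays steadily towards zero; with γ = 1.05 the run has to be cut off as output races towards infinity; only γ = 1 exactly delivers constant exponential growth.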
There is another problem with Frankel’s *AK* model. In order to get constant returns to capital accumulation, but avoid increasing returns to capital and labor in combination, the model removes the effect of labor on output entirely. The seemingly absurd implication is that adding more workers won’t increase output. A defense might be that the model is intended for a simplified setting where labor is constant. If so, then the model doesn’t seem to be appropriate for explaining the recent period of growth, during which there has been significant population growth.\n\n\nOne last thing to note about this *AK* model is that if there is any capital augmenting technological progress (e.g. an increase in *A*), this will increase growth.[131](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote131_fd8blow \" This is because it will increase the reinvestment in K: gK = sY/K = sA.\")\n\n\n#### 7.2.1.2 Arrow (1962)\n\n\nArrow (1962) develops a similar AK model.[132](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote132_snfwda6 \" Cesaratto (2008) provides a useful discussion of various AK models and their interrelations.\") His definition of labor augmenting technology depends on the total capital accumulated rather than the capital accumulated *per person*.\n\n\n\\( B=K^γ \\)\nwith γ < 1.[133](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote133_17sdu4c \" If γ = 1, then population growth will lead the growth rate of output to increase without limit. γ = 1 implies Y = AKL^(1-α). Therefore gY = gK + (1 - α) gL. The reinvestment equation implies that in a steady state gY = gK. Therefore in the steady state growth is infinite. \") This leads to:\n\n\n\\( Y=AK^α(BL)^{1−α}=AK^μL^{1−α} \\)\nwith μ = α + γ(1 – α) < 1.\n\n\nThis model does not, in my view, have a problematic knife-edge. However, it does imply that growth will be sub-exponential over the 21st century.\n\n\nThe growth rate of *y* = *Y*/*L* turns out to be:\n\n\n\\( g\\_y=g\\_{L} \\frac {γ}{(1−γ)} \\)\nIf the labor force doesn’t grow, then neither will GDP/capita. This prediction is not actually falsified by observation, as the population *has* grown continuously since the industrial revolution. In fact, I think that exponential population growth is the most plausible root explanation for the historically observed pattern of exponential growth.\n\n\nThis model is structurally very similar to the semi-endogenous model developed by Jones that I discuss later. In both models, the ultimate driver of exponential income growth is exponential growth in labor. Both models imply that growth over the 21st century will be sub-exponential, as population growth is expected to slow.
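To get a feel for the magnitudes this implies (the numbers here are purely illustrative assumptions of mine, not estimates from the report): with γ = 0.3 and a labor force growing at 1% per year, the formula above gives \\( g\\_y=0.01 × \\frac {0.3}{0.7} ≈0.004 \\), i.e. per capita growth of roughly 0.4% per year; if labor growth slows to 0.3% per year, per capita growth falls to roughly 0.13% per year. Per capita income keeps rising, but its growth rate moves in proportion to population growth.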
(A quick aside: if capital were perfectly substitutable with labor – the AI robot scenario – then this model predicts explosive growth. In this scenario, capital can play the same role as labor in production and so, if AI robots are cheaper than human labor, the model will ultimately approximate: \\( Y=AK^{1+γ(1−α)} \\). There are increasing returns to capital accumulation and so super-exponential growth. This is just to demonstrate that some accumulation models do imply that this scenario would lead to explosive growth.)\n\n\n#### 7.2.1.3 Other capital accumulation stories\n\n\n[Jones and Manuelli (1990)](https://www.jstor.org/stable/2937622?seq=1) develop a model in which returns to capital fall, but rather than falling to 0 as in most models, they fall to a constant and then stay at that constant. This means that capital accumulation is sufficient for sustained growth. Growth from capital accumulation will be sub-exponential as the returns to capital diminish towards the constant, and afterwards it will be exponential.\n\n\nFor this model to explain the recent period of exponential growth, then, it must claim that returns to capital have long ago diminished to their lowest possible value, and are now constant. Intuitively, this claim doesn’t seem plausible: returns to capital would diminish further if we equipped every worker with the highest quality equipment possible. Putting that aside though, the model in essence behaves the same way as AK in the regime where returns to capital are constant. So the same problems we saw above will apply.\n\n\nIndeed, a knife-edge analogous to the one considered above applies. In the limit where returns to capital are constant we have:\n\n\n\\( \\frac {dY}{dK}=K^ϕ \\)\nwith φ = 0. If φ > 0, growth from capital accumulation is super-exponential; if φ < 0, growth goes to 0. We can ask why φ = 0. The value of φ is again plausibly the product of two mechanisms: additional capital can be used directly to produce more output; accumulating capital involves some ‘learning by doing’ and produces new technologies that can be copied by others. I can see no reason for these two mechanisms to lead to exactly constant returns. Ultimately, I think [Jones and Manuelli (1990)](https://www.jstor.org/stable/2937622?seq=1) faces the same objections as the AK model; its main advantage is that it formally acknowledges diminishing returns to capital (though not during the regime where exponential growth is occurring).\n\n\nAnother way capital accumulation can lead to sustained growth is by using a [CES production function](https://en.wikipedia.org/wiki/Constant_elasticity_of_substitution#CES_production_function) where the elasticity of substitution between capital and labor is above 1. In this case, as with Jones and Manuelli, the returns to capital diminish initially and then approach some constant. While the returns are diminishing, growth from capital accumulation is sub-exponential; in the limit where these returns are constant, growth from capital accumulation is exponential. In the limit the model faces the same ‘knife-edge objection’ as [Jones and Manuelli (1990)](https://www.jstor.org/stable/2937622?seq=1): why would the direct and spillover effects of capital accumulation net out at exactly constant returns?[134](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote134_d7sisly \" The assumption of constant returns to capital and labor in combination, embodied by the CES production function, is reasonable when we only consider direct effects. If you double the number of workers and the factories and machines at their disposal, you’ll produce twice as much. But once you account for spillover effects from capital accumulation, as a plausible theory without a distinct representation of technology must do, there is no particular reason to think there should be exactly constant returns.\")\n\n\nThere is another problem for the CES production function approach. In the limit where growth is exponential, the capital share is 1. 
The capital share has been around 0.3 for the last 50 years (although it has recently increased somewhat), so this model wouldn’t offer a good explanation of the recent period of exponential growth.\n\n\n#### 7.2.1.4 Human capital accumulation\n\n\n[Lucas (1988)](https://www.parisschoolofeconomics.eu/docs/darcillon-thibault/lucasmechanicseconomicgrowth.pdf) suggests the ultimate driver of growth is not physical but *human* capital. The model is as follows:\n\n\n\\( Y=AK^α(lhL)^{1−α} \\)\n\\( \\dot h=ϕh(1−l) \\)\n\nwhere *h* is human capital per person, *l* is the proportion of time spent working, 1 – *l* is the proportion of time spent increasing *h*, φ is a constant, and *A* is a constant.\n\n\nThe knife-edge here is that *ḣ* = *constant* × *h^φ* with φ = 1 exactly. If φ < 1, there would be diminishing returns to human capital accumulation and growth would fizzle out; if φ > 1 growth would go to infinity in finite time.\n\n\nIs this knife-edge problematic? Again, I think so. There are two possible interpretations of *h*;[135](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote135_98s7jyn \" I borrow these interpretations from Carroll (2020).\") I think the condition is problematic in both cases.\n\n\nThe first interpretation is that *h* is the knowledge and skills of an individual agent; 1 – *l* is the proportion of their time they spend studying.[136](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote136_6409cbo \" This is probably the intended interpretation as in Lucas l is chosen via an individual optimization decision.\") Here, the knife-edge φ = 1 means that if you know twice as much, you can learn and teach exactly twice as quickly. But why not think it allows me to learn only 1.9 times as quickly, or 2.1 times as quickly? Why is my learning speed exactly proportional to my knowledge? As with physical capital, there are both direct and spillover benefits of increasing *h*. The direct benefit is that I leverage my knowledge and skills to learn more effectively in the future. The spillover effect is that others may copy my discoveries and knowledge; this can help their future learning. It is again problematic that these two distinct effects combine to give φ = 1 exactly.\n\n\nThere’s another problem with this first interpretation: we have finite minds and lifespans. Our knowledge and skills can’t grow exponentially without limit, but ultimately hit diminishing returns.\n\n\nThe second interpretation is that *h* represents all the accumulated technical and scientific knowledge of humanity; 1 – *l* is the proportion of people who are scientists.[137](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote137_u91etkm \" This interpretation is argued for in Mankiw (1995).\") φ = 0 would mean that each absolute increase in knowledge was equally difficult. φ = 1 means that if humanity knows twice as much, an absolute increase in our knowledge becomes exactly twice as easy to achieve. This is a very particular degree of increasing returns.[138](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote138_mdqjcg6 \" This mirrors the criticism of Romer (1990) made in Jones (1995).\") There are (at least) two relevant effects. If we know more, then perhaps we’ve made all the easy discoveries and new ideas will be harder to find (‘fishing out’). 
Or perhaps our knowledge will make our future learning more effective (‘standing on shoulders’). I see no reason to think these forces should net out so that φ = 1 exactly.\n\n\nThe second interpretation faces another severe problem: the rate of knowledge discovery *ḣ* depends on the fraction of people who are scientists but not the absolute number. If we alter this so that *ḣ* increases with *L*, then (still assuming φ = 1), an exponentially growing population would lead to an exponentially increasing growth rate.\n\n\n(A quick aside: if capital were perfectly substitutable with labor – the AI robot scenario – then this model would display constant returns to accumulable inputs *L* and *K*. If *h* – which would then be interpreted as ‘AI robot capital’ rather than ‘human capital’ – continues to increase, then output will grow super-exponentially. This is again to demonstrate that some accumulation models do imply that this scenario would lead to explosive growth. However, if the model was adjusted to include a fixed factor so that there were slightly diminishing returns to capital and labor, then AI robots would not lead to explosive growth. Instead it would lead to a one-off step-change in growth rates, assuming that *h* continued to grow exponentially.)\n\n\n\n#### 7.2.2 Idea-based models\n\n\nI’ve argued that some central capital accumulation and human capital accumulation models don’t provide compelling explanations of the observed pattern of exponential growth, partly because they make problematic knife-edge assumptions.\n\n\nOne general drawback of accumulation-based models is that they don’t directly engage with what seems to be an important part of the rise in living standards over the last 100 years: discovery of new ideas through targeted R&D. Private and public bodies spend trillions of dollars each year on developing and implementing new technologies and designs that are non-rival and can eventually be adopted by others.\n\n\nIdea-based models represent this process explicitly, and see it as the ultimate source of growth. Whereas accumulation models emphasize that growth involves increasing the number of physical machines and gadgets per person (perhaps with technological progress as a side-effect), idea-based models emphasize that it involves purposely developing new (non-rival) designs for machines, gadgets, and other technologies.\n\n\nThis section is heavily based on Jones (1999). I simply pull out the relevant points.\n\n\nJones groups idea-based models into three camps based on important structural similarities between them:\n\n\n1. *R* / *GH* / *AH*\n\t* These are from [Romer (1990)](http://web.stanford.edu/~klenow/Romer_1990.pdf), [Grossman and Helpman (1991)](https://mitpress.mit.edu/books/innovation-and-growth-global-economy) and [Aghion and Howitt (1992)](https://www.jstor.org/stable/2951599?seq=1).\n\t* The knife-edge condition here is to assume a particular degree of increasing returns to R&D effort. This is equivalent to the assumption that φ = 1 in the [Lucas (1988)](https://www.parisschoolofeconomics.eu/docs/darcillon-thibault/lucasmechanicseconomicgrowth.pdf) model discussed just above.\n2. 
*Y* / *P* / *AH* / *DT*\n\t* These are from [Young (1998)](https://www.jstor.org/stable/10.1086/250002), [Peretto (1998)](https://link.springer.com/article/10.1023/A:1009799405456), [Aghion and Howitt (1998 Chapter 12)](https://mitpress.mit.edu/books/endogenous-growth-theory), and [Dinopoulos and Thompson (1998)](https://link.springer.com/article/10.1007/s001910050079).\n\t* There are two knife-edge conditions. First, assuming a particular degree of increasing returns to R&D effort exactly as *R* / *GH* / *AH* do. Secondly, assuming that the number of product lines grows in proportion to the population.\n3. *J* / *K* / *S*\n\t* These are from [Jones (1995)](https://www.jstor.org/stable/2138581?seq=1), [Kortum (1997)](https://www.jstor.org/stable/2171741?seq=1) and [Segerstrom (1998)](https://www.jstor.org/stable/116872?seq=1). These are known as semi-endogenous growth models.\n\t* The knife-edge condition is that there’s exactly exponential growth in the number of workers.\n\n\nI think the knife-edge conditions for exponential growth for *R* / *GH* / *AH* and *Y* / *P* / *AH* / *DT* models are at least as problematic as those for accumulation based models discussed above, if not more so.\n\n\nFor semi-endogenous models (*J* / *K* / *S*), the knife-edge condition is much less problematic. Indeed, we know empirically that population growth has been roughly exponential over the last 100 years.[139](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote139_4y98lb5 \" Population growth has slowed somewhat, but I suggest that this isn’t strong evidence against semi-endogenous models.\") However, this will not continue until 2100. The UN projects that population growth will slow; *J* / *K* / *S* semi-endogenous models imply GDP/capita growth will slow as a result.\n\n\n#### 7.2.2.1 *R* / *GH* / *AH* models\n\n\nOutput is given by:\n\n\n\\( Y=A^σK^α{L\\_Y}^{1−α} \\)\n*LY* is the number of workers in goods production. There are constant returns to *K* and *LY*, and increasing returns to *K*, *LY* and *A*.\n\n\nNew ideas are produced via:\n\n\n\\( \\dot A=δA^ϕL\\_A \\)\nfor some constant δ. *LA* is the number of workers in knowledge production. A constant fraction of people do research: *LA* = *fL*, *LA* + *LY* = *L*.\n\n\nThe knife-edge assumption is φ = 1. If φ ≠ 1, then growth over time either goes to 0 or infinity, as in the above examples. To repeat my comments on Lucas (1988): there are (at least) two relevant mechanisms affecting φ. If *A* is larger, then perhaps we’ve made all the easy discoveries and new ideas will be harder to find (‘fishing out’). This suggests a lower value of φ. Conversely, perhaps we can leverage our knowledge to make our future learning more effective (‘standing on shoulders’). I see no reason to think these forces should net out so that φ = 1 exactly.\n\n\n#### 7.2.2.2 *Y* / *P* / *AH* / *DT* models\n\n\n\\( Y=NZ^σK^α{L\\_Y}^{1−α} \\)\nwhere *N* is the number of product lines and *Z* is the *average* level of technology per product line.\n\n\nThe number of products increases with the size of the total population:\n\n\n\\( N=L^β \\)\nThe rate of technological progress depends on the number of researchers per product line:\n\n\n\\( \\dot Z= \\frac {δZ^ϕL}{N}=δZ^ϕ{L\\_A}^{1−β} \\)\nIt turns out that exponential growth relies on two knife-edge conditions in this model: β = 1 and φ = 1.\n\n\nIf φ ≠ 1, then growth over time either goes to 0 or infinity, as above. 
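To illustrate how sensitive this class of models is to φ, here is a minimal simulation of the knowledge production equation from the previous subsection, \\( \\dot A=δA^ϕL\\_A \\), holding the research workforce constant (my own sketch; δ, *LA* and the horizon are arbitrary illustrative values, and the same sensitivity applies to the *Ż* equation here).

```python
def idea_growth(phi, years=300, delta=0.02, L_A=1.0, dt=0.01):
    """Knowledge production Adot = delta * A^phi * L_A with a constant research workforce.
    Returns the growth rate of A (per year) at the start and at the end of the run."""
    A = 1.0
    g_start = g_end = None
    for step in range(int(years / dt)):
        g = delta * A ** (phi - 1) * L_A  # instantaneous growth rate of A
        if step == 0:
            g_start = g
        g_end = g
        A += delta * A ** phi * L_A * dt
        if A > 1e30:                      # phi > 1: the stock of ideas explodes in finite time
            break
    return g_start, g_end

for phi in [0.8, 1.0, 1.2]:
    g0, gT = idea_growth(phi)
    print(f"phi = {phi}: growth starts at {g0:.2%} and ends at {gT:.2%}")
```

Only φ = 1 keeps the growth rate constant; φ < 1 makes it decay towards zero, and φ > 1 makes it blow up.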
And again, the assumption that φ = 1 involves a very specific degree of increasing returns to knowledge accumulation despite plausible mechanisms pointing in different directions (‘fishing out’ and ‘standing on shoulders’).\n\n\nIf β ≠ 1, the number of researchers per firm changes over time, and this changes the growth rate.\n\n\n#### 7.2.2.3 *J* / *K* / *S* models\n\n\nWe can represent these models as:\n\n\n\\( Y=A^σK^α{L\\_Y}^{1−α} \\)\n\\( \\dot A=δ{L\\_A}^{λ}A^ϕ \\)\n\\( \\dot L=nL \\)\nwith *n* > 0, φ < 1 and λ < 1. As before, we assume that a constant fraction of people do research: *LA* = *fL*, *LA* + *LY* = *L*.\n\n\nThe exponential growth in *L* drives exponential growth in *A*: φ < 1 implies each new % increase in *A* requires more effort than the last, but exponentially growing labor is able to meet this requirement. Exponential growth in *A* then drives exponential growth in *Y* and *K* and thus of GDP/capita.\n\n\nOften *L* is made exogenous, but [Jones (1997)](https://www.nber.org/papers/w6285.pdf) makes it endogenous, using fertility equations such that population growth tends to a positive constant in the long-run.\n\n\nThe knife-edge condition here is the exponential growth of labor: \\( \\dot L=nL^ϕ \\) with φ = 1 *exactly*.\n\n\n[Jones (1997)](https://www.nber.org/papers/w6285.pdf) justifies this by appealing to biology: *‘it is a biological fact of nature that people reproduce in proportion to their number’*. Indeed, population growth was positive throughout the 20th century for the [world as a whole](https://ourworldindata.org/world-population-growth-past-future) or for the [US](https://www.ibrc.indiana.edu/ibr/2001/spring01/03.pdf).[140](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote140_8it8m3f \" In addition, the proportion of the workforce engaged in R&D increased exponentially during the 20th century. The number of researchers is what matters for knowledge production.\") So it does seem that the model matches the rough pattern of 20th century growth.\n\n\nPopulation growth fell over the 20th century.[141](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote141_yqxuh8l \" See data on frontier population growth here.\") If there was no lag between research effort and productivity improvements, perhaps this theory implies we should have seen a more noticeable slowdown in frontier GDP/capita growth as a result. However, some lag is realistic, and there does seem to have been such a growth slowdown since 2000. In addition, numerous factors may have offset slowing population growth: increases in the fraction of people doing R&D, more countries on the economic frontier (and so a higher fraction of scientists doing R&D pushing forward that frontier), increased job access for women and people of color (reduced misallocation), increased educational attainment, and possibly random fluctuations in the economic returns to R&D (e.g. the IT boom).\n\n\nGrowth accounting exercises suggest that these other factors are significant. [Fernald and Jones (2014)](https://web.stanford.edu/~chadj/FernaldJones2014.pdf) suggest that the growing fraction of people doing R&D accounts for 58% of the growth since 1950, and education improvements account for 20%. [Hsieh et al. 
(2013)](http://klenow.com/HHJK.pdf) estimates that improvement in talent allocation can account for more than 20% of income increases since 1950.\n\n\nGiven these other factors, the juxtaposition of slowing population growth and steady income growth during the 20th century is only weak evidence against semi-endogenous growth theories. (Indeed, high quality empirical evidence on growth theories is very hard to come by.)\n\n\nOverall, it seems that semi-endogenous growth theory does a good job of explaining the general pattern of 20th century growth and that it’s hard to adjudicate beyond this point due to the effects of numerous other important factors.[142](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote142_c4gzaml \" Some papers try to empirically distinguish between J / K / S models and Y / P / AH / DT models, but I think this is a very difficult task. Such attempts often give conflicting results (e.g. see Section 4 of this review). This may be because a number of messy empirical factors make testing very difficult: unknown time lags between R&D and subsequent TFP growth, significant factors other than targeted R&D influencing TFP growth, the possibility of a factor influencing both R&D effort and subsequent TFP growth, and somewhat arbitrary choices about how to define the inputs to R&D efforts (this is especially true for Y / P / AH / DT models where we must calculate R&D effort per product line).\")\n\n\nWhat does semi-endogenous growth theory imply about 21st century growth? The UN [population projections](https://population.un.org/wpp/) – which have a fairly good track record – imply that population growth will slow significantly over the 21st century. In addition, the historical growth of the *fraction* of people doing R&D cannot be maintained indefinitely, as it is bounded below 1. Both these trends, the slowing of population growth and the slowing growth of the fraction of researchers, imply that the growth of the number of researchers will slow. When you plug this into semi-endogenous growth theory, it predicts that the GDP/capita growth rate will also slow.\n\n\nWhere does this prediction come from? Semi-endogenous models imply each % increase in GDP/capita requires more research than the last. If the number of researchers is constant, each % increase in GDP/capita will take longer to achieve and growth will slow. If the number of researchers does grow, but at an ever slower rate, the model still predicts that GDP/capita growth will slow.\n\n\nJones draws just this implication himself in [Jones (2020)](https://web.stanford.edu/~chadj/emptyplanet.pdf); [Fernald and Jones (2014)](https://web.stanford.edu/~chadj/FernaldJones2014.pdf) discuss how slowing growth in educational achievement and the fraction of workers doing R&D, as well as population, might slow future GDP/capita growth. 
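To spell out where this prediction comes from, here is the standard balanced-growth calculation for the *J* / *K* / *S* knowledge production function above (a textbook-style sketch, writing *n* for the growth rate of the number of researchers). Dividing the knowledge production function through by *A*:

\\( g\\_A= \\frac {\\dot A}{A} =δ{L\\_A}^{λ}A^{ϕ−1} \\)

For \\( g\\_A \\) to be constant, this expression must be constant over time, so its growth rate must be zero:

\\( λn+(ϕ−1)g\\_A=0 \\), i.e. \\( g\\_A= \\frac {λn}{1−ϕ} \\)

With φ < 1, the long-run growth rate of technology – and hence of GDP/capita – is proportional to the growth rate of the research workforce; if *n* declines towards zero, so does \\( g\\_A \\).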
[Kruse-Andersen (2017)](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2947528) projects growth out to 2100 with a semi-endogenous model and predicts average GDP/capita growth of 0.45%, without even taking into account slowing population growth.\n\n\nSo *J* / *K* / *S* theories offer plausible explanations of 20th century exponential growth and ultimately suggest that 21st century growth will be sub-exponential.[143](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote143_8373dr0 \" There are of course possible mechanisms by which fertility could pick up again in the long run, which could lead to exponential growth once more.\")\n\n\n#### 7.2.2.3.1 Additional knife-edges in *J* / *K* / *S* models?\n\n\n*J* / *K* / *S* models make use of the ‘knife-edge’ claim that the number of researchers has grown exponentially. I argued that this is not problematic for explaining the past as the empirical evidence shows that the assumption is approximately true.\n\n\nBut it could be argued that the power-law structure of *J* / *K* / *S* models is an additional knife-edge. Consider the knowledge production:\n\n\n\\( \\dot A=δ{L\\_A}^{λ}A^ϕ \\)\nThe model assumes that φ is constant over time. If φ rose as *A* increased, then exponential growth in researchers would lead to *super-*exponential growth. If φ fell as *A* increased, then exponential growth in researchers would lead to *sub-*exponential growth.\n\n\nTo explain sustained exponential growth, *J* / *K* / *S* must assume that φ is constant over time, or at least asymptotes towards some value.\n\n\nIn my mind, this knife-edge is considerably less problematic than those of other models considered.\n\n\nFirstly, a small deviation from the assumption does not cause growth to tend to 0 or infinity. If φ changes slightly over time, the rate of exponential growth will vary but it will not tend to 0 or infinity. For this to happen, φ would have to increase enough to exceed 1 (growth then tends to infinity) or decrease without bound (growth then tends to 0). But both these trajectories for φ are extreme, and so there is a vast region of possibilities where growth remains positive but bounded. I.e. a less idealized model might claim that φ varies over time but typically stays within some region (e.g. -3 < φ < 1). This broad assumption avoids extreme growth outcomes.\n\n\nSecondly, *all* the endogenous models considered in this section use some sort of power-law structure like the *J* / *K* / *S*model. They are all guilty of some ‘knife-edge’ assumption equivalent to assuming that φ is constant over time. However, the other models in the section *additionally* assume that the power takes a particular value. In addition to assuming that φ is constant over time, they assume that φ takes a particular value. And I’ve argued that the particular value chosen is without good justification, and that changing that value ever so slightly would cause growth to go to 0 or infinity.\n\n\n#### 7.3 An economic sub-literature claims constant exponential growth is a knife-edge condition in a wide class of growth models\n\n\n[Growiec (2007)](https://www.researchgate.net/publication/24057379_Beyond_the_Linearity_Critique_The_Knife-edge_Assumption_of_Steady-state_Growth) proves[144](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote144_h6r62lr \" The paper has 27 citations, none of which seem to dispute the proof. 
Growiec and his colleagues have published two further papers that generalize and reformulate these arguments.\") that:\n\n\n\n> Steady-state growth… necessarily requires some knife-edge condition which is not satisfied by typical parameter values. Hence, balanced growth paths are fragile and sensitive to the smallest disturbances in parameter values. Adding higher order differential/difference equations to a model does not change the knife-edge character of steady-state growth.\n> \n> \n\n\nGrowiec’s result generalizes the proof of [Christiaans (2004)](https://www.researchgate.net/publication/24057379_Beyond_the_Linearity_Critique_The_Knife-edge_Assumption_of_Steady-state_Growth), which applies to a more restricted setting.\n\n\nMy own view is that these proofs suggest that knife-edge problems are generic and hard to avoid, but do not establish that the knife-edge conditions of all models are problematic. Growiec has agreed with me on this point in private discussions, and in fact helped me understand why.\n\n\nThe reason is that not all knife-edges are problematic. Here are a few examples:\n\n\n* It’s plausible that there are constant returns to labor, capital, and land taken together, holding technology constant. This is supported by a thought experiment. Double the number of factories, the equipment inside them, and the workers in them; this should double output as you can make twice as much of each item. If this were the knife-edge required for exponential growth, it would be less problematic than the knife-edges considered above (which, roughly speaking, require constant returns to capital and technology holding labor constant).\n* Galor and Weil (2000) use a negative feedback loop to explain exponential growth. The more people there are, the more R&D effort there is and the faster the economy grows. In addition, when growth is faster people have fewer kids, instead focusing on education. This leads to the following dynamic: **higher growth → lower fertility → lower growth**. And conversely: **lower growth → higher fertility → higher growth.** This negative feedback loop stabilizes growth. It doesn’t involve any problematic knife-edge conditions, even though the theory satisfies the axioms of Growiec (2007). I don’t find this particular story convincing, as I trust the UN forecast that fertility will indeed fall over the century. Nonetheless, it is an existence proof of a theory without a problematic knife-edge condition.\n* There may be an alternative framework in which the ‘knife-edge’ case occurs for a thick set of parameter values. Indeed I discuss an attempt to do this for Y / GH / AH models in the next section, though I know of no other explicit attempts to do this.\n* The knife-edge may not be problematic at all if it involves the introduction of a completely new, unwarranted term to the equations.[145](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote145_xhnalrt \" For a striking example along these lines consider the thermostat equation dY/dt = k - Y. This equation says that the value of Y will tend towards k. Although it seems stable, it has a knife-edge according to Growiec’s theorem. We expand the initial equation to dY/dt = (k - Y) + φ × Y^2. The 'knife-edge' is that φ is exactly equal to 0. If it differs at all from this value, then a large enough initial value of Y will cause the system to explode, with Y going to infinity in finite time. 
This may be a knife-edge in the sense defined by Growiec (2007), but it is not problematic: there’s no motivation for the introduction of a term that can have such large effects for large Y, and even the altered system is robust if the initial value of Y is not too high. Perhaps there are theories predicting that long-run growth is exponential that have similarly unproblematic knife-edges. \") Some of the knife-edges discussed above involved introducing a new exponent φ that was implicitly set to 1 in the original model. How problematic the knife-edge is depends on whether the new class of theories introduced is a natural extension of the original. In other words, are other values of φ plausible, or is φ = 1 a privileged case that we can expect to hold exactly? I argued that other values are plausible on a case-by-case basis above. But this is a matter of judgement; more of an art than a science.[146](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote146_0xh5pbh \" A case that does seem knife-edge to me is Cobb-Douglas. It assumes that the elasticity of substitution is exactly 1; deviating from this assumption ever so slightly produces very different qualitative behavior. However, like the assumption of exponential growth, it has empirical support. So I still place weight on Cobb-Douglas models, just like I place weight on exponential GWP extrapolations.\")\n\n\n#### 7.4 Might market dynamics eliminate the need for a knife-edge condition?\n\n\nIn the ambitious 2020 paper [Robust Endogenous Growth](http://public.econ.duke.edu/~peretto/Robust%20Endogenous%20Growth.pdf), Peretto outlines a fully endogenous growth model that (he claims) achieves constant growth in equilibrium without knife-edge conditions. I consider the paper to be a significant technical contribution, and a very impressive attempt to meet the knife-edge challenge. However, I doubt that it is ultimately successful.\n\n\nThe mechanism for achieving stable growth is somewhat complex – indeed the model as a whole is extremely complex (though well-explained). Very briefly, the economy is split into *N* firms, and the average quality of technology at a firm is denoted by *Z*. *N* increases when individuals decide to invest in creating new firms; *Z* increases when individuals decide to invest in improving their firm’s technological level. These decisions are all made to maximize individual profit.\n\n\nThere are increasing returns to investment in *Z*. This means that if *N* were held fixed and a constant share of output were invested in increasing *Z* then growth would explode (going to infinity in finite time). In this sense, the system has explosive potential.\n\n\nHowever, this explosive potential is curbed by the creation of new firms. Once new firms are created, subsequent investment in *Z* is diluted, spread out over a greater number of firms, and *Z* grows more slowly. Creating new firms raises output in the short-term but actually reduces the growth of the economy in the long run.[147](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote147_h5i9hhs \" This is a critical difference from standard growth models. Normally all endogenous factors positively reinforce each other, in that an increase in one factor would increase output and so increase investment in the other factors. But in this system there’s a negative feedback cycle: increases in N dampen returns to investment in Z. 
\")\n\n\nThere are diminishing returns to *N*, so creation of new firms does not lead to explosive growth. We can think of the diminishing returns of *N* as ‘soaking up’ the excess produced from the increasing returns to *Z*.\n\n\nI believe that if the growth rate of *N* was slightly faster or slower then long-run growth would diverge (either be explosive or tend to 0). If so, there should be a robust explanation for why *N* grows at exactly the rate that it does.\n\n\nSo the key question from the perspective of the knife-edge critique is:\n\n\n\n> Why does N grow just fast enough to curb the explosive growth potential of *Z*, but not fast enough to make long-run growth sub-exponential (tending to 0 in the long run)?\n> \n> \n\n\nDespite studying the paper fairly closely, and it being well explained, I don’t have a fully satisfactory answer to this question. I discuss my best answer in [this appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#graphs-showing-frontier).\n\n\nDoes the model fulfill its promise of avoiding knife-edge conditions? A recent [review article](https://bcec.edu.au/assets/2019/06/BCEC-Working-Papers-19_02-Steven-Bond-Smith-The-decades-long-dispute-over-scale-effects.pdf) answers with an emphatic ‘yes’, and I couldn’t see any papers disputing this result. However, the paper was only published in 2020, so there has not been much time for scrutiny. Although there seem to be no knife-edge conditions in the production function, it is possible that they are located elsewhere, e.g. in the equations governing firms’ profits. Indeed, in private correspondence Growiec has indicated that he believes there must be a knife-edge condition somewhere that Peretto does not explicitly discuss and may not even be aware of.\n\n\nMy own guess is that a knife-edge is present in the expression for the fixed cost a firm must pay to produce goods. This fixed cost is assumed to be proportional to *Z*. I believe that if it were proportional to *Zφ* with φ ≠ 1, then growth would either tend to infinity or to 0. If so, φ = 1 would be a knife-edge condition. Indeed, Peretto confirmed in private correspondence that if instead the fixed cost were proportional to *Z0.9*, the model would not produce exponential growth, and he thought the same was likely true if they were proportional to *Z1.1*. Growiec also thought this seemed like a plausible candidate for such a knife-edge condition. However, no-one has worked through the maths to confirm this hypothesis with a high degree of confidence. Further, this ‘knife-edge’ may not be problematic: φ = 1 may be the only assumption that prevents fixed costs from tending to 0% or 100% of the total costs of production.\n\n\nPutting the knife-edge issue aside, the model seems to have two implausible problems:\n\n\n1. *Problem 1.* Though the model avoids knife-edge conditions, it has a perplexing implication. In particular, like all Schumpeterian growth models,[148](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote148_e59fpdc \" See Section III of Jones (1999) for a brief introduction to Schumpeterian growth models and discussion of the knife-edge conditions they typically use to achieve constant exponential growth. \") it implies that if no new products were introduced – e.g. because this was made illegal – and we invested a constant fraction of output in improving technology then there would be explosive growth and output would approach infinity in finite time. 
This means that there is a huge [market failure](https://www.investopedia.com/terms/m/marketfailure.asp#:~:text=Market%20failure%20is%20the%20economic,rational%20outcomes%20for%20the%20group.): private incentives to create new companies *massively* reduce long-run social welfare.\n2. *Problem 2.* In addition, it is not clear that market fragmentation happens as much as the model implies. A small number of organizations have large market shares in industries like mass media, pharmaceuticals, meat packing, search engines, chip production, AI research, and social networks.[149](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote149_3yp0p3c \" Examples from https://en.wikipedia.org/wiki/Market_concentration#Real_World_Examples. \") Indeed, in some areas [market concentration has been increasing](https://www.oecd.org/daf/competition/market-concentration.htm),[150](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote150_8diphtk \" This objection interprets the ‘firms’ in the model as referring to organizations in the real world. Perhaps though they’re better interpreted as referring to distinct products. Even with this interpretation, it’s unclear to me whether the number of products is growing as fast as the model implies.\") and market concentration is one of the stylized facts of the digital era.[151](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote151_05s0grq \" See Autor et al. (2017).\")\n\n\nOverall, this impressive paper seems to offer a fully endogenous growth model in which constant growth is not knife-edged. Though I doubt it is ultimately successful, it does identify a mechanism (individual incentives) which can cause an apparent knife-edge to hold in practice. The paper slightly raises my expectation that long-run growth is exponential.\n\n\n#### 7.5 Conclusion\n\n\nIt seems that many, and perhaps all, endogenous growth models display constant exponential growth only under problematic knife-edge conditions that we have little reason to suppose hold *exactly*. The main exception is the semi-endogenous growth models *J* / *K* / *S*, but these imply that 21st century growth will be sub-exponential given the projected slowing population growth.\n\n\nThere are a few important takeaways from the perspective of this report:\n\n\n* Theoretical considerations, combined with the empirical prediction that population growth will slow, imply that 21st century growth will not be exponential, but rather sub-exponential.\n* The semi-endogenous models that I argue give better explanations for 20th century growth also imply that full automation of goods and knowledge production would lead to explosive growth. In particular, when you add to these models the assumption that capital can substitute for labor, they predict explosive growth. (See the endogenous models discussed [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#StandardGrowth).)\n* It’s surprisingly hard to find a robust theoretical explanation of the empirical trend of exponential growth that implies it will continue until 2100. This suggests that exponential growth may be transitory, rather than a steady state. 
This in turn should raise our probability that future growth is sub- or super-exponential.\n\n\nThere are three caveats to these conclusions.\n\n\nFirstly, a very recent endogenous growth model seems to allow for constant growth that does not depend on knife-edge conditions. Although I’m not convinced by the model, it highlights possible mechanisms that could justify a seemingly problematic knife-edge condition in practice.\n\n\nSecondly, I have not done a review of all growth models. Perhaps an existing endogenous growth model avoids problematic knife-edge conditions and delivers exponential growth. I would be surprised if this were the case, as there is a sub-literature on this topic that I’ve read many papers from (linked throughout this section), and they don’t mention any such model. For example, [this review article](https://bcec.edu.au/assets/2019/06/BCEC-Working-Papers-19_02-Steven-Bond-Smith-The-decades-long-dispute-over-scale-effects.pdf) on knife-edge problems doesn’t mention any such model, and argues that only Peretto’s 2020 paper solves the knife-edge problem.\n\n\nThirdly, perhaps there is a mechanism producing exponential growth that growth theorists aren’t aware of. The process of economic growth is extremely complex, and it’s hard to develop and test growth theories. If there is such a mechanism, it may well continue to produce exponential growth until 2100.\n\n\nBased on these caveats, I still assign ~25% probability to ~2% exponential growth in frontier GDP/capita continuing until 2100, even if there’s sub-exponential growth in population.\n\n\n\n\n---\n\n\n8. Appendix C: Conditions for super-exponential growth\n------------------------------------------------------\n\n\nThis section lays out the equations for various growth models, and the conditions under which super-exponential growth occurs. I don’t give derivations or explain the results. Its purpose is to support some key claims made in the main report.\n\n\nThere are two high-level sections, each of which supports a key claim in the main report:\n\n\n* **[Long-run explosive models](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#LongRun):**\n\t+ Key claim: *Long-run explosive models assume that capital, labor and technology are all accumulable. Even if they include a fixed factor like land, there are increasing returns to accumulable inputs. This leads to super-exponential growth unless the diminishing returns to technology R&D are very steep. For a wide range of plausible parameter values, these models predict super-exponential growth.*\n* **[Standard growth models adjusted to study the effects of AI](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#StandardGrowth):**\n\t+ Key claim: *The basic story is: capital substitutes more effectively for labor → capital becomes more important → larger returns to accumulable inputs → faster growth. 
In essence, the feedback loop ‘more output → more capital → more output → …’ becomes more powerful and drives faster growth.*\n\n\nI also use this section to evidence my claims about the AI robot scenario, in which AI substitutes perfectly for human labor:\n\n\n\n> Indeed, plugging this [AI robot] scenario into a range of growth models, you find that super-exponential growth occurs for plausible parameter values, driven by the increased returns to accumulable inputs.\n> \n> \n\n\nThis third claim is evidenced at the bottom of both the high-level sections, [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#HowDoesTheCaseOf) and [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#HowDoesTheCase).\n\n\nLastly, I use this section to evidence one further claim:\n\n\n\n> This suggests that the demographic transition, not diminishing returns, explains the end of super-exponential growth.\n> \n> \n\n\nI evidence this final claim [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#CanDiminishingReturns).\n\n\n#### 8.1 Long-run explosive models\n\n\nLong-run explosive models are endogenous growth models fit to long-run GWP that predict explosive growth will occur in a few decades.\n\n\nIn the main report I claim:\n\n\n\n> Long-run explosive models assume that capital, labor and technology are all accumulable. Even if they include a fixed factor like land, there are increasing returns to accumulable inputs. This leads to super-exponential growth unless the diminishing returns to technology R&D are very steep. For a wide range of plausible parameter values, these models predict super-exponential growth.\n> \n> \n\n\nI support these claims by analyzing some long-run explosive models.\n\n\n#### 8.1.1 Roodman (2020)\n\n\nI analyze a simplified version of the model.[152](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote152_7psp2b7 \" I remove the input ‘human capital’, set the exponent on technology to 1, and set a number of constants to 0 - those controlling the effect of technological advance on reinvestment in non-technology inputs. (Roodman considers a similar simplification at the top of p. 12.)\")\n\n\nThe equations for the model are:\n\n\n\\( Y=AK^αL^βW^{1−α−β} \\)\n\\( \\dot K={s\\_K}Y−{δ\\_K}K \\)\n\\( \\dot L={s\\_L}Y−{δ\\_L}L \\)\n\\( \\dot A={s\\_A}A^{ϕA}Y−{δ\\_A}A \\)\n*A* is technology, *K* is capital, *L* is labor; all three of these inputs are accumulable. *W* is the constant stock of land (**fixed factor**), φA controls the diminishing return to technology R&D, and δi controls the depreciation of the inputs.[153](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote153_92n9bf3 \" Note: φA has a different meaning to a similar parameter in semi-endogenous growth models. This is because Roodman assumes Y is the R&D input, whereas semi-endogenous growth models typically use L as the R&D input.\")\n\n\nThere are increasing returns to accumulable inputs. If you double *A*, *K* and *L* then *Y* more than doubles. (In Cobb-Douglas models like this, there are increasing returns to some inputs just when the sum of the exponents of those inputs exceeds 1. 
In this case 1 + α + β > 1.)\n\n\nA sufficient condition for super-exponential growth (deduced [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#roodman-2020)) is:[154](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote154_zr7jrio \" Technically, these are the conditions under which there’s either super exponential growth or the system decays towards 0. But if we assume positive growth then they are the conditions for super exponential growth. If we set the δs to 0, these would be conditions for super exponential growth. Derived from Equation 16 in Roodman (2020).\")\n\n\n\\( α+β> \\frac {−ϕA}{1−ϕA} \\)\nThis inequality reflects the claim ‘there’s super-exponential growth if the increasing returns to accumulable factors [α + β] is strong enough to overcome diminishing returns to technological R&D’.\n\n\nIf α + β = 0.9 (the fixed factor has exponent 0.1) then the condition on φA is φA > -9. Even the cautionary data of Bloom et al. (2020) suggests φA = -3.[155](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote155_0gbcx1e \" Roodman reruns their analysis with his model.\") So there is super-exponential growth for a wide range of plausible parameter values.\n\n\n#### 8.1.2 Kremer (1993)\n\n\nI analyze the version of the model in Section 2:[156](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote156_n5ow2p2 \" The version in Section 1 is more simple, so the conditions for explosion are less informative. The version in Section 3 doesn’t predict explosive growth due to an additional mechanism corresponding to the demographic transition.\")\n\n\n\\( Y=Ap^{α}W^{1−α} \\)\n\\( \\dot A=δA^{ϕ}p^{λ} \\)\n*A* is technology, *p* is population, *W* is the fixed factor land. δ is constant, φ and λ control the diminishing return to technology R&D.\n\n\nKremer assumes GDP/capita is fixed at some Malthusian level ȳ:\n\n\n\\( p= \\frac {Y}{ \\bar y} \\)\nSo larger *Y* → larger *p:* population is accumulable. Further, larger *Y* → larger *p* → larger Ȧ: technology is also accumulable. There are increasing returns to accumulable factors: 1 + α > 1.\n\n\nA sufficient condition for super-exponential growth (deduced [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#kremer-1993)):\n\n\n\\( α> \\frac {−λ}{1−ϕ}+1 \\)\nAgain it depends on whether increasing returns to accumulable factors can overcome diminishing returns to technology R&D.\n\n\nBloom et al. (2020) derive φ = -2.1, on the assumption that λ = 1. This estimate of φ is conservative compared to others. The condition then reduces to α > 2/3. This is plausible given that 1- α is the exponent on the fixed factor land.[157](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote157_sfkndmc \" Kremer (1993) uses 1/3 as a high-end estimate of land’s share of output, based on evidence from share-cropping contracts.\") (To look at it another way, if we added capital to the model – *Y* = *ApαKβW1-α-β* – the condition would become something like α + β > 2/3.)\n\n\n#### 8.1.3 Lee (1988)\n\n\n\\( Y=Ap^{α}W^{1−α} \\)\n\\( \\frac {\\dot A}{A}=δlog(p), A\\_0 \\, given \\)\n\\( \\frac {\\dot p}{p}=[log ( \\frac {Y}{p})−log(\\bar y)]×constant, p\\_0 \\, given \\)\nConstants have the same meaning as in Kremer (1993). 
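As a concrete illustration of the system just written down, here is a crude Euler integration (my own sketch; all parameter values are assumptions, chosen only so that initial income sits above the Malthusian level ȳ and are not calibrated to any data):

```python
# Illustrative Euler integration (assumed parameters, not from Lee 1988) of the system above:
#   Y = A * p^alpha * W^(1 - alpha)
#   A_dot / A = delta * log(p)
#   p_dot / p = c * (log(Y / p) - log(ybar))
import math

alpha, W, delta, c, ybar = 0.7, 1.0, 0.02, 0.05, 1.0
A, p = 2.0, 2.0          # start with income Y/p above the Malthusian level ybar
dt = 0.05                # Euler step, in years

for year in range(151):
    Y = A * p**alpha * W**(1 - alpha)
    if year % 50 == 0:
        g_A = delta * math.log(p)
        g_p = c * (math.log(Y / p) - math.log(ybar))
        print(f"year {year:3d}: tech growth {g_A:.1%}, population growth {g_p:.1%}")
    for _ in range(int(1 / dt)):
        Y = A * p**alpha * W**(1 - alpha)
        A += A * delta * math.log(p) * dt
        p += p * c * (math.log(Y / p) - math.log(ybar)) * dt
```

Both growth rates rise each time they are printed: the larger the system gets, the faster it grows.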
Both population and technology are accumulable, and there are increasing returns to both in combination (1 + α > 1). The system grows super-exponentially.[158](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote158_rir4sb0 \" In this system, the work producing super-exponential growth is done more by the dynamical equations describing how the inputs change, which directly state that the growth rate of inputs increases with the size of the system. The increasing returns in the production function is less important. This reflects a general truth. Super-exponential growth is produced by the production function in combination with the dynamical equations. In some models more work is done by the former, in others by the latter.\") There is no parameter describing diminishing returns to R&D efforts, so no inequality.\n\n\n#### 8.1.4 Jones (2001)\n\n\n\\( Y=A^{σ}{L\\_Y}^{α}W^{1−α} \\)\n\\( \\dot A=δA^ϕ{L\\_A}^λ \\)\n*LY* is the amount of labor spent on producing output, *LA* is the amount of labor spent on research.[159](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote159_dpy66ed \" People choose how to divide their time between three activities: producing output, doing research, and having children.\") Other symbols are as in Kremer (1993).\n\n\nChanges in total labor *L* depend on GDP/capita, *Y*/*L*. The exact relationship is complex, but *L̇*/*L* is an upside-down U-shaped function of income *Y*/*L*. (Initially *L̇*/*L* increases with income, then it decreases.) In the initial period, *L* is output bottlenecked: higher *Y* → higher income → higher *L̇*/*L*.\n\n\n*A* is also accumulable: higher *Y* → higher *L* → higher *Ȧ*.\n\n\nThe system cannot be solved analytically, but the system grows super-exponentially if the following condition holds (as explained [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#jones-2001)):[160](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote160_1w5lb12 \" Jones writes that:In particular, under the crucial assumption of increasing returns to accumulable factors (θ > 0), the general pattern is for growth rates of both population and standards of living to first increase and then to decrease… My condition rearranges his condition θ > 0.\")\n\n\n\\( α> \\frac {−λσ}{1−ϕ}+1 \\)\nThis is very similar to the condition in Kremer. Again, we have super-exponential growth as long as increasing returns to accumulable factors (α, σ) are sufficiently powerful enough to overcome diminishing returns.\n\n\nBloom et al. (2020) derive φ = -2.1, on the assumption that λ = 1 and σ = 1. The condition then reduces to α > 2/3. This is plausible given that 1 – α is the exponent on the fixed factor land.\n\n\n#### 8.1.5 How does the case of perfect substitution (‘AI robots’) relate to these models?\n\n\nAI is naturally thought of as a form of capital, and most of the above models do not contain capital. However, I [suggest](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AIRobots) above that we can also think of AI as making the labor accumulable (the ‘AI robot’ scenario). 
With this assumption, all the above models predict super-exponential growth under a range of plausible parameter values.\n\n\n#### 8.1.6 Can diminishing returns to innovative effort explain the end of super-exponential growth?\n\n\nPerhaps the diminishing returns to innovative effort have become steeper over time. Jones (2001) estimates φ = 0.5 from population and GDP/capita data from the last 10,000 years.[161](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote161_bpnnk1k \" He does not estimate φ from the data, but tries out different values and chooses the one that seems to give the best fit - see p. 22.\") Bloom et al. (2020) estimate φ = -2 from 20th century data on US R&D efforts and TFP growth. Could increasingly steep diminishing returns to innovative effort explain the end of super-exponential growth?\n\n\n*Summary*\n\n\nThe models considered above suggest that the answer is ‘no’. When labor *is* accumulable, they predict super-exponential growth even with the conservative estimate of φ from Bloom et al. (2020). By contrast, when labor is *not* accumulable (it grows exponentially) they predict exponential growth for a wide range of φ values. In other words, changing φ from 0.5 to -2 doesn’t change whether growth is super-exponential; for any φ in this range (and indeed a larger range), growth is super-exponential just if labor is accumulable.\n\n\nIn these models, the key factor determining whether growth is super-exponential is not the value of φ, but whether labor is accumulable. While diminishing returns to innovative effort may be part of the story, it does not seem to be the key factor.\n\n\n*Analysis*\n\n\nWe’ve seen above that when labor **is** accumulable, these models comfortably predict super-exponential growth even with the conservative estimate of φ = -2 from Bloom et al. (2020); they also predict super-exponential growth for larger values of φ. Growth is super-exponential under a wide range of values for φ.\n\n\nBy contrast, when labor is **not** accumulable, but instead grows exponentially regardless of output, these models predict *exponential* growth for a wide range of φ values.\n\n\n* Jones (2001) and Kremer (1993) Part 3 make exactly this assumption. They specify fertility dynamics leading to exponential population growth, and GDP/capita growth is exponential as long as φ < 1. Growth is exponential for a wide range of φ.\n* We can also see this in the case of Roodman (2020). When labor grows exogenously, there’s exponential growth if: \n\n\\( α< \\frac {−ϕA}{1−ϕA} \\) where α is the exponent on capital. The capital share suggests α = 0.4. This implies there’s exponential growth as long as φA < -0.67. (This threshold is much higher than the estimate φA = -3 derived from Bloom et al. (2020) data.) Again, for a wide range of φA values, growth is exponential when labor isn’t accumulable. You can get a similar result for the endogenous growth model inspired by Aghion et al. (2017) discussed [below](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#EndogenousGrowth).\n\t+ Roodman (2020) estimates φA = 0.2 based on data going back to 10,000 BCE. This implies super-exponential growth, even with exogenous labor.\n\t+ However, the absence of super-exponential growth over the last 120 years seems like strong evidence against such high values of φA being accurate in the modern regime. Indeed, if you restrict the data set to start in 1000 AD, Roodman’s methodology implies φA = -1.3. 
With this value we again predict exponential growth when labor is exogenous.\n\t+ It is possible Roodman’s estimate unintentionally includes the effect of one-off changes like improved institutions for R&D and business innovation, rather than just estimating the diminishing returns to R&D.\n\n\n#### 8.2 Standard growth models adjusted to study the effects of AI\n\n\nThis section looks at standard growth models adjusted to study the possible growth effects of AI. These models treat AI as a form of capital. Some have their roots in the automation literature.\n\n\nIn the main report I claim that in many such models:\n\n\n\n> The basic story is: capital substitutes more effectively for labor → capital becomes more important → larger returns to accumulable inputs → faster growth. In essence, the feedback loop ‘more output → more capital → more output → …’ becomes more powerful and drives faster growth.\n> \n> \n\n\nHere I look at a series of models. First I consider endogenous growth models, then exogenous ones, then a task-based model. Within each class, I consider a few different possible models.\n\n\n#### 8.2.1 Endogenous growth models\n\n\n*Explosive growth with partial automation, Cobb-Douglas*\n\n\nFirst consider a Cobb-Douglas model where both goods and knowledge are produced by a mixture of capital and labor:\n\n\n\\( Y=A^ηK^α{L\\_Y}^{γ}W^{1−α−γ} \\)\n\\( \\dot A =A^{ϕ}K^{β}{L\\_A}^λW^{1−β−λ} \\)\n\\( \\dot K=sY−δK \\)\n*A* is technology and *K* is capital – both of these factors are accumulable. *LY* and *LA* are the human labor assigned to goods and knowledge production respectively – they are either constant or growing exponentially (it doesn’t affect the result either way). *W* is a fixed factor that can be interpreted as land or natural resources (e.g. a constant annual supply of energy from the sun).\n\n\nThe model is from Aghion et al. (2017), but I have added the fixed factor of land to make the model more conservative.\n\n\nIt is essentially a simple extension of the standard semi-endogenous model from Jones (1995), recognizing the roles of capital and natural resources as well as labor.\n\n\nThere is super-exponential growth, with growth rising without bound, if:\n\n\n\\( \\frac {ηβ}{1−α}>1−ϕ \\)\n(This claim is proved in [this technical appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#cobb-douglas-model).)\n\n\nIntuitively, the condition holds if the increasing returns to accumulable factors (represented by α, β, η) are stronger than the diminishing returns to technology R&D (represented by 1 – φ).\n\n\nHow far is this condition from being satisfied? Bloom et al. (2020) estimate φ = -2 on the assumption that η = 1 (which can be seen as a choice about the definition of *A*).[162](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote162_psl9ibn \" Note, Bloom et al. (2020) use a knowledge production function where only labor is an input. There is no role for capital, unlike in this model. This might change the estimate of φ somewhat.\") This estimate of φ is more conservative than other estimates. The condition becomes:\n\n\n\\( \\frac {β}{1−α}>3 \\)\nRecent data puts the capital share at 40%, suggesting α = β = 0.4:\n\n\n\\( \\frac {0.4}{0.6}>3 \\)\nThe condition is not satisfied. 
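As a quick check of this arithmetic (my own sketch, using only the parameter values quoted above: η = 1, φ = −2, and a capital share of 0.4 in both sectors), the snippet below evaluates the condition and then scans for the capital share at which it would start to hold:

```python
# Illustrative check (not from the paper) of the super-exponential growth condition
# eta * beta / (1 - alpha) > 1 - phi for the Cobb-Douglas endogenous model above.
eta, phi = 1.0, -2.0                  # values quoted in the text (Bloom et al. 2020 style)

def condition_holds(alpha, beta):
    """True if increasing returns to accumulable inputs beat diminishing returns to R&D."""
    return eta * beta / (1 - alpha) > 1 - phi

# Today's capital share of roughly 0.4 in both goods and knowledge production:
print(condition_holds(0.4, 0.4))      # False: 0.4 / 0.6 ≈ 0.67, well below 3

# Scan for the (equal) capital share at which the condition starts to hold:
for pct in range(40, 100):
    share = pct / 100
    if condition_holds(share, share):
        print(f"condition first holds at a capital share of {share:.2f}")  # 0.76
        break
```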
It would be satisfied, however, if the capital share rose above 0.75 in both goods and knowledge production.[163](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote163_k87j1m9 \" Alternatively, if labor were automated it would be satisfied. The sum of exponents of capital and labor is typically taken to be close to 1 and so > 0.75.\") On current trends, this is unlikely to happen in the next couple of decades,[164](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote164_t070ka7 \" The capital share has risen by 5% in the last 20 years (source).\") but could happen by the end of the century. (This condition can be thought of as an empirical test for whether explosive growth is near, like those discussed in [Nordhaus (2021)](https://www.aeaweb.org/articles?id=10.1257/mac.20170105&&from=f). It lowers my probability that TAI will happen in the next 20 years, but not far beyond that.)\n\n\n(Note: Arrow (1962) is another Cobb-Douglas endogenous growth model which implies advanced AI can drive explosive growth – see [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#arrow-1962).)\n\n\n*Explosive growth with full automation, CES production function.*\n\n\nCobb-Douglas models assume that the elasticity of substitution = 1. [Constant Elasticity of Substitution (CES) production functions](https://en.wikipedia.org/wiki/Constant_elasticity_of_substitution#CES_production_function) provide a more general setting in which the elasticity of substitution can take on any value. The expression \\( K^{α}L^{1−α} \\) is replaced by:\n\n\n\\( F\\_σ(K,L)=(αK^ρ+(1−α)L^ρ)^{\\frac {1}{ρ}}, with \\, ρ=\\frac {σ−1}{σ} \\)\nWe can use this to generalize the above model as follows:\n\n\n\\( Y=A^ηF\\_{σY}(K,L)^αW^{1−α} \\)\n\\( \\dot A=A^ϕF\\_{σA}(K,L)^βW^{1−β} \\)\n\\( \\dot K=sY−δK \\)\nwhere σY and σA are the elasticities of substitution between capital and labor in goods and knowledge production respectively. When σY = σA = 1, this reduces to the Cobb-Douglas system above. α and β now represent the returns to doubling both labor *and* capital. It is standard to assume α = β = 1 but I continue to include *W* so that the model is conservative.\n\n\n(Again this model is a generalization of the endogenous growth model in Aghion et al. (2017). A similar model is analyzed very carefully in Trammell and Korinek (2021) [Section 5.2](https://docs.google.com/document/d/1XCn4Pk44evZEjbmPD-zKlj26Z_bhf0ABeWVID_4L5sg/edit#heading=h.tjszzu4xeruo).)\n\n\nIn this setting, σY and σA are the crucial determinants of whether there is explosive growth. The tipping point is when these parameters rise above 1. This has an intuitive explanation. When σ < 1, *Fσ*(*K*, *L*) is bottlenecked by its smallest argument. If *L* is held fixed, there is a limit to how large *Fσ*(*K*, *L*) can be, no matter how large *K* becomes. But when σ > 1, there is no such bottleneck: capital accumulation alone can cause *Fσ*(*K*, *L*) to rise without limit, even with fixed *L*.\n\n\nThe conditions for sustained super-exponential growth depend on whether σY and σA are above or below 1. 
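Before walking through the cases, the bottleneck claim is easy to verify numerically. The sketch below is my own illustration (the capital weight a = 0.4 is an arbitrary assumption, and σ = 1 is excluded because it requires the separate Cobb-Douglas formula); it evaluates F_σ(K, L) at ever larger K with L held fixed:

```python
# Illustrative sketch (not from the report): the CES aggregate
# F_sigma(K, L) = (a*K^rho + (1-a)*L^rho)^(1/rho), with rho = (sigma - 1) / sigma.
# With L held fixed, F is bounded when sigma < 1 and unbounded when sigma > 1.

def ces(K, L, sigma, a=0.4):              # a is an assumed capital weight
    rho = (sigma - 1) / sigma
    return (a * K**rho + (1 - a) * L**rho) ** (1 / rho)

L = 1.0
for sigma in (0.5, 2.0):
    outputs = [ces(K, L, sigma) for K in (2, 10, 1e6)]
    print(f"sigma = {sigma}: " + ", ".join(f"{v:,.1f}" for v in outputs))

# sigma = 0.5: 1.2, 1.6, 1.7        -> output stalls near the ceiling 1/(1-a) ≈ 1.7
# sigma = 2.0: 1.4, 3.5, 160,480.4  -> capital accumulation alone keeps output growing
```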
I discuss four possibilities.\n\n\nWhen σY < 1, σA < 1, there is no super-exponential growth unless φ > 1, as shown [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#case-1).\n\n\nWhen σY > 1, σA < 1, the condition is φ > 1 *or* α ≥ 1, as shown [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#case-2). In other words, if there are constant returns to labor and capital in combination, a standard assumption, then increasing σY above 1 leads to super-exponential growth. (Note: even if α < 1, there may be an increase in growth when σY rises above 1. I discuss this dynamic more in the task-based model below.)\n\n\nWhen σY < 1, σA > 1, a sufficient condition is (as deduced [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#case-3)):\n\n\n\\( ηβ>1−ϕ \\)\nSuper-exponential growth occurs if increasing returns are sufficient to overpower diminishing returns to technology R&D. Aghion et al. (2017) analyze the standard case where η = 1 and β = 1. The condition becomes:\n\n\n\\( ϕ>0 \\)\n(I discuss the related ‘search limits’ objection to explosive growth in [a previous section](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixDiminishing).)\n\n\nWhen σY > 1, σA > 1, a sufficient condition is (as deduced [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#case-4)):\n\n\n\\( \\frac {ηβ}{1−α}>1−ϕ \\)\nRemember, α and β now represent the returns to doubling both labor *and* capital, so values close to 1 are reasonable. Let’s take α = β = 0.9. Bloom et al. (2020) estimate φ = -2 on the assumption that η = 1; let’s use these values. The condition is satisfied:\n\n\n\\( 9>3 \\)\n(The latter two conditions can be derived from the Cobb-Douglas condition using the following substitutions:\n\n\n\\( F\\_{σ<1}(K,L)→L \\)\n\\( F\\_{σ>1}(K,L)→K \\)\nThese substitutions can also be used to derive super-exponential growth conditions when σA = 1, σY ≠ 1, or when σA ≠ 1, σY = 1.)\n\n\nThe takeaway is that if AI increases the substitutability between labor and capital in either goods or knowledge production, this could lead to super-exponential growth. Reasonable parameter values suggest that doing so in both would lead to super-exponential growth, but doing so in just one may not be sufficient.\n\n\nTrammell and Korinek (2021) [Section 5.1.](https://docs.google.com/document/d/1XCn4Pk44evZEjbmPD-zKlj26Z_bhf0ABeWVID_4L5sg/edit#heading=h.8jc2k47bcoe9) discusses an endogenous ‘learning by doing’ model where a similar mechanism can lead to super-exponential growth.\n\n\n#### 8.2.2 Exogenous growth models\n\n\n*No fixed factor*\n\n\nNordhaus (2021) considers the following model:\n\n\n\\( Y=F\\_σ(AK,L) \\)\n\\( \\dot K=sY−δK \\)\n\\( A=A\\_0e^{gAt} \\)\nThe key differences from the endogenous growth model considered above are:\n\n\n* No ideas production function: technology is exogenous.\n* No fixed factor *W* in the goods production. We add this later.\n* Technology only augments capital. 
This doesn’t affect the result.\n\n\nIf σ > 1 then the capital share rises to unity and the model approximates the following:\n\n\n\\( Y=AK \\)\n\\( \\dot K=sAK−δK \\)\n\\( A=A\\_0e^{gAt} \\)\n(This approximation, as well as the case σ < 1, is discussed in detail in [this technical appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#case-14-15-4-1).)\n\n\nNow the growth rate of capital *itself* grows exponentially:\n\n\n\\( gK=sA−δ=sA\\_0e^{gAt}−δ≃sA\\_0e^{gAt} \\)\nThe growth rate of output follows suit:\n\n\n\\( gY=gK+gA=sA\\_0e^{gAt}−δ+gA≃sA\\_0e^{gAt} \\)\nGrowth is super-exponential. (Note: although growth increases without bound, output does not go to infinity in finite time.) Again, the pattern of explanation is: capital becomes more substitutable with labor → capital becomes more important → growth increases.\n\n\nEven if technological progress halts altogether, growth is still:\n\n\n\\( gY=gK=sA\\_f−δ \\)\nwhere *Af* is the final level of technology. This growth could be very fast.\n\n\nHow robust is this result to our initial assumptions?\n\n\n* We would have the same result if the model had been *Y* = *AFσ*(*K*, *L*) rather than *Y* = *Fσ*(*AK*, *L*). If the model was *Y* = *Fσ*(*K*, *AL*), we would not have unbounded growth.[165](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote165_h4y5i9j \" We’d approximate an AK model with constant A and growth driven by capital accumulation. \")\n* You get the same result in the human-capital accumulation model of Lucas (1988) – see [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#HumanCapital).\n* The result really depends on constant returns to *K* and *L*, combined with some form of capital-augmenting technological progress.\n* The next section relaxes the assumption of constant returns to *K* and *L*.\n\n\n*With a fixed factor*\n\n\nLet’s consider a more conservative case, where there are diminishing returns to labor and capital in combination due to some fixed factor and where full automation doesn’t occur. This model is inspired by Hanson (2001):[166](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote166_xhm48gc \" I found the presentation in Trammell and Korinek (2021) Section 3.3 helpful here.\")\n\n\n\\( Y=(AK)^αL^βW^{1−α−β} \\)\nThe equations for *A* and *K* are as above. We assume *L* is constant.\n\n\nThe steady-state growth rate is (proof [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#hanson-2001)):\n\n\n\\( g\\_Y= \\frac {αg\\_A}{1−α} \\)\nIf there is an increase in the capital share due to AI, growth will increase.\n\n\nSuppose AI increases the capital share from α to α + *f*β. (In a task-based model this corresponds to automating fraction *f* of tasks.) Production becomes:\n\n\n\\( Y=(AK)^{α+fβ}L^{(1−f)β}W^{1−α−β} \\)\nGrowth increases to (proof [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI)):\n\n\n\\( g\\_Y= \\frac {(α+fβ)g\\_A}{1−α−fβ} \\)\nAgain, the basic story is that the importance of (accumulable) capital increases, and growth increases as a result.[167](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote167_ixm6tbg \" You get slightly more moderate growth increases if you treat A as labor and capital augmenting (TFP), rather than just capital augmenting. 
You can also replace (AK)α × Lβ with F(AK, L)(α + β) and get a similar qualitative result. Raising the elasticity of substitution above 1 causes the growth rate to increase.\")\n\n\nIf α + β is close to 1, and *f* = 1 (full automation) the new growth rate could be very high. If α + β = 0.9 then:\n\n\n\\( g\\_Y=9g\\_A \\)\nHanson uses a more realistic model of AI automation. He separates out standard capital from computer capital, and assumes the productivity of computer capital doubles every two years, in line with Moore’s law. He finds that fully automating labor with computer capital can cause growth to rise from 4.3% a year to 45%.\n\n\nTrammell and Korinek (2021) [Section 3.4](https://docs.google.com/document/d/1XCn4Pk44evZEjbmPD-zKlj26Z_bhf0ABeWVID_4L5sg/edit#heading=h.pnqap1gk73f2) discusses other exogenous growth models where a similar mechanism causes growth to increase.\n\n\n#### 8.2.3 Task-based models\n\n\nSo far all the models have treated the economy as a homogenous mass, and talked about how well AI substitutes for human labor in general. Really though, there are many distinct tasks in the economy, and AI might substitute better in some tasks than others. Aghion et al. (2017) develops a model along these lines. In the model tasks are *gross complements*. Technically, this means that the elasticity of substitution between tasks is below one. Intuitively, it means that each task is essential: total output is bottlenecked by the task we perform least well.\n\n\nI will not describe the mathematics of the model (interested readers can read the paper), but rather its implications for growth.\n\n\nFirstly, it no longer makes sense to talk about the substitutability of capital and labor in general. Rather the substitutability varies between tasks. This is sensible.\n\n\nSecondly, we can permanently increase growth in the model by automating a constant fraction of non-automated tasks each year. Automating a task requires the elasticity of substitution to exceed 1 for that task. Presumably we are already automating some tasks each year and this is contributing to growth. But if advanced AI unleashes a process by which the rate of task automation itself increases, this would increase growth.[168](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote168_i6ngtwt \" Growth only increases if capital accumulation is fast enough. This caps growth below s × A - δ. The reinvestment rate s is bounded below 1 and δ is constant; so super-exponential growth can only be sustained if A, the level of technology, grows.\") The quicker the pace of automation, the higher the growth rate. If we automate an increasing fraction of tasks each year we can maintain super-exponential growth.[169](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote169_dwggk4c \" This can only be sustained if there is technological progress in the background. See footnote two above.\") However, this prediction assumes we can seamlessly reallocate human labor to the remaining tasks. 
If this isn’t possible (which seems likely!), then the actual boost to growth would be lower than that predicted by the model.\n\n\nThis path to higher growth is consistent with the basic story discussed in the main report: AI increases the substitutability of capital → capital is increasingly important (it performs an increasingly large fraction of tasks) → super-exponential growth.\n\n\nThirdly, if some fixed set of essential tasks remains unautomated, those tasks will eventually bottleneck growth. Growth will fall back down to the background growth rate (that doesn’t depend on automation). I discuss whether this undermines the prospect of explosive growth [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#IfThereIs).\n\n\n#### 8.2.4 How does the case of perfect substitution (‘AI robots’) relate to these models?\n\n\nThe case of perfect substitution corresponds to σ = ∞. So it corresponds to σ > 1 in the CES models. In the Cobb-Douglas models it corresponds to the share of capital rising to what was previously the joint share of capital and labor. This case leads to faster growth in all the above models with plausible parameter values, and to super-exponential growth in all the models except the conservative exogenous model.\n\n\n#### 8.3 What level of AI would be sufficient for explosive growth?\n\n\nGiven all of the above growth models, what’s our best guess about the level of AI that would likely be sufficient for explosive growth? Here I set aside the possibility that growth is bottlenecked by a factor ignored in these models, e.g. regulation. A better statement of the question is: if any level of AI would drive explosive growth, what level would be sufficient?\n\n\nAnswering this question inevitably involves a large amount of speculation. I will list the main possibilities suggested by the models above, and comment on how plausible I find them. It goes without saying that these predictions are all highly speculative; they may be ‘the best we have to go on’ but they’re not very ‘good’ in an absolute sense.\n\n\nHere are three main answers to the question: ‘What level of AI is sufficient for explosive growth?’:\n\n\n1. **AI that allows us to pass a ‘tipping point’ in the capital share**. The Cobb-Douglas models typically suggest that as the capital share in goods production and knowledge production rises, growth will be exponential until a ‘tipping point’ is passed. (We imagine holding the diminishing returns to R&D fixed.) After this point, growth is super-exponential and there will be explosive growth within a few decades.\n\n\nI put limited weight on this view, as the ‘tipping points’ are not reproduced in the CES setting, which generalizes Cobb-Douglas. Nonetheless, Cobb-Douglas provides a fairly accurate description of the last 100 years of growth and shouldn’t be dismissed.\n\n\n2. **AI that raises the elasticity of substitution σ between capital and labor above 1.** When σ < 1 there is a limit to how large output can be, no matter how much capital is accumulated. Intuitively, in this regime capital is only useful when there’s labor to combine it with. But when σ > 1, capital accumulation alone can cause output to rise without limit, even with a fixed labor supply. Intuitively, in this regime capital doesn’t *have* to be combined with labor to be useful (although labor may still be very helpful). When this condition is satisfied in goods or knowledge production, explosive growth is plausible. 
When it’s satisfied in both, explosive growth looks likely to happen.\n\n\nI put more weight on this view. However, these models have their limits. They assume that the degree of substitutability between labor and capital is homogenous across the economy, rather than depending on the task being performed.\n\n\n3. **AI that allows us to automate tasks very quickly**. (This could either be because an AI system itself replaces humans in many tasks, or because the AI quickly finds ways to automate un-automated tasks.) In the task-based model of Aghion et al. (2017), automating a task provides a temporary boost to growth (a ‘level effect’). If we automate a constant fraction of un-automated tasks each year, this provides a constant boost to growth. If we automate a large enough fraction of non-automated tasks sufficiently quickly, growth could be boosted all the way to 30%.[170](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote170_yhk81fb \" This only leads to explosive growth if there’s capital augmenting technology, or if the savings rate is large enough.\") A special case of this story is of course full-automation.\n\n\nI put the most weight on this third view. Nonetheless, it has some drawbacks.\n\n\n* It doesn’t address the process by which tasks are automated and how this might feed back into the growth process.\n* It doesn’t seem to be well-positioned to consider the possible introduction of novel tasks. In their model, introducing a new task can only ever decrease output.\n* Like any model, it makes unrealistic assumptions. Most striking is the assumption that human workers are seamlessly reallocated from automated tasks to un-automated tasks. Friction in this process could slow down growth if we haven’t achieved full automation.\n* It emphasizes the possibility of growth being bottlenecked by tasks that are hard to automate but essential. But it may be possible to restructure workflows to remove tasks that cannot be automated. This should reduce the weight we place on the model.\n\n\nOne common theme that I’m inclined to accept is that explosive growth would not require perfectly substitutable AI. Some weaker condition is likely sufficient if explosive growth is possible at all. Overall, my view is that explosive growth would require AI that substantially accelerates the automation of a very wide range of tasks in production, R&D, and the implementation of new technologies.\n\n\n\n\n---\n\n\n9. Appendix D: Ignorance story\n------------------------------\n\n\nAccording to the *ignorance story*, we’re simply not in a position to know what growth will look like over the long-term. Both the *standard story* (predicting roughly exponential growth) and the *explosive growth* stories are suspect, and we shouldn’t be confident in either. Rather we should place some weight in both, and also some weight in the possibility that the pattern of long-run growth will be different to the predictions of either story.\n\n\nThe *ignorance story* is primarily motivated by distrusting the *standard story* and the *explosive growth story*. This leaves us in a position where we don’t have a good explanation for the historical pattern of growth. We don’t know why growth has increased so much over the last 10,000 years, so we don’t know if growth will increase again. 
And we don’t know why frontier per-capita growth has been exponential for the last 150 years, so we don’t know how long this trend will continue for.\n\n\nWe shouldn’t confidently expect explosive growth – this would require us to trust the *explosive growth story*. But nor can we confidently rule it out – we’d either have to rule out sufficient AI progress happening by the end of the century, or rule out *all* of the growth models that predict explosive growth under the assumption that capital substitutes for labor. I discuss some of these [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixC), and [Trammell and Korinek (2021)](https://docs.google.com/document/d/1XCn4Pk44evZEjbmPD-zKlj26Z_bhf0ABeWVID_4L5sg/edit#heading=h.8jc2k47bcoe9) discusses many more.\n\n\n#### 9.1 The step-change story of growth\n\n\nThis report focuses on the possibility that GWP grew super-exponentially from 10,000 BCE to 1950, with some random fluctuations. The increasing returns mechanism, important in some other prominent theories of long-run growth, provides a plausible explanation for historical increases in growth..\n\n\nHowever, the pre-modern GWP data is poor quality and it is possible that GWP followed a different trajectory. More precisely, GWP may have grown at a slow exponential rate from 10,000 BCE to 1500, and then there may have been a one-off transition to a faster rate of exponential growth. If this transition is allowed to last many centuries, from 1500 to 1900, this ‘step change’ story is consistent with the data.\n\n\nLet a ‘step-change’ model be any that doesn’t use the mechanism of increasing returns to explain very long-run growth, but instead focuses on a one-off structural transition around the industrial revolution.[171](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote171_kwatddj \" For example, see Hanson (2000), Hansen and Prescott (2002), Goodfriend and McDermott (1995), Lucas (1998), Stokey (2001) and Tamura (2002).\")\n\n\nStep-change models are typically complex, using many parameters to describe the different regimes and the transition between them. This isn’t necessarily a drawback: perhaps we should not expect economic history to be simple. Further, the step-change model is more consistent with the academic consensus that the industrial revolution was a pivotal period, breaking from previous trends.\n\n\n#### 9.2 The step-change story of growth lends itself to the ignorance story\n\n\nWhat should you think about explosive growth, if you accept the step-change story?\n\n\n[Hanson (2000)](https://www.researchgate.net/profile/Robin_Hanson2/publication/228557195_Long-term_growth_as_a_sequence_of_exponential_modes/links/0046351fac48cd6ca3000000/Long-term-growth-as-a-sequence-of-exponential-modes.pdf) is an example of the step-change story. Hanson models historical GWP as a sequence of exponential growth modes. The Neolithic revolution in 10,000 BCE was the first step-change, increasing growth from hunter-gatherer levels to agricultural society levels. Then the industrial revolution in 1700, the second step-change, increased growth from agricultural levels to modern levels. 
(In some of Hanson’s models, there are two step-changes around the industrial revolution.)\n\n\nIf we were in the final growth mode, Hanson’s model would predict a constant rate of exponential growth going forward.[172](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote172_ghahzbc \" In fact, Hanson’s preferred model from this paper predicts that, even without another growth mode, growth rates will continue to increase to ~12% (6 year doubling time). Why is this? In the model, we’re still transitioning into the current growth mode. The growth rate will increase while we finish this transition, settling on the new growth mode’s rate of 12%. Though this isn’t quite sufficient for our definition of 'explosive growth', it’s still very significant.\") However, Hanson uses the pattern of past step-changes to make predictions about the next one. He tentatively predicts that the next step-change will occur by 2100 and lead to GWP doubling every two weeks or less (growth of ≫ 100%).[173](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote173_kphickc \"In summary, if one takes seriously the model of economic growth as a series of exponential growth modes, and if relative change parameters of a new transition are likely to be similar to such parameters describing old transitions, then it seems hard to escape the conclusion that the world economy could see a very dramatic change within the next century, to a new economic growth mode with a doubling time of roughly two weeks or less.\")\n\n\nBut we should not be confident in our ability to predict the timing of future step-changes in growth from past examples. Plausibly there is no pattern in such structural breaks, and it seems unlikely any pattern could be discerned from the limited examples we have seen. Someone embracing Hanson’s view of long-run GWP should see his predictions about future step-changes as highly uncertain. They may be correct, but may not be. In other words, they should accept the ignorance story of long-run GWP.\n\n\nCould you accept the step-change theory and rule out explosive growth? You would need to believe that no more step changes will occur, despite some having occurred in the past. What could justify having confidence in this view? A natural answer is ‘I just cannot see how there could be another significant increase in growth’. However, this answer has two problems. Firstly, it may not be possible to anticipate what the step-changes will be before they happen. People in 1600 may not have been able to imagine the industrial processes that allowed growth to increase so significantly, but they’d have been wrong to rule out step-changes on this basis. Secondly, mechanisms for a faster growth regime have been suggested. [Hanson (2016)](https://ageofem.com/) describes a digital economy that doubles every month and various economic models suggest that significant automation could lead to super-exponential growth ([more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixC)).\n\n\n#### 9.3 How ignorant are we?\n\n\nI think the ignorance story is a reasonable view, and put some weight on it.\n\n\nUltimately though, I put more weight on a specific view of the long-run growth. This is the view offered by models of very long run growth like Jones (2001): increasing returns (to accumulable inputs) led to super-exponential growth of population and technology from ancient times until about 1900. 
Then, as a result of the demographic transition, population grew exponentially, driving exponential growth of technology and GDP/capita.\n\n\nOf course, this view omits many details and specific factors affecting growth. But I think it highlights some crucial dynamics driving long-run growth.\n\n\nThis view implies that 21st century growth will be sub-exponential by default: population growth is expected to fall, and so GDP/capita growth should also fall. However, if we develop AI that is highly substitutable with labor, then models of this sort suggest that increasing returns (to accumulable inputs) will once again lead to super-exponential growth ([more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixC)).\n\n\n\n\n---\n\n\n10. Appendix E: Standard story\n------------------------------\n\n\nThis is not one story, but a collection of the methods used by contemporary economists to make long-run projections of GWP, along with the justifications for these methodologies.\n\n\nIn this section I:\n\n\n* Briefly describe three methods that economists use to project GWP, with a focus on why they judge explosive growth to be highly unlikely ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixE)).\n* Show a probability distribution over future GWP that, from my very brief survey, is representative of the views of contemporary economists ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixE)).\n* Summarize the strengths and potential limitations of this collection of methods ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixE)).\n\n\n*Note: this section focuses solely on the papers I found projecting GWP out to 2100. It does not cover the endogenous growth literature, which contains various explanations of the recent period of exponential growth. I discuss these explanations in [Appendix B](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixB).*\n\n\n#### 10.1 Methods used to project GWP\n\n\nI have only done an extremely brief review of the literature on long-term GWP extrapolations. I have come across three methods for extrapolating GWP:\n\n\n1. Low-frequency forecasts – use econometric methods to extrapolate trends in GDP per capita, usually starting in 1900 or later ([more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixB)).\n2. Growth models – calculate future growth from projected inputs of labor, capital and total factor productivity ([more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixE)).\n3. Expert elicitation – experts report their subjective probabilities of various levels of growth ([more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixE)).\n\n\nI’m primarily concerned with what these methods say about the prospect of explosive growth. In summary, all three methods assign very low probabilities to explosive growth by 2100. 
My understanding is that the primary reason for this is that they use relatively modern data, typically from after 1900, and this data shows no evidence of accelerating growth – during this time the rate of frontier GDP per capita growth has remained remarkably constant ([source](https://www.amazon.com/Fully-Grown-Stagnant-Economy-Success/dp/022666600X), [graphs](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI)).\n\n\n#### 10.1.1 Low frequency forecasts of GDP per capita data since 1900\n\n\n#### 10.1.1.1 How does it work?\n\n\nLow-frequency forecasting[174](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote174_smittwq \" See Muller (2008), Muller (2015) and Muller (2016) for descriptions of this framework, and Christensen (2018) and Muller (2019) for applications to GWP.\") is an econometric method designed to filter out short-horizon fluctuations caused by things like business cycles and pick up on longer-term trends.\n\n\nI’ve seen two applications of low-frequency forecasting to project GWP until 2100[175](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote175_20gft9p \" I expect that there are others.\"). The first[176](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote176_khuwanw \" Christensen (2018).\") simply takes a single data series, historical GWP per capita since 1900, and projects it forward in time. The second[177](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote177_1yeu9m5 \" Muller (2019).\") fits a more complex model to multiple data series, the historical GDP per capita of various countries. It can model complex relationships between these series, for example the tendency for certain groups of countries to cluster together and for low-income countries to approach frontier countries over time. Both models essentially project low-frequency trends in GDP per capita forward in time, without much reference to inside-view considerations.[178](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote178_zfq5izn \" One small caveat is that the model in Muller (2019) gives a special role to frontier economies, which it operationalises as OECD countries, in determining long-run average per-capita GWP growth. This incorporates the view that growth of frontier countries is a leading indicator of growth in other countries and so of GWP; this is arguably an inside-view consideration.\")\n\n\nEconometric models of this kind have the benefit of providing explicit probability distributions. For example, see these projections of US and Chinese GDP/capita from Muller (2019):\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageX.png)![](https://www.openphilanthropy.org/wp-content/uploads/image9-1.png)\n\n\n#### 10.1.1.2 Relation to the possibility of explosive growth\n\n\nThe structure of the model leads it to assign very low probabilities to the growth rate increasing significantly. So it assigns very low probabilities to explosive growth. In particular, the model assumes that the long-run growth rate oscillates around some constant.\n\n\nMore precisely, the models I’ve studied assume that per capita GWP growth[179](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote179_7rpktat \" In the case of Muller (2019), gt is the frontier GDP per capita. 
In the long run, the per capita GDPs of all other countries approach gt, so gt has a similar role to GWP per capita (which isn’t modeled directly).\") is given by:\n\n\n\\( g\\_t=μ+u\\_t \\)\nwhere μ is a constant and \\( u\\_t \\) is a (possibly random) component whose expected long-run average is 0. \\( g\\_t \\) either follows a [random walk](https://en.wikipedia.org/wiki/Random_walk#:~:text=A%20random%20walk%20is%20a,space%20such%20as%20the%20integers.) centered on μ[180](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote180_p04w0dp \" In the models I’ve seen, the random walk is constrained such that it’s unlikely to wander far from its center.\"), or oscillates around μ deterministically. Either way, μ is the long-run average growth rate. Growth in successive periods is correlated and can differ from μ for some time, but in the long run average growth will definitely tend towards μ. These models assume that the long-run growth rate is constant; in other words, they assume that long-run growth is exponential.\n\n\nThe only way that these models represent the possibility of explosive growth is through the hypothesis that the long-run growth rate μ is very large but, by a large coincidence, the random component \\( u\\_t \\) has always canceled this out and caused us to observe low growth. The resultant probability of explosive growth is extremely small. In both papers, the estimate of average GWP growth until 2100 was about 2% with a standard deviation of 1%. Explosive growth would be > 25 standard deviations from the mean!\n\n\nModels with this structure essentially rule out the possibility of an increasing growth rate *a priori*.[181](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote181_tlycqmd \" Even if this model was trained on data showing clear signs of super-exponential growth, it would still conclude that the long-run average growth rate was constant (probably close to the average growth rate in the dataset).\") This could be a valid modeling decision given that post-1900 GWP data, and certainly the frontier GDP data, shows no pattern of increasing per capita growth, and it is in general reasonable for a model’s assumptions to foreclose possibilities that have no support in the data. The problem, as we shall discuss later, is that pre-1900 data *does* show a pattern of super-exponential growth. Either way, it is fair to say that the low-frequency models are not designed to assess the probability of explosive growth, but rather to model the probability of hypotheses that are plausible given post-1900 data.\n\n\nCould we use the low-frequency methodology to get a more accurate idea of the probability of explosive growth? It should in principle be possible to fit a low-frequency model that, like Roodman’s, contains a parameter that controls whether long-run growth is sub- or super-exponential.[182](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote182_lzd4x1m \" The low-frequency approach focuses on modeling a stochastic component whose expectation is 0, but it can be combined with an arbitrary deterministic component. See p. 4 of Muller (2008).\") The possibility of explosive growth would then be represented by our uncertainty over the value of this parameter (as in Roodman’s model). I suspect that this model, trained on post-1900 data, would conclude that growth was very probably sub-exponential, but assign some small probability to it being slightly super-exponential. 
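To make concrete why a model of this form assigns essentially no probability to explosive growth, here is a minimal numerical sketch. It is an illustration rather than code from Christensen (2018) or Muller (2019): the 2% mean and 1% standard deviation come from the discussion above, while the AR(1) persistence and shock size are made-up values standing in for the constrained random walk described in the footnote.

```python
import math
import random

# Illustrative numbers from the discussion above: the low-frequency papers put
# average per-capita GWP growth to 2100 at roughly 2%, with a standard
# deviation of about 1%. The AR(1) form of u_t below is a stand-in for the
# constrained random walk described in the footnote; its persistence and
# shock size are made-up values.
MU = 2.0     # long-run average growth rate mu, percent per year
SIGMA = 1.0  # standard deviation of the growth forecast, percent

# How far out in the tail is 30% growth under a normal forecast distribution?
z = (30.0 - MU) / SIGMA
tail_probability = 0.5 * math.erfc(z / math.sqrt(2))  # upper tail of a normal
print(f"z-score of 30% growth: {z:.0f}")
print(f"implied probability of explosive growth: {tail_probability:.1e}")

# A toy simulation of g_t = mu + u_t with mean-reverting noise: growth wanders
# around mu but never gets anywhere near 30%.
random.seed(0)
u, path = 0.0, []
for year in range(2020, 2101):
    u = 0.8 * u + random.gauss(0, 0.4)
    path.append(MU + u)
print(f"highest simulated growth rate, 2020-2100: {max(path):.1f}%")
```

A low-frequency model extended with an explicit sub- vs super-exponential parameter, as suggested above, would behave differently.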
Explosive growth would eventually follow if growth were super-exponential. So I suspect that this methodology would conclude that the probability of explosive growth was small, but not as small as in the low-frequency models I have seen.\n\n\n#### 10.1.2 Growth models\n\n\n#### 10.1.2.1 How do they work?\n\n\nGrowth models describe how inputs like labor, capital and [total factor productivity](https://en.wikipedia.org/wiki/Total_factor_productivity) (TFP) combine together to make output (GDP). They also describe how these inputs change over time.\n\n\nHere I’ll just describe how an **extremely simple growth model** could be used to generate GWP projections. Then I’ll list some ways in which it could be made more realistic.\n\n\nOutput *Y* in a year is given by the following [Cobb-Douglas](https://en.wikipedia.org/wiki/Cobb%E2%80%93Douglas_production_function) equation:\n\n\n\\(Y=AK^αL^β \\)\nwhere\n\n\n* *A* is TFP.\n* *K* is the capital, a measure of all the equipment, buildings and other assets.\n* *L* is labor, a measure of the person-hours worked during the timestep.\n* α and β give the degree of diminishing returns to capital and labor; it’s often assumed that α + β = 1, meaning that a doubling the number of workers, buildings and equipment would double the amount of output.\n\n\nThe inputs change over time as follows:\n\n\n* *A* grows at a constant exponential rate – the average rate observed in the post-1900 data.\n* *L* in each year is given by UN projections of population growth.\n* The change in *K* between successive years is *ΔK* = *sY* – δ*K*, where *s* is the constant rate of capital investment and δ is the constant rate of capital depreciation.\n\t1. The value of *K* in year *n* can be calculated from the values of *K* and *Y* in the year *n* – 1\n\n\nYou generate GWP projections as follows:\n\n\n* Identify *Y* with GWP.\n* Get starting values of *Y*, *A*, *K* and *L* from data.\n* Project *A* and *L* for future years as described above.\n* Project *K* and *Y* for future years as follows:\n\t1. Predict next year’s *K* using the current values of *K* and *Y*.\n\t2. Predict next year’s *Y* using your projections for *A*, *K*, and *L* next year. Now you have *K* and *Y* for next year.\n\t3. Repeat the above two steps for later and later years.\n\n\nThe above model is very basic; there are many ways of making it more sophisticated. Perhaps the most common is to project each country’s growth separately and model catch-up effects.[183](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote183_5j0e70k \" See Foure (2012), Johansson (2013), Crespo (2017), Leimbach (2016).\") You could also use a different [production function](https://en.wikipedia.org/wiki/Production_function) from Cobb-Douglas, introduce additional input factors like human capital and natural resources[184](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote184_i1wfnad \" For example, Foure (2012) introduces energy as an additional factor.\"), use sophisticated theory and econometrics to inform the values for the factors[185](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote185_noy99qb \" Foure (2012) estimates the rate of change of A in each country using a catch-up model. 
This model implies that a country's speed of catch-up is related to its level of secondary education and its ability to push forward the frontier is related to its level of tertiary education; the model is fitted using historical data. It also uses data on female labor force participation to inform its projection of L.\") and constants[186](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote186_b1xip3n \" Foure (2012) allows s to vary between countries and over time, using a theory of savings and investment.\") at each timestep, control for outlier events like the financial crisis, and model additional factors like changing exchange rates. These choices can significantly affect the predictions, and may embody significant disagreements between economists. Nonetheless, many long-run extrapolations of GWP that I’ve seen use a growth model that is, at its core, similar to my simple example (a rough code sketch of this kind of projection loop is given below).\n\n\nMy impression is that these models are the most highly regarded of the three methods. They can incorporate wide-ranging relevant data sources and theoretical insights.\n\n\nOne downside of these models is that the ones I’ve seen only provide point estimates of GWP in each year, not probability distributions. Uncertainty is typically represented by considering multiple *scenarios* with different input assumptions, and looking at how the projections differ between the scenarios. For example, scenarios might differ about the rate at which the TFP of lower-income countries approaches the global frontier.[187](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote187_p02lt9g \" For example, see Johansson (2013) and the overview of the Shared Socioeconomic Pathways, Riahi (2017).\") The point estimates from such models typically find that average per capita GWP growth will be in the range 1 – 3%.\n\n\n#### 10.1.2.2 Relation to the possibility of explosive growth\n\n\nMost of the long-run growth models I’ve seen set frontier TFP exogenously, stipulating that it grows at a constant rate similar to its recent historical average.[188](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote188_csa0sgh \" For example, see Johansson (2013), Crespo (2017), Leimbach (2016).\") While individual countries can temporarily grow somewhat faster than this due to catch-up growth, the long-run GDP growth of all countries is capped by this exogenous frontier TFP growth ([source](https://en.wikipedia.org/wiki/Solow%E2%80%93Swan_model#Long-run_implications)).\n\n\nThe structure of most of these models, in particular their assumption of constant frontier TFP growth, rules out explosive growth *a priori*.[189](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote189_fpp3y33 \" This suggestion might be strengthened by the fact that advocates of singularity stories believe it will be caused by technological change, and so by explosive growth in TFP.\") This is supported by the relative constancy of frontier TFP growth since 1900, but is undermined by earlier data points.\n\n\nA few models do allow TFP to vary in principle, but still do not predict explosive growth because they only use post-1900 data. 
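Here is the rough sketch referred to above: the simple projection recipe of the previous subsection, run with made-up starting values and parameters (none of these numbers come from the papers cited). Because frontier TFP growth is fixed exogenously, projected output growth settles down to a modest constant rate; with α + β = 1 it converges to roughly \( g\_A/(1-α)+g\_L \) on a balanced growth path, so capping TFP growth caps GWP growth.

```python
# A minimal sketch of the simple Cobb-Douglas projection recipe described in
# the previous subsection. All starting values and parameters are made up
# for illustration; real exercises calibrate them from data and typically
# model each country separately. The point to notice: with frontier TFP (A)
# growing at a fixed exponential rate, projected output growth is capped.

ALPHA, BETA = 0.3, 0.7   # output elasticities; alpha + beta = 1
S, DELTA = 0.25, 0.05    # assumed savings rate and depreciation rate
G_A = 0.01               # assumed constant TFP growth (1% per year)
G_L = 0.003              # assumed population growth (0.3% per year)

def output(A, K, L):
    return A * K**ALPHA * L**BETA

A, K, L = 1.0, 300.0, 100.0
prev_Y = output(A, K, L)
for year in range(2026, 2101):
    K = K + S * prev_Y - DELTA * K   # capital accumulation: change in K = s*Y - delta*K
    A *= 1 + G_A                     # exogenous TFP growth
    L *= 1 + G_L                     # exogenous population growth
    Y = output(A, K, L)
    if year % 25 == 0:
        print(year, f"projected GWP growth: {100 * (Y / prev_Y - 1):.2f}%")
    prev_Y = Y
```

Some models do let frontier TFP growth respond to other variables rather than fixing it outright, but, fitted to post-1900 data, the projected increases remain modest.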
For example, [Foure (2012)](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2004332) allows frontier TFP growth to depend on the amount of tertiary education and finds only moderate and bounded increases of TFP growth with tertiary education.[190](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote190_q466yta \" Even models like these do not explain increases in TFP in the way that endogenous growth models, discussed below, aim to do. They simply calculate regression coefficients for TFP growth from education level, but this is different from providing a model that explains how TFP growth results from education (which is the sort of thing endogenous growth models try and do). In other words, the mathematics of these regressions is not designed to represent the process by which economic activity leads to increases in TFP, but rather to discern high-level correlations.\")\n\n\nThe more fundamental reason these models don’t predict explosive growth is not their structure but their exclusive use of post-1900 data, which shows remarkable constancy in growth in frontier countries. This data typically motivates a choice of model that rules out explosive growth and ensures that more flexible models won’t predict explosive growth either.\n\n\n#### 10.1.3 Expert elicitation\n\n\n#### 10.1.3.1 How does it work?\n\n\nGWP forecasts are made by a collection of experts and then aggregated. These experts can draw upon the formal methods discussed above and also incorporate further sources of information and the possibility of trend-breaking events. This seems particularly appropriate to the present study, as explosive growth would break trends going back to 1900.\n\n\nI focus exclusively on [Christensen (2018)](https://www.pnas.org/content/115/21/5409), the most systematic application of this methodology to long-run GWP forecasts I have seen.\n\n\nIn this study, experts were chosen by ‘a process of nomination by a panel of peers’ and the resultant experts varied in both ‘field and methodological orientation’.[191](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote191_sk1jgkc \" More details on the process:The criteria for nomination included contributions to the economic growth literature, familiarity with empirical research on medium-run and long-run growth, and diversity in regional expertise. Participants were selected on the basis of the frequency of nomination. Upon selection, the experts were contacted by email and provided with a link to the digital Qualtrics survey. Based on research papers in Economics (RePEc) factor rankings, the overall peer-selected sample includes: 3 of the top 10 economists in any field, 2 of the top 5 development economists, 2 of the top 5 growth economists, 1 of the top 5 macroeconomists, 1 of the top 5 economic historians, and 1 of the top 5 forecasting economists. In total, 13 experts completed the survey.\") Experts gave their median and other percentile estimates (10th, 25th, 50th, 75th, 90th percentiles) of the average annual per-capita growth of GWP until 2100. For each percentile, the [trimmed mean](https://www.investopedia.com/terms/t/trimmed_mean.asp)[192](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote192_waoh65t \" The results for each percentile vary by less than 0.1% per capita growth if we instead use the mean, and by less than 0.2% if we instead use the median. 
See Table S2 here.\") was calculated and then these means were used as the corresponding percentile of the aggregated distribution.\n\n\nAs well as providing aggregated quantile estimates, Christensen (2018) fits these estimates to a normal distribution. The mean per capita growth rate is 2.06% with a standard deviation of 1.12%. This provides a full probability distribution over GWP per capita for each year.\n\n\n#### 10.1.3.2 Relation to the possibility of explosive growth\n\n\nIf any expert believed there was > 10% chance of explosive growth, this would have shown up on the survey results in their 90th percentile estimate. However, Figure 7 of their [appendix](https://www.pnas.org/content/pnas/suppl/2018/05/09/1713628115.DCSupplemental/pnas.1713628115.sapp.pdf) shows that no expert’s 90th percentile exceeds 6%. Strictly speaking, this is compatible with the possibility that some experts think there is a ~9% probability of explosive growth this century, but practically speaking this seems unlikely. The experts’ quantiles, both individually and in aggregate, were a good fit for a normal distribution (see Figure 7), which would assign ≪ 1% probability to explosive growth.\n\n\nNonetheless, there are some reasons to think that extremely high and extremely low growth are somewhat more likely than the survey results suggest:\n\n\n* There is a large literature on biases in probabilistic reasoning in expert judgement. It suggests that people’s 10 – 90% confidence intervals are typically much too narrow, containing the true value much less than 80% of the time. Further, people tend to anchor their uncertainty estimates to an initial point estimate. These effects are especially pronounced for highly uncertain questions. The survey tried to adjust for these effects[193](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote193_rg45qad \" Participants were reminded about the overconfidence bias and asked to give percentile estimates for three practice questions to help calibrate their judgements. \"), but the same literature suggests that these biases are very hard to eliminate.\n* The experts self-reported their level of expertise as 6 out of 10, where 5 indicates having studied the topic but not being an expert and 10 indicates being a leading expert.[194](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote194_fwz4gu3 \"From p. 13 of the appendix: A rating of 1 indicates little expertise, a rating of 5 indicates the expertise of someone who has studied the subject but is not a specialist, and a rating of 10 indicates expertise that is among the leading experts. The mean self-reported level of expertise is 5.99 and the median is 6.\") The authors ‘take this as suggestive that experts do not express a high level of confidence in their ability to forecast long-run growth outcomes’. It also seems to suggest that there is no clear body of experts that specializes in answering this question and has thought deeply about it. This increases the chance that there are legitimate ways of approaching the problem that the experts have not fully considered.\n\n\n#### 10.2 Probability distribution over GWP\n\n\nI want an all-things-considered probability distribution over GWP that is representative of the different views and methodologies of the standard story. This is so I can compare it with distributions from the other big-picture stories, and (at a later time) compare it to the economic growth that we think would result from TAI. 
If you’re not interested in this, skip to the [next section](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixE).\n\n\nI’ve decided to use the probability distribution constructed from above-discussed expert elicitation in [Christensen (2018)](https://www.pnas.org/content/115/21/5409). It has a mean of 2.06% and a standard deviation of 1.12%. I chose it for a few reasons:\n\n\n* The experts can use the results of the other two methods I’ve discussed (econometric modeling and growth models) to inform their projections.\n* Experts can take into account the possibility of trend-breaking events and other factors that are hard to incorporate into a formal model.\n* The experts in [Christensen (2018)](https://www.pnas.org/content/115/21/5409) were selected to represent a wide-range of fields and methodologies.\n* The central aim of Christensen’s paper was to get accurate estimates of our uncertainty, and its methodology and survey structure was designed to achieve this goal.\n* The expert elicitation distribution is consistent with point estimates from growth models.[195](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote195_u0xqr65 \" The growth model point estimates I’ve seen are clustered around expert elicitation distribution’s mean of 2.06%, and they all lie within its 10 - 90th percentile range [0.60%, 3.47%]. \") This is important because I believe these growth models incorporate the most data and theoretical insight and are consequently held in the highest regard.\n* One possible drawback of this choice is that the distribution may overestimate uncertainty about future growth and assign more probability to > 3% than is representative.\n\t+ The 90th percentile of the distribution is higher than any point estimates I’ve seen.[196](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote196_8q3df0k \" Christensen’s paper explicitly compares its expert elicitation distribution with the growth model point estimates of the Shared Socioeconomic Pathways (SSPs), a large collection of scenario-based GWP projections constructed for use by the climate-change research community (see an overview). They find that it’s median results are consistent with the median of the SSPs but that the highest SSP projection is closer to the 75th percentile than to the 90th. \")\n\t+ The 10 – 90th percentile range is wider than the equivalent range from econometric methods.\n\t+ This may be because the expert elicitation methodology can incorporate more sources of uncertainty than the other models.\n\n\nThe expert elicitation probability distribution is over GWP *per capita*. To get a distribution over GWP I used the UN’s median population projections (which have been accurate to date).[197](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote197_hsimmdr \" The UN does provide percentile projections, but I found that incorporating its uncertainty about the future population makes little difference to the GWP projections. 
Most of the standard story’s uncertainty about future GWP stems from uncertainty about GWP per capita, not from uncertainty about population.\")\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/image6.png)\n\n\n#### 10.3 Strengths and limitations\n\n\nAdvocates of the *standard story* use a range of statistical techniques and theoretical models to extrapolate GWP, and these methods are able to incorporate wide-ranging relevant data sources. If we were confident that the 21st century would resemble the 20th, these methods would plausibly be adequate for forecasting GWP until 2100.\n\n\nHowever, I do believe that the methodologies of the *standard story* are ill-equipped to estimate the probability of a regime-change leading to explosive growth. This is due to a couple of features:\n\n\n* The papers I’ve seen exclusively use post-1900 data, and often only post-1950 data.[198](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote198_bxf0f04 \" My search was brief and it’s perfectly possible I’ve missed counter-examples, but I would be surprised to hear of a paper using pre-1800 data. \") While reasonable for short-term growth forecasts, this becomes more questionable when you forecast over longer horizons. The post-1900 data is silent on the question of whether 21st century growth will follow a similar pattern to 20th century growth and of what it might look like if it does not.\n* Its models typically foreclose the possibility of explosive growth by assuming that the long-run frontier growth rate is constant. This assumption is supported by the post-1900 data but not, as we shall see, by endogenous growth theory or by data sets that go back further in time. As a result of this assumption, its models do not assess the probability that 21st century GWP growth is super-exponential, a critical question when assessing the plausibility of explosive growth.\n* An important caveat is that expert elicitation does seem well placed to anticipate a regime-change, but experts assign < 10% probability to explosive growth, and probably < 1%. I find this the most compelling evidence against explosive growth from the *standard story*. It is hard to fully assess the strength of this evidence without knowing the reasons for experts’ projections. If they have relied heavily on the other methods I’ve discussed, their projections will suffer from drawbacks discussed in the last two bullet points.\n\n\nThese limitations are not particularly surprising. The methods I’ve surveyed in this section were originally developed for the purposes of making forecasts over a few decades, and we saw above that even the most expert people in this area do not consider themselves to have deep expertise.\n\n\n\n\n---\n\n\n11. Appendix F: Significant probability of explosive growth by 2100 seems robust to modeling serial correlation and discounting early data points\n-------------------------------------------------------------------------------------------------------------------------------------------------\n\n\nThe model in Roodman (2020) assigns 50% probability to explosive growth happening by 2044, 10% by 2033, and 90% by 2063. 
However, there are reasons to think that Roodman’s *model* may predict explosive growth too soon, and its confidence intervals may be too narrow.\n\n\nAn [appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI) discusses two such reasons:\n\n\n* The growth rates in nearby periods are correlated, but Roodman’s model implies that they are independent.\n* Recent data is more relevant to predicting 21st century growth than ancient data points, but Roodman’s model doesn’t take this into account.\n\n\n(Note: there are other reasons to think explosive growth will happen later than Roodman predicts. In particular, population is no longer accumulable, where accumulable means **more output → more people**. This section does *not* adjust Roodman’s model for this objection, but only for the two reasons listed.)\n\n\nHow much would accounting for these two factors change the predictions of Roodman’s model? Would they delay explosive growth by a decade, a century, or even longer? To get a rough sense of the quantitative size of these adjustments, I built a simple model for projecting GWP forward in time. I call it the *growth multiplier model*. (At other places in the report I call it the ‘growth differences’ model.)\n\n\nThe *growth multiplier model* retains some key features of Roodman’s univariate endogenous growth model. In particular, it retains the property of Roodman’s model that leads it to predict sub- or super-exponential growth, depending on the data it is fit to. The justification for these features is the same as that for Roodman’s model: long-run GDP data displays super-exponential growth and endogenous growth models predict such growth.\n\n\nAt the same time, the *growth multiplier model* aims to address some of the drawbacks of Roodman’s model. Most significantly, it incorporates serial correlation between growth at nearby periods into its core. In addition, the user can flexibly specify how much extra weight to give to more recent data points. The model also incorporates randomness in a simple and transparent way. The cost of these advantages is that the model is considerably less theoretically principled than the endogenous growth models.\n\n\nWith my [preferred parameters](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixF), the model assigns a 50% chance of explosive growth by 2093 and a 70% chance by 2200.[199](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote199_4b21eho \" This compares with dates of 2044 and 2050 from Roodman’s model.\") There is still a 10% chance of explosive growth by 2036, but also a 15% chance that explosion never happens[200](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote200_a9e53d5 \" In these cases long-run growth is sub-exponential.\"). While I don’t take these precise numbers seriously at all, I do find the general lesson instructive: when we adjust for serial correlation and the increased relevance of more recent data points we find that i) the median date by which we expect explosion is delayed by several decades, ii) there’s a non-negligible chance that explosive growth will not have occurred within the next century, and iii) there is a non-negligible chance that explosive growth *will* occur by 2050. 
In my sensitivity analysis, I find that these three results are resilient to wide-ranging inputs.\n\n\nThe rest of this section explains how the model works ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixF)), discusses how it represents serial correlation ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixF)), compares its predictions to the other big-picture stories about GWP ([here](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#WhatAreTheModels)), does a sensitivity analysis on how its predictions change for different inputs ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixF)), and discusses its strengths and weaknesses ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixF)).\n\n\nThe code behind the growth multiplier model, Roodman’s model, and [this expert survey](https://www.pnas.org/content/115/21/5409) is [here](https://colab.research.google.com/drive/11oAdADbcd6GCslV0P5ESubqghaQlpyh2?usp=sharing). (If the link doesn’t work, the colab file can be found in [this folder](https://drive.google.com/drive/folders/1dzO1eZ8xSeePOntXOGNhSK5qqsgteHSp).)\n\n\n#### 11.1 How does the *growth multiplier model* work?\n\n\nPut simply, the model asks the question ‘*How will the growth rate change by the time GWP has doubled?*’, and answers it by saying ‘*Let’s look at how it’s changed historically when GWP has doubled, and sample randomly from these historically observed changes*’. Historically, when GWP has doubled the growth rate has increased by about 40% on average, and so the model’s median prediction is that the growth rate will increase by another 40% in the future each time GWP doubles.\n\n\nThe model divides time into periods and assumes that the growth rate within each period is constant. The length of each period is the time for GWP to increase by a factor *r* – this choice is inspired by the properties of Roodman’s univariate model.\n\n\nSo we divide the historical GWP data into periods of this kind and calculate the average growth rate within each period. Then we calculate the *change* in average growth rate between successive periods. Again inspired by Roodman’s univariate model, we measure this change as the *ratio* between successive growth rates: *new\\_growth\\_rate / old\\_growth\\_rate*.[201](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote201_fsa77pk \" This choice is notable: we could instead have measured the change as new_growth_rate - old_growth_rate. Our preferred choice leads the model to predict explosive growth much sooner than under this alternative. The choice is motivated by analogy to Roodman’s fully endogenous growth model: in that model each time output doubles the growth rate increases by a constant factor. See more here.\") Call these ratios *growth multipliers*. The *growth multiplier* of a period tells you how much the average growth rate increases (or decreases) in the following period. 
For example, if 1800-1850 had 2% growth and 1850-1900 had 3% growth, then the growth multiplier for the period 1800-1850 would be *1.5*.\n\n\nHere’s an example with **dummy data**, in which *r = 2*.\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/image3.png)\n\n\nTo extrapolate GWP forward in time, we must calculate the growth rate *g* of the period starting in 2025, and how long that period will last. We do this in three steps:\n\n\n* **Randomly sample a value for the previous period’s growth multiplier.** In this example, *gm* is the growth multiplier of the period finishing in 2025. *gm* is randomly sampled from the list *[2, 2, 1.5, 0.5]*.[202](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote202_hm4xhzg \" One interesting, and I suspect controversial, feature of the model is that each time a growth multiplier is sampled it is added to the list of historically observed growth multipliers. Removing this feature doesn’t materially change the probability of explosion this century. I discuss this feature in this appendix. \") All items on the list need not be equally likely; we can specify a *discount rate* to favor the sampling of more recent growth multipliers. This discount rate crudely models the extra weight given to more recent data points.\n* **Multiply together the growth rate and growth multiplier from the previous period.** In this example, *g* = 1.5 × *gm*.\n* **Calculate the duration of the next period from its growth rate.** In this example, we calculate *YYYY* from *g*.[203](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote203_o0mjl0w \" Using this formula, the calculation is YYYY - 2025 = ln(2) / ln(1 + g/100). \") Notice that we already know the GWP at the end of the next period (in this example $25,600b) as we *defined* periods as the time taken for GWP to increase by a factor of *r*.\n\n\nWe’ve now calculated the growth rate and end date of the next period. We can repeat this process indefinitely to extrapolate GWP for further periods.\n\n\nThe two seemingly arbitrary assumptions of this model – defining each period as the time for GWP to increase by a factor of *r*, and calculating the next growth rate by *multiplying* the previous growth rate by some growth multiplier – are both justified by comparison to Roodman’s univariate model. The former assumption in particular corresponds to a core element of Roodman’s model that drives its prediction of super-exponential growth. I discuss this in greater detail in [this appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI).\n\n\n#### 11.2 How does the *growth multiplier model* represent serial correlation?\n\n\nIn Roodman’s model, the median predicted growth for 2020-40 is higher than the observed growth in 2000-20 for two reasons:\n\n\n1. The model believes, based on historical data, that when GWP increases growth tends to increase.\n2. Growth in 2000-20 was below the model’s median prediction; it treats this as a random and temporary fluctuation, uncorrelated with that of 2020-40; it expects growth to return to the median in 2020-40.\n\n\nIt is Factor 2 that causes the model to go astray, failing to capture the serial correlation between growth in the two periods. 
Factor 2 alone raises the model’s median prediction for 2019 growth to 7.1%.[204](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote204_rd1rhex \" I experimented with artificially removing Factor 2 from Roodman’s model. In particular, I evolved Roodman’s estimated model with one alteration: at each instant in time I halved the instantaneous growth rate that drives the incremental increase of GWP. With the alteration, the median growth rate for 2019 is 3.55% - more in line with the actual average growth of the last 20 years (3.65%). As a result, the median date of explosive growth is 2070, with 10% probability by 2056 and 90% by 2136. These results have an interesting relationship to those from the growth multiplier model when no discount is used - a version I discuss more here. The medians of both are very similar, but the growth multiplier model has wider confidence intervals. These wider confidence intervals are to be expected given that the growth multiplier model i) represents serial correlation between the growth rates at different points in time, and ii) has the feature described in the footnote starting ‘One interesting, and..’. Of these two factors, (i) plays a much more significant role.\")\n\n\nThe *growth multiplier model* addresses this problem by predicting growth increases solely on the basis of Factor 1; Factor 2 has no role. Unlike Roodman’s model, it does not track a ‘median’ growth rate as distinct from the actual growth rate; rather, it interprets the current growth rate (whatever it is) as ‘the new normal’ and predicts future growth by adjusting this ‘new normal’ for increases in GWP (Factor 1).\n\n\nAs a result, the *growth multiplier model* builds in serial correlation between the growth in different periods. If the current growth rate is ‘surprisingly low’ (from the perspective of Roodman’s model) then this will directly affect the next period’s growth rate via the formula *new\\_growth\\_rate = old\\_growth\\_rate × growth\\_multiplier*.[205](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote205_n6ixyb7 \" In this formula, the role of ‘× growth_multiplier’ is to adjust the growth rate for the increase in GWP. The role of old_growth_rate is to link the next period’s growth directly to that of the previous period, encoding serial correlation. A single period of low growth affects all subsequent periods of growth in this way.\") In this formula, the role of ‘*× growth\\_multiplier*’ is to adjust the growth rate for the increase in GWP (Factor 1). The role of *old\\_growth\\_rate* is to link the next period’s growth directly to that of the previous period, encoding serial correlation. A single period of low growth affects all subsequent periods of growth in this way. Further, this effect does not diminish over time, as the growth of period *i + n* is proportional to the growth of period *i* for all *n*.\n\n\nThere are possible models that display degrees of serial correlation intermediate between Roodman’s model and the *growth multiplier model*. I think such models would be more realistic than either extreme, but I have not attempted to construct one. I discuss this possibility more in [this appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI). 
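To make the mechanics of the last two subsections concrete, here is a minimal sketch of the extrapolation loop. It is an illustration only: the 'historical' growth rates are the dummy numbers from the example above (multipliers 2, 2, 1.5, 0.5), the per-period discounting is a simplification of the scheme described in the next subsection, and the feature where sampled multipliers are appended to the historical list is omitted.

```python
import math
import random

# Minimal sketch of the growth multiplier update rule. Dummy data only; this
# is not the calibration used in the report or the linked notebook.
r = 2.0                                    # a period = the time for GWP to grow by a factor r
hist_growth = [0.5, 1.0, 2.0, 3.0, 1.5]    # dummy average growth rates (%/yr) of past periods
multipliers = [b / a for a, b in zip(hist_growth, hist_growth[1:])]  # [2, 2, 1.5, 0.5]
discount = 0.9                             # simplified: weight = discount per period of age
weights = [discount ** (len(multipliers) - 1 - i) for i in range(len(multipliers))]

def first_explosive_year(start_year=2025, start_growth=3.0, threshold=30.0):
    """Extrapolate period by period until annual growth exceeds the threshold."""
    year, growth = start_year, start_growth
    while year < 3000:
        growth *= random.choices(multipliers, weights=weights)[0]  # new = old x multiplier
        if growth > threshold:
            return year                                   # explosive growth starts here
        year += math.log(r) / math.log(1 + growth / 100)  # length of a factor-r period
    return None                                           # growth stayed low; no explosion

random.seed(1)
runs = [first_explosive_year() for _ in range(2000)]
dates = sorted(y for y in runs if y is not None)
print("share of runs with no explosion by 3000:", round(1 - len(dates) / len(runs), 2))
print("median year explosive growth begins (in runs where it does):", round(dates[len(dates) // 2]))
```

Because each new growth rate is the previous one times a sampled multiplier, a single period of unusually low growth propagates to every later period; this is how the sketch, like the model itself, encodes serial correlation.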
So while I regard Roodman’s predictions as overly aggressive, I regard those of the *growth multiplier model* as adjusting too much for serial correlation and in this sense being overly conservative. We should expect some return to the longer-run trend.\n\n\n#### 11.3 What are the model’s predictions for my preferred parameters?\n\n\nThe following table describes the two inputs to the *growth difference* model and my preferred values for them:\n\n\n \n\n\n\n\n| **INPUT** | **MEANING** | **PREFERRED VALUE** | **CONSIDERATIONS THAT INFORMED MY CHOICE** |\n| --- | --- | --- | --- |\n| *r* | *r* controls the lengths of the periods that the model divides GWP into. A smaller value for *r* means we look at how growth has changed over shorter periods of time, and extrapolate smaller changes into the future.\nIts value is fairly arbitrary; the division into discrete periods is done to make the model analytically tractable. My sensitivity analysis suggests the results are not very sensitive to the value of *r* – predicted dates for explosive growth change by < 10 years. | 1.6 | If *r* is too small, the GWP data is too coarse-grained to contain successive data points where GWP only differs by a factor of *r*. For example, GWP increases by a factor of 1.5 between some successive ancient data points.\nIf *r* is too large the assumption that growth is constant within each period is less plausible, and we lose information about how growth changes over shorter periods. For example, if *r > 1.6* we lose the information that growth was slower from 2010-19 than from 2000 to 2010. |\n| *Discount rate* | How much we discount older data points. A discount of *0.9* means that when GWP was half as big we discount observations by a factor of 0.9, when GWP was 1/4 the size the discount is 0.9², when it was 1/8 the size the discount is 0.9³, and so on. | 0.9 | This discount means that, compared to a 2000 observation, the 1940 observation has 73% of the weight, the 1820 observation has 53% of the weight, and the 3000 BCE observation has 23% of the weight. |\n\n\nWith these inputs the model’s percentile estimates of the first year of explosive growth (sustained > 30% growth) are as follows:\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageD.png)\n\n\nThese probabilistic GWP projections can be shown alongside those of Roodman’s model and the *standard story*.\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageH-2.png)\n\n\nSee code producing this plot at the bottom of [this notebook](https://colab.research.google.com/drive/11oAdADbcd6GCslV0P5ESubqghaQlpyh2?usp=sharing). 
(If the link doesn’t work, the colab file can be found in [this folder](https://drive.google.com/drive/folders/1dzO1eZ8xSeePOntXOGNhSK5qqsgteHSp).)\n\n\nI believe the probabilities from the *growth multiplier model* are closer than Roodman’s to what it’s reasonable to believe, from an outside-view perspective, conditional on the basic ideas of the *explosive growth* story being correct.[206](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote206_q1nqi3x \" I consider objections to these ideas in a later section.\")\n\n\nIf we trust the *standard story’s* view that growth will continue at roughly its current level (1 – 3%) over the next decade or so, then we should decrease the probability of explosive growth by 2100 relative to these plots.\n\n\n#### 11.4 Sensitivity analysis: how do the *growth difference* model’s predictions change for different inputs?\n\n\nI investigated how changing both inputs affects the model’s projections. Full details are in this appendix, but I summarize the key takeaways in this section. For reference, Roodman’s percentile predictions about the first year of explosive growth are as follows:\n\n\n\n\n| | |\n| --- | --- |\n| PERCENTILE | EXPLOSIVE GROWTH DATE |\n| 10 | 2034 |\n| 30 | 2039 |\n| 50 | 2043 |\n| 70 | 2050 |\n| 90 | 2065 |\n\n\nWhen I used my preferred inputs, the *growth multiplier model* differs from Roodman’s in two ways:\n\n\n* It models serial correlation. This is implicit in the model’s structure.\n* It places a larger discount on older data points. This is via my choice of *discount rate*.\n\n\nWe’ll now investigate the effect of each factor in turn, including how sensitive these are to the choice of *r*.\n\n\n#### 11.4.1 Serial correlation alone could delay explosive growth by 30-50 years\n\n\nWe can isolate the impact of the first factor by choosing not to discount older data points (*discount rate = 1*). In this case, still using *r = 1.6*, the percentiles of the *growth multiplier model* are as follows:\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageM-1.png)\n\n\nA further [sensitivity analysis](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI) on *r* shows that using different values of *r* between 1.05 and 3 could change the median date by ± 10 years in either direction, change the 10th percentile by ± 5 years in either direction, and change the 90th percentile by ± 100s of years.\n\n\n#### 11.4.2 A reasonable discount can delay explosive growth by 20 years\n\n\nThe following table shows information about different discount rates. It shows how severely each discount downweights older data points, and how many years it delays the median predicted date of explosive growth.\n\n\n\n\n| | | |\n| --- | --- | --- |\n| | **WEIGHT OF OLDER DATA POINTS (A 2000 DATA POINT HAS WEIGHT 100%)** | **DELAY TO MEDIAN DATE OF EXPLOSIVE GROWTH (YEARS)** |\n| **DISCOUNT RATE** | **1940** | **1820** | **3000 BCE** | ***R = 1.6*** | ***R = 2*** |\n| 0.95 | 86% | 74% | 49% | 4 | 1 |\n| 0.9 | 73% | 53% | 23% | 10 | 4 |\n| 0.85 | 61% | 38% | 10% | 21 | 10 |\n| 0.8 | 51% | 26% | 4% | 46 | 19 |\n| 0.75 | 34% | 12% | 0.6% | 89 | 29 |\n| 0.7 | 22% | 5% | 0.1% | 190 | 34 |\n\n\nI consider values of *discount rate* equal or lower than 0.8 to be unreasonable. They place overwhelming importance on the last 50 years of data when forecasting GWP over much longer periods of time than this. For long-range forecasts like in this report, I favor 0.9 or 0.95. 
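As a quick check on the weighting scheme, the weight columns in the table above follow from the definition of the discount rate: an observation made when GWP was half as large gets one factor of the discount, a quarter as large two factors, and so on. The doubling counts in the sketch below (GWP doubling roughly 3 times since 1940, 6 times since 1820, and 14 times since 3000 BCE) are a rough inference from the table, not figures stated in the report.

```python
# Approximate the table's weights as discount ** (number of GWP doublings
# between the observation and 2000). The doubling counts are assumptions
# made for illustration.
doublings_before_2000 = {"1940": 3, "1820": 6, "3000 BCE": 14}

for d in [0.95, 0.9, 0.85, 0.8]:
    weights = {year: f"{d ** n:.0%}" for year, n in doublings_before_2000.items()}
    print(f"discount {d}: {weights}")
```

With these counts the computed weights approximately match the first few rows of the table.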
For reasonable discounts, explosive growth is delayed by up to 20 years.\n\n\nThe effect on the 10th percentile is much smaller (< 10 years), and the effect on the 70th and 90th percentiles is much larger. See [this appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI) for more details.\n\n\nEven with very steep discounts, long-term growth is still super-exponential. The recent data, even when significantly upweighted, don’t show a strong enough trend of slowing GWP growth to overwhelm the longer-term trend of super-exponential growth.\n\n\nSmaller values of *r* are slightly more affected by introducing a discount rate. I believe that this is because with smaller values of *r* the model is fine-grained enough to detect the slowdown of GWP growth in the last ~10 years, and a discount heightens the effect of this slowdown on the predictions. See more details about the interaction between *r* and the *discount rate* in [this appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI).\n\n\n#### 11.5 Strengths and limitations of the *growth multiplier model*\n\n\nThe *growth multiplier model* is really just an adjustment to Roodman’s model. Its key strength is that it addresses limitations of Roodman’s model while keeping the core elements that drive its prediction of super-exponential growth.\n\n\nIts prediction of explosive growth invites many criticisms which I address elsewhere. Beyond these, its key limitation is that its modeling choices, considered in isolation, seem arbitrary and unprincipled. They are only justified via comparison to the increasing returns of endogenous growth models. A further limitation is that its description of the evolution of GWP is both inelegant and in certain ways unrealistic.[207](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote207_efuwu62 \" For example, the growth rate within each period is not really constant. And the growth multiplier (the ratio between the average growth of successive periods) is not confined to being exactly equal to some historically observed value, but in reality can vary continuously.\") Lastly, a somewhat arbitrary choice about the value of *r* must be made, and results are sensitive to this choice, with predicted dates shifting by up to a couple of decades.\n\n\n\n\n---\n\n\n12. Appendix G: How I decide my overall probability of explosive growth by 2100\n-------------------------------------------------------------------------------\n\n\nThe process involves vague concepts and difficult judgement calls; others may not find it useful for deciding their own probabilities. 
I do not intend for the reasoning to be water-tight, but rather a pragmatic guide to forming probabilities.\n\n\nHere are my current tentative probabilities for the annual growth of GWP/capita *g* over the rest of this century:\n\n\n* **Explosive growth,** *g* > 30%**:** There’s a period, lasting > 10 years and beginning before 2100, in which *g* > 30%: **~30%**.\n* **Significant growth increase,** 5% < *g* < 30%**:** There’s no explosive growth but there’s a period, lasting > 20 years and beginning before 2100, in which *g* > 5%: ~**8%**.\n* **Exponential growth,** 1.5% < *g* < 5%**:** There’s no significant growth increase and average growth stays within its recent range of values: ~**25%**.\n* **Sub-exponential growth**, *g* < 1.5%**:** We never have a significant growth increase, and average annual growth is near the bottom or below its recent range: ~**40%**.[208](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote208_e6y2qbc \" To (roughly) translate the condition for ‘sub-exponential growth’ into a condition for frontier growth, it corresponds in my mind to the annual growth of frontier GDP/capita being below 1%. \")\n\n\nI’ve rounded probabilities to 1 significant figure, or to the nearest 5%, to avoid any pretence at precision. As a result, the probabilities do not add up to 100%.\n\n\nNote, the specific probabilities are not at all robust. On a different day my probability of explosive growth by 2100 might be as low as 15% or as high as 60%. What is robust is that I assign non-negligible probability (>10%) to explosive growth, exponential growth, and sub-exponential growth.\n\n\nThe diagram below summarizes the process I used to determine my probabilities. I use the [toy scenario of ‘AI robots’](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AIRobots)discussed in the main report to help me develop my probabilities. Each AI robot can replace one human worker, and do the work more cheaply than a human worker. I use this scenario because it is concrete and easy to represent in economic models: AI robots allow capital to substitute perfectly for labour in goods production and knowledge production.\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/image5-1.png)\n\n\nThe following sections go through the diagram, explaining my decisions at each node. I recommend readers keep the diagram open in a tab to help them follow the logic. At several points, I feel I’ve been somewhat conservative about the probability of explosive growth; I indicate these as I go.\n\n\n#### 12.1 Will we develop AI robots (or AIs with a similar impact on growth) in time for explosive growth to occur by 2100?\n\n\nI split this into two sub-questions:\n\n\n1. What level of AI is sufficient for explosive growth (assuming AI robots would drive explosive growth)?\n2. Will we develop this level of AI in time for explosive growth to occur by 2100?\n\n\n#### 12.1.1 What level of AI is sufficient for explosive growth (assuming AI robots would drive explosive growth)?\n\n\nWhat’s the lowest level of AI that would be sufficient for explosive growth, assuming AI robots would be sufficient?\n\n\nMy view on this question is mostly informed by studying the growth models that imply AI robots would drive explosive growth. 
I analyze models one by one [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#StandardGrowth), and draw my conclusions [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#WhatLevelOfAI). My (rough) conclusion is that ‘explosive growth would require AI that substantially accelerates the automation of a very wide range of tasks in production, R&D, and the implementation of new technologies.’ This would require very rapid progress in both disembodied AI and in robotics.\n\n\nConsider a ‘virtual worker’ – AI that can do any task a top quality human worker could do working remotely (it could be one AI system, or multiple working together). I believe, for reasons not discussed in this report, that a virtual worker would probably enable us to quickly develop the level of robotics required for explosive growth.\n\n\nI use a ‘virtual worker’ as my extremely rough-and-ready answer to ‘what’s the lowest level of AI that would drive explosive growth?’. Of course, it is possible that a virtual worker wouldn’t be sufficient, and also possible that a lower level of AI *would* be sufficient for explosive growth.\n\n\n#### 12.1.2 Will we develop a ‘virtual worker’ in time for explosive growth to occur by 2100?\n\n\nThere are two sub-questions here.\n\n\n1. By when must we develop a virtual worker for there to be explosive growth by 2100?\n2. How likely are we to develop a virtual worker by this time?\n\n\nI have not investigated the first sub-question in depth. In the growth models I’ve studied for this report, it seems that even in the ‘AI robot’ scenario it could take a few decades for growth to increase to 30%.[209](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote209_qk4wjux \" Even once capital is fully substitutable with labour, it takes time for enough capital to be accumulated to significantly augment the human labour supply. More technically, it takes a while before goods production approximates Y = AK and knowledge production approximates dA/dt = (Aφ)K.\") So I provisionally treat 2080 as the answer to the first sub-question. For reasons not discussed in this report, I believe this is conservative and that developing a virtual worker would drive explosive growth within years rather than decades.\n\n\nThe second sub-question is then ‘How likely are we to develop a virtual worker by 2080?’.\n\n\nMy view on this is informed by evidence external to this report:\n\n\n* [Expert forecasts](https://arxiv.org/abs/1705.08807) about when high-level machine intelligence will be developed.[210](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote210_bchs46z \" High-level machine intelligence is achieved when unaided machines can accomplish every task better and more cheaply than human workers. \")\n\t+ If this was my only source of evidence I would assign ~45% by 2080.[211](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote211_u6xpl8k \"The survey found that answers differed significantly depending on how the question was asked. Some participants were asked about high-level machine intelligence (HLMI): when unaided machines can accomplish every task better and more cheaply than human workers. Others were asked about full automation: when for any occupation, machines could be built to carry out the task better and more cheaply than human workers. 
For HLMI, the probability by 2080 = ~60%, see figure 1 of the paper. For full automation, the probability by 2075 = ~25%, see figure 2 box plot. Roughly extrapolating the rate of increase from this box plot, pr(AGI by 2080) = ~30%. Placing equal weight on HLMI and full automation estimates, we get pr(AGI by 2080) = ~45%.Note: the survey found another significant framing effect - see discussion here. The numbers from the paper aggregate across this framing effect in a complicated way. My understanding is that, roughly speaking, the numbers attempt to give the mean probability AI researchers assign to the milestone being reached by a particular year.The survey also included a third estimate of time of human-level based on the rate of recent progress. It gives similar results to the HLMI estimate - see here.\")\n* A [framework](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP) by my colleague [Ajeya Cotra](https://www.openphilanthropy.org/about/team/ajeya-cotra/) analyzing when the computation required to develop TAI will be affordable.\n\t+ Her high-end estimate assigns ~90% probability by 2080.\n\t+ Her best-guess estimate assigns ~70% probability by 2080.\n\t+ Her low-end estimate assigns ~40% probability by 2080.\n* My [own report](https://www.openphilanthropy.org/research/report-on-semi-informative-priors/) on what prior we should have about when Artificial General Intelligence is developed.[212](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote212_2klw36l \" The report defines AGI as (collection of) computer program(s) that can perform virtually any cognitive task as well as any human, for no more money than it would cost for a human to do it. This is a slightly weaker definition than HLMI, given the restriction to ‘cognitive’ tasks and the phrase ‘virtually any’. It is closer than HLMI to the level of AI that I think would be sufficient for explosive growth. \")\n\t+ My high-end estimate assigns ~30% probability by 2080.\n\t+ My best-guess estimate assigns ~15% probability by 2080.\n\t+ My low-end estimate assigns ~4% probability by 2080.\n\n\nPersonally, I put most weight on Ajeya’s framework (0.7), and roughly similar weight to the other two sources of evidence (~0.15 each). Conditional on Ajeya’s framework, I am closer to her low-end estimate than her best guess, at around 50% probability by 2080.[213](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote213_ypbbrsd \" I’m lower mostly because I assign less weight to ‘short horizon’ paths than Ajeya. Relatedly, I may think that the level of AI necessary to drive explosive growth is higher. E.g. I’m not confident a disembodied AI with human-level analytic and scientific skills would be sufficient; I think we’d also need human-level robotics.\") Overall, I’m currently at around **~45%** that we will develop a virtual worker by 2080.[214](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote214_nese9i1 \" 0.7 × 50% + 0.15 × 45% + 0.15 × 15% = 44%.\")\n\n\nThis explains my reasoning about the top-level node of the diagram. 
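The arithmetic behind that ~45% is just a weighted average over the three sources of evidence. The snippet below is a minimal sketch of the calculation in footnote 214 (my own illustration, not code from any underlying model; the weights, and my probabilities conditional on each source, are the ones stated above):

```python
# Weighted combination of the three sources of evidence on P(virtual worker by 2080).
# The weights and conditional probabilities are those stated in the text; the variable
# names are my own.
evidence = {
    "Cotra's compute-based framework": (0.70, 0.50),
    "Expert surveys (HLMI / full automation)": (0.15, 0.45),
    "Semi-informative priors report": (0.15, 0.15),
}  # (weight on source, P(virtual worker by 2080 | source))

p_virtual_worker_by_2080 = sum(w * p for w, p in evidence.values())
print(f"P(virtual worker by 2080) = {p_virtual_worker_by_2080:.0%}")  # 44%, rounded to ~45% in the text
```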
The next section looks at the nodes on the left hand side of the diagram, assuming we do develop a ‘virtual worker’; the section after looks at the right hand side of the diagram.\n\n\n#### 12.2 Assuming we *do* develop AI with a similar impact on growth to AI robots (left fork)\n\n\n#### 12.2.1 Would AI robots drive explosive growth, absent any unintended bottlenecks?\n\n\nAnother way to understand this question is: Do AI robots have a strong *tendency* to drive explosive growth?\n\n\nMy opinion here is influenced by the history of economic growth and the choice between different growth models:\n\n\n* There are broadly speaking two classes of theories: accumulation models and idea-based models. In accumulation models, the ultimate source of growth in GDP/capita is the accumulation of physical or human capital. In idea-based models, the ultimate source of growth is targeted R&D leading to technological progress.\n* Idea-based models imply that AI robots would lead to explosive growth, when you use realistic parameter values.\n\t+ These models have increasing returns to inputs as a central feature, but do not predict super-exponential growth as labour is not accumulable. With AI robots there are increasing returns to *accumulable* inputs which can drive super-exponential growth.\n\t+ I analyze many of the idea-based models in [Appendix C](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixC),[215](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote215_woftwgu \" All the long-run explosive growth models in this section are idea-based, as are all the endogenous models. \") subbing in the AI robot scenario. I find that the increasing returns to accumulable inputs drive super-exponential growth when you use realistic parameter values.[216](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote216_bgbt7ki \" The relevant parameter values describe the diminishing returns to R&D and the importance of fixed factors of production like land.\")\n\t+ Idea-based models offer a simple and plausible account of very long-run growth, according to which increasing returns to accumulable inputs have caused growth to increase over time.\n\t\t- They are compatible with the importance of one-off structural transitions occurring around the industrial revolution.\n\t+ [Appendix B](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixB) argues that some idea-based theories (semi-endogenous growth models) offer the best explanation of the recent period of exponential growth.\n* For accumulation-based models, the link between AI and growth is less clear but it’s still plausible that AI robots would drive explosive growth conditional on these models.\n\t+ Many of these models imply that the AI robot scenario would lead to explosive growth.\n\t\t- For example, the learning by doing model of Arrow (1962) ([more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#Arrow1962)) or the human capital accumulation model of Lucas (1988) ([more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#HumanCapital)).[217](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote217_8i0oiux \" For example, this happens whenever there’s constant returns to labour and capital in combination, and some other source of productivity growth. 
\")\n\t\t- It’s possible to dismiss this prediction as an unintended artifact of the model, as the primary mechanism generating sustained growth in these models (capital accumulation) has no strong intuitive link with AI. This is in contrast to idea-based models, where there is an obvious intuitive way in which human-level AI would speed up technological progress.\n\t+ Some accumulation theories don’t imply that the AI robot scenario would cause explosive growth.\n\t\t- For example, see Frankel (1962) ([more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#Frankel1962)), or simply a CES production function with the elasticity of substitution between labour and capital greater than 1 ([more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#OtherCapitalAccumulation)).\n\t\t- I suggest these models face serious problems.\n\t+ [Appendix B](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixB) argues that accumulation theories require problematic knife-edge conditions for exponential growth.\n\t+ Growth accounting exercises, e.g. [Fernald and Jones (2014)](https://web.stanford.edu/~chadj/FernaldJones2014.pdf), find that TFP growth accounts for the majority of growth rather than the accumulation of physical or human capital. This gives us reason to prefer idea-based models.\n* Overall, I put ~80% weight on idea-based theories.\n* Exogenous growth models can be understood as expressing uncertainty about the ultimate driver of growth. Even in a [conservative exogenous growth model](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#ExogenousGrowthModels), where a fixed factor places diminishing returns on labour and capital in combination, capital substituting for labour in goods production can cause a significant one-time increase in growth (although this may not be sufficient for > 30% annual growth).\n\n\nSo, overall, would AI robots this century drive explosive growth, assuming there are no unanticipated bottlenecks? My starting point is the 80% weight I put on idea-based models, based on their explanation of very long-run growth and the recent period of constant growth. I bump this up to 90% as various exogenous models and accumulation-based models also imply that AI robots would drive explosive growth. Lastly, I cut this back to 80% based on the possibility that we can’t trust the predictions of these models in the new regime where capital can entirely replace human labour.\n\n\nMost of the 20% where AI robots don’t have a tendency to drive explosive growth corresponds to none of our theories being well suited for describing this situation, rather than to any particular alternative model.\n\n\nSo I put **~80%** on AI robots driving explosive growth, absent unanticipated bottlenecks.\n\n\n#### 12.2.2 Will there be unanticipated bottlenecks?\n\n\nI have done very little research on this question. [Above](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#LimitsToHowFast), I briefly listed some possible bottlenecks along with reasons to think none of them are likely to prevent explosive growth. I put **~25%** on a bottleneck of this kind preventing explosive growth.\n\n\nThis means my pr(explosive growth | AI robots this century) = 0.8 × 0.75 = ~60%. If I had chosen this probability directly, rather than decomposing it as above, I’d have picked a higher number, more like 75%. 
So the ‘60%’ may be too low.\n\n\n#### 12.2.3 If there is an unanticipated bottleneck, when will it apply?\n\n\n*This corresponds to the node ‘Does the bottleneck apply before g>5%?’.*\n\n\nSuppose we develop AI that has a strong tendency to drive explosive growth, but it doesn’t due to some bottleneck. How fast is the economy growing when the bottleneck kicks in?\n\n\nLarge countries have grown much faster than 5% before,[218](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote218_ibixomn \" China’s GDP/capita growth has exceeded 5% every year since 1980 (source).\") suggesting the bottleneck probably kicks in when *g* > 5%. In addition, there’s a smaller gap between the current frontier growth (~2%) and 5% than between 5% and 30%.\n\n\nOn the other hand, it’s possible that the unknown bottleneck is *already* slowing down frontier growth, suggesting it would limit growth to below 5%.\n\n\nSomewhat arbitrarily, I assign **80%** to the bottleneck kicking in when *g* > 5%, and **20%** to it kicking in when *g* < 5%.\n\n\n#### 12.2.4 If we develop a ‘virtual worker’ but it has no tendency to drive explosive growth, will growth slow down?\n\n\n*This corresponds to the left-hand node ‘Will growth slow down?’.*\n\n\nMy first pass is to fall back on the scenario where we don’t make impressive advances in AI at all (I discuss this scenario [below](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixG)). This implies ~65% to sub-exponential growth and ~35% to exponential growth.[219](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote219_xdsb8wx \" I assign 35%/55% = ~60% of the weight to the sub-exponential above.\") I give **50%** to each because highly advanced AI might help us to sustain exponential growth even if it has no tendency to produce explosive growth.\n\n\n#### 12.3 Assuming we *don’t* develop AI robots, or AI with similar impacts on growth (right fork)\n\n\n#### 12.3.1 Is there explosive growth anyway?\n\n\nIf we are skeptical of the explanations of why growth increased in the past, and why it has recently grown exponentially, we may be open to growth increasing significantly without this increase being driven by AI. Growth has increased in the past, perhaps it will increase again.\n\n\nEven if we can’t imagine what could cause such an increase, this is not decisive evidence against there being some unknown cause. After all, hypothetical economists in 1600 would have been unlikely to imagine that the events surrounding the industrial revolution would increase growth so significantly. Perhaps we are just as much in the dark as they would have been.\n\n\nFurther, [brain emulation technology](https://en.wikipedia.org/wiki/Mind_uploading) could have similar effects on growth to advanced AI, allowing us to run human minds on a computer and thus making population accumulable. Perhaps radical biotechnology could also boost the stock of human capital and thus the rate of biotechnological progress.\n\n\nI currently assign **2%** to this possibility, though this feels more unstable than my other probabilities. 
It’s low because I put quite a lot of weight in the specific growth theories that imply that super-exponential growth was fueled by super-exponential growth in the human population (or the research population) and so wouldn’t be possible again without advanced AI or some tech that expanded the number or capability of minds in an analogous way; I’m conservatively assigning low probabilities to these other technologies. I think values as high as 5-10% could be reasonable here.\n\n\n#### 12.3.2 If there isn’t explosive growth anyway, does growth slow down?\n\n\n*This corresponds to the right-hand node ‘Will growth slow down?’.*\n\n\nI put ~75% weight in semi-endogenous growth theories, which is my first-pass estimate for the probability of sub-exponential growth in this scenario.\n\n\nYou could try to account for further considerations. Even if semi-endogenous growth theory is correct, *g* could still exceed 1.5% if the fraction of people working in R&D increases fast enough, or if other factors boost growth. On the other hand, even if semi-endogenous growth theory is wrong, growth could slow for some reason other than slowing population growth (e.g. resource limitations). I assume these considerations are a wash.\n\n\nI do make one more adjustment for the effect of AI. Even if we don’t develop AIs with comparable growth effects to AI robots, AI might still increase the pace of economic growth. Aghion et al. (2017) focus on scenarios in which AI automation boosts the exponential growth rate. I assign 10% to this possibility, and so give **65%** to sub-exponential growth in this scenario.\n\n\n\n\n---\n\n\n13. Appendix H: Reviews of the report\n-------------------------------------\n\n\nWe had numerous people with relevant expertise review earlier drafts of the report. Here we link to the reviews of those who give us permission to do so. *Note:* *the report has been updated significantly since some of these reviews were written*.\n\n\n* [Ben Jones](https://docs.google.com/document/d/1jP9Bb6J6BXH5v6EshsPF2NE1GiWatPxUUrK9wDEpTqA/edit) (reviewed final version of the report)\n* [Dietrich Vollrath](https://docs.google.com/document/d/1NScJzPLzLjYRkKJOjwlrPFO8PJ1xXUX81ksP7GwtCEU/edit) (reviewed final version of the report)\n* [Paul Gaggl](https://docs.google.com/document/d/1hCXAWxMFR5jXM89KqiCebomm53i8WzWlsi_qW0r_EoM/edit?usp=sharing) (reviewed final version of the report)\n* [Leopold Aschenbrenner](https://docs.google.com/document/d/157Jadbi3TyyO-DDRDhcZ-NDUf3ATQBfTzX7WabsRoUk/edit#) (reviewed final version of the report)\n* [Ege Erdil](https://drive.google.com/file/d/113c-vMfOeVv31KNIoH05-49kJkdWNigk/view?usp=sharing) (reviewed final version of the report)\n* [Anton Korinek](https://docs.google.com/document/d/14t5zNuaKHmnrnE0cLMSRST3LlZShM_pB35sTt-NbSeQ/edit#heading=h.rq4krnj82zba)\n* [Jakub Growiec](https://docs.google.com/document/d/1qmd46lxbEy62LKdP54jzMu8lHaMwd5f7JPOK1VEy1t8/edit#heading=h.rq4krnj82zba)\n* [Phillip Trammell](https://docs.google.com/document/d/1MFpLJF-uBepH86awgI5sspRuVVu8pzHw2cLGtOD4bWQ/edit#heading=h.rq4krnj82zba)\n* [Ben Garfinkel](https://docs.google.com/document/d/1bPxxrIroD5Ya_9mgnFoE3dj_OGXfKgpuoh1Y6tFuQZo/edit)\n\n\n\n\n---\n\n\n14. 
Technical appendices\n------------------------\n\n\n#### 14.1 Glossary\n\n\n**GDP**\n\n\n* Total stuff produced within a region, with each thing weighted by its price.\n\n\n**GWP**\n\n\n* Total amount of stuff produced in the whole world, with each thing weighted by its price.\n\n\n**GDP per capita**\n\n\n* GDP of a region divided by the region’s total population.\n* So GWP/capita is GWP divided by the world population.\n\n\n**Frontier GDP**\n\n\n* GDP of developed countries on the frontier of technological development. These countries have the highest levels of technology and largest GDP/capita.\n* Often operationalized as OECD countries, or just the USA.\n\n\n**Physical capital**\n\n\n* Machinery, computers, buildings, intellectual property, branding – any durable asset that helps you produce output.\n* I often refer to this as merely ‘capital’.\n* Doesn’t include land or natural resources.\n\n\n**Human capital**\n\n\n* Human skills, knowledge and experience, viewed in terms of its tendency to make workers more productive.\n\n\n**Total factor productivity (TFP) growth**\n\n\n* Increase in output that can’t be explained by increases in inputs like labor and capital.\n* If TFP doubles, but all inputs remain the same, output doubles.\n* TFP increases correspond to better ways of combining inputs to produce output, including technological progress, improvements in workflows, and any other unmeasured effects.\n* In the report I often don’t distinguish between TFP growth and technological progress.\n\n\n**Exponential growth**\n\n\n* Example 1: the number of cells doubling every hour.\n* Example 2: the number of people infected by Covid doubling every month.\n* Example 3: GWP doubling every 20 years (as it does in some projections).\n* Definition 1: when ‘doubling time’ stays constant.\n* Definition 2: when a quantity increases by a constant fraction each time period.\n* *yt+1* = *yt*(1 + *g*), where *g* is the constant growth rate.\n\t+ US GDP / capita has grown exponentially with *g* = 1.8% for the last 150 years. The doubling time is ~40 years.\n\n\n**Super-exponential growth**\n\n\n* When the growth rate of a quantity increases without bound (e.g. 1% one year, 2% the next year, 3% the next year…).\n* One example would be *yt+1* = *yt*(1 + *kyt*).\n* The time taken for the quantity to double falls over time.\n* Examples:\n\t+ In ancient times it took 1000s of years for GWP to double, but today GWP doubles much faster. GWP doubled between 2000 and 2019.\n\t+ Some solutions to endogenous growth models imply GWP will increase super-exponentially.\n\t+ When univariate endogenous growth models are fit to historical GWP data from 10,000 BCE, they typically imply growth is super-exponential and that GWP will go to infinity in finite time.\n\n\n**Sub-exponential growth**\n\n\n* When the growth rate of a quantity *decreases* over time (e.g. 1% one year, 0.5% the next year, 0.2% the next year…).\n* One example would be *yt+1* = *yt*(1 + *k*/*yt*).\n* Another example is simply linear growth *yt+1* = *yt* + *k*.\n* The time taken for the quantity to double increases over time.\n* Examples:\n\t+ The world’s population has doubled since 1973, but UN projections imply it will not double again this century.\n\t+ Some solutions to endogenous growth models imply GWP will increase sub-exponentially. 
In these models growth ultimately plateaus.\n\t+ When univariate endogenous growth models are fit to historical GWP data from 1950, they typically imply growth is sub-exponential and that GWP will plateau.\n\n\n**Constant returns to scale**\n\n\n* If the inputs to production all double, the output doubles.\n* For example, suppose output is created by labor and capital. Mathematically, we write this as *Y* = *F*(*L*, *K*). Constant returns to scale means that *F*(2*L*, 2*K*) = 2*Y*.\n\n\n**Increasing returns to scale**\n\n\n* If the inputs to production all double, the output *more than* doubles.\n* For example, suppose output is created by labor, capital and technology. Mathematically, we write this as *Y* = *F*(*L*, *K*, *A*). Increasing returns to scale means that *F*(2*L*, 2*K*, 2*A*) > 2*Y*.\n\n\n**Exogenous growth model**\n\n\n* Growth model where the ultimate driver of growth lies outside of the model.\n* E.g. in the [Solow-Swan model](https://en.wikipedia.org/wiki/Solow%E2%80%93Swan_model) growth is ultimately driven by the growth of inputs that are assumed to grow exponentially. The growth of these inputs is the ultimate source of growth, but it isn’t explained by the model.\n* Technological progress is not explained by exogenous growth models.\n\n\n**Endogenous growth model**\n\n\n* Growth model that explains the ultimate driver of growth.\n* E.g. Jones (2001) describes dynamics governing the increase in population and of technology, and the growth of these inputs is the ultimate source of growth.\n* Typically endogenous growth models explain the growth of technology.\n\n\n#### 14.1.1 Classifications of growth models\n\n\nI introduce some of my own terminology to describe different types of growth models.\n\n\n**Long-run explosive models** predict explosive growth by extrapolating the super-exponential trend in very long-run growth. I argue they should only be trusted if population is accumulable (in the sense that **more output → more people**).\n\n\n**Idea-based models** explain very long-run super-exponential growth by increasing returns to accumulable inputs, including non-rival technology. They include *long-run explosive models* and models that have a demographic transition dynamic such as [Jones (2001)](https://web.stanford.edu/~chadj/bc400.pdf) and [Galor and Weil (2000)](https://www.researchgate.net/publication/4733968_Population_Technology_and_Growth_From_Malthusian_Stagnation_to_the_Demographic_Transition_and_Beyond).\n\n\n**Step-change models.** These models of very long-run growth emphasize a structural transition occurring around the industrial revolution that increases growth. They stand in contrast to models, like long-run explosive models, that emphasize the increasing return mechanism and predict growth to increase more smoothly over hundreds and thousands of years.\n\n\n**Explosive growth models** predict that perfect substitution between labor and capital would lead to explosive growth.\n\n\n#### 14.2 Models of very long-run growth that involve increasing returns\n\n\nThe purpose of the literature on very long run growth is to understand both the long period of slow growth before the industrial revolution and the subsequent take-off from stagnation and increase in growth.\n\n\nI focus on two models on very long-run growth – [Jones (2001)](https://web.stanford.edu/~chadj/bc400.pdf) and [Galor and Weil (2000)](https://www.researchgate.net/publication/4733968_Population_Technology_and_Growth_From_Malthusian_Stagnation_to_the_Demographic_Transition_and_Beyond). 
They are both characterized by increasing returns to accumulable inputs until a demographic transition occurs. Both these models predict super-exponential growth before the demographic transition, and exponential growth after it.\n\n\nFor both models I:\n\n\n* Discuss the mechanisms by which they initially produce super-exponential growth, comparing them to the mechanisms of long-run explosive models.\n* Explain how these models later produce exponential growth.\n* Analyze the mechanisms by which these models preclude explosive growth, and suggest that highly substitutable AI could prevent these mechanisms from applying.\n\n\n#### 14.2.1 Jones (2001)\n\n\n#### 14.2.1.1 The model\n\n\nThere are two accumulable factors in this model: technology *A* and labor *L*. There is also a fixed supply of land, *T*. They are combined to create output in the following equation:\n\n\n\\( Y=A^σ{L\\_Y}^βT^{1−β} \\)\nwhere *LY* is the amount of labor spent on producing output (people choose to divide their time between three activities: producing output, doing research, and having children).\n\n\nImprovements in technology are determined by:\n\n\n\\( \\dot A=δA^ϕ{L\\_A}^λ \\)\nwhere *LA* is the amount of labor spent on doing research, and δ > 0 is a constant. φ describes whether the productivity of research increases (φ > 0) or decreases (φ < 0) with the level of technology; Jones assumes φ < 1. λ allows for diminishing returns to additional researchers: 0 < λ < 1. In equilibrium, the growth rate of *A* is proportional to the growth rate of *L*:\n\n\n\\( g\\_A=constant×g\\_L \\)\nIncreases in *L* depend on income per capita, via its effects on the death rate and the birth rate. For a very low level of income per capita, *gL* = 0. As income rises above this level, *gL* increases, mostly because the death rate decreases; as income rises further *gL* starts decreasing again as the demographic transition reduces the birth rate. So *gL* as a function of income per capita is an upside-down U.\n\n\nThe general pattern of growth is then as follows:\n\n\n* Initially per capita incomes are just high enough for the population to increase very slowly. The rate of technological innovation is very slow at this stage.\n* Eventually, the population increases to a stage where technological innovation is happening somewhat quickly. There is then a powerful positive feedback loop: faster technological progress → larger per capita income → larger population → even faster technological progress →…\n* This feedback loop leads to fast growth of population, technology, and of per capita income.\n* Once per capita income is high enough, the demographic transition sets in, reducing population growth. This stabilizes the growth of technology and per capita incomes, and there is steady exponential growth.\n\n\n#### 14.2.1.2 Generating super-exponential growth\n\n\nJones places a restriction on λ, φ, β, and σ so that the model is characterized by *increasing returns* to accumulable factors (see [p. 9](https://web.stanford.edu/~chadj/bc400.pdf)). For example, suppose that φ + λ = 1, so that there are constant returns in the technology production function to accumulable factors. Then Jones’ restriction simplifies to σ + β > 1 – *increasing* returns in production to the accumulable factors *A* and *L*.\n\n\nThese increasing returns allow the model to generate super-exponential growth. In this sense, the model’s mechanism for generating super-exponential growth is very similar to that of Roodman’s model. 
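To make this shared mechanism concrete, here is a minimal simulation sketch of the two equations above. The parameter values are illustrative choices of my own (they satisfy the increasing-returns restriction), not Jones’s calibration, and population growth is given a toy rule in which it rises with per-capita income; the point is only that, with increasing returns to the accumulable factors, the growth rates of technology, population, and output all rise over time.

```python
# Minimal sketch of the increasing-returns mechanism in a Jones-style model.
# Parameter values are illustrative (not Jones's calibration); the population-growth
# rule is a toy stand-in for the income-dependent dynamics described above.
sigma, beta, phi, lam = 1.0, 0.6, 0.5, 0.7   # note sigma + beta > 1: increasing returns
delta, s_L, T = 0.1, 0.02, 1.0               # research productivity, population response, fixed land

A, L = 1.0, 1.0                              # initial technology and labour
for t in range(500):
    L_A, L_Y = 0.1 * L, 0.9 * L              # fixed split of labour between research and production
    Y = A**sigma * L_Y**beta * T**(1 - beta) # goods production
    g_A = delta * A**(phi - 1) * L_A**lam    # growth rate of technology (A_dot / A)
    g_L = s_L * Y / L                        # toy rule: population growth rises with income per capita
    g_Y = sigma * g_A + beta * g_L           # implied growth rate of output
    if t % 20 == 0:
        print(f"t={t:3d}  g_A={g_A:.3f}  g_L={g_L:.3f}  g_Y={g_Y:.3f}")
    if g_Y > 0.30:
        print(f"growth passes 30% per period at t={t}")
        break
    A *= 1 + g_A
    L *= 1 + g_L
```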
Both models produce super-exponential growth via increasing returns to accumulable factors.\n\n\nHowever, the details of exactly how labor is accumulated differs between Jones’ and Roodman’s models. In Roodman’s model, a constant fraction of output is reinvested to increase the labor supply:[220](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote220_apr8prm \" For clarity, I am simplifying his model somewhat by assuming that technology doesn’t mediate the reinvestment.\")\n\n\n\\( \\dot L=sY \\)\nThis implies that the growth rate of labor is proportional to per capita income.\n\n\n\\( g\\_L≡ \\frac {\\dot L}{L}= \\frac {sY}{L} \\)\nBy contrast, in Jones’ model labor accumulation is more complex. *gL* is the birth rate minus the death rate. The death rate falls with per capita income. The birth rate initially rises with per capita income because people can achieve subsistence with less work and so have more time to raise children. These combined effects mean that *gL* initially increases with per capita income.\n\n\nAlthough Jones does not have direct proportionality between *gL* and per capita income, the initial behavior is similar to that of Roodman’s model. In both cases, higher per capita income drives a higher *gL* which drives increases to *gA* and *gY*. The following super-exponential feedback loop is present in both models:\n\n\nHigher per capita income → higher *gL* → higher *gA* and *gY* → even higher per capita income…\n\n\n#### 14.2.1.3 Generating exponential growth\n\n\nIn Roodman’s model, the above dynamic continues without limit, and growth becomes ever faster. In Jones’s model, by contrast, *gL* only increases up to a point (see Figure 2 on p. 12). As per capita income increases further, the birth rate *falls* because wages are high enough that people choose to work over having children. Moreover, the birth rate falls faster than the death rate, which by now is fairly low. This means that *gL* *decreases*. This fall in *gL* is the model’s way of representing the demographic transition.\n\n\nAs a result of the fall in *gL*, *gA* and *gY* also fall. The model exogenously specifies a minimum for *gL* (via specifying a minimum death rate and birth rate), which determines the long-run values of *gA* and *gY*. The ultimate source of the exponential growth is this exogenous assumption that in the long-run *gL* tends to a constant.\n\n\n#### 14.2.1.4 Precluding explosive growth\n\n\nExplosive growth in this system requires *gL* to keep rising until it pushes up *gA* and *gY* to the point where *gL* > 30%. The model has two mechanisms that prevent this from happening.\n\n\nThe first we’ve already seen. *gL* only increases with per capita income up to a point; beyond this point *gL* falls. This represents the demographic transition.\n\n\nHowever, even without this mechanism, there is a limit to how long super-exponential growth could proceed in this model. This limit is the maximum number of children a person can have. People have a finite supply of time, and in the model they must use a fixed amount of time on each of their children. This limits birth rate, and so places another (higher) cap on *gL*.\n\n\nIt seems unlikely that either of these limits would apply if AI systems were developed that were perfectly substitutable with human labor. In this case, we could increase the effective size of the labor force by creating more AI systems. 
Roodman’s equation for the increase in the labor supply (*L̇* = *sY*), in which the increase in the stock of generalized labor (generalized labor = human labor + AI labor) is proportional to output, then seems more reasonable. For this is the reinvestment equation commonly used for capital, and AI systems would be a form of capital.\n\n\n#### 14.2.1.5 Institutions\n\n\nJones gets a better fit to the long-run historical data on GDP/capita and population when he models *institutions* that encourage innovation, like property rights and patents. He crudely represents these with a parameter π that controls the proportion of income paid to researchers. π influences how much effort is made to improve technology. He finds that adding shocks that boost π allows the model to better imitate the sudden rise in living standards around the time of the industrial revolution. Indeed, Roodman’s model is surprised at the speed of growth at this time.\n\n\n#### 14.2.2 Galor and Weil (2000)\n\n\nThe general model here is very similar to Jones (2001) in several respects.\n\n\n* It has accumulable factors technology and labor, and land as a fixed factor.\n* Improvements in technology are caused by people having new ideas. As a consequence, larger populations lead to faster technological progress.\n* Increases in labor are determined by people’s decisions about how to split their time between work and having children.[221](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote221_uhef9e9 \" Galor and Weil (2000) model differs from Jones (2001) in some subtle ways. Firstly, for Jones gL depends on the birth rate and the death rate, both of which are affected by per capita income. But in Galor’s model, the death rate is fixed, so you can focus solely on the birth rate. Secondly, Galor distinguishes between the size of the labor force and its human capital. The level of human capital depends on the time parents spend educating their children. Thirdly, Galor’s equation for technological progress implies that a constant population can produce exponential increases in technology indefinitely. By contrast, Jones’ equation implies the population must be growing exponentially to sustain exponential growth of technology. \")\n\n\nOne significant addition is that parents can invest in educating their children. The more education, the more human capital per person and the faster the pace of technological progress. The effect of education on the pace of technological progress exhibits diminishing returns.\n\n\nThe general pattern of growth is as follows:\n\n\n* Initially per capita income is low. People must spend most of their time working to achieve subsistence income, so have few children. The supply of labor, level of technology, and per capita income all grow slowly.\n* As per capita income rises, parents can achieve subsistence with less time working, and spend more time having children. The population rises more quickly. This leads to faster technological growth, which in turn leads to faster growth in per capita income.\n* There is a positive feedback loop: higher per capita income → more people → faster technological progress → even higher per capita income →…\n* Once technological progress is fast enough, parents are incentivized to have *fewer* children. 
This is because they’re instead incentivized to invest time in their children’s education.\n\t+ This causes growth to increase more quickly for a while with the following feedback loop: faster technological progress → better educated people → faster technological progress →…\n\t+ Eventually this leads population growth to decline: the demographic transition. When population growth declines to 0, the amount of human capital and rate of technological progress are also constant.[222](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote222_78m88xd \" There is an alternative, and in some ways more plausible, version of the model where in equilibrium both the population and technological level grow exponentially. See Footnote 23. I’m not sure if the demographic transition - the falling of population growth - happens in this version. \")\n\n\n#### 14.2.2.1 Generating super-exponential growth\n\n\nGalor and Weil (2000) generates super-exponential growth in an analogous fashion to Jones (2001). As in Jones (2001), the equations by which the factors are accumulated are characterized by increasing returns. Once both endogenous inputs (technology and population) have doubled, the growth rates of both these inputs increase.\n\n\nThe super-exponential feedback loop is roughly as follows:\n\n\nHigher per capita income → higher *L* → higher *gA* and *gY* → even higher per capita income…\n\n\nAround the industrial revolution, educational investment raises human capital per person and leads to faster increases in growth:\n\n\nHigher *gA* → more educational investment → higher human capital per person → higher *gA*\n\n\n#### 14.2.2.2 Generating exponential growth\n\n\nWe’ve touched upon the mechanism that generates constant exponential growth. There is a negative feedback loop that returns the growth rate of technology to a fixed point. In brief, the feedback loop is:\n\n\nFaster growth → smaller population → lower growth\n\n\nSlower growth → larger population → faster growth\n\n\nWhy the link from growth to population? Parents have to decide whether to spend time on having more children or on educating them; between having fewer better-educated children versus more worse-educated children. They make this choice to maximize the total income of their children (but not their children’s children). A higher growth rate of technology increases the value of education in the market, and so shifts incentives towards fewer better-educated children. Fewer children then reduces the rate of technological growth. The same negative feedback loop happens in reverse if technological growth is too low.\n\n\nFaster growth → incentive to have fewer children → population falls → slower growth\n\n\nSlower growth → incentive to have more children → population rises → faster growth\n\n\nIn equilibrium we have:\n\n\nEquilibrium growth → fixed incentives to have children → population constant → constant growth\n\n\nThis negative feedback loop returns the growth rate of technology to a fixed point and then keeps it there.[223](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote223_oxbz38e \"The dynamic is slightly different in the version of the model where in equilibrium both the population and technological level grow exponentially (see previous footnote). 
In this alternate version, the negative feedback loop is:Faster growth → incentive to have fewer children → population growth falls → slower growthSlower growth → incentive to have more children → population growth rises → faster growth\")\n\n\n#### 14.2.2.3 Precluding explosive growth\n\n\nThe same negative feedback loop as discussed in the last section explains why super-exponential growth is avoided. If growth ever became too high, the population would decrease until growth settled back down again.\n\n\nAs with Jones (2001), it doesn’t seem like this mechanism would apply if AI systems were developed that were perfectly substitutable with human labor. In this case, we could increase the effective size of the labor force by creating more AI systems, and then increases in labor wouldn’t be limited by the finite amount of time that parents’ can give to childbearing.\n\n\nAgain, Roodman’s equation for the increase in the labor supply (*L̇* = *sY*), in which the increase in the stock of generalized labor (= human labor + AI labor) is proportional to output, seems more reasonable in this hypothetical.\n\n\n#### 14.3 Graphs showing frontier GDP growth\n\n\n#### 14.3.1 Summary\n\n\nThere isn’t good quality long-run data on the economic frontier because the frontier changes over time, and old data points are highly uncertain. Here I eyeball data for the USA, England, and France.\n\n\nThe data looks as if growth is super-exponential if you look at data going back to 1700 or earlier. However, when you remove data before 1900 the trend looks roughly exponential.\n\n\n#### 14.3.2 Graphs of super-exponential growth in frontier GDP/capita\n\n\n#### 14.3.2.1 United States ([source](https://ourworldindata.org/grapher/maddison-data-gdp-per-capita-in-2011us?tab=chart&yScale=log&time=earliest..2016&country=~USA))\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageQ-1.png)\n\n\n#### 14.3.2.2 England ([source](https://ourworldindata.org/grapher/total-gdp-in-the-uk-since-1270?yScale=log))\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/image14.png)\n\n\n \n\n\n#### 14.3.2.3 France[224](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote224_nost9fx \" The French data series is from Roodman (2020). See Table 2. As he explains, the first two data points - in 10,000 BCE and 5,000 BCE - are taken from Maddison’s GWP/capita data series rather than being specific to France.\")\n\n\n \n\n\n[![EconomicGrowthV.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageV.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageV.png)\n \n\n\nThe next two sections analyze the US and English data in a little more detail.\n\n\n#### 14.3.3 US per capita GDP growth\n\n\nUS per capita growth from 1650 looks super-exponential ([source](https://ourworldindata.org/economic-growth)). Constant exponential growth would look like a straight line as the y-axis is log.\n\n\n[![EconomicGrowthQ.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageQ.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageQ.png)\n \n\n\nEven from 1800, there is a trend of super-exponential growth. The red line shows the average growth rate from 1800 – 1890; the blue line shows the average growth rate since then. 
It looks like growth sped up throughout the 19th century and then maintained at a constant rate.\n\n\n \n\n\n[![EconomicGrowth12.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/image12.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/image12.png)\n \n\n\nHowever, if you restrict the data set to data from 1870 it looks exponential. You can see the slowing growth from 2000.\n\n\n \n\n\n[![EconomicGrowthL.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageL.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageL.png)\n \n\n\nThis pattern of slowing growth since 2000 is confirmed by the data in Vollrath’s recent book, [Fully Grown](https://www.amazon.com/Fully-Grown-Stagnant-Economy-Success/dp/022666600X), and by the following data from the world bank ([source](https://data.worldbank.org/indicator/NY.GDP.MKTP.KD.ZG?contextual=default&locations=US)).\n\n\n[![EconomicGrowthE.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageE.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageE.png)\n \n\n\n#### 14.3.4 England total GDP growth – sensitivity analysis\n\n\nLong-run English GDP since 1300 looks super-exponential ([source](https://ourworldindata.org/grapher/total-gdp-in-the-uk-since-1270?yScale=log)):\n\n\n[![EconomicGrowth14.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/image14.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/image14.png)\n \n\n\nIt’s still super-exponential if you exclude data before 1700:\n\n\n[![EconomicGrowth1.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/image1.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/image1.png)\n \n\n\nHowever, if you restrict the data to post-1800, the super-exponential trend disappears. The trend is well approximated by exponential growth – see the red line. Notice, though, that the average growth rate after WW1 (blue line) is faster than that before it (red line):\n\n\n[![EconomicGrowthC.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageC.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageC.png)\n \n\n\n#### 14.4 Graph of GWP per capita\n\n\nData is from Roodman (2020) – see Table 2.\n\n\n![](https://www.openphilanthropy.org/wp-content/uploads/imageN-1-300x200.png)\n\n\n#### 14.5 Graphs of population growth\n\n\n#### 14.5.1 Frontier population\n\n\nUS and UK data shows a slight slowing of population growth.[225](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote225_jzcd9i8 \" Source: Maddison Project 2018 population data. 
To download, click here.\") This may be offset by more countries joining the economic frontier.\n\n\nUS and UK separately:\n\n\n[![EconomicGrowth13.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/image13.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/image13.png)\n \n\n\nUS and UK combined:\n\n\n \n\n\n[![EconomicGrowthP.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageP.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageP.png)\n \n\n\n#### 14.6 Uncertainty about global population vs uncertainty about per capita GWP\n\n\nThis section argues that, for the standard story, uncertainty about GWP/capita is a much bigger source of uncertainty about 2100 GWP than uncertainty about population.\n\n\nThe following plot shows the GWP projections for various assumptions about GWP per capita growth and future population. In particular, I compare projections for the 10th and 90th percentiles of GWP per capita growth and the 5th and 95th percentiles of population. Uncertainty about population affects the GWP projections much less than uncertainty about per capita GWP growth.\n\n\n \n\n\n[![EconomicGrowth2.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/image2.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/image2.png)\n \n\n\n#### 14.7 Endogenous and exogenous growth models\n\n\n#### 14.7.1 What’s the difference between endogenous and exogenous growth models?\n\n\nIn brief, exogenous growth models stipulate the rate at which technology *A* changes over time and the growth of technology is the ultimate source of growth in these models. By contrast, endogenous growth models explain the ultimate source of growth, often by explaining the increase of technology.\n\n\nI’ll illustrate this difference between endogenous and exogenous growth models by comparing the standard exogenous [Solow-Swan model](https://en.wikipedia.org/wiki/Solow%E2%80%93Swan_model) with the endogenous ‘learning by doing’ model of [Frankel (1962)](https://www.jstor.org/stable/1812179?seq=1).\n\n\nBoth models use the same Cobb-Douglas production function (though with different parameter values):\n\n\n\\( Y=A^σK^αL^{1−α} \\, (0) \\)\nwhere:\n\n\n* *Y* is the total output.\n* *A* is technology.\n* *L* is the labor input.\n* *K* is the capital input.\n\n\nBoth models treat *K* as endogenous: production *Y* is invested into increasing *K*:\n\n\n\\( \\dot K=s\\_KY−δ\\_KK \\, (1) \\)\nwhere *sK* jointly represents both the proportion of *Y* that is invested into capital and how much capital that amount of investment is able to produce, and *δK* is the rate at which capital loses value due to [depreciation](https://en.wikipedia.org/wiki/Depreciation). There is a feedback loop between *Y* and *K* where *Y* is invested to increase *K* (equation 1), which in turn increases *Y* (equation 0), which further increases investment in *K*, and so on.\n\n\nLet’s consider the two growth models in turn.\n\n\nIn the exogenous [Solow-Swan model](https://en.wikipedia.org/wiki/Solow%E2%80%93Swan_model), σ = 1 – α so the production function is:\n\n\n\\( Y=K^α(AL)^{1−α} \\)\nBoth *A* and *L* are assumed to grow at a constant exponential rate:\n\n\n\\( L=L\\_0e^{nt} \\)\n\\( A=A\\_0e^{gt} \\)\nThe feedback loop between *Y* and the endogenous factors – in this case just *K* – fizzles out due to the diminishing returns to *Y* from increases in the endogenous factors. 
In this model, these diminishing returns correspond mathematically to α < 1.[226](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote226_ias857p \" We can understand why the feedback loop peters out by looking at equation (1). When K increases, s × Y increases due to Y’s dependence on K, but δ × K also increases. The latter increases by more because α <1. Eventually K is big enough that s × Y - δ × K = 0. At this point, investment of Y exactly offsets depreciation and K remains at its current value.\")\n\n\nAs a result, the long-run growth rate of *Y* is *n + g* and the long-run growth rate of per capita income *Y / L* is *g*. Long-run growth is constant because *n* and *g* are constant. The constancy of the long-run growth rate is not explained by exogenous growth models, but is rather assumed via their stipulations about *A* and *L*.\n\n\n*Endogenous* growth models allow the rate of technological progress to be determined within the model, e.g. by investment in R&D. There are many endogenous growth models, but I’ll use just one example to demonstrate.\n\n\nIn the endogenous growth model of [Frankel (1962)](https://www.jstor.org/stable/1812179?seq=1)[227](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote227_ztixpp4 \" See also Section 2 of chapter 2 of The Economics of Growth. Here I describe the model for the special case when technology doesn’t depend on labor - this corresponds to ε’ = 0 in this presentation.\"), σ = 1 and so the production function is:\n\n\n\\( Y=AK^αL^{1−α} \\)\nCrucially, technological progress is endogenous and happens through a process of ‘learning by doing’. As capital is accumulated, technology improves as a by-product. The current level of technology *A* depends on the total amount of capital *K* that has been accumulated:\n\n\n\\( A=A\\_0K^η \\)\nTechnological progress happens (indirectly) as a result of the investment of output, rather than being exogenous to the model. The constant η controls the marginal returns to technology from accumulation of capital. If η > 1, each successive increment to *K* increases *A* by a larger and larger amount. If η < 1, there are diminishing returns. Subbing in the expression for *A* into the production function, we get:\n\n\n\\( Y=A\\_0K^{α+η}L^{1−α} \\)\nLabor is treated as exogenous. It turns out that in this model, in contrast to the Solow-Swan model, the long-run growth rate can depend on the rate of investment in capital *sK*.[228](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote228_14dobbw \" Note, however, this only happens in the knife-edge case when α + η = 1. If α + η <1, the long-run growth rate depends on the growth of L; if α + η > 1, output goes to infinity in finite time regardless if investment is larger than some threshold. \")\n\n\n#### 14.7.1.1 The conditions for super-exponential growth in Frankel’s simple endogenous growth model\n\n\nWe saw that in Frankel’s model the production function is\n\n\n\\( Y=A\\_0K^{α+η}L^{1−α} \\, (3) \\)\nwhere α < 1 gives the diminishing marginal returns of *K* to output, and η controls the marginal returns of *K* to technology, *A* = *A0Kη*. *K* can increase due to the investment of output:\n\n\n\\( \\dot K={s\\_K}Y−{δ\\_K}K \\, (4) \\)\nIt turns out that *Y* and *K* grow super-exponentially if α + η > 1. 
To understand why, substitute (3) into (4) and then divide both sides of (4) by *K* to get *K*’s growth rate, *K̇*/*K*:\n\n\n\\( \\dot K/K=A\\_0K^{α+η−1}L^{1−α}−δ\\_K \\)\n\\( \\dot K/K=A’K^{α+η−1}−δ\\_K \\, (5) \\)\nHere I have defined \\( A’=A\\_0L^{1−α} \\) to make the dependence on *K* clearer.\n\n\nFrom (5), we see that if α + η > 1, *K*’s growth rate increases whenever *K* increases. This is what is meant by super-exponential growth. Intuitively, we have a strong feedback loop between *K* and *Y*. *Y* is invested to increase *K*, which in turn increases *Y*, which in turn increases investment into *K*, and so on. This feedback loop doesn’t peter out but gets more and more powerful as there are *increasing* returns to *Y* from increments in *K*. This is due to *K*’s dual role as a direct input to production and in increasing the level of technology.\n\n\n#### 14.8 If we believed Frankel’s model, the striking constancy of 20th century growth wouldn’t convince us that long-run growth was constant\n\n\nIn Frankel’s model we have:\n\n\n\\( Y=AK^α(BL)^{1−α} \\)\nTechnological progress from ‘learning by doing’ is given by:\n\n\n\\( {B}= (\\frac {K}{L})^γ \\)\nFor simplicity, let’s assume *L* = 1. This implies:\n\n\n\\( Y=AK^{α+γ} \\)\nExponential growth requires the knife-edge condition α + γ = 1.\n\n\nIf we truly believed in Frankel’s endogenous growth model and fitted it to data on 20th century US growth, we would conclude that the equality was very nearly satisfied. But we couldn’t use the data to distinguish the possibilities that i) growth is exactly exponential, ii) growth is slightly super-exponential, iii) growth is slightly sub-exponential. With any natural prior over the values of α and η our posterior would assign *much* less probability to option (i) than to (ii) or (iii).[229](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote229_ail0mpo \"It’s somewhat hard to explain why mathematically. The basic intuition is that once you choose α, condition (i) imposes an exact requirement on η satisfied by only one value while conditions (ii) and (iii) only impose constraints that can be satisfied by a range of values. Our prior would have much more weight on these ranges than on the exact value corresponding to condition (i).A more mathematical explanation is to imagine the two-dimensional space of possible values of α and η. Each point in this space corresponds to a value of α and a value of η. Condition (i) is satisfied by all the points on a line in this space: a one-dimensional subspace. Call this subspace S. By contrast, conditions (ii) and (iii) correspond to two-dimensional regions either side of S. Natural priors over the two-dimensional space will assign only infinitesimal probability to any one-dimensional subspace, and so will assign infinitesimal probability to S. The update from the 20th century data will concentrate our posterior on the region close to S, but we will still assign only an infinitesimal probability to S itself. So we will still assign only infinitesimal probability to (i). 
Most of the probability mass of our posterior will be just above or just below the line, corresponding to conditions (ii) or (iii).\") If we then extrapolated growth into the future, our probability in (ii) and (iii) would lead us to attach significant weight to explosive growth eventually occurring and significant weight to growth eventually reaching a plateau.\n\n\n#### 14.9 Roodman’s endogenous growth model\n\n\n#### 14.9.1 Description of Roodman’s univariate stochastic model\n\n\nThe starting point for Roodman’s univariate endogenous growth model is:\n\n\n\\(\\dot Y=sY^{1+B}+δY \\, (6) \\)\nwhere:\n\n\n* *Y* is output, in this case GWP.\n* *s* jointly describes the proportion of output invested into increasing future output and the effectiveness of this investment.\n* δ is the depreciation rate of output.\n* *B* controls whether growth is sub- or super-exponential. Growth is super-exponential if *B* > 0, sub-exponential if *B* < 0, and exponential if *B* = 0.\n\n\nRoodman augments (6) through the use of [stochastic calculus](https://en.wikipedia.org/wiki/Stochastic_calculus#:~:text=Stochastic%20calculus%20is%20a%20branch,model%20systems%20that%20behave%20randomly.), which models the randomness in the change of *Y*. This introduces an additional term *W(t)*, a [random walk](https://en.wikipedia.org/wiki/Random_walk#:~:text=A%20random%20walk%20is%20a,space%20such%20as%20the%20integers.) whose cumulative variance at *t* units of time equals *t* (see Roodman’s paper for more details):\n\n\n\\( \\dot Y=sY^{1+B}+δ Y+σ \\sqrt {YY^{1+B}} \\dot W \\, (7) \\)\nNotice that if *B* = 0, the amount of randomness in *Y’s* evolution is proportional to *Y*.[230](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote230_hxn6eqd \" The exact form of (4) is chosen so that a simple change of variables converts it into a Feller diffusion, see Section 3.1 of Roodman’s paper.\")\n\n\nSo there is an element of randomness in determining *Ẏ*. This randomness has a persistent effect on the subsequent trajectory. For example, if GWP is boosted by chance in 1800, we would not expect it to regress back to its previous trendline in 1850, but rather to continue to rise at the normal rate from its 1800 value. To put this another way, GWP is modeled as a [Markov process](https://en.wikipedia.org/wiki/Markov_chain) where the next GWP value depends only on the current value.\n\n\n#### 14.9.2 How to quantify how surprised the model is by each data point\n\n\nOne of the advantages of Roodman’s model is that we can quantify how surprised the model is by each data point, conditional on the previous data points. Suppose we wanted to test how surprised the model is by GWP in 1700. First, we estimate the model parameters using only previous data points, up to 1600. Second, we calculate the probability distribution over GWP in 1700 conditional on the observed GWP in 1600. This probability distribution represents the model’s prediction for GWP in 1700 given all the previous data points. Lastly, we compare the actual GWP in 1700 to this probability distribution. If the actual GWP is higher than the 90th percentile of the probability distribution, the model is surprised at how high GWP was in 1700. If it’s lower than the 10th percentile, the model is surprised at how low it is. 
If the actual GWP is close to the distribution’s median, it isn’t surprisingly high or surprisingly low.\n\n\n#### 14.9.3 Graph of how surprised Roodman’s model is by French GDP/capita data\n\n\n[![EconomicGrowthK.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageK.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageK.png)\n \n\n\nThe model is surprised by how slow growth is in all the data points after 1870.\n\n\n#### 14.10 Some reasons Roodman’s model may underestimate the time until explosive growth occurs\n\n\nIn this section I discuss some technical features of Roodman’s model that lead it to predict explosive growth in just a few decades. These objections motivate the analysis [above](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixF) that explores the robustness of this prediction to changing these features.\n\n\n#### 14.10.1 Roodman’s model is overly surprised by the industrial revolution and by slow modern day growth because it assumes that random influences on growth at nearby points in time are uncorrelated\n\n\nOne of the advantages of Roodman’s model is that we can quantify how surprised the model is by each data point.[231](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote231_6dsxrmr \" See this appendix for a slightly more detailed description of how the model does this. \") Suppose we wanted to quantify how surprised the model is by the value of GWP observed in 1700. We would use the model to generate a probability distribution over 1700 GWP, conditional on all the previous data points. If the *actual* GWP in 1700 is higher than the 90th percentile of this probability distribution, the model is surprised at how high GWP was in 1700. If actual GWP is lower than the 10th percentile, the model is surprised at how low it is. If the actual GWP is close to the distribution’s median, it isn’t surprisingly high or surprisingly low.\n\n\nFigure 13 of Roodman’s paper shows the percentile of each observation from 1600, each conditioning on the previous ones:\n\n\n \n\n\n[![EconomicGrowthF.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageF.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageF.png)\n \n\n\nThe observations in 1820, 1870 and 1913 are all above the 90th percentile, so the model is consistently surprised by how large growth is in this period. The observations in 1990, 2000, 2010 and 2019 are all below the 30th percentile, indicating that the model is consistently surprised by how low growth is in this period. The correlation between the model’s surprise in successive data points in time is striking.\n\n\nPart of the reason for this surprise is that the random component of the model does not account for the *serial correlation* in the random fluctuations affecting successive data points. For example, after the model sees surprisingly high growth in 1820 and 1870 it does not think ‘whatever caused this recent surprisingly high growth might affect the next observation in the same way’; instead it recalculates the random component for the next observation from scratch. This leads it to be consistently surprised by successive observations, rather than adjusting its expectations. 
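As a toy illustration of this point (my own sketch, unrelated to Roodman’s actual estimation code), the snippet below generates serially-correlated deviations from a trend and scores each one with the percentile that a model assuming independent shocks would assign. Because the deviations persist, the naive percentiles tend to come in one-sided runs, which is the pattern in the figure above, whereas a model that conditions on the previous deviation is not repeatedly surprised.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

rho, n, innovation_sd = 0.9, 12, 1.0                 # AR(1) persistence, number of periods, shock size
stationary_sd = innovation_sd / np.sqrt(1 - rho**2)  # overall spread of the AR(1) deviations

prev = 0.0
for t in range(n):
    dev = rho * prev + rng.normal(0.0, innovation_sd)               # serially-correlated deviation from trend
    pct_naive = norm.cdf(dev / stationary_sd)                       # percentile assuming independent shocks
    pct_conditional = norm.cdf((dev - rho * prev) / innovation_sd)  # percentile conditioning on last deviation
    print(f"period {t:2d}: deviation {dev:+.2f}, naive pct {pct_naive:.0%}, conditional pct {pct_conditional:.0%}")
    prev = dev
```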
The low-frequency econometric methods discussed [above](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#LowFrequencyForecasts) can model this kind of serially-correlated deviation from an underlying trend.\n\n\nNot modeling serial correlation affects the model’s projections of GWP into the future in two ways.\n\n\nFirstly, the model will not infer from the surprising slowdown in growth observed in the last 50 years that ‘whatever caused this recent slowdown[232](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote232_ydcxuo0 \" There are many candidates for such a cause. To list a few: the demographic transition, end of low-hanging fruit for technological progress, the shift of spending from goods to slower-growing services, and resource limitations. I discuss the first two of these candidates in more detail later.\") might affect the next observation in the same way’. Rather, the model will treat these recent deviations from its predictions as unrelated to future values of GWP. As a result, it will predict explosive growth sooner than if it had taken serial correlation into account in some way. This problem is highlighted by the model’s median prediction for the 2020 growth rate: **7.1%**.\n\n\nOne way to think about this problem is that Roodman’s model expects growth to increase for two reasons:\n\n\n1. Recent growth is surprisingly low, as judged by the other data points to which the model has been fitted.\n2. Growth tends to increase as GWP increases.\n\n\nFactor 1 causes the median projected growth to jump immediately up to 7.1%; Factor 2 then causes it to increase to 30% by 2044. The [next section](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixF) explores another model in which only Factor 2 plays a role.\n\n\nThe second consequence of not modeling serial correlation is that the model’s uncertainty intervals around future values of GWP are too narrow. Assuming that randomness is uncorrelated between two successive periods reduces your probability that the same extreme outcome will occur in both periods. As a result, you’ll underestimate the probability of these extreme outcomes. To correct for this mistake, you should widen your uncertainty intervals so that they include more extreme outcomes in both directions. Indeed, the model’s confidence intervals do seem too narrow. Its 80% confidence interval for the first year of explosive growth is [2033, 2063].\n\n\n#### 14.10.2 Roodman’s model doesn’t account for the fact that recent data points are more relevant to future growth than ancient data points\n\n\nWhen Roodman estimates his model parameters, he downweights ancient data points to account for our uncertainty about their true values. However, he does not further discount them on the basis that patterns of growth in ancient times are less likely to be relevant to 21st century growth than patterns of growth in modern times. But this additional downweighting seems reasonable. For example, it is possible that in ancient times growth was super-exponential but that we’ve recently moved into a region of sub-exponential growth. To correct for this we could weigh more modern data more heavily when estimating the model parameters, or find some other way for the model to put more weight on recent data points.
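One simple way to operationalize ‘putting more weight on recent data points’ is geometric down-weighting, which is essentially what the growth multiplier model below does with its discount rate when sampling growth multipliers. A minimal sketch, in which the 0.9 discount and the multiplier list are illustrative rather than fitted values:

```python
import numpy as np

# Geometric down-weighting of older observations: sampling weights that decay
# with how many periods ago a growth multiplier was observed.
# (The 0.9 discount and the multiplier list are illustrative placeholders.)
discount = 0.9
multipliers = np.array([2.0, 2.0, 1.5, 0.5])      # oldest first
periods_ago = np.arange(len(multipliers))[::-1]   # 3, 2, 1, 0
weights = discount ** periods_ago
sampling_probs = weights / weights.sum()
print(sampling_probs.round(3))                    # the most recent multiplier is most likely to be drawn

rng = np.random.default_rng(0)
sample = rng.choice(multipliers, p=sampling_probs)
```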
#### 14.11 Growth multiplier model\n\n\n#### 14.11.1 Detailed explanation of how the growth multiplier model works\n\n\nThe model takes as its starting point an insight from Roodman’s univariate endogenous growth models: *each time GWP increases by a factor of r, the growth rate should be* *multiplied* *by some number*.\n\n\nTo see this, consider Roodman’s univariate endogenous growth (before he adds in randomness):\n\n\n\\( \\dot Y=sY^{1+B}+δY \\)\nRearrange this to find the growth rate *Ẏ*/*Y* as a function of GWP *Y*:\n\n\n\\( \\dot Y/Y=sY^B+δ \\, (8) \\)\nWhen Roodman estimates the parameters for this model he finds that *B* > 0 and that the value of δ is extremely small[233](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote233_qi8q8pd \" On his favored data set he finds s = 1.5 × 10^-4, B = 0.55, δ = -3.4 × 10^-5. The small value of δ is needed to predict positive growth rates in ancient times when Y was very low - in 10,000 BCE Y = 1.6 (the units of Y are $ billion). The current value of Y is about 70,000 and so the contribution of δ to the growth rate is negligible. \"). As a result, the contribution of δ to the growth rate for modern day values of *Y* is negligible (see previous footnote). We can simplify the equation:\n\n\n\\( \\dot Y/Y=sY^B \\, (9) \\)\nIf *Y* increases by a factor of *r*, the growth rate is multiplied by *r*^*B*. Using Roodman’s estimated parameters, and letting *r* = 2, growth increases by a factor of 2^0.55 ≈ 1.46. So Roodman’s model predicts that when GWP doubles the growth rate will on average increase by about 46%[234](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote234_ai1qnak \" When Y is very small Roodman’s model predicts that the growth rate will increase by more than this, due to the effect of δ.\"), although the randomness in his model means the exact amount will vary. In the terminology introduced above, the average value of the *growth multiplier* is 1.46. This is the basic driver of super-exponential growth in Roodman’s model.\n\n\nThis is the basis for the *growth multiplier* model splitting up time into periods in which GWP changes[235](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote235_4b2c837 \" It is worth stressing that the model does not assume that growth is super-exponential. Just like Roodman’s model, it is perfectly compatible with growth being sub-exponential. If the observed growth multipliers were between 0 and 1 this would be its prediction.\") by a factor *r*, and measuring the change in growth rates between successive periods as the ratio of their growth rates. Roodman’s model suggests that the growth rates should increase by some multiple between periods so-defined.[236](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote236_n40af2l \" The definition of period has the nice property that the assumption that growth rates are constant within each period is similarly plausible for each period. It has this property because Roodman’s model predicts that the growth rate will, in expectation, change by roughly the same amount within each period so defined (where that change is again measured as a ratio). \")
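Spelling these two steps out: the claim that multiplying GWP by *r* multiplies the growth rate by *r*^*B* follows directly from equation (9), and the period-length formula used in the step-by-step list below comes from compounding at a constant rate *g*:

\( \frac {s(rY)^B}{sY^B}=r^B, \quad r=2, B=0.55 ⇒ 2^{0.55}≈1.46 \)
\( (1+g)^T=r ⇒ T= \frac {ln(r)}{ln(1+g)} \)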
\")\n\n\nThe *growth multiplier* model departs from Roodman’s model in how it models randomness. Rather than calculating new and independent random contributions each infinitesimal timestep, it models randomness crudely by sampling *growth multipliers* from the historical data. In essence it asks the question ‘What will the ratio be between this period’s growth rate and that of the next period?’ and answers ‘Let’s sample randomly from the values of that ratio for analogous periods throughout history’.\n\n\nHere is a step by step description of how to implement this model for the case of *r* = 2.\n\n\n* Create a shortened version of Roodman’s full GWP dataset where each data point’s GWP is twice that of the preceding data point, and the last data point is 2019.\n\t+ Here is a picture of the final three rows of the resultant dataset: \n\t\n\t[![EconomicGrowthO.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageO.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageO.png)\n\t+ The full dataset has 16 rows and goes back until 5000 BCE; GWP halves each time until its 5000 BCE value of 2.02.\n* Calculate the average annualized growth rate between each pair of rows: \n\n[![EconomicGrowth8.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/image8.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/image8.png)\n \n\n\n\t+ The last row reads ‘NaN’ because we don’t know what the average growth rate will be between 2019 and when GWP is twice its 2019 value.\n* Calculate the ratio between each pair of successive growth rates. Each ratio is essentially a sampled value for the multiplier on growth rates 2*B* when GWP doubles: \n\n[![EconomicGrowth10.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/image10.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/image10.png)\n \n\n\n\t+ The average growth rate for 2000-2019 was **1.19** times higher than for 1977-2000. So the growth multiplier for the period starting in 1997 is 1.19.\n\t+ The growth rate and growth multiplier of each row combine to give the growth rate of the next row.\n* Extrapolate GWP for the period starting in 2019.\n\t+ First calculate the growth rate of the period *starting* in 2019. Take the previous period’s growth rate (**0.04**), randomly select a growth multiplier (using your discount rate to increase the probability of selecting more recently observed multipliers), and multiply them together. Suppose we selected a growth multiplier of 1.25, then the new growth rate is 0.04 × 1.25 = 0.05.\n\t+ Then calculate the *length* of the period starting in 2019. This can be calculated using the formula[237](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote237_mt5rwz9 \" This is the formula when r = 2. The general formula can be calculated by rearranging the first equation here. \") *ln(*r*) / ln(1 + *g*)*, in our case *ln(2) / ln(1 + 0.05)*.\n\t+ The GWP at the end of the next period is 2 × (GWP in 2019).\n* Repeat the above bullet for the next period (the one following the period starting in 2019). 
#### 14.11.2 Models that display a degree of serial correlation intermediate between Roodman’s model and the *growth multiplier model*\n\n\nWhile in Roodman’s model the growth rates of successive periods are completely unrelated (leaving aside the effect of changing GWP[238](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote238_w13ar2t \" In Roodman’s model, higher current growth leads to a bigger increase in GWP and this in turn increases future growth. But the current growth affects future growth through no other way except via GWP in this way. By contrast, in the growth multiplier model current growth affects future growth both via the increase in GWP and by the new_growth_rate being directly proportional to old_growth_rate.\")), in the *growth multiplier model* a period’s growth affects growth in all subsequent periods in equal measure. These two models can be seen as occupying opposite ends of a spectrum; an intermediate case would be that a period’s growth has a diminishing influence on the growth of future periods.\n\n\nOne concrete example of such an intermediate model would be a version of Roodman’s model with a different random component. We can think of Roodman’s model as sampling the random contribution to the growth rate from a Normal distribution in each timestep (this is how the randomness is implemented in practice). So *Rt* = *N*(0, σ). Instead, the random contribution in time step *t* could depend in part on the random contribution of the previous timestep: *Rt* = ε*Rt – 1* + *N*(0, σ), with 0 < ε < 1. The constant ε can be adjusted to control the degree of serial correlation: the larger it is, the stronger the serial correlation. The techniques of low frequency forecasting, discussed in the [main body](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixE), might be appropriate for constructing a model of this kind.\n\n\nIn this terminology, the *growth multiplier model* is conceptually similar to having *Rt* = *Rt – 1* + *N*(0, σ). This is because in the *growth multiplier model*, and in this equation, the following hold:\n\n\n* The growth rate in period *i* has a persistent effect on the growth of all subsequent periods that does not decay over time.\n* As a result, the expected size of the deviation from the non-random element of Roodman’s model increases over time. This deviation behaves as a random walk with no tendency to return to 0.\n\n\n#### 14.11.3 Adding sampled growth multipliers to the list of historically observed growth multipliers\n\n\nOne interesting, and I suspect controversial, feature of the model is that each time a *growth multiplier* is sampled it is added to the list of historically observed growth multipliers. It is then more likely to be resampled when calculating future periods’ growth rates. In the example in the text, if we sampled *gm* = 1.5, then the next time we sampled it would be from the list *[2, 2, 1.5, 0.5, 1.5]*.\n\n\nThe intuitive reason for this is that if we observe (for example) slowing growth during the next period, this should increase our probability that the period afterwards will again contain slowing growth. And if we observe slowing growth for the next five periods, our confidence in growth continuing to slow in the sixth period should be higher still. 
And if we ask the model now ‘what is it reasonable to believe about GWP, conditional upon five periods of slow growth’ its predictions for the sixth period should take into account those five periods even though they have not actually happened. By adding observed growth multipliers to the list, I ensure that the five periods of slowing growth are taken into account in this scenario.\n\n\nMore formally, I want the model to satisfy the following desideratum. *The model’s current prediction, conditionalized on growth of X in the next period, is the same as what the model would predict if X growth actually happened and the model was retrained with the extra data from X*. I’ll motivate this desideratum with the same example as above. Suppose we observed five periods of slowing growth and tried to extrapolate GWP with this model. We would of course include the data of the most recent five periods, and this would influence the model’s predictions. If we then ask our current model ‘What should we believe about future GWP *conditional on the next five periods containing slowing growth*’ our model should give the same answer.\n\n\nThe strongest case for the desideratum comes from an extreme case. Roodman’s model assigns a very tiny probability to GWP staying roughly constant over the next million years. But if you condition on this extreme case and then extrapolate GWP further into the future, the model predicts that GWP would almost certainly start growing quickly again afterwards. In fact, its predictions are identical to if you had simply conditioned on the historical data. Its conditional predictions are only based on historically observed data, not on the data we’ve asked the model to conditionalize on. This makes the model’s conditional predictions unreasonable in this extreme case. But even in non-extreme cases I think it makes the model’s uncertainty intervals too narrow. I believe my desideratum prevents the *growth multiplier* model from making unreasonable conditional predictions and appropriately increases the uncertainty of its predictions.\n\n\nIf I remove this controversial feature, and only sample from the actual historical data, the main result is to narrow the model’s confidence intervals. With my other preferred inputs, the date by which there’s a 10% chance of explosive growth goes back two years from 2036 to 2038; the date by which there’s a 70% chance of explosive growth moves forward 30 years from 2200 to 2147; the probability that explosion never happens goes from ~15% to 0%. 
The median date of explosive growth comes forward by only three years, from 2093 to 2090.\n\n\n#### 14.11.4 Sensitivity analysis\n\n\n#### 14.11.4.1 How does *r* affect predictions without a discount rate?\n\n\n[![EconomicGrowth15.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/image15.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/image15.png)\n \n\n\n#### 14.11.4.2 How does *r* affect predictions with a discount rate of 0.95?\n\n\n \n\n\n[![EconomicGrowth7.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/image7.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/image7.png)\n#### 14.11.4.3 How does *r* affect predictions with a discount rate of 0.9?\n\n\n[![EconomicGrowthU.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageB.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageB.png)\n#### 14.11.4.4 How does *r* affect predictions with a discount rate of 0.85?\n\n\n \n\n\n[![EconomicGrowth4.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/image4.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/image4.png)\n \n\n\n#### 14.11.4.5 Median singularity date by *r* and *discount rate*\n\n\nI’ve highlighted my preferred inputs and preferred output in **bold**.\n\n\n\n\n| DISCOUNT RATE | R = 2 | **R = 1.6** | R = 1.3 | R = 1.1 |\n| --- | --- | --- | --- | --- |\n| 1 | 2071 | 2084 | 2087 | 2073 |\n| 0.95 | 2072 | 2088 | 2096 | 2084 |\n| **0.9** | 2075 | **2093** | 2108 | 2098 |\n| 0.85 | 2081 | 2105 | 2130 | 2116 |\n| 0.8 | 2090 | 2130 | 2170 | 2141 |\n| 0.75 | 2100 | 2173 | 2282 | 2182 |\n| 0.7 | 2105 | 2274 | 3756 | 2228 |\n\n\nI don’t trust values of *r* < 1.5 for two reasons.\n\n\n1. We would need to know how the average growth rate changed when GWP increased by a factor of (e.g.) 1.3. But the historical GWP data is often too coarse-grained to contain data about this for some periods. For example, within some periods each GWP data point is 1.5X as large as the previous data point; within such periods there’s no information on how growth changed when GWP increased by less than a factor of 1.5. We have to interpolate the GWP data, assuming that the growth rate didn’t change in these periods.\n2. As *r* becomes smaller, we’re more likely to pick up the effects of business cycles that aren’t relevant to the economy’s potential for long-term growth. Such cycles involve growth increasing and then immediately decreasing again; these changes are negatively correlated such that they cancel out over the medium term. But the *growth multiplier* model will treat these changes as uncorrelated (it samples randomly from the growth multipliers), and will consequently overestimate the propensity for the growth rate to significantly change.\n\n\n \n\n\n#### 14.12 Elaborations on objections to long-run explosive models\n\n\n#### 14.12.1 How do the predictions of the *explosive growth story* change if we omit old data points?\n\n\nI did a sensitivity analysis on the effect of removing ancient data points. 
The following table summarizes the predictions of Roodman’s model and the *growth multiplier model* for data sets that begin at various different times (each column corresponds to a different earliest data point; each cell gives the year by which the model assigns the stated probability to explosive growth).\n\n\n\n\n| | | *-10,000 BCE* | *-2000 BCE* | *1 CE* | *1000 CE* | *1300 CE* | *1600 CE* | *1800 CE* |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| *Roodman (2020)* | *50%* | **2043** | **2046** | **2051** | **2091** | **2098** | **2171** | **2213** |\n| *Growth differences model* | *10%* | 2036 | 2035 | 2038 | 2037 | 2037 | 2043 | 2059 |\n| | **50%** | **2093** | **2090** | **2092** | **2082** | **2089** | **2117** | **2302** |\n| | *90%* | Never | Never | Never | Never | Never | Never | Never |\n\n\n[Note: Roodman’s model is not a good fit to the data sets starting in 1300, 1600 and 1800[239](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote239_rj6beoy \" I have a few reasons for thinking that the model is a bad fit to these shortened data sets. Firstly, the model parameters are very hard to estimate from these data sets; this often happens when the data aren’t a good fit to the model. Secondly, the plots of the solution don’t visually appear to fit the data points as well as for the longer data sets. Thirdly, and most importantly, the fits involve unrealistically large values of δ -- between -0.08 and -0.17. This is unrealistic because -δ represents the rate of depreciation of GWP, and the economy does not lose > 8% of its value each year through depreciation. For contrast, when fit to the full data set δ = - 0.00003. When I stopped the optimization process early, while δ was around -0.05, the median date of explosive growth was several decades earlier (or up to 6 decades for the 1800 data set). Note: Roodman defines δ so that the parameter is expected to have a negative value, unlike in the Solow-Swan model.\"); small changes in the model, data or optimization methods might change its predictions by several decades (see most recent footnote for an example). Similarly, using a different value of *r* in the *growth multiplier model* can change the result by several decades.]\n\n\nThe prediction of super-exponential growth is robust to removing data points until 1800; the prediction of explosive growth by 2100 is robust to removing the data points until 1300. Again, if you made further adjustments based on thinking AI won’t increase growth for several decades, this would cause a further delay to the predicted date of explosive growth.\n\n\n#### 14.12.2 You’re only predicting explosive growth because of the industrial revolution\n\n\n*The step-change hypothesis vs the increasing-returns mechanism*\n\n\n**Summary of objection:** Yes, in ancient times growth rates were very low. And yes, in modern times growth rates are much higher. But this is just because the industrial revolution caused a step-change in growth rates. There’s no persistent trend of super-exponential growth beyond the one-off change caused by the industrial revolution. So the explosive growth story is wrong.\n\n\n**Response:** I find this objection plausible but not decisive.\n\n\nWe can compare two hypotheses.\n\n\n1. Growth increased gradually – ‘smooth increase’ hypothesis.\n\t* This is implied by the increasing-returns mechanism in *long-run explosive* models.\n2. There was a one-off discrete increase around the industrial revolution – ‘step change’ hypothesis.\n\n\nIf you eyeball the long-run GWP data, it doesn’t *look* like there is a step change in growth. 
I calculated the annualized growth rate between successive GWP data points, and looked at how this changed over time.[240](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote240_ifdm9gx \" On the x-axis, years are spaced according to the formula log(2050 - year). This is why the distance between -10,000 BCE and 2000 BCE is similar to the distance between 1980 and 2020. With such a scaling of the x-axis, Roodman’s univariate endogenous growth model implies that the growth rates should follow the pattern of a straight line.\")\n\n\n[![EconomicGrowthR.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageR.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageR.png)\n \n\n\nIt looks as if growth increased fairly steadily until the second half of the 20th century. That said, the data is highly uncertain, and it’s definitely possible that the true data would show a step-change pattern. Especially if the step-change is understood as having occurred over a few centuries.\n\n\nA similar story seems to be true of GWP per capita and frontier GDP per capita, although this is much harder to discern due to the lack of long-run data. Here’s GWP per capita.\n\n\n[![EconomicGrowthN.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageN.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageN.png)\n \n\n\nThis data is even more uncertain, and so again the true data could show a clearer step-change pattern.\n\n\nHere’s French GDP per capita – a (flawed) proxy for GDP per capita on the economic frontier.[241](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote241_ipbjlar \" I’ve taken the French data series from Roodman’s paper. He describes the data series on p. 24. As he explains, the first two data points - in 10,000 BCE and 5,000 BCE - are taken from Maddison’s GWP/capita data series rather than being specific to France.\")\n\n\n[![EconomicGrowthV.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageV.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageV.png)\n \n\n\nThis data shows something like a step-change around 1800.\n\n\nOverall, while it looks to me that the data fits better with the ‘smooth increase’ hypothesis, **the data are highly uncertain and provide very little evidence between the two hypotheses**.\n\n\nNote, there may be other reasons to prefer the ‘smooth change’ theory. It’s implied by the plausible-seeming increasing-returns mechanism that features in idea-based theories. Further, even if you accept the step-change hypothesis, I suggest you should still not rule out explosive growth (see [more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#TheStepChangeLends)).\n\n\nThe following section provides a little further evidence that the data supports the smooth change theory. Even excluding data before/after the industrial revolution, there seems to be evidence of increasing growth.\n\n\n#### 14.12.2.1 Investigation: Is super-exponential growth only due to the industrial revolution?\n\n\nI investigated whether the prediction of explosive growth was robust to omitting data before and after the industrial revolution. In particular, I fit Roodman’s model and the *growth multiplier model* on pre-1600 data and post-1800 data. 
The following table summarizes the predictions of both models about these two data sets.\n\n\n\n\n| WHEN IS THERE AN X% CHANCE OF EXPLOSIVE GROWTH? | | *10,000 BCE – 2020* | *10,000 BCE – 1600*[242](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote242_5r6di24 \" I also tried starting the pre-1600 data series in 5,000 BCE to remove any effect of the Neolithic Revolution on growth rates. Interestingly, this changed the fitted parameters quite significantly, with B moving from 0.18 to 0.50 and s decreasing by a factor of 10 to compensate. This suggests that the solutions of Roodman’s model are very sensitive to small changes in the data for data sets this small. With the 5,000 BCE - 1600 data series, Roodman’s median year of explosive growth is 2305, with 10% by 2041 and 30% of no explosion by 3000! \") | *1800 – 2020* |\n| --- | --- | --- | --- | --- |\n| *Roodman (2020)* | **50%** | **2043** | **2951** | **2213** |\n| | *B*[243](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote243_7tbfequ \" The parameter in Roodman’s model controlling whether growth is sub- or super-exponential. If B > 0, growth is super-exponential.\") | 0.55 | 0.18 | 0.03 |\n| *Growth differences model* | *10%* | 2036 | 2054 | 2059 |\n| | **50%** | **2093** | **2166** | **2302** |\n| | *90%* | Never | Never | Never |\n\n\nThe data shows growth increasing on either side of the industrial revolution, but not fast enough for Roodman’s model to predict explosive growth by 2100.\n\n\n(Note: it is not surprising that Roodman’s model predicts explosive growth happening *eventually*. As long as growth has increased on average across the data set, the model will find super-exponential growth and predict an eventual singularity (growth going to infinity).)\n\n\nThe plot below shows pre-1600 GWP vs growth data. It does not look like an exponential model would be a good fit to this data (it would be a horizontal line) and indeed Roodman’s model assigns virtually no weight to values of *B* close to 0.[244](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote244_bhc7zil \" The estimated value of B was 0.18 with a standard error of 0.02 when estimated using maximum likelihood estimation (as Roodman does). I separately estimated B using a nonlinear least squares regression predicting the growth rate from the GWP level, the methodology of Kremer (1993). I found B was 0.34 with standard error 0.14.\") To me, it seems that a super-exponential curve is a natural fit. If we treat the cluster at the bottom right as anomalous, it looks as if Roodman’s model might be a better visual fit when it’s estimated with the full dataset (orange line) than with just pre-1600 data (blue line).\n\n\n[![EconomicGrowthG.png](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageG.png)](https://www.openphilanthropy.org/files/Research/Economic_Growth/imageG.png)\n \n\n\nIf you do not trust the GWP data (see [earlier objection](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#TheAncientData)) you should be particularly hesitant to accept the suggestion that super-exponential growth is present in the pre-1600 data. See this [write up](https://forum.effectivealtruism.org/posts/CWFn9qAKsRibpCGq8/does-economic-history-point-toward-a-singularity) by Ben Garfinkel for a more detailed investigation into whether the pre-industrial era contains evidence of super-exponential growth. He finds very little evidence of smooth super-exponential growth and he views the pre-industrial data he used as highly unreliable.
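Footnote 244 mentions estimating *B* by regressing growth rates on GWP levels. As a rough sketch of that idea (using ordinary least squares in logs rather than the footnote’s nonlinear least squares, and with placeholder data points rather than the actual pre-1600 series), one could do:

```python
import numpy as np

# Sketch of estimating B from (GWP, growth) pairs by fitting
# ln(g) = ln(s) + B * ln(Y), i.e. a log-log version of equation (9).
# The data points below are illustrative placeholders, not the real series.
gwp = np.array([2, 9, 50, 180, 430])                       # GWP levels ($ billion), placeholders
growth = np.array([0.0003, 0.0008, 0.002, 0.003, 0.004])   # annual growth rates, placeholders
B_hat, log_s_hat = np.polyfit(np.log(gwp), np.log(growth), 1)
print(f"Estimated B = {B_hat:.2f}")                        # B > 0 indicates super-exponential growth
```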
#### 14.13 Possible explanation for exponential growth in Peretto’s model\n\n\nThe key question about this model from the perspective of this report is:\n\n\n\n> Why does *N* grow just fast enough to curb the explosive growth potential of *Z*, but not fast enough to make long-run growth sub-exponential (tending to 0 in the long run)?\n> \n> \n\n\nMy best attempt is the following argument: by the time *Z* has doubled, the technology investment per firm has also doubled, and so *Ż* doubles. This implies exponential growth.\n\n\nHere’s the argument:\n\n\n\n> In the model the cost of creating a new firm must always be equal to the firm’s total revenue (more precisely, the discounted revenue stream that a firm provides). If the costs are lower than this, more firms will be created, lowering the per-firm revenue. (Although having more firms increases total output, it decreases output per firm and so decreases revenue per firm.) So in market equilibrium, the costs equal the revenue.\n> \n> \n> The cost of creating a firm is assumed to be proportional to *Z*. So we can argue as follows: *Z* doubles → the cost of creating a firm doubles → the revenue from each firm doubles.\n> \n> \n> Now, further assume that each firm invests a constant fraction of its revenue in technology investment (this only happens if this investment maximizes their profits; for now we just assume it). Then we can argue: the revenue from each firm doubles → per firm investment in technology doubles → *Ż* doubles.\n> \n> \n> Putting these arguments together we get: *Z* doubles → the cost of creating a firm doubles → the revenue from each firm doubles → per firm investment in technology doubles → *Ż* doubles. In other words, *Z* doubles → *Ż* doubles. This implies that *Z* grows exponentially.[245](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote245_rm6329c \" The results of the paper stand even when the cost of creating a firm is 0, so I don’t think this argument is the whole story. But perhaps the fact that the fixed cost of production for firms is proportional to Z allows a more general version of the argument to go through. Indeed, Peretto confirmed in private correspondence that if instead the fixed cost were proportional to Z^0.9, the model would not produce exponential growth, and he thought the same was likely true if they were proportional to Z^1.1.\")\n> \n> \n\n\n#### 14.14 Toy model where favourable R&D returns in an important subsector drives explosive growth in GWP\n\n\nThere are two types of technology, standard technology *A* and investment technology *Ainv*. *A* plays the normal role in goods production, while *Ainv* governs the efficiency of capital investment. Capital is divided equally between developing both types of technology.\n\n\n\\( Y=AL \\)\n\\( \\dot A=A^ϕ (\\frac {K}{2}) \\)\n\\( \\dot A\\_{inv}=A\\_{inv}^{ϕ\\_{inv}} (\\frac {K}{2}) \\)\n\\( \\dot K=sA\\_{inv}L \\)\nIf φ*inv* > 0, the latter two equations are sufficient to generate super-exponential growth in *Ainv* and *K*. This then drives super-exponential growth in *A* and *Y* via the first two equations, no matter what the value of φ.
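As a quick sanity check on this claim, the following sketch integrates the *Ȧinv* and *K̇* equations numerically, in the forms written above, and confirms that the growth rate of *Ainv* keeps rising when φ*inv* > 0. All parameter values and the Euler step size are arbitrary illustrations.

```python
# Numerical check that phi_inv > 0 makes the A_inv-K subsystem super-exponential:
# the growth rate of A_inv keeps rising. Parameters are arbitrary illustrations.
phi_inv, s, L, dt = 0.2, 0.3, 1.0, 0.001
A_inv, K = 1.0, 1.0
first_growth = None
while A_inv < 1e6:
    dA_inv = A_inv ** phi_inv * (K / 2)   # A_inv-dot
    dK = s * A_inv * L                    # K-dot
    growth = dA_inv / A_inv               # growth rate of A_inv
    if first_growth is None:
        first_growth = growth
    A_inv += dA_inv * dt
    K += dK * dt
print(f"growth rate of A_inv rose from {first_growth:.2f} to {growth:.2f}")
```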
Informally, we can think of *K* as representing the number of AI systems and *Ainv* as the efficiency with which these systems can be created. Concretely, *Ainv* might relate to the level of hardware (‘how much memory and how many calculations per computer chip?’) and the level of software (‘how much memory and how many calculations do you need to run your AI?’). Then the story behind these equations is as follows. Investment in hardware and software (*Ainv*) causes explosive growth in the number of AIs (*K*). This drives explosive growth in all areas of technology (*A*) and so in GWP (*Y*).\n\n\n#### 14.15 Mathematical derivations of conditions for super-exponential growth\n\n\n*Section written by Guilhermo Costa.*\n\n\nNote that the derivations below specify the conditions for some of the models discussed in [Appendix C](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixC) to present *sustained* super-exponential growth; a couple of these models might possibly have short periods of super-exponential growth if the conditions below are not met. Throughout we also assume sufficient saving.\n\n\nAdditionally, some of the following proofs only hold if the ‘fishing out’ effect dominates the ‘standing on shoulders’ effect, so that, as the stock of technology increases, technological progress gets harder on the margin. Mathematically, this is represented by the parameter φ being smaller than 1.\n\n\n#### 14.15.1 Kremer (1993)\n\n\n\\( Y=AL^αW^{1−α} \\)\n\\( \\dot A=δA^ϕL^λ \\)\n\\( \\bar y=\\frac {Y}{L}=constant \\)\nLet us analyze the growth rate of output *gY*:\n\n\n\\( g\\_Y=g\\_A+αg\\_L= \\frac {δL^λ}{A^{1−ϕ}}+αg\\_L \\)\nIn order for output per capita to remain constant, *gY* = *gL* and thus:\n\n\n\\( (1−α)g\\_Y= \\frac {δL^λ}{(Y/L^αW^{1−α})^{1−ϕ}} \\)\nSince *ȳ* is constant, *Y* ∝ *L*, and so \\( g\\_Y∝Y^{λ−(1−α)(1−ϕ)} \\).\n\n\nTherefore, the condition for super-exponential growth is:\n\n\n\\( λ>(1−α)(1−ϕ) \\)\nIn the case in which we are usually interested, φ < 1, we can rewrite the above condition as:\n\n\n\\( α>1− \\frac {λ}{1−ϕ} \\)\n#### 14.15.2 Roodman (2020)\n\n\n\\( Y=AK^αL^βW^{1−α−β} \\)\n\\( \\dot K=s\\_KY−δ\\_KK \\)\n\\( \\dot L=s\\_LY−δ\\_LL \\)\n\\( \\dot A=s\\_AA^{ϕ\\_A}Y−δ\\_AA \\)\nWe set *W* = 1 for simplicity. Roodman (2020) expresses this model using vectors \\( \\overrightarrow v=(A,K,L) \\), \\( \\overrightarrow α=(1,α,β) \\) and \\( \\overrightarrow ϕ=(ϕ\\_A,0,0) \\) and finds the following sufficient condition for growth to be super-exponential:[246](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote246_ihoasw9 \"The fixed factor land does not correspond to any of the vector indices, as its exponent doesn’t affect whether growth explodes. Technically speaking, the condition is for instability: either super-exponential growth or collapse. Assuming positive growth, it is a condition for super-exponential growth. This condition appears as Equation 16 in Roodman (2020).\")\n\n\n\\( \\overrightarrow α \\cdot \\overrightarrow ϕ+(1−ϕ\\_0)(\\overrightarrow α \\cdot \\overrightarrow u−1)>0 \\)\nwhere \\( \\overrightarrow u=(1,1,1) \\) and the vectors are zero-indexed. 
Evaluating the dot products, we obtain:\n\n\n\\( ϕA+(1−ϕA)(α+β)>0 \\)\nTaking φA < 1, we can rewrite the above condition as:\n\n\n\\( α+β> \\frac {−ϕ\\_A}{(1−ϕ\\_A)} \\)\n#### 14.15.3 Hanson (2001)\n\n\n\\( Y=(AK)^αL^βW^{1−α−β} \\)\n\\( \\dot K=sY−δK \\)\n\\( A=A\\_0e^{gAt} \\)\nThis model eventually settles into a balanced growth path, so we analyze how that path changes as the level of automation increases.[247](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote247_qph83r1 \"Notice, technology A only augments capital in this model, unlike in the models considered above.\")\n\n\nBalanced growth requires:\n\n\n\\( g\\_K= \\frac {sK}{Y}−δ=constant⇒g\\_Y=g\\_K \\)\nSubstituting the above into the expression for the growth rate of output:\n\n\n\\( g\\_Y=α(g\\_A+g\\_K)+βg\\_L⇒g\\_Y= \\frac {αg\\_A+βg\\_L}{1−α} \\)\nWe now take *gL* = 0 for simplicity. The paper models automation as increasing the capital share and reducing the labor share correspondingly:\n\n\n\\( α→α’=α+fβ,β→β’=(1−f)β \\)\nThe long-run growth rate increases as follows:\n\n\n\\( g\\_K→{g\\_K}’= \\frac {α’g\\_A}{1−α’}= \\frac {(α+fβ)g\\_A}{1−α−fβ} \\)\n#### 14.15.4 Nordhaus (2021)\n\n\n\\( Y=F\\_ρ(AK,L)=[(AK)^ρ+L^ρ]^ \\frac {1}{ρ} \\)\n\\( \\dot K=sY−δK \\)\n\\( A=A\\_0e^{gAt} \\)\nIn the case ρ = 0, this model reduces to that found in Hanson (2001), so we focus on the cases ρ < 0 and ρ > 0.\n\n\n#### 14.15.4.1 Case #1 — ρ < 0\n\n\nLet ν = |ρ| = – ρ. Writing the production function in terms of ν, we obtain:\n\n\n\\( Y= \\frac {1}{[( \\frac {1}{AK})^ν+ ( \\frac {1}{L})^ν]^{ \\frac {1}{ν}}} \\)\nAs labor is fixed but technology grows exponentially, eventually *AK* ≫ *L* and thus:\n\n\n\\( Y≈ \\frac {1}{ [\\frac {1}{L^ν}]^ {\\frac {1}{ν}}}=L \\)\nTherefore, growth is sub-exponential and output stagnates.\n\n\n#### 14.15.4.2 Case #2 — ρ > 0\n\n\nOnce again using the fact that *AK* &Gt *L* eventually, we obtain:\n\n\n\\( Y≈[(AK)^ρ]^{ \\frac {1}{ρ}}=AK \\)\nThe growth rate of capital is given by:\n\n\n\\( g\\_K= \\frac {sY}{K}−δ=sA−δ=sA\\_0e^{gAt}−δ \\)\nand thus growth is super-exponential in the long-run, as the growth rate itself grows exponentially.\n\n\n#### 14.15.5 Aghion et al. (2017)\n\n\n####  14.15.5.1 Cobb-Douglas model\n\n\n*Thanks *to Phil Trammell for noticing that the proof below contains an error. See his corrected proof**[here](https://philiptrammell.com/static/Cobb_Douglas_singularities.pdf)**.**\n\n\n\\( Y=A^ηK^α{L\\_Y}^γW^{1−α−γ} \\)\n\\( \\dot A=A^ϕK^β{L\\_A}^λW^{1−β−λ} \\)\n\\( \\dot K=sY−δK=sA^ηK^α{L\\_Y}^γW^{1−α−γ}−δK \\)\nWriting the model in terms of *gK* and *gA*:\n\n\n\\( g\\_K= \\frac {sA^η{L\\_Y}^γW^{1−α−γ}}{K^{1−α}}−δ \\)\n\\( g\\_A= \\frac {K^β{L\\_A}^λW^{1−β−λ}}{A^{1−ϕ}} \\)\nSince the production function is Cobb-Douglas, growth is super-exponential if and only if either *ġK* > 0 or *ġA* > 0 or both.[248](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnote248_ygo9fdo \" Technically, if either technology or capital were falling, one of the derivatives of the growth rates could be positive and yet output might still not be growing super-exponentially. In this model, technological growth is always positive and capital can decay at most exponentially, so such scenarios do not occur. In general, we assume that the economy is not shrinking to avoid considering these cases.\") But, if one of these growth rates is increasing, then so is the other, as both η and β are positive. 
Therefore, the following inequalities hold just if growth is super-exponential:\n\n\n\\( \\dot g\\_K=g\\_K[ηg\\_A+γg\\_{L\\_Y}−(1−α)g\\_K]>0 \\)\n\\( \\dot g\\_A=g\\_A[βg\\_K+λg\\_{L\\_A}−(1−ϕ)g\\_A]>0 \\)\nFollowing the paper, we introduce the parameter ξ = βη / [(1 – α)(1 – φ)] We consider *gLY* = *gLA* = 0 for simplicity. We show that ξ > 1 if and only if growth in the long-run is super-exponential.\n\n\nFirst, we prove that ξ > 1 is necessary for super-exponential growth. To do this, we prove the contrapositive; that is, we assume growth is super-exponential and deduce ξ > 1. Rewriting the inequalities above, we obtain:\n\n\n\\( \\dot g\\_K>0⇔ηg\\_A>(1−α)g\\_K \\)\n\\( \\dot g\\_A>0⇔βg\\_K>(1−ϕ)g\\_A \\)\nNotice that *gA* > (1 – α)*gK* / η and *gK* > (1 – φ)*gA* / β together imply *gA* > *gA* / ξ. Remembering that *gA* > 0, we obtain ξ > 1, as desired.\n\n\nNow we show that ξ > 1 is sufficient for super-exponential growth. To do this, we’ll once more prove the contrapositive claim that, if growth is not super-exponential, then ξ ≤ 1. As mentioned above, output grows super-exponentially if either capital or technology do the same, and, if one of these factors grows super-exponentially, so does the other. Therefore, if growth is not super-exponential, then:\n\n\n\\( \\dot g\\_K≤0⇔ηg\\_A≤(1−α)g\\_K \\)\n\\( \\dot gA≤0⇔βg\\_K≤(1−ϕ)g\\_A \\)\nBut these inequalities yield:\n\n\n\\( g\\_A≥ \\frac {βg\\_K}{1−ϕ}≥ \\frac {βηg\\_A}{(1−α)(1−ϕ)}=ξg\\_A⇒ξ≤1 \\)\nas we wished to show, completing the proof.\n\n\nUnder the assumption φ < 1, the condition ξ > 1 can be written as:\n\n\n\\( \\frac {ηβ}{1−α}>1−ϕ \\)\nas we do in the body of the report.\n\n\n#### 14.15.5.2 CES model\n\n\n\\( Y=A^η[F\\_Y(K,L)]^αW^{1−α} \\)\n\\( \\dot A=A^ϕ[F\\_A(K,L)]^βW^{1−β} \\)\nWe assume α, β, η > 0, φ < 1, and we take labor to be fixed. We assume that *K̇* = *sY* – δ*K* ≥ 0, so that capital accumulates. Writing the model in terms of growth rates:\n\n\n\\( g\\_K= \\frac {A^η(F\\_Y)^αW^{1−α}}{K}−δ \\)\n\\( g\\_A= \\frac {(FA)^βW^{1−β}}{A^{1−ϕ}} \\)\nThe general condition for super-exponential growth is that:\n\n\n\\( \\dot g\\_Y=η \\dot g\\_A+α \\dot g\\_{F\\_Y}>0 \\)\nNote that the case φ > 1 always leads to super-exponential growth, as, in this case, the growth rate of technology goes towards infinity even holding all other factors constant, and hence the growth rate of output also explodes.\n\n\n#### 14.15.5.2.1 Case #1 — ρ*Y* < 0, ρ*A* < 0\n\n\nNotice that, as *L*/*K* → 0, *FY* ≈ *FA* ≈ *L*. Consequently, eventually *gA* ≈ *LβW1 – β* / *A1 – φ* is decreasing with *A* (using the fact that φ ≤ 1) and, therefore, *gA* → 0 and *A* grows sub-exponentially (in particular, *A* roughly grows as *t1/(1 – φ)*). It turns out that this implies that both capital and output grow sub-exponentially.\n\n\n#### 14.15.5.2.2 Case #2 — ρ*Y* > 0, ρ*A* < 0\n\n\nNotice that, as *L*/*K* → 0, *FY* ≈ *K* and *FA* ≈ *L*. The growth rates become:\n\n\n\\( g\\_K≈sA^ηK^{α−1}W^{1−α}−δ \\)\n\\( g\\_A≈ \\frac {L^βW^{1−β}}{A^{1−ϕ}} \\)\nThe dynamics of *A* can be solved independently from that of *K*. Solving the differential equation for *A*, we conclude that it grows sub-exponentially (in particular, *A* roughly grows as *t1/(1 – φ)*), just as in Case #1.\n\n\nCapital has more complex dynamics. If α > 1, then capital grows super-exponentially and, therefore, output also grows in that fashion. The case α = 1 implies super-exponential growth, as, in this case, *gK* ≈ *Aη* – δ is an increasing function of time. 
The case α ∈ (0, 1) leads to capital and output growing sub-exponentially. To see this, consider that, if capital was growing exponentially or faster, then the (super-)exponential increase in capital would outweigh the power-law increase in technology, and the growth rate of capital would decrease.\n\n\n#### 14.15.5.2.3 Case #3 — ρ*Y* < 0, ρ*A* > 0\n\n\nNotice that, as *L*/*K* → 0, *FY* ≈ *L* and *FA* ≈ *K*. The growth rates become:\n\n\n\\( g\\_K≈ \\frac {A^ηL^αW^{1−α}}{K}−δ \\)\n\\( g\\_A≈ \\frac {K^βW^{1−β}}{A^{1−ϕ}} \\)\nThe condition for super-exponential growth becomes:\n\n\n\\( \\dot g\\_Y=η \\dot g\\_A+α \\dot g\\_L>0⇒ \\dot g\\_A>0 \\)\nas labor is fixed and η is positive. But this occurs just if *ġK* is also positive, as evaluating these derivatives yields:\n\n\n\\( \\dot g\\_K≈g\\_K(ηg\\_A−g\\_K) \\)\n\\( \\dot g\\_A≈g\\_A[βg\\_K−(1−ϕ)g\\_A] \\)\nand the above implies that sustained super-exponential growth of one of the variables implies in the sustained super-exponential growth of the other.\n\n\nTherefore, growth is super-exponential if and only if *ġA* > 0, or, equivalently, if:\n\n\n \n\n\n\\( g\\_A> \\frac {g\\_K}{η} \\)\n\\( g\\_K> \\frac {(1−ϕ)g\\_A}{β} \\)\nas these inequalities hold just if *ġA* > 0 and *ġK* > 0. Combining these inequalities, we conclude that growth is super-exponential if and only if:\n\n\n \n\n\n\\( g\\_A> \\frac {g\\_K}{η}> \\frac {(1−ϕ)g\\_A}{βη}⇒βη>1−ϕ \\)\n#### 14.15.5.2.4 Case #4 — ρ*Y* > 0, ρ*A* > 0\n\n\nNotice that, as *L*/*K* → 0, *FY* ≈ *FA* ≈ *K*. The growth rates become:\n\n\n\\( g\\_K≈A^ηK^{α−1}W^{1−α}−δ \\)\n\\( g\\_A≈ \\frac {K^βW^{1−β}}{A^{1−ϕ}} \\)\nThe condition for super-exponential growth is:\n\n\n\\( \\dot g\\_Y=η \\dot g\\_A+α \\dot g\\_K>0 \\)\nEvaluating the derivatives of the growth rates, we obtain:\n\n\n\\( \\dot g\\_K≈g\\_K[ηg\\_A−(1−α)g\\_K] \\)\n\\( \\dot g\\_A≈g\\_A[βg\\_K−(1−ϕ)g\\_A] \\)\nOnce more, we notice that, if one of the variables exhibits sustained super-exponential growth, so must the other. Similarly, if one of the variables exhibits sustained sub-exponential growth, so must the other. Therefore, the derivatives of the growth rates always have the same sign and super-exponential growth occurs if and only if both *ġK* > 0 and *ġA* > 0.\n\n\nA similar argument to that used in Case #3 yields:\n\n\n\\( g\\_A> \\frac {(1−α)g\\_K}{η}> \\frac {(1−ϕ)(1−α)g\\_A}{βη}⇒βη>(1−ϕ)(1−α) \\)\n#### 14.15.6 Jones (2001)\n\n\nThis model is somewhat more complicated than the others presented above, as it endogenizes labor, so it warrants some additional explanation.\n\n\nPeople can choose to devote their time to labor or to having children. In the model, each person devotes a fraction *l* of their time to work and a fraction 1 – *l* having children. The number *b* of births per capita is given by:\n\n\n \n\n\n\\( b=α(1−l) \\)\nwhere α is a constant. The mortality rate is given by:\n\n\n \n\n\n\\( d=f(c)+ \\bar d \\)\nwhere *f* is a decreasing function and *c* = *Y* / *N* is consumption per capita. Population growth is described by:\n\n\n \n\n\n\\( g\\_N=b−d \\)\nas expected.\n\n\nLabor can be devoted to research or to final good production; the resource constraint for labor is:\n\n\n \n\n\n\\( L\\_Y+L\\_A=L=l \\cdot N \\)\nThe production function for the final good is *Y* = *AσLYβW1 – β*, while the path of technological growth is given by *Ȧ* = δ*AφLAλ*.\n\n\nFollowing the paper, we define a parameter:\n\n\n \n\n\n\\( θ= \\frac {λσ}{1−ϕ}−(1−β) \\)\nwhere, as usual, we assume φ < 1. 
We show that growth is super-exponential if and only if θ > 0.\n\n\nThe condition for super-exponential growth is:\n\n\n \n\n\n\\( \\dot g\\_Y=σ \\dot g\\_A+β \\dot {g\\_L}\\_Y=σ \\dot g\\_A+β \\dot g\\_N>0 \\)\nWe first show that *ġA* > 0 if and only if *ġN* > 0; therefore, the two derivatives always have the same sign and hence super-exponential growth requires both to hold. Then, we show that *ġA* > 0 and *ġN* > 0 both hold just if the desired inequality holds.\n\n\nFirst, we show that *ġN* > 0 ⇒ *ġA* > 0. Observe that:\n\n\n \n\n\n\\( g\\_N=b−d=α(1−l)−f(c)− \\bar d \\)\nand hence *ġN* > 0 just if *c* = *Y* / *N* is increasing. But *Y* / *N* ∝ *Y* / *LY* ∝ \\( A^σ{L\\_Y}^{β−1} \\), so *ġN* > 0 just if:\n\n\n \n\n\n\\( σg\\_A>(1−β)g\\_N \\)\nWe suppose that β < 1, to ensure decreasing returns to labor. Therefore, sustained super-exponential growth in population requires sustained super-exponential growth in technology, as otherwise the inequality obtained above would be violated eventually.\n\n\nNow we show that *ġA* > 0 ⇒ *ġN* > 0. The growth rate of technology is:\n\n\n \n\n\n\\( g\\_A= \\frac {δ{L\\_A}^λ}{A^{1−ϕ}} \\)\nand thus its derivative is:\n\n\n \n\n\n\\( \\dot g\\_A=g\\_A[λ{g\\_L}\\_A−(1−ϕ)g\\_A]=g\\_A[λg\\_N−(1−ϕ)g\\_A] \\)\nHence:\n\n\n \n\n\n\\( \\dot g\\_A>0⇔λg\\_N>(1−ϕ)g\\_A \\)\nand thus sustained super-exponential growth in technology requires sustained super-exponential growth in population, as desired. Therefore, growth is super-exponential if and only if both *ġA* > 0 and *ġN* > 0.\n\n\nFinally, we show that the conjunction of *ġA* > 0 and *ġN* > 0 holds if and only if θ > 0. The first of these inequalities holds just if *gN* > (1 – φ)*gA* / λ, while the second holds just if *gA* > (1 – β)*gN* / σ. Therefore both *ġA* > 0 and *ġN* > 0 if and only if:\n\n\n \n\n\n\\( g\\_A> \\frac {(1−β)g\\_N}{σ}> \\frac {(1−β)(1−ϕ)g\\_A}{λσ} \\)\n\\( ⇒1> \\frac {(1−β)(1−ϕ)}{λσ} \\)\n\\( ⇔ \\frac {λσ}{1−ϕ}>1−β \\)\n\\( ⇔θ>0 \\)\nwhere we use *gA* > 0. This completes the proof.\n\n\n15. Sources\n-----------\n\n\n\n\n| DOCUMENT | SOURCE |\n| --- | --- |\n| Aghion and Howitt (1992) | [Source](https://www.jstor.org/stable/2951599?seq=1) |\n| Aghion and Howitt (1998) | [Source](https://mitpress.mit.edu/books/endogenous-growth-theory) |\n| Aghion et al. (2017) | [Source](https://web.stanford.edu/~chadj/AJJ-AIandGrowth.pdf) |\n| Agrawal et al. (2019) | [Source](https://www.nber.org/system/files/chapters/c14024/c14024.pdf) |\n| Besl (2001) | [Source](https://www.ibrc.indiana.edu/ibr/2001/spring01/03.pdf) |\n| Bloom et al. 
(2020) | [Source](https://web.stanford.edu/~chadj/IdeaPF.pdf) |\n| Bond-Smith (2019) | [Source](https://bcec.edu.au/assets/2019/06/BCEC-Working-Papers-19_02-Steven-Bond-Smith-The-decades-long-dispute-over-scale-effects.pdf) |\n| Brynjolfsson (2017) | [Source](https://www.nber.org/papers/w24001) |\n| Caplan (2016) | [Source](https://www.econlib.org/archives/2016/06/whats_wrong_in.html) |\n| Carlsmith (2020) | [Source](https://www.openphilanthropy.org/research/new-report-on-how-much-computational-power-it-takes-to-match-the-human-brain/) |\n| Cesaratto (2008) | [Source](https://www.boeckler.de/pdf/v_2008_10_31_cesaratto.pdf) |\n| Christensen (2018) | [Source](https://www.pnas.org/content/115/21/5409) |\n| Christiaans (2004) | [Source](https://www.sciencedirect.com/science/article/abs/pii/S0165176503003021) |\n| Cotra (2020) | [Source](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines) |\n| Davidson (2020a) | [Source](https://www.openphilanthropy.org/research/report-on-semi-informative-priors/) |\n| Davidson (2020b) | [Source](https://colab.research.google.com/drive/11oAdADbcd6GCslV0P5ESubqghaQlpyh2) |\n| Dinopoulos and Thompson (1998) | [Source](https://link.springer.com/article/10.1007/s001910050079) |\n| Fernald and Jones (2014) | [Source](https://web.stanford.edu/~chadj/FernaldJones2014.pdf) |\n| Foure (2012) | [Source](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2004332) |\n| Frankel (1962) | [Source](https://www.jstor.org/stable/1812179?seq=1) |\n| Galor and Weil (2000) | [Source](https://www.researchgate.net/publication/4733968_Population_Technology_and_Growth_From_Malthusian_Stagnation_to_the_Demographic_Transition_and_Beyond) |\n| Garfinkel (2020) | [Source](https://forum.effectivealtruism.org/posts/CWFn9qAKsRibpCGq8/does-economic-history-point-toward-a-singularity) |\n| Grace et al. (2017) | [Source](https://arxiv.org/abs/1705.08807) |\n| Grossman and Helpman (1991) | [Source](https://mitpress.mit.edu/books/innovation-and-growth-global-economy) |\n| Growiec (2007) | [Source](https://www.researchgate.net/publication/24057379_Beyond_the_Linearity_Critique_The_Knife-edge_Assumption_of_Steady-state_Growth) |\n| Growiec (2019) | [Source](https://econpapers.repec.org/paper/sghkaewps/2019042.htm) |\n| Growiec (2020) | [Source](https://ideas.repec.org/p/sgh/kaewps/2020048.html) |\n| Hanson (2000) | [Source](https://www.researchgate.net/profile/Robin_Hanson2/publication/228557195_Long-term_growth_as_a_sequence_of_exponential_modes/links/0046351fac48cd6ca3000000/Long-term-growth-as-a-sequence-of-exponential-modes.pdf) |\n| Hanson (2016) | [Source](https://www.amazon.com/Age-Em-Work-Robots-Earth/dp/0198754620?sa-no-redirect=1&pldnSite=1) |\n| Hsieh et al. 
(2013) | [Source](http://klenow.com/HHJK.pdf) |\n| Investopedia, ‘Market failure’ | [Source](https://www.investopedia.com/terms/m/marketfailure.asp) |\n| Investopedia, ‘Trimmed mean’ | [Source](https://www.investopedia.com/terms/t/trimmed_mean.asp) |\n| Jones (1995) | [Source](https://www.jstor.org/stable/2138581?seq=1) |\n| Jones (1997) | [Source](https://www.nber.org/papers/w6285.pdf) |\n| Jones (1999) | [Source](https://web.stanford.edu/~chadj/scaleff10.pdf) |\n| Jones (2001) | [Source](https://web.stanford.edu/~chadj/bc400.pdf) |\n| Jones (2005) | [Source](https://web.stanford.edu/~chadj/JonesHandbook2005.pdf) |\n| Jones (2020) | [Source](https://web.stanford.edu/~chadj/emptyplanet.pdf) |\n| Jones and Manuelli (1990) | [Source](https://www.jstor.org/stable/2937622?seq=1) |\n| Karnofsky (2016) | [Source](https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/) |\n| Kortum (1997) | [Source](https://www.jstor.org/stable/2171741?seq=1) |\n| Kremer (1993) | [Source](https://www.ssc.wisc.edu/~walker/wp/wp-content/uploads/2012/01/kremer1993.pdf) |\n| Kruse-Andersen (2017) | [Source](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2947528) |\n| Lucas (1988) | [Source](https://www.parisschoolofeconomics.eu/docs/darcillon-thibault/lucasmechanicseconomicgrowth.pdf) |\n| Markets and Markets (2018) | [Source](https://www.marketsandmarkets.com/Market-Reports/deep-learning-market-107369271.html) |\n| Nordhaus (2021) | [Source](https://www.aeaweb.org/articles?id=10.1257/mac.20170105&&from=f) |\n| OECD, ‘Market concentration’ | [Source](https://www.oecd.org/daf/competition/market-concentration.htm) |\n| Open Philanthropy, ‘Ajeya Cotra’ | [Source](https://www.openphilanthropy.org/about/team/ajeya-cotra/) |\n| Open Philanthropy, ‘David Roodman’ | [Source](https://www.openphilanthropy.org/about/team/david-roodman/) |\n| Open Philanthropy, ‘Joe Carlsmith’ | [Source](https://www.openphilanthropy.org/about/team/joseph-carlsmith/) |\n| Open Philanthropy, ‘Potential Risks from Advanced Artificial Intelligence’ | [Source](https://www.openphilanthropy.org/focus/potential-risks-advanced-ai/) |\n| Our World in Data, ‘Economic growth’ | [Source](https://ourworldindata.org/economic-growth) |\n| Our World in Data, ‘GDP per capita, 1650 to 2016’ | [Source](https://ourworldindata.org/grapher/maddison-data-gdp-per-capita-in-2011us?tab=chart&yScale=log&time=earliest..2016&country=~USA) |\n| Our World in Data, ‘Total economic output in England since 1270’ | [Source](https://ourworldindata.org/grapher/total-gdp-in-the-uk-since-1270?yScale=log) |\n| Our World in Data, ‘Two centuries of rapid global population growth will come to an end’ | [Source](https://ourworldindata.org/world-population-growth-past-future) |\n| Our World in Data, ‘World population over the last 12,000 years and UN projection until 2100’ | [Source](https://ourworldindata.org/grapher/world-population-1750-2015-and-un-projection-until-2100) |\n| Peretto (1998) | [Source](https://link.springer.com/article/10.1023/A:1009799405456) |\n| Peretto (2017) | [Source](http://public.econ.duke.edu/~peretto/Robust%20Endogenous%20Growth.pdf) |\n| Romer (1990) | [Source](http://web.stanford.edu/~klenow/Romer_1990.pdf) |\n| Roodman (2020) | [Source](https://www.openphilanthropy.org/sites/default/files/Modeling-the-human-trajectory.pdf) |\n| Segerstrom (1998) | [Source](https://www.jstor.org/stable/116872?seq=1) |\n| The World Bank, ‘GDP growth (annual %) – United States’ | 
[Source](https://data.worldbank.org/indicator/NY.GDP.MKTP.KD.ZG?contextual=default&locations=US) |\n| Trammell and Korinek (2021) | [Source](https://docs.google.com/document/d/1XCn4Pk44evZEjbmPD-zKlj26Z_bhf0ABeWVID_4L5sg/edit) |\n| United Nations Department of Economic and Social Affairs, ‘World Population Prospects 2019’ | [Source](https://population.un.org/wpp/) |\n| Vollrath (2020) | [Source](https://www.amazon.com/Fully-Grown-Stagnant-Economy-Success/dp/022666600X) |\n| Wikipedia, ‘Cobb-Douglas production function’ | [Source](https://en.wikipedia.org/wiki/Cobb%E2%80%93Douglas_production_function) |\n| Wikipedia, ‘Constant elasticity of substitution’ | [Source](https://en.wikipedia.org/wiki/Constant_elasticity_of_substitution) |\n| Wikipedia, ‘Demographic transition’ | [Source](https://en.wikipedia.org/wiki/Demographic_transition) |\n| Wikipedia, ‘Depreciation’ | [Source](https://en.wikipedia.org/wiki/Depreciation) |\n| Wikipedia, ‘Market failure’ | [Source](https://en.wikipedia.org/wiki/Market_failure) |\n| Wikipedia, ‘Markov process’ | [Source](https://en.wikipedia.org/wiki/Markov_chain) |\n| Wikipedia, ‘Production function’ | [Source](https://en.wikipedia.org/wiki/Production_function) |\n| Wikipedia, ‘Random walk’ | [Source](https://en.wikipedia.org/wiki/Random_walk) |\n| Wikipedia, ‘Solow-Swan model’ | [Source](https://en.wikipedia.org/wiki/Solow%E2%80%93Swan_model) |\n| Wikipedia, ‘Stochastic calculus’ | [Source](https://en.wikipedia.org/wiki/Stochastic_calculus) |\n| Wikipedia, ‘Total factor productivity’ | [Source](https://en.wikipedia.org/wiki/Total_factor_productivity) |\n| Young (1998) | [Source](https://www.jstor.org/stable/10.1086/250002) |\n\n\n \n\n\n \n\n\n\n\n[Expand Footnotes\n \n\n\n\n\n Collapse Footnotes](javascript:void(0);)\n\n\n[1.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref1_ag65872)Grace et al. (2017) ‘[When Will AI Exceed Human Performance? Evidence from AI Experts](https://arxiv.org/pdf/1705.08807.pdf).’\n\n\n[2.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref2_anuh8e5)[Davidson (2020a)](https://www.openphilanthropy.org/research/report-on-semi-informative-priors/).\n\n\n[3.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref3_gwy1rm6)The ‘frontier’ refers to the country, or group of countries, with the highest levels of technology and GDP/capita.\n\n\n[4.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref4_hsb5rtg)More precisely, models in which each successive 1% increase in the level of technology requires more research effort than the last.\n\n\n[5.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref5_748imw3)The ‘frontier’ refers to the country, or group of countries, with the highest levels of technology and GDP/capita. Why focus on frontier GDP/capita? Many economists separate GWP growth into three components: growth of frontier GDP/capita, catch-up growth and population growth. They forecast that frontier GDP/capita growth will be the main contributor to GWP growth out to 2100. This is because population growth is projected to slow down and perhaps stop altogether by 2100 (e.g. 
[by the UN](https://population.un.org/wpp/)) and the scope for catch-up growth is limited.\n\n\n[6.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref6_zxpo86a)The trend of constant exponential growth is fairly striking for the US, with the only real exception being the [Great Depression](https://en.wikipedia.org/wiki/Great_Depression) of the 1930s. However, the trend is not as striking for other regions near the frontier. For example, in England ([here](https://ourworldindata.org/grapher/gdp-per-capita-in-the-uk-since-1270?yScale=log&time=1900..2016)) and in Western Europe as a whole ([here](https://ourworldindata.org/grapher/average-real-gdp-per-capita-regions-1960-2016)), growth is noticeably higher in the second half of the 20th century than in the first half.\n\n\n[7.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref7_30117rz)Why not focus on GWP per capita? Our focus on GWP, rather than GWP per capita, is natural because we are forecasting GWP, not GWP/capita. In addition, I find that the data series of GWP provides the strongest argument for explosive growth. Although GWP per capita displays clear super-exponential growth ([here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI)), the trend is a worse fit for the endogenous growth models discussed below.\n\n\n[8.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref8_n9n1d07)[Romer (1986)](http://www.dklevine.com/archive/refs42232.pdf) discusses the super-exponential growth in GDP/capita for a number of developed countries.\n\n\n[9.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref9_sstcpum)The y-axis is logarithmic. On the x-axis, years are spaced according to the formula –*log(2050 – year)*. So the following data points are equally spaced: 2000, 1950, 1850, 1650, and 1250. (For each successive data point, *2050 – year* doubles and *log(2050 – year)* increases by a fixed amount.) The power-law implies GWP will go to infinity in 2047; 2050, rather than 2047, is used for convenience. (A short numerical check of this spacing appears after these footnotes.)\n\n\n[10.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref10_slnnzmw)See [David Roodman’s](https://www.openphilanthropy.org/about/team/david-roodman/) [blog post](https://www.openphilanthropy.org/research/modeling-the-human-trajectory/) for a longer and more accessible explanation of these ideas.\n\n\n[11.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref11_sm96ci8)The GWP data used in Roodman’s [report](https://www.openphilanthropy.org/wp-content/uploads/Modeling-the-human-trajectory.pdf) shows that GWP growth first exceeded 0.03% in 5000 BCE, 0.3% in 1400, and 3% shortly after 1900.\n\n\n[12.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref12_f4g2euz)We again choose the axes so that a power law is a straight line. The y-axis is logarithmic. On the x-axis, years are spaced according to the formula *log(2050 – year)*. A straight line fit indicates that growth increased by the same proportion (e.g. 
doubling) during each of the following periods: 1250 → 1650, 1650 → 1850, 1850 → 1950, 1950 → 2000.\n\n\n[13.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref13_fulntur)I discuss the *ignorance story* more in an [appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixD).\n\n\n[14.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref14_mq8n729)See Figure S7 in the [appendix](https://www.pnas.org/content/pnas/suppl/2018/05/09/1713628115.DCSupplemental/pnas.1713628115.sapp.pdf).\n\n\n[15.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref15_od7hozl)See Figure S7 in the [appendix](https://www.pnas.org/content/pnas/suppl/2018/05/09/1713628115.DCSupplemental/pnas.1713628115.sapp.pdf).\n\n\n[16.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref16_kwpeuel)See more detail on the expert survey in this appendix.\n\n\n[17.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref17_3s74scs)From p. 13 of the [appendix](https://www.pnas.org/content/pnas/suppl/2018/05/09/1713628115.DCSupplemental/pnas.1713628115.sapp.pdf):\n\n\n\n> A rating of 1 indicates little expertise, a rating of 5 indicates the expertise of someone who has studied the subject but is not a specialist, and a rating of 10 indicates expertise that is among the leading experts. The mean self-reported level of expertise is 5.99 and the median is 6.\n> \n> \n\n\n[18.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref18_mnpxa3b)This graph, and the ones that follow, are taken from the [blog post](https://www.openphilanthropy.org/research/modeling-the-human-trajectory/#the-human-past-coarsely-quantified) of my colleague, [David Roodman](https://www.openphilanthropy.org/about/team/david-roodman/).\n\n\n[19.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref19_hcb7ii4)The term ‘endogenous’ can be used to describe individual inputs (as I use it here), or growth theories as a whole.\n\n\n[20.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref20_d9eutyd)The standard reinvestment equation is *dK/dt* = *s* × *Y* – δ × *K*. In sophisticated models the fraction s of output that is reinvested may depend on numerous further factors.\n\n\n[21.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref21_hfoylil)The most highly cited papers, and those used in climate change forecasts, tended to be exogenous. For example, the following papers all assume technology grows exponentially: [Foure (2012)](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2004332), [Johansson (2013)](https://www.oecd-ilibrary.org/docserver/5k4ddxpr2fmr-en.pdf?expires=1589799470&id=id&accname=guest&checksum=54E2A042C209AEE9BE4D886BBA2E139E), [Crespo (2017)](http://pure.iiasa.ac.at/id/eprint/11290/1/GEC_Revision_3rd_Round.pdf), [Leimbach (2016)](https://www.sciencedirect.com/science/article/pii/S0959378015000242?via%3Dihub), and [Riahi (2017)](https://www.sciencedirect.com/science/article/pii/S0959378016300681). The DICE climate change model of [Nordhaus and Sztorc (2013)](http://www.econ.yale.edu/~nordhaus/homepage/homepage/documents/DICE_Manual_100413r1.pdf) assumes technology follows a logistic curve, growing ever more slowly over time. 
[Kruse-Andersen (2017)](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2947528) fits endogenous growth models to historical data and projects them out to 2100, predicting slowing growth.\n\n\n[22.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref22_j7nrukj)Imagine adding more and more machines, holding fixed the number of workers and the level of technology. Eventually, all the workers would have their hands full running the machines that already exist, and more machines would increase output by very little.\n\n\n[23.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref23_s0ctbmq)The long-run growth rate of output (GDP) is the sum of the growth rates of the exogenous inputs, labor and technology. The long-run growth rate of GDP/capita is the growth rate of technology, because (in the long-run) growth of labor doesn’t affect GDP/capita. (This is because GDP/capita = (output / labor), and long-run growth of labor increases both the numerator and the denominator by the same amount.)\n\n\n[24.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref24_hc50owa)I discuss semi-endogenous models in [this subsection](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#JKSModels) of Appendix B.\n\n\n[25.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref25_ta5loqb)Why do semi-endogenous growth models have this implication? They assume that *ideas are getting harder to find*, where each ‘idea’ is understood as increasing people’s incomes by a fixed %. This assumption is used to explain why exponentially growing research effort has led to a constant flow of ideas. But if research effort stops growing, and is instead constant, then this assumption implies that we will find fewer new ideas each year. As a result growth in GDP/capita will slow.\n\n\nThe case for sub-exponential growth is strengthened by noting that the fraction of people doing R&D has grown rapidly over the past 100 years, and this growth cannot be maintained indefinitely. To sustain the historical rate of GDP/capita growth, semi-endogenous models imply we’d have to maintain the historical growth rates of both the population *and* the fraction of people doing R&D.\n\n\n[26.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref26_k108lue)Slower future growth is also suggested by the slowing growth over the past ~20 years, some of the arguments in Vollrath’s recent book *[Fully Grown](https://www.amazon.com/Fully-Grown-Stagnant-Economy-Success/dp/022666600X)*, and of course the arguments in Robert Gordon’s book *[The Rise and Fall of American Growth](https://www.amazon.co.uk/Rise-Fall-American-Growth-Standard/dp/153661825X)*.\n\n\n[27.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref27_1m65cg8)See examples of market concentration [here](https://en.wikipedia.org/wiki/Market_concentration#Real_World_Examples) and an analysis [here](https://www.aeaweb.org/articles?id=10.1257/aer.p20171102).\n\n\n[28.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref28_f7mpr72)Galor and Weil (2000) suggest an alternative equilibration mechanism. In their model, faster growth reduces the fertility rate, which in turn slows growth. 
Conversely, slower growth boosts the fertility rate, which in turn speeds up growth. The model implies the population level (or growth rate) will remain constant, holding the growth rate of technology constant. However, I wouldn’t trust the predictions of this model out to 2100, as the UN forecasts population growth to slow.\n\n\n[29.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref29_mjj02y7)I discuss this model in more detail [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#MightMarketDynamics).\n\n\n[30.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref30_se9dn2o)More precisely, I think it’s ~75% likely that the recent exponential growth of GDP/capita is ultimately explained by the exponential growth of human population. Semi-endogenous models embody this claim and highlight the importance of targeted R&D to growth, but [other models](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#Arrow1962) embody the claim and highlight the importance of learning by doing.\n\n\n[31.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref31_4p93heq)See for example [Lee (1988)](https://www.tandfonline.com/doi/abs/10.1080/08898488809525278?journalCode=gmps20), [Kremer (1993)](http://faculty.econ.ucdavis.edu/faculty/gclark/210a/readings/kremer1993.pdf) and [Roodman (2020)](https://www.openphilanthropy.org/wp-content/uploads/Modeling-the-human-trajectory.pdf). Roodman (2020) reviews other *long-run explosive* models.\n\n\n[32.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref32_e8ezd9y)They often have a ‘fixed factor’, land, that is exogenous. They’re called ‘fully endogenous’ because all the non-fixed factors are endogenous.\n\n\n[33.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref33_bothkg6) More precisely, let *X* be the amount of an input and *Y* be the quantity of output. *X* is accumulable just if *dX/dt* is an increasing function of *Y*. One way to think about this is that accumulable inputs are bottlenecked by the amount of output.\n\n\nA simple example is the equation for capital reinvestment: *dK/dt* = *s* × *Y* – δ × *K*. Other examples can be found in Lee (1988): *dL/dt* = *L* × α × [log(*Y*/*L*) – *constant*], *dA/dt* = *constant* × *A* × log((*Y*/*A*)^*m*).\n\n\n[34.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref34_urps57m)Increases in capital are typically modeled as resulting from the direct investment of a fraction *s_K* of output: *dK* = *s_K* × *Y*. In Roodman’s model, the mechanism for increasing population is identical: *dP* = *s_P* × *Y*. In Lee (1988) the mechanism is slightly different; we can roughly represent it as *dP* = *s_P* × ln(*Y*). In Kremer (1993) Section 1, all output is converted directly into population; we can roughly represent this as *dP* = (*conversion factor*) × *dY*.\n\n\n[35.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref35_ya06i0h)Note: explosive models may contain many relationships *not* displayed in the diagram. 
The diagram is just designed to highlight some of the important features.\n\n\n[36.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref36_ei0ct1j)In Cobb-Douglas models, this assumption corresponds to the claim that the sum of the exponents of accumulable inputs exceeds 1.\n\n\n[37.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref37_xio1261)For more on this, see the introduction of [Jones (2005)](https://web.stanford.edu/~chadj/JonesHandbook2005.pdf) or [Romer (1990)](http://web.stanford.edu/~klenow/Romer_1990.pdf).\n\n\n[38.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref38_e8zoph5)Why do increasing returns naturally lead to super-exponential growth? Let’s explain the intuition using a simple example where output *Y* is just produced by capital *K*. *Y* = *K*^α, *dK/dt* = *s* × *Y*. Increasing returns means that α > 1. If so, then by the time *K* doubles, *Y* *more than* doubles, so *dK/dt* more than doubles. This means the growth rate of *K*, (*dK/dt*)/*K*, increases. In other words, the growth rate of *K* increases when *K* doubles. More generally, increasing returns make it possible for inputs’ growth rates to increase when the system doubles in size. (A short numerical sketch of this intuition appears after these footnotes.)\n\n\n[39.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref39_sbk7wmg)Appendix C supports this claim by analyzing the precise conditions for growth in many long-run explosive models – see [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#LongRun).\n\n\n[40.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref40_ywr9sui)This statement is an oversimplification in relation to Roodman’s univariate model. That model does not model population explicitly at all – its sole variable refers to GWP. However, the model is the univariate analogue of a model in which all inputs are accumulable, including population.\n\n\nTechnically, the univariate model can approximate a multivariate model where population isn’t accumulable *if* increasing returns to the other accumulable inputs are powerful enough to drive super-exponential growth. However, this doesn’t happen for realistic parameter values ([more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#CanDiminishingReturns)).\n\n\n[41.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref41_zfkcphx)See data on UK, France, Netherlands and US in [this graph](https://www.ncbi.nlm.nih.gov/core/lw/2.0/html/tileshop_pmc/tileshop_pmc_inline.html?title=Click%20on%20image%20to%20zoom&p=PMC3&id=4116081_nihms-526936-f0001.jpg) from [Galor (2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4116081/).\n\n\n[42.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref42_zn8063e)If population were accumulable then, holding all else constant, increasing GDP should *increase* future population. 
But since ~1880 increases in GDP, holding population constant, have *decreased* population growth.\n\n\n[43.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref43_mc942n7)When labor isn’t accumulable, the returns to accumulable inputs are not large enough to overcome diminishing returns to R&D, with realistic parameter values (see [more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#CanDiminishingReturns)).\n\n\n[44.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref44_0sps1do)For example, see [Jones (2001)](https://web.stanford.edu/~chadj/bc400.pdf), [Galor and Weil (2000)](https://www.researchgate.net/publication/4733968_Population_Technology_and_Growth_From_Malthusian_Stagnation_to_the_Demographic_Transition_and_Beyond), and Part 3 of [Kremer (1993)](http://faculty.econ.ucdavis.edu/faculty/gclark/210a/readings/kremer1993.pdf).\n\n\n[45.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref45_d8si24t)In Galor and Weil (2000), there are strictly speaking only constant returns to accumulable factors. The model, however, is still characterized by increasing returns because once the population has doubled, the growth rates of technology and labor both increase. In addition, increasing human capital driven by education investment plays an important part in generating super-exponential growth around the industrial revolution.\n\n\n[46.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref46_riy07ku)There is a slight difference in emphasis in [Jones (2001)](https://web.stanford.edu/~chadj/bc400.pdf) and [Galor and Weil (2000)](https://www.researchgate.net/publication/4733968_Population_Technology_and_Growth_From_Malthusian_Stagnation_to_the_Demographic_Transition_and_Beyond). Their feedback loop is more naturally described as: more ideas → more output/capita → more people → more ideas… They specify a relationship between output/capita and fertility directly, rather than between output and population increases. As mentioned above, Galor and Weil (2000) emphasizes educational investment boosting growth around the industrial revolution: more ideas → more output/capita → more *and better educated* people → more ideas…\n\n\n[47.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref47_di9mc93)What are these mechanisms? In Jones (2001), fertility decreases with GDP/capita and so the demographic transition occurs when people become sufficiently rich. In Galor and Weil (2000), fertility decreases with the growth rate of technology and so the demographic transition occurs once the growth rate becomes sufficiently high.\n\n\n[48.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref48_91rxui2)In particular, Galor and Weil (2000) approximates the [Romer model](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#RGHAHModels) and Jones (2001) approximates a [semi-endogenous growth model](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#JKSModels). 
As discussed above, my view is that semi-endogenous models are more plausible and that they imply 21st century growth will be sub-exponential.\n\n\n[49.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref49_bo9fcrf)I explain the dynamics of Jones (2001) and Galor and Weil (2000) in [this technical appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI).\n\n\n[50.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref50_777j3i1)Increasing returns leads to a smooth curve of super-exponential growth, where growth increases very slowly at first and then more and more quickly over time. There are no structural breaks. I say ‘fairly’ smooth because increasing return models may allow for random influences on growth, as in Roodman (2020).\n\n\n[51.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref51_1pnhp2x)Galor and Weil (2000), Jones (2001), Kremer (1993), and Lee (1988).\n\n\n[52.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref52_c58y278)For example, [Hansen and Prescott (2002)](https://www.jstor.org/stable/3083308?casa_token=lwxmDZzncTgAAAAA:Kv9Tpwl1_ZyXxX8QQsInbOWEpNtyvFET8JZPaY9j1erV5C9IOqJYF7DYkC1AjBgFRQfoYa3XvrKshnKAinI7NW6FtzhY-BzuyTVo7ClwDESn4FZwf8iY&seq=1) discuss a model in which a phase transition increases growth. Initially the economy faces diminishing returns to labor due to the fixed factor land. But once exogenously growing technology is high enough, it becomes profitable for firms to use less land-intensive production processes; this phase transition increases growth. Other examples include [Goodfriend and McDermott (1995)](https://econpapers.repec.org/article/aeaaecrev/v_3a85_3ay_3a1995_3ai_3a1_3ap_3a116-33.htm), [Lucas (1998)](http://www.econ.hku.hk/~cwyuen/seminar/papers/Lucas%20(Kuznets%20Lectures).pdf), [Stokey (2001)](https://www.sciencedirect.com/science/article/abs/pii/S0167223101800038), [Tamura (2002)](https://isiarticles.com/bundles/Article/pre/pdf/18414.pdf) and [Hanson (2000)](https://www.researchgate.net/profile/Robin_Hanson2/publication/228557195_Long-term_growth_as_a_sequence_of_exponential_modes/links/0046351fac48cd6ca3000000/Long-term-growth-as-a-sequence-of-exponential-modes.pdf).\n\n\n[53.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref53_owigcbw)Note, Galor and Weil (2000) and Jones (2001) feature both increasing returns to accumulable inputs *and* a structural change around the industrial revolution that speeds up technological progress. 
In Jones (2001) there’s an increase in the fraction of the population doing R&D; in Galor and Weil (2000) there’s a shift towards more education.\n\n\n[54.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref54_nubueri)I discuss the step-change view in more detail [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#TheStepChangeLends).\n\n\n[55.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref55_0i68hil)I discuss the uncertainty of the ancient data points more [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#TheAncientData).\n\n\n[56.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref56_oykf9mq)Ben Garfinkel explicitly proposes a slow step-change view [here](https://forum.effectivealtruism.org/posts/CWFn9qAKsRibpCGq8/does-economic-history-point-toward-a-singularity?commentId=3D8hpEFbYmEGA8i5P). Such a view should probably allow for another step-change increase in growth around 10,000 BCE; growth seems to have increased in this period, plausibly due to the [Neolithic Revolution](https://en.wikipedia.org/wiki/Neolithic_Revolution#). This strengthens the case for this view being open to another step-change occurring in the future.\n\n\n[57.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref57_9r49c0f)There may be other plausible explanations for some of these rankings. For example, Eurasia seems to have started with a better supply of domesticable plants and animals than Australia; this factor alone may have been enough to cause Australia to discover farming later. Early population levels may also correlate with biodiversity, which could help with the early stages of technological development. Thanks to Ben Garfinkel for making the point.\n\n\n[58.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref58_kgmbpko)I was not able to spend much time investigating the relative importance of increasing returns vs other mechanisms in explaining long run growth; we hope to do more work on this in the future. Ben Garfinkel [argues](https://docs.google.com/document/d/1wcEPEb2mnZ9mtGlkv8lEtScUw1k_dI0akbuu1ltb0gM/edit#heading=h.fdrz915e3wk4) that new ideas were not the central driver of growth before the industrial revolution, and [suggests](https://docs.google.com/document/d/1wcEPEb2mnZ9mtGlkv8lEtScUw1k_dI0akbuu1ltb0gM/edit#heading=h.s6xrl5synz9n) that population data doesn’t show much evidence of increasing growth rates in the period 5,000 BCE to 1500 CE. One possibility Ben mentions is that the increasing returns mechanism became the central driver of growth around the time of the industrial revolution, when the population and research effort became large enough for new ideas to become a dominant driver of growth.\n\n\n[59.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref59_8m5d36u)Technological advances other than AI could potentially make population accumulable. Examples include whole-brain emulations, artificial wombs, and genetic engineering. Behavioral changes could also make population accumulable, e.g. if everyone tried to have as many kids as biologically possible. 
This report focuses on advanced AI because we believe it is more likely to occur this century than these alternatives, and because it ties in with Open Philanthropy’s focus area of risks from advanced AI.\n\n\n[60.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref60_5208hb3)Again, if diminishing marginal returns to technology R&D are steep enough, this could prevent super-exponential growth. Plausible parameter values suggest this would not happen if capital can substitute for labor in all jobs.\n\n\n[61.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref61_718krsu)AI robots are a form of capital, so it’s natural to use the same reinvestment equation as for capital: *dR/dt* = *s* × *Y* – δ × *R*.\n\n\n[62.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref62_jrifbxj)I discuss these models in Appendix C – see [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixC).\n\n\n[63.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref63_iyj6gae)The hardware-software model in [Growiec (2020)](https://econpapers.repec.org/paper/sghkaewps/2019042.htm) offers a unified model for explaining pre-modern growth, the industrial revolution, and what he calls the ‘digital revolution’ that has only just started. Capital and labor are replaced by hardware (‘brawn’) and software (‘brains’) as the fundamental inputs to production. In the digital revolution advanced AI decouples overall software supply from the size of the human population; this makes software accumulable and leads to an increase in growth.\n\n\n[64.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref64_iaatm75)Intuitively, human workers are bottlenecking growth; advanced AI would release that bottleneck and increase growth. By analogy, the fixed supply of land may have bottlenecked growth in ancient times; the industrial revolution may have released that bottleneck and increased growth. (During the industrial revolution, we moved over to less land-intensive production processes.)\n\n\n[65.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref65_x0g014m)The papers I’ve studied most closely are Nordhaus (2021), Aghion et al. (2017), and Hanson (2001), and the AI growth literature review [Trammell and Korinek (2021)](https://docs.google.com/document/d/1XCn4Pk44evZEjbmPD-zKlj26Z_bhf0ABeWVID_4L5sg/edit).\n\n\n[66.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref66_qeu5f04)What is the difference between this condition and that of perfect substitutability? The key parameter is the elasticity of substitution, σ. σ > 1 is a weaker claim than perfect substitution, which corresponds to σ = ∞. I like to think about the difference as follows. Imagine replacing human workers with capital one by one. When σ = ∞, the amount of capital needed to replace each worker is fixed. It’s like we replace each worker with an AI robot at fixed cost. But when 1 < σ < ∞, the amount of capital needed to replace each worker increases as fewer workers remain. For example, one unit of capital replaces the first worker, two units replace the second worker, three units replace the third, etc. It’s as if each worker does a different role, and the initial roles are cheaper to automate than the latter ones. 
For both 1 < σ < ∞ and σ = ∞, the growth rate of output ultimately approaches the growth rate of capital. What about σ < 1? In this case output cannot exceed a fixed ceiling no matter how much capital you have, holding labor constant. Intuitively, *no* amount of capital can fully replace a human worker.\n\n\n[67.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref67_m6jaw99)Three clarifications. Firstly, the rate of task automation would have to *increase* from its current value to boost growth. Secondly, to increase the rate of exponential growth we must automate a constant fraction of non-automated tasks each year (e.g. the total fraction of automated tasks goes 0%, 50%, 75%, 87.5%,… – we automate half the non-automated tasks each year). Thirdly, super-exponential growth is possible if we automate an *increasing fraction* of non-automated tasks each year (e.g. the total fraction of automated tasks goes 0%, 50%, 80%, 95%,… – we automate 1/2 the tasks in the first year, 2/3 in the second year, 3/4 in the third year). For super-exponential growth there must also be some capital-augmenting technological progress in the background. (A short numerical sketch of these two automation schedules appears after these footnotes.)\n\n\n[68.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref68_zl227tb)I explain my thinking about what AI would be sufficient for explosive growth in more detail [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#WhatLevelOfAI).\n\n\n[69.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref69_b18lkdr)I analyze the conditions for super-exponential growth in semi-endogenous models [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#EndogenousGrowth), and the conditions in exogenous models [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#ExogenousGrowthModels).\n\n\n[70.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref70_ye86yau) I personally find these mechanisms more speculative than the one I’ve focused on.\n\n\n[71.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref71_hto14ly)[Grace, Katja (2017)](https://arxiv.org/pdf/1705.08807.pdf).\n\n\n[72.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref72_66hmpaz)I discuss the framing issues more in a footnote [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#what-about-diminishing).\n\n\n[73.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref73_ty4q4pk)[Agrawal et al. (2019)](https://www.nber.org/system/files/chapters/c14024/c14024.pdf) discuss a mechanism where AI assistance in research raises the returns to human research efforts.\n\n\n[74.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref74_pfgc4qn)Appendix A also discusses two other objections from Aghion et al. 
(2017): ‘[search limits](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#what-about-diminishing)’ and ‘[Baumol tasks](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#Baumol-tasks)’.\n\n\n[75.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref75_mcrntzb)For an example of an objection in this vein, see Point 9 in [this blog post](https://www.econlib.org/archives/2016/06/whats_wrong_in.html) by Bryan Caplan.\n\n\n[76.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref76_jaslj14)Between 1979 and 2018, Chinese GDP grew by an average of 9.5% per year ([source](https://fas.org/sgp/crs/row/RL33534.pdf)).\n\n\n[77.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref77_tx6c81k)In his review of this report, Anton Korinek raises the intriguing possibility that although the *human* economy does not grow at 30% per year, a virtual *AI* economy with which the human economy interacts does grow at 30%.\n\n\n[78.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref78_uxclzxb)[Bacteria populations can double in size once every 10 minutes under ideal conditions](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6015860/#:~:text=They%20are%20known%20to%20have,just%209.8%20min%20%5B55%5D.); [there’s evidence that phytoplankton populations can double once every day](https://academic.oup.com/plankt/article/39/1/13/2528006).\n\n\n[79.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref79_9eq8jxz)For example, see Section 4 of [this review](https://bcec.edu.au/assets/2019/06/BCEC-Working-Papers-19_02-Steven-Bond-Smith-The-decades-long-dispute-over-scale-effects.pdf).\n\n\n[80.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref80_ge536ur)I explain my overall probabilities and how I reached them in [Appendix G](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixG).\n\n\n[81.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref81_nyskxuc)By this I mean ignoring theoretical considerations like ‘What explains the rise in growth rates?’ and ‘Is population accumulable?’, and only taking into account the historical growth data.\n\n\n[82.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref82_t11z8hr)[Upchurch (2018)](https://onlinelibrary.wiley.com/doi/abs/10.1111/ntwe.12124) has a similar thesis to Nordhaus (2021), but I haven’t investigated its claims in depth.\n\n\n[83.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref83_bwy25za)One of these – Test 6 – specifically relates to the share of information capital as a proportion of total capital. 
Two of the other tests – Tests 3 and 4 – Nordhaus primarily applies to capital stock as a whole, but he also tests with data specific to information capital.\n\n\n[84.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref84_np3n2wm)Test 6 naively suggests that explosive growth will happen in > 100 years; Test 4 with IT-specific data suggests that explosive growth will happen but Nordhaus doesn’t calculate the expected date; Test 3 with IT-specific data suggests explosive growth won’t happen.\n\n\n[85.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref85_9y7bg9w)[Niechoj (2018)](https://www.boeckler.de/pdf/v_2018_10_27_niechoj.pdf) has a similar thesis.\n\n\n[86.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref86_okrtkd9)\n\n\n\n> Namely, there is no inherent inconsistency between forward-looking technological optimism and backward-looking disappointment. Both can simultaneously exist. Indeed, there are good conceptual reasons to expect them to simultaneously exist when the economy undergoes the kind of restructuring associated with transformative technologies. In essence, the forecasters of future company wealth and the measurers of historical economic performance show the greatest disagreement during times of technological change. In this paper we argue and present some evidence that the economy is in such a period now… Implicit or explicit in the pessimistic view of the future is that the recent slowdown in productivity growth portends slower productivity growth in the future. We begin by establishing one of the most basic elements of the story: that slow productivity growth today does not rule out faster productivity growth in the future. In fact, the evidence is clear that it is barely predictive at all.\n> \n> \n\n\n[87.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref87_ocerc7s)Indeed, Romer (1986), the first paper in the ‘endogenous growth’ wave, starts by looking at Maddison data over centuries.\n\n\n[88.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref88_sofn71p)This effect is closely related to [Baumol’s cost disease](https://en.wikipedia.org/wiki/Baumol%27s_cost_disease). Baumol found that sectors with high productivity growth often have a declining share of GDP. As a result, sectors with lower productivity growth are increasingly important to GDP and the GDP growth rate is dominated by these slow-growing sectors.\n\n\n[89.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref89_14rzyj2)Technically, this means that the elasticity of substitution between tasks is below one.\n\n\n[90.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref90_mkuuwpz)As output of automated tasks increases, the percentage of GDP spent on completing them falls (as the % spend on agriculture has fallen).\n\n\n[91.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref91_ijbq4rq)In this scenario, the model implies that growth cannot exceed *s* × *A* – δ. 
The reinvestment rate *s* is bounded below 1 and δ is constant, and so super-exponential growth can only be sustained if *A*, the level of technology, grows.\n\n\n[92.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref92_rc5k5y3)For growth to *permanently* increase in this model, we must automate a constant fraction of non-automated tasks each year. If some fixed fraction of tasks can never be automated, this process cannot continue indefinitely.\n\n\n[93.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref93_yo6j35o)If tasks are automated faster, peak growth will be higher.\n\n\n[94.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref94_sqxcaep)The speed of capital accumulation depends on the following equation: *dK/dt* = *s* × *A* × *F*(*K*, *L*) – δ × *K*, where *s* is the investment rate and *A* is the level of technology. It’s not possible to sustain faster output growth than *s* × *A* – δ.\n\n\n[95.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref95_wqg1yrf)In the language of the model, this corresponds to the fraction of tasks that we cannot automate.\n\n\n[96.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref96_s60c9gl)If we are initially very productive at the non-automated task compared to the other tasks, it will be longer before it becomes a bottleneck.\n\n\n[97.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref97_bf3mlwd)Thanks to Trammell and Korinek (2021) for this insight.\n\n\n[98.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref98_t76sn36)See their ‘Baumol tasks’ objection.\n\n\n[99.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref99_mp8irzn)In these models, there are two main factors determining whether growth is super-exponential. Firstly, *the importance of accumulable inputs*. By an input’s ‘importance’ I mean its output share; this is given by the input’s exponent in Cobb-Douglas models. This first factor depends on whether there is a fixed factor, and whether capital can substitute for labor. Secondly, the *diminishing returns to R&D*.\n\n\n[100.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref100_ssyo3r4)[Agrawal et al. (2019)](https://www.nber.org/system/files/chapters/c14024/c14024.pdf) discuss a dynamic where AI assistance in research raises φ.\n\n\n[101.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref101_nezemj7)You get the same qualitative result if *Y* is a CES production function of labor and capital *F*(*L*, *K*) with an elasticity of substitution less than 1: *Y* = *A* × *F*(*L*, *K*).\n\n\n[102.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref102_46m3ngx)Aghion et al. (2017) considers a model where goods production is automated and technological progress is exogenous and finds that the growth rate increases without limit. 
Further, if both goods production and ideas production are fully automated – *Y* = *AK* and *dA/dt* = *A*^φ × *K* – then the growth rate increases without limit regardless of the value of φ.\n\n\n[103.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref103_bjf4okl)It could be objected that long before 3% growth we had seen that after plagues or access to new lands human populations could grow rapidly given abundant resources. This could have enabled us to speculate that growth as high as 3% might be possible. But similarly, by looking at the growth of mice and bacteria we can see that a system can in principle grow much faster than 30% per year, and we could use this observed growth to speculate that 30% growth might be possible.\n\n\n[104.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref104_e3nwlbd)As Bryan Caplan seems to do [here](https://www.econlib.org/archives/2016/06/the_age_of_em_r.html).\n\n\n[105.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref105_aazje1x)[Solow (1994)](https://www.jstor.org/preview-page/10.2307/2138150?seq=1) p. 50.\n\n\n[106.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref106_9tkk411)For example, Newtonian mechanics is accurate only when objects are moving much slower than the speed of light, Newton’s theory of gravity is accurate only when objects’ masses are sufficiently small, and protons and neutrons are not predictively useful concepts in very high energy conditions (under such conditions particle-like objects of this sort do not emerge from quantum field theory).\n\n\n[107.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref107_y5yw90f)There is a large literature on circumstances in which actual human behavior differs from the predictions of economics’ rational agent model. Nonetheless, the rational agent model is fairly accurate in many situations.\n\n\n[108.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref108_gp0eqyk)See [Roodman (2020)](https://www.openphilanthropy.org/wp-content/uploads/Modeling-the-human-trajectory.pdf) Table 4 – p. 42.\n\n\n[109.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref109_mt8gre9)Intuitively, this is because the post-1950 slowdown in GWP growth has more influence over the model’s predictions for the shorter data sets.\n\n\n[110.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref110_xzf0phe)The mechanism is also used by Jones (2001) and Galor and Weil (2000). These theories don’t predict explosive growth because they model the demographic transition (see [more](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI)).\n\n\n[111.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref111_yz9qab6)I feel that both the length of the slowdown in calendar time and the fractional increase in GWP during the slowdown are relevant. The first is relevant because slowdowns are caused by dynamics that play out over roughly fixed amounts of calendar time, like pandemics and human rulers. 
The second is relevant because (to oversimplify) the endogenous growth models we’ve focused on suggest that when GWP doubles, its growth should increase by some percentage (in Roodman’s model this is about 46%). So if growth stays constant (or decreases) during a period, the model is surprised to the extent that GWP increases over that period. To the extent that slowdowns are caused by unevenness in the technological landscape (see next section), we should measure their length by the amount of technological progress that is made during the slowdown. On this measure, the current slowdown is much longer than past slowdowns.\n\n\n[112.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref112_l2iexki)It finds that 20 – 40% of growth in output per person can be explained by improved talent allocation.\n\n\n[113.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref113_zqwzo5w)The ratio of [English GDP](https://ourworldindata.org/grapher/total-gdp-in-the-uk-since-1270?yScale=log&time=1800..) between 2016 and 1900 is roughly 10. The ratio of [per capita US GDP](https://ourworldindata.org/economic-growth#growth-at-the-technological-frontier-and-catch-up-growth) between 1870 and 2016 is about 14.\n\n\n[114.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref114_0r6l4dt)See data [here](https://www.nber.org/papers/w23782).\n\n\n[115.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref115_ce792co)For GWP growth to be smooth, we would need the effect of catch-up growth on GWP to exactly cancel the non-smooth progress of the frontier.\n\n\n[116.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref116_4a4742m)These plots are generated by the final section of [this python notebook](https://colab.research.google.com/drive/11oAdADbcd6GCslV0P5ESubqghaQlpyh2?usp=sharing). (If the link doesn’t work, the colab file can be found in [this folder](https://drive.google.com/drive/folders/1dzO1eZ8xSeePOntXOGNhSK5qqsgteHSp).)\n\n\n[117.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref117_ww9uc66)See my best guess about what would count as ‘highly substitutable’ [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#WhatLevelOfAI).\n\n\n[118.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref118_7ozusj6)A data point from when GWP was 1/2^*n* times its current value is discounted by a factor *d*^*n*, where *d* < 1. So the discount is not applied at a fixed rate per unit time.\n\n\n[119.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref119_lw93wiz)My preferred discount implies that, compared to a 2000 data point, a 1940 data point has weight 0.73, an 1820 data point has weight 0.53, and a 3000 BCE data point has weight 0.23.\n\n\n[120.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref120_mybj6mk)This discount rate may be an unhappy compromise. If output cannot easily be reinvested to increase the size of labor supply (as will be true by default unless we develop highly substitutable AI), this approach may still put too much weight on pre-modern data points when labor was accumulable. 
On the other hand, if AI systems mean that output *can* be easily reinvested to increase the generalized labor supply (= human labor + AI labor), then placing more weight on recent data points may be inappropriate as these are the data points for which labor *isn’t* accumulable.\n\n\n[121.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref121_xzwjmfk)See [here](https://en.wikipedia.org/wiki/Origin_of_language#:~:text=The%20results%20suggest%20that%20language,when%20modern%20Homo%20sapiens%20evolved).\n\n\n[122.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref122_cyxx4ug)See data on frontier population growth [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI).\n\n\n[123.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref123_rtio41p)It would be transitional, for example, if it was a temporary deviation from the historical pattern of super-exponential growth, or a transitional period between pre-1900 super-exponential growth and post-2000 sub-exponential growth.\n\n\n[124.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref124_2ngk98s)For example, when output per capita becomes large people may choose to have fewer children. This would reduce the percentage increase of labor in subsequent years.\n\n\n[125.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref125_s9hamwa)One reason they might cancel exactly would be if the production function displayed constant returns to scale. If this were the case, and the difficulty of making absolute improvements to each factor did not change as the factor increased (a fairly natural assumption), then there would be exponential growth. But production functions only express constant returns to scale when technology is excluded; when technology is endogenous there are typically increasing returns to scale in the total stock of factors.\n\n\n[126.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref126_dlyfe4f)Thanks to Phil Trammell for suggesting this distinction.\n\n\n[127.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref127_f2q95n8)More precisely, if we held the level of technology constant then accumulation alone would not deliver sustained growth.\n\n\n[128.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref128_pc6w8a0)An alternative version of the *AK* model might be *Y* = *F*(*K*, *BL*), where the arguments of *F* are gross complements (elasticity of substitution less than one). If *B* = (*K*/*L*)^γ, then γ > 1 would lead to super-exponential growth for a while, and then exponential growth. We’d reach exponential growth because the second argument would grow more quickly than the first, so the function would approximate *Y* = *K*. At this point, however, the capital share would be at 1, so this model is not realistic as a description of the modern regime of exponential growth.\n\n\n[129.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref129_cx4g5ad)This mechanism plausibly faces diminishing returns: if you keep doubling the number of machines overseen by each worker they must spend less time per machine and reduce their output per machine. 
If this weren’t the case, you could leave one worker in charge of all the machines in a factory (or indeed the world!).\n\n\n[130.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref130_mfh95ni)*Perspectives on Growth Theory* (Journal of Economic Perspectives, 1994).\n\n\n[131.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref131_fd8blow)This is because it will increase the reinvestment in *K*: *g_K* = *sY*/*K* = *sA*.\n\n\n[132.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref132_snfwda6)[Cesaratto (2008)](https://www.boeckler.de/pdf/v_2008_10_31_cesaratto.pdf) provides a useful discussion of various AK models and their interrelations.\n\n\n[133.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref133_17sdu4c)If γ = 1, then population growth will lead the growth rate of output to increase without limit. γ = 1 implies *Y* = *A* × *K* × *L*^(1 – α). Therefore *g_Y* = *g_K* + (1 – α) × *g_L*. The reinvestment equation implies that in a steady state *g_Y* = *g_K*. Therefore in the steady state growth is infinite.\n\n\n[134.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref134_d7sisly)The assumption of constant returns to capital and labor in combination, embodied by the CES production function, is reasonable when we only consider direct effects. If you double the number of workers and the factories and machines at their disposal, you’ll produce twice as much. But once you account for spillover effects from capital accumulation, as a plausible theory without a distinct representation of technology must do, there is no particular reason to think there should be exactly constant returns.\n\n\n[135.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref135_98s7jyn)I borrow these interpretations from [Carroll (2020)](http://www.econ2.jhu.edu/people/ccarroll/Public/LectureNotes/Growth/LucasGrowth.pdf).\n\n\n[136.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref136_6409cbo)This is probably the intended interpretation, as Lucas’ *l* is chosen via an individual optimization decision.\n\n\n[137.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref137_u91etkm)This interpretation is argued for in [Mankiw (1995)](https://www.jstor.org/stable/2534576?seq=1).\n\n\n[138.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref138_mdqjcg6)This mirrors the criticism of [Romer (1990)](http://web.stanford.edu/~klenow/Romer_1990.pdf) made in [Jones (1995)](https://www.jstor.org/stable/2138581?seq=2#metadata_info_tab_contents).\n\n\n[139.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref139_4y98lb5)Population growth has slowed somewhat, but I suggest that this isn’t strong evidence against semi-endogenous models.\n\n\n[140.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref140_8it8m3f)In addition, the proportion of the workforce engaged in R&D increased exponentially during the 20th century. 
The number of researchers is what matters for knowledge production.\n\n\n[141.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref141_yqxuh8l)See data on frontier population growth [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI).\n\n\n[142.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref142_c4gzaml)Some papers try to empirically distinguish between *J* / *K* / *S* models and *Y* / *GH* / *AH* models, but I think this is a very difficult task. Such attempts often give conflicting results (e.g. see Section 4 of [this review](https://bcec.edu.au/assets/2019/06/BCEC-Working-Papers-19_02-Steven-Bond-Smith-The-decades-long-dispute-over-scale-effects.pdf)). This may be because a number of messy empirical factors make testing very difficult: unknown time lags between R&D and subsequent TFP growth, significant factors other than targeted R&D that influence TFP growth, the possibility of a factor influencing both R&D effort and subsequent TFP growth, and somewhat arbitrary choices about how to define the inputs to R&D efforts (this is especially true for *Y* / *GH* / *AH* models where we must calculate R&D effort *per product line*).\n\n\n[143.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref143_8373dr0)There are of course possible mechanisms by which fertility could pick up again in the long run, which could lead to exponential growth once more.\n\n\n[144.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref144_h6r62lr)The paper has 27 [citations](https://scholar.google.com/scholar?cites=14565838151111926151&as_sdt=2005&sciodt=0,5&hl=en), none of which seem to dispute the proof. Growiec and his colleagues have published two [further](https://www.sciencedirect.com/science/article/abs/pii/S0164070410000480) [papers](https://www.sciencedirect.com/science/article/abs/pii/S0164070415000683) that generalize and reformulate these arguments.\n\n\n[145.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref145_xhnalrt)For a striking example along these lines consider the thermostat equation *dY/dt* = *k – Y*. This equation says that the value of *Y* will tend towards *k*. Although it seems stable, it has a knife-edge according to Growiec’s theorem. We expand the initial equation to *dY/dt* = (*k* – *Y*) + φ × *Y*^2. The ‘knife-edge’ is that φ is exactly equal to 0. If it differs at all from this value, then a large enough initial value of *Y* will cause the system to explode, with *Y* going to infinity in finite time. This may be a knife-edge in the sense defined by Growiec (2007), but it is not problematic: there’s no motivation for the introduction of a term that can have such large effects for large *Y*, and even the altered system is robust if the initial value of *Y* is not too high. Perhaps there are theories predicting that long-run growth is exponential that have similarly unproblematic knife-edges.\n\n\n[146.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref146_0xh5pbh)A case that *does* seem knife-edge to me is Cobb-Douglas. It assumes that the elasticity of substitution is exactly 1; deviating from this assumption ever so slightly produces very different qualitative behavior. However, like the assumption of exponential growth, it has empirical support. 
So I still place weight on Cobb-Douglas models, just like I place weight on exponential GWP extrapolations.\n\n\n[147.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref147_h5i9hhs)This is a critical difference with standard growth models. Normally all endogenous factors positively reinforce each other, in that an increase in one factor would increase output and so increase investment in the other factors. But in this system there’s a negative feedback cycle: increases in *N* dampen returns to investment in *Z*.\n\n\n[148.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref148_e59fpdc)See Section III of [Jones (1999)](https://web.stanford.edu/~chadj/scaleffAERPP1999.pdf) for a brief introduction to Schumpeterian growth models and discussion of the knife-edge conditions they typically use to achieve constant exponential growth.\n\n\n[149.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref149_3yp0p3c)Examples from .\n\n\n[150.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref150_8diphtk)This objection interprets the ‘firms’ in the model as referring to *organizations* in the real world. Perhaps though they’re better interpreted as referring to *distinct products*. Even with this interpretation, it’s unclear to me whether the number of products is growing as fast as the model implies.\n\n\n[151.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref151_05s0grq)See [Autor et al. (2017)](https://www.aeaweb.org/articles?id=10.1257/aer.p20171102).\n\n\n[152.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref152_7psp2b7)I remove the input ‘human capital’, set the exponent on technology to 1, and set a number of constants to 0 – those controlling the effect of technological advance on reinvestment in non-technology inputs. (Roodman considers a similar simplification at the top of p. 12.)\n\n\n[153.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref153_92n9bf3)Note: φA has a different meaning to a similar parameter in semi-endogenous growth models. This is because Roodman assumes *Y* is the R&D input, whereas semi-endogenous growth models typically use *L* as the R&D input.\n\n\n[154.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref154_zr7jrio)Technically, these are the conditions under which there’s *either* super-exponential growth *or* the system decays towards 0. But if we assume positive growth, then they are the conditions for super-exponential growth. If we set the δs to 0, these would simply be conditions for super-exponential growth, with no possibility of decay. Derived from Equation 16 in Roodman (2020).\n\n\n[155.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref155_0gbcx1e)Roodman reruns their analysis with his model.\n\n\n[156.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref156_n5ow2p2)The version in Section 1 is simpler, so the conditions for explosion are less informative.
The version in Section 3 doesn’t predict explosive growth due to an additional mechanism corresponding to the demographic transition.\n\n\n[157.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref157_sfkndmc)Kremer (1993) uses 1/3 as a high-end estimate of land’s share of output, based on evidence from share-cropping contracts.\n\n\n[158.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref158_rir4sb0)In this system, the work producing super-exponential growth is done more by the dynamical equations describing how the inputs change, which directly state that the growth rate of inputs increases with the size of the system. The increasing returns in the production function are less important. This reflects a general truth. Super-exponential growth is produced by the production function *in combination* with the dynamical equations. In some models more work is done by the former, in others by the latter.\n\n\n[159.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref159_dpy66ed)People choose how to divide their time between three activities: producing output, doing research, and having children.\n\n\n[160.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref160_1w5lb12)Jones writes that:\n\n\n\n> In particular, under the crucial assumption of increasing returns to accumulable factors (θ > 0), the general pattern is for growth rates of both population and standards of living to first increase and then to decrease…\n> \n> \n\n\nMy condition rearranges his condition θ > 0.\n\n\n[161.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref161_bpnnk1k)He does not estimate φ from the data, but tries out different values and chooses the one that seems to give the best fit – see p. 22.\n\n\n[162.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref162_psl9ibn)Note, Bloom et al. (2020) use a knowledge production function where only labor is an input. There is no role for capital, as in this model. This might change the estimate of φ somewhat.\n\n\n[163.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref163_k87j1m9)Alternatively, if labor were automated, it would be satisfied.
The sum of the exponents on capital and labor is typically taken to be close to 1, and so > 0.75.\n\n\n[164.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref164_t070ka7)The capital share has risen by 5% in the last 20 years ([source](https://www.mckinsey.com/featured-insights/employment-and-growth/a-new-look-at-the-declining-labor-share-of-income-in-the-united-states)).\n\n\n[165.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref165_h4y5i9j)We’d approximate an *AK* model with constant *A* and growth driven by capital accumulation.\n\n\n[166.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref166_xhm48gc)I found the presentation in Trammell and Korinek (2021) [Section 3.3](https://docs.google.com/document/d/1XCn4Pk44evZEjbmPD-zKlj26Z_bhf0ABeWVID_4L5sg/edit#heading=h.afl86dom6vjx) helpful here.\n\n\n[167.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref167_ixm6tbg)You get slightly more moderate growth increases if you treat *A* as labor and capital augmenting (TFP), rather than just capital augmenting. You can also replace (*AK*)^α × *L*^β with *F*(*AK*, *L*)^(α + β) and get a similar qualitative result. Raising the elasticity of substitution above 1 causes the growth rate to increase.\n\n\n[168.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref168_i6ngtwt)Growth only increases if capital accumulation is fast enough. This caps growth below *s* × *A* – δ. The reinvestment rate *s* is bounded below 1 and δ is constant; so super-exponential growth can only be sustained if *A*, the level of technology, grows.\n\n\n[169.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref169_dwggk4c)This can only be sustained if there is technological progress in the background. See footnote two above.\n\n\n[170.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref170_yhk81fb)This only leads to explosive growth if there’s capital augmenting technology, or if the savings rate is large enough.\n\n\n[171.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref171_kwatddj)For example, see [Hanson (2000)](https://www.researchgate.net/profile/Robin_Hanson2/publication/228557195_Long-term_growth_as_a_sequence_of_exponential_modes/links/0046351fac48cd6ca3000000/Long-term-growth-as-a-sequence-of-exponential-modes.pdf), [Hansen and Prescott (2002)](https://www.jstor.org/stable/3083308?casa_token=lwxmDZzncTgAAAAA:Kv9Tpwl1_ZyXxX8QQsInbOWEpNtyvFET8JZPaY9j1erV5C9IOqJYF7DYkC1AjBgFRQfoYa3XvrKshnKAinI7NW6FtzhY-BzuyTVo7ClwDESn4FZwf8iY&seq=1), [Goodfriend and McDermott (1995)](https://econpapers.repec.org/article/aeaaecrev/v_3a85_3ay_3a1995_3ai_3a1_3ap_3a116-33.htm), [Lucas (1998)](http://www.econ.hku.hk/~cwyuen/seminar/papers/Lucas%20(Kuznets%20Lectures).pdf), [Stokey (2001)](https://www.sciencedirect.com/science/article/abs/pii/S0167223101800038) and [Tamura (2002)](https://isiarticles.com/bundles/Article/pre/pdf/18414.pdf).\n\n\n[172.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref172_ghahzbc)In fact, Hanson’s preferred model from this paper predicts that, even without another growth mode, growth rates will continue to increase to ~12% (6-year doubling time). Why is this?
In the model, we’re still transitioning into the current growth mode. The growth rate will increase while we finish this transition, settling on the new growth mode’s rate of 12%. Though this isn’t quite sufficient for our definition of ‘explosive growth’, it’s still very significant.\n\n\n[173.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref173_kphickc)\n\n\n\n> In summary, if one takes seriously the model of economic growth as a series of exponential growth modes, and if relative change parameters of a new transition are likely to be similar to such parameters describing old transitions, then it seems hard to escape the conclusion that the world economy could see a very dramatic change within the next century, to a new economic growth mode with a doubling time of roughly two weeks or less.\n> \n> \n\n\n[174.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref174_smittwq)See [Muller (2008)](https://onlinelibrary.wiley.com/doi/abs/10.3982/ECTA6814), [Muller (2015)](http://www.princeton.edu/~mwatson/papers/LFE_Mueller_Watson_Sept_2015.pdf) and [Muller (2016)](https://www.nber.org/papers/w18870) for descriptions of this framework, and [Christensen (2018)](https://www.pnas.org/content/115/21/5409) and [Muller (2019)](http://www.princeton.edu/~mwatson/papers/SCC_20191216.pdf) for applications to GWP.\n\n\n[175.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref175_20gft9p)I expect that there are others.\n\n\n[176.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref176_khuwanw)[Christensen (2018)](https://www.pnas.org/content/115/21/5409).\n\n\n[177.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref177_1yeu9m5)[Muller (2019)](http://www.princeton.edu/~mwatson/papers/SCC_20191216.pdf).\n\n\n[178.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref178_zfq5izn)One small caveat is that the model in [Muller (2019)](http://www.princeton.edu/~mwatson/papers/SCC_20191216.pdf) gives a special role to frontier economies, which it operationalises as [OECD countries](https://www.oecd.org/about/members-and-partners/), in determining long-run average per-capita GWP growth. This incorporates the view that growth of frontier countries is a leading indicator of growth in other countries and so of GWP; this is arguably an inside-view consideration.\n\n\n[179.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth)In the case of [Muller (2019)](http://www.princeton.edu/~mwatson/papers/SCC_20191216.pdf), *gt* is the frontier GDP per capita.
In the long run, the per capita GDPs of all other countries approach *gt*, so *gt* has a similar role to GWP per capita (which isn’t modeled directly).\n\n\n[180.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref180_p04w0dp)In the models I’ve seen, the random walk is constrained such that it’s unlikely to wander far from its center.\n\n\n[181.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref181_tlycqmd)Even if this model was trained on data showing clear signs of super-exponential growth, it would still conclude that the long-run average growth rate was constant (probably close to the average growth rate in the dataset).\n\n\n[182.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref182_lzd4x1m)The low-frequency approach focuses on modeling a stochastic component whose expectation is 0, but it can be combined with an arbitrary deterministic component. See p. 4 of [Muller (2008)](https://onlinelibrary.wiley.com/doi/abs/10.3982/ECTA6814).\n\n\n[183.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref183_5j0e70k)See [Foure (2012)](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2004332), [Johansson (2013)](https://www.oecd-ilibrary.org/docserver/5k4ddxpr2fmr-en.pdf?expires=1589799470&id=id&accname=guest&checksum=54E2A042C209AEE9BE4D886BBA2E139E), [Crespo (2017)](http://pure.iiasa.ac.at/id/eprint/11290/1/GEC_Revision_3rd_Round.pdf), [Leimbach (2016)](https://www.sciencedirect.com/science/article/pii/S0959378015000242?via%3Dihub).\n\n\n[184.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref184_i1wfnad)For example, [Foure (2012)](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2004332) introduces energy as an additional factor.\n\n\n[185.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref185_noy99qb)[Foure (2012)](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2004332) estimates the rate of change of *A* in each country using a catch-up model. This model implies that a country’s speed of catch-up is related to its level of secondary education and its ability to push forward the frontier is related to its level of tertiary education; the model is fitted using historical data.
It also uses data on female labor force participation to inform its projection of *L*.\n\n\n[186.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref186_b1xip3n)[Foure (2012)](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2004332) allows *s* to vary between countries and over time, using a theory of savings and investment.\n\n\n[187.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref187_p02lt9g)For example, see [Johansson (2013)](https://www.oecd-ilibrary.org/docserver/5k4ddxpr2fmr-en.pdf?expires=1589799470&id=id&accname=guest&checksum=54E2A042C209AEE9BE4D886BBA2E139E) and the overview of the Shared Socioeconomic Pathways, [Riahi (2017)](https://www.sciencedirect.com/science/article/pii/S0959378016300681).\n\n\n[188.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref188_csa0sgh)For example, see [Johansson (2013)](https://www.oecd-ilibrary.org/docserver/5k4ddxpr2fmr-en.pdf?expires=1589799470&id=id&accname=guest&checksum=54E2A042C209AEE9BE4D886BBA2E139E), [Crespo (2017)](http://pure.iiasa.ac.at/id/eprint/11290/1/GEC_Revision_3rd_Round.pdf), [Leimbach (2016)](https://www.sciencedirect.com/science/article/pii/S0959378015000242?via%3Dihub).\n\n\n[189.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref189_fpp3y33)This suggestion might be strengthened by the fact that advocates of singularity stories believe it will be caused by technological change, and so by explosive growth in TFP.\n\n\n[190.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref190_q466yta)Even models like these do not *explain* increases in TFP in the way that endogenous growth models, discussed below, aim to do. They simply calculate regression coefficients for TFP growth from education level, but this is different from providing a model that explains how TFP growth results from education (which is the sort of thing endogenous growth models try and do). In other words, the mathematics of these regressions is not designed to represent the process by which economic activity leads to increases in TFP, but rather to discern high-level correlations.\n\n\n[191.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref191_sk1jgkc)More details on the process:\n\n\n\n> The criteria for nomination included contributions to the economic growth literature, familiarity with empirical research on medium-run and long-run growth, and diversity in regional expertise. Participants were selected on the basis of the frequency of nomination. Upon selection, the experts were contacted by email and provided with a link to the digital Qualtrics survey. Based on research papers in Economics (RePEc) factor rankings, the overall peer-selected sample includes: 3 of the top 10 economists in any field, 2 of the top 5 development economists, 2 of the top 5 growth economists, 1 of the top 5 macroeconomists, 1 of the top 5 economic historians, and 1 of the top 5 forecasting economists.\n> \n> \n\n\nIn total, 13 experts completed the survey.\n\n\n[192.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref192_waoh65t)The results for each percentile vary by less than 0.1% per capita growth if we instead use the mean, and by less than 0.2% if we instead use the median.
See Table S2 [here](https://www.pnas.org/content/pnas/suppl/2018/05/09/1713628115.DCSupplemental/pnas.1713628115.sapp.pdf).\n\n\n[193.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref193_rg45qad)Participants were reminded about the overconfidence bias and asked to give percentile estimates for three practice questions to help calibrate their judgements.\n\n\n[194.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref194_fwz4gu3)From p. 13 of the [appendix](https://www.pnas.org/content/pnas/suppl/2018/05/09/1713628115.DCSupplemental/pnas.1713628115.sapp.pdf):\n\n\n\n> A rating of 1 indicates little expertise, a rating of 5 indicates the expertise of someone who has studied the subject but is not a specialist, and a rating of 10 indicates expertise that is among the leading experts. The mean self-reported level of expertise is 5.99 and the median is 6.\n> \n> \n\n\n[195.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref195_u0xqr65)The growth model point estimates I’ve seen are clustered around the expert elicitation distribution’s mean of 2.06%, and they all lie within its 10th–90th percentile range [0.60%, 3.47%].\n\n\n[196.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref196_8q3df0k)Christensen’s paper explicitly compares its expert elicitation distribution with the growth model point estimates of the Shared Socioeconomic Pathways (SSPs), a large collection of scenario-based GWP projections constructed for use by the climate-change research community (see an [overview](https://www.sciencedirect.com/science/article/pii/S0959378016300681)). They find that its median results are consistent with the median of the SSPs but that the highest SSP projection is closer to the 75th percentile than to the 90th.\n\n\n[197.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref197_hsimmdr)The UN does provide percentile projections, but I found that incorporating its uncertainty about the future population makes little difference to the GWP projections. Most of the *standard story’s* uncertainty about future GWP stems from uncertainty about GWP per capita, not from uncertainty about population.\n\n\n[198.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref198_bxf0f04)My search was brief and it’s perfectly possible I’ve missed counter-examples, but I would be surprised to hear of a paper using pre-1800 data.\n\n\n[199.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref199_4b21eho)This compares with dates of 2044 and 2050 from Roodman’s model.\n\n\n[200.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref200_a9e53d5)In these cases long-run growth is sub-exponential.\n\n\n[201.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref201_fsa77pk)This choice is notable: we could instead have measured the change as *new\_growth\_rate – old\_growth\_rate*. Our preferred choice leads the model to predict explosive growth much sooner than under this alternative. The choice is motivated by analogy to Roodman’s fully endogenous growth model: in that model each time output doubles the growth rate increases by a constant factor.
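As a rough illustration of this multiplicative update rule, here is a minimal sketch of my own (it abstracts away from how periods are defined and uses made-up multipliers rather than the report’s data):

```python
import random

def project_growth_rates(historical_multipliers, initial_growth_rate, n_periods,
                         resample_feedback=True):
    """Minimal sketch of a 'growth multiplier'-style projection.

    Each period, next period's growth rate is the current rate times a
    multiplier sampled from the historically observed multipliers:
        new_growth_rate = old_growth_rate * growth_multiplier
    If resample_feedback is True, each sampled multiplier is appended back to
    the list, mimicking the resampling feature discussed in a nearby footnote.
    """
    multipliers = list(historical_multipliers)
    growth_rate = initial_growth_rate
    path = [growth_rate]
    for _ in range(n_periods):
        multiplier = random.choice(multipliers)
        growth_rate *= multiplier
        if resample_feedback:
            multipliers.append(multiplier)
        path.append(growth_rate)
    return path

# Illustrative numbers only: start from 3% growth per period.
print(project_growth_rates([0.9, 1.1, 1.3, 1.6], 0.03, n_periods=10))
```

Because the update is multiplicative, a run of multipliers above 1 compounds quickly, which is why this choice brings explosive growth forward relative to the additive alternative.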
See more [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI).\n\n\n[202.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref202_hm4xhzg)One interesting, and I suspect controversial, feature of the model is that each time a *growth multiplier* is sampled it is added to the list of historically observed growth multipliers. Removing this feature doesn’t materially change the probability of explosion this century. I discuss this feature in [this appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI).\n\n\n[203.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref203_o0mjl0w)Using [this formula](https://en.wikipedia.org/wiki/Doubling_time#Examination), the calculation is YYYY – 2025 = ln(2) / ln(1 + *g*/100).\n\n\n[204.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref204_rd1rhex)I experimented with artificially removing Factor 2 from Roodman’s model. In particular, I evolved Roodman’s estimated model with one alteration: at each instant in time I halved the instantaneous growth rate that drives the incremental increase of GWP. With the alteration, the median growth rate for 2019 is 3.55% – more in line with the actual average growth of the last 20 years (3.65%). As a result, the median date of explosive growth is 2070, with 10% probability by 2056 and 90% by 2136. These results have an interesting relationship to those from the *growth multiplier model* when no discount is used – a version I discuss more [here](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixF). The medians of both are very similar, but the *growth multiplier model* has wider confidence intervals. These wider confidence intervals are to be expected given that the *growth multiplier model* i) represents serial correlation between the growth rates at different points in time, and ii) has the feature described in the footnote starting ‘*One interesting, and..*’. Of these two factors, (i) plays a much more significant role.\n\n\n[205.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref205_n6ixyb7)In this formula, the role of ‘*× growth\_multiplier*’ is to adjust the growth rate for the increase in GWP. The role of *old\_growth\_rate* is to link the next period’s growth directly to that of the previous period, encoding serial correlation. A single period of low growth affects all subsequent periods of growth in this way.\n\n\n[206.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref206_q1nqi3x)I consider objections to these ideas in a [later section](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixA).\n\n\n[207.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref207_efuwu62)For example, the growth rate within each period is not really constant.
And the *growth multiplier* (the ratio between the average growth of successive periods) is not confined to being exactly equal to some historically observed value, but in reality can vary continuously.\n\n\n[208.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref208_e6y2qbc)To (roughly) translate the condition for ‘sub-exponential growth’ into a condition for *frontier* growth, it corresponds in my mind to the annual growth of frontier GDP/capita being below 1%.\n\n\n[209.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref209_qk4wjux)Even once capital is fully substitutable with labour, it takes time for enough capital to be accumulated to significantly augment the human labour supply. More technically, it takes a while before goods production approximates *Y* = *AK* and knowledge production approximates *dA/dt* = *(A^φ)K*.\n\n\n[210.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref210_bchs46z)High-level machine intelligence is achieved when unaided machines can accomplish every task better and more cheaply than human workers.\n\n\n[211.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref211_u6xpl8k)\n\n\nThe survey found that answers differed significantly depending on how the question was asked. Some participants were asked about *high-level machine intelligence* (HLMI): when unaided machines can accomplish every task better and more cheaply than human workers. Others were asked about *full automation*: when for any occupation, machines could be built to carry out the task better and more cheaply than human workers. For HLMI, the probability by 2080 = ~60%, see figure 1 of the [paper](https://arxiv.org/pdf/1705.08807.pdf). For full automation, the probability by 2075 = ~25%, see figure 2 box plot. Roughly extrapolating the rate of increase from this box plot, pr(AGI by 2080) = ~30%. Placing equal weight on HLMI and full automation estimates, we get pr(AGI by 2080) = ~45%.\n\n\nNote: the survey found another significant framing effect – see discussion [here](https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/#Human-level_intelligence). The numbers from the paper aggregate across this framing effect in a complicated way. My understanding is that, roughly speaking, the numbers attempt to give the *mean* probability AI researchers assign to the milestone being reached by a particular year.\n\n\nThe survey also included a third estimate of the time to human-level AI, based on the rate of recent progress. It gives similar results to the HLMI estimate – see [here](https://aiimpacts.org/surveys-on-fractional-progress-towards-hlai/).\n\n\n[212.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref212_2klw36l)The report defines AGI as a (collection of) computer program(s) that can perform virtually any cognitive task as well as any human, for no more money than it would cost for a human to do it. This is a slightly weaker definition than HLMI, given the restriction to ‘cognitive’ tasks and the phrase ‘virtually any’. It is closer than HLMI to the level of AI that I think would be sufficient for explosive growth.\n\n\n[213.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref213_ypbbrsd)I’m lower mostly because I assign less weight to ‘short horizon’ paths than Ajeya.
Relatedly, I may think that the level of AI necessary to drive explosive growth is higher. E.g. I’m not confident a disembodied AI with human-level analytic and scientific skills would be sufficient; I think we’d also need human-level robotics.\n\n\n[214.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref214_nese9i1)0.7 × 50% + 0.15 × 45% + 0.15 × 15% = 44%.\n\n\n[215.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref215_woftwgu)All the long-run explosive growth models in this section are idea-based, as are all the endogenous models.\n\n\n[216.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref216_bgbt7ki)The relevant parameter values describe the diminishing returns to R&D and the importance of fixed factors of production like land.\n\n\n[217.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref217_8i0oiux)For example, this happens whenever there are constant returns to labour and capital in combination, and some other source of productivity growth.\n\n\n[218.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref218_ibixomn)China’s GDP/capita growth has exceeded 5% every year since 1980 ([source](https://data.worldbank.org/indicator/NY.GDP.PCAP.KD.ZG?locations=CN)).\n\n\n[219.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref219_xdsb8wx)I assign 35%/55% = ~60% of the weight to the sub-exponential above.\n\n\n[220.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref220_apr8prm)For clarity, I am simplifying his model somewhat by assuming that technology doesn’t mediate the reinvestment.\n\n\n[221.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref221_uhef9e9)The Galor and Weil (2000) model differs from Jones (2001) in some subtle ways. Firstly, for Jones, *gL* depends on the birth rate and the death rate, both of which are affected by per capita income. But in Galor’s model, the death rate is fixed, so you can focus solely on the birth rate. Secondly, Galor distinguishes between the size of the labor force and its human capital. The level of human capital depends on the time parents spend educating their children. Thirdly, Galor’s equation for technological progress implies that a constant population can produce exponential increases in technology indefinitely. By contrast, Jones’ equation implies the population must be growing exponentially to sustain exponential growth of technology.\n\n\n[222.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref222_78m88xd)There is an alternative, and in some ways more plausible, version of the model where in equilibrium both the population and technological level grow exponentially. See Footnote 23. I’m not sure if the demographic transition – the falling of population growth – happens in this version.\n\n\n[223.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref223_oxbz38e)\n\n\nThe dynamic is slightly different in the version of the model where in equilibrium both the population and technological level grow exponentially (see previous footnote).
In this alternate version, the negative feedback loop is:\n\n\nFaster growth → incentive to have fewer children → population growth falls → slower growth\n\n\nSlower growth → incentive to have more children → population growth rises → faster growth\n\n\n[224.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref224_nost9fx)The French data series is from [Roodman (2020)](https://www.openphilanthropy.org/wp-content/uploads/Modeling-the-human-trajectory.pdf). See Table 2. As he explains, the first two data points – in 10,000 BCE and 5,000 BCE – are taken from Maddison’s GWP/capita data series rather than being specific to France.\n\n\n[225.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref225_jzcd9i8)Source: Maddison Project 2018 population data. To download, click [here](https://www.rug.nl/ggdc/historicaldevelopment/maddison/data/mpd2018.xlsx).\n\n\n[226.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref226_ias857p)We can understand why the feedback loop peters out by looking at equation (1). When *K* increases, *s × Y* increases due to *Y*’s dependence on *K*, but *δ × K* also increases. The latter increases by more because α < 1. Eventually *K* is big enough that *s × Y – δ × K* = 0. At this point, investment of *Y* exactly offsets depreciation and *K* remains at its current value.\n\n\n[227.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref227_ztixpp4)See also Section 2 of [chapter 2](https://www.brown.edu/Departments/Economics/Faculty/Peter_Howitt/2070-2015/Aghion_Howitt_Ch3-AK.pdf) of *The Economics of Growth*. Here I describe the model for the special case when technology doesn’t depend on *labor* – this corresponds to ε’ = 0 in [this presentation](http://sweet.ua.pt/afreitas/growthbook/Part%20II/mlfchap6.pdf).\n\n\n[228.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref228_14dobbw)Note, however, this only happens in the knife-edge case when *α + η = 1*. If *α + η < 1*, the long-run growth rate depends on the growth of *L*; if *α + η > 1*, output goes to infinity in finite time, regardless of the growth of *L*, provided investment is larger than some threshold.\n\n\n[229.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref229_ail0mpo)It’s somewhat hard to explain why mathematically. The basic intuition is that once you choose α, condition (i) imposes an *exact* requirement on η satisfied by only one value while conditions (ii) and (iii) only impose constraints that can be satisfied by a range of values. Our prior would have much more weight on these ranges than on the exact value corresponding to condition (i).\n\n\nA more mathematical explanation is to imagine the two-dimensional space of possible values of α and η. Each point in this space corresponds to a value of α and a value of η. Condition (i) is satisfied by all the points on a line in this space: a one-dimensional subspace. Call this subspace *S*. By contrast, conditions (ii) and (iii) correspond to two-dimensional regions on either side of *S*. Natural priors over the two-dimensional space will assign only infinitesimal probability to any one-dimensional subspace, and so will assign infinitesimal probability to *S*. The update from the 20th century data will concentrate our posterior on the region close to *S*, but we will still assign only an infinitesimal probability to *S* itself.
So we will still assign only infinitesimal probability to (i). Most of the probability mass of our posterior will be just above or just below the line, corresponding to conditions (ii) or (iii).\n\n\n[230.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref230_hxn6eqd)The exact form of (4) is chosen so that a simple change of variables converts it into a Feller diffusion; see Section 3.1 of Roodman’s paper.\n\n\n[231.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref231_6dsxrmr)See [this appendix](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/#AppendixI) for a slightly more detailed description of how the model does this.\n\n\n[232.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref232_ydcxuo0)There are many candidates for such a cause. To list a few: the demographic transition, end of low-hanging fruit for technological progress, the shift of spending from goods to slower-growing services, and resource limitations. I discuss the first two of these candidates in more detail later.\n\n\n[233.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref233_qi8q8pd)On his favored data set he finds *s = 1.5 × 10^-4*, *B = 0.55*, *δ = -3.4 × 10^-5*. The small value of δ is needed to predict positive growth rates in ancient times when *Y* was very low – in 10,000 BCE *Y = 1.6* (the units of *Y* are $ billion). The current value of *Y* is about *70,000* and so the contribution of δ to the growth rate is negligible.\n\n\n[234.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref234_ai1qnak)When *Y* is very small, Roodman’s model predicts that the growth rate will increase by more than this, due to the effect of δ.\n\n\n[235.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref235_4b2c837)It is worth stressing that the model does not assume that growth is super-exponential. Just like Roodman’s model, it is perfectly compatible with growth being sub-exponential. If the observed *growth multipliers* were between 0 and 1, this would be its prediction.\n\n\n[236.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref236_n40af2l)The definition of period has the nice property that the assumption that growth rates are constant within each period is similarly plausible for each period. It has this property because Roodman’s model predicts that the growth rate will, in expectation, change by roughly the same amount within each period so defined (where that change is again measured as a ratio).\n\n\n[237.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref237_mt5rwz9)[This](https://en.wikipedia.org/wiki/Doubling_time#Examination) is the formula when *r = 2*. The general formula can be calculated by rearranging the first equation [here](https://www.varsitytutors.com/hotmath/hotmath_help/topics/exponential-growth).\n\n\n[238.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref238_w13ar2t)In Roodman’s model, higher current growth leads to a bigger increase in GWP and this in turn increases future growth. But current growth affects future growth in *no* other way than via GWP.
By contrast, in the *growth multiplier model*, current growth affects future growth both via the increase in GWP and by *new\_growth\_rate* being directly proportional to *old\_growth\_rate*.\n\n\n[239.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref239_rj6beoy)I have a few reasons for thinking that the model is a bad fit to these shortened data sets. Firstly, the model parameters are very hard to estimate from these data sets; this often happens when the data aren’t a good fit to the model. Secondly, the plots of the solution don’t visually appear to fit the data points as well as for the longer data sets. Thirdly, and most importantly, the fits involve unrealistically large values of δ, between -0.08 and -0.17. This is unrealistic because -δ represents the rate of depreciation of GWP, and the economy does not lose > 8% of its value each year through depreciation. By contrast, when fit to the full data set, δ = –0.00003. When I stopped the optimization process early, while δ was around -0.05, the median date of explosive growth was several decades earlier (or up to 6 decades for the 1800 data set). Note: Roodman defines δ so that the parameter is expected to have a negative value, unlike in the [Solow-Swan model](https://en.wikipedia.org/wiki/Solow%E2%80%93Swan_model).\n\n\n[240.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref240_ifdm9gx)On the x-axis, years are spaced according to the formula *log(2050 – year)*. This is why the distance between 10,000 BCE and 2000 BCE is similar to the distance between 1980 and 2020. With such a scaling of the x-axis, Roodman’s univariate endogenous growth model implies that the growth rates should follow the pattern of a straight line.\n\n\n[241.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref241_ipbjlar)I’ve taken the French data series from Roodman’s [paper](https://www.openphilanthropy.org/wp-content/uploads/Modeling-the-human-trajectory.pdf). He describes the data series on p. 24. As he explains, the first two data points – in 10,000 BCE and 5,000 BCE – are taken from Maddison’s GWP/capita data series rather than being specific to France.\n\n\n[242.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref242_5r6di24)I also tried starting the pre-1600 data series in 5,000 BCE to remove any effect of the [Neolithic Revolution](https://en.wikipedia.org/wiki/Neolithic_Revolution#Comparative_chronology) on growth rates. Interestingly, this changed the fitted parameters quite significantly, with *B* moving from 0.18 to 0.50 and *s* decreasing by a factor of 10 to compensate. This suggests that the solutions of Roodman’s model are very sensitive to small changes in the data for data sets this small. With the 5,000 BCE – 1600 data series, Roodman’s median year of explosive growth is 2305, with 10% by 2041 and 30% of no explosion by 3000!\n\n\n[243.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref243_7tbfequ)The parameter in Roodman’s model controlling whether growth is sub- or super-exponential. If *B* > 0, growth is super-exponential.\n\n\n[244.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref244_bhc7zil)The estimated value of *B* was 0.18, with a standard error of 0.02, when using maximum likelihood estimation (as Roodman does).
I separately estimated *B* using a nonlinear least squares regression predicting the growth rate from the GWP level, following the methodology of [Kremer (1993)](https://www.jstor.org/stable/2118405). I found *B* was 0.34, with a standard error of 0.14.\n\n\n[245.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref245_rm6329c)The results of the paper stand even when the cost of creating a firm is 0, so I don’t think this argument is the whole story. But perhaps the fact that the fixed cost of production for firms is proportional to *Z* allows a more general version of the argument to go through. Indeed, Peretto confirmed in private correspondence that if instead the fixed cost were proportional to *Z^0.9*, the model would not produce exponential growth, and he thought the same was likely true if it were proportional to *Z^1.1*.\n\n\n[246.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref246_ihoasw9)The fixed factor land does not correspond to any of the vector indices, as its exponent doesn’t affect whether growth explodes. Technically speaking, the condition is for *instability*: either super-exponential growth or collapse. Assuming positive growth, it is a condition for super-exponential growth. This condition appears as Equation 16 in Roodman (2020).\n\n\n[247.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref247_qph83r1)Notice, technology *A* only augments capital in this model, unlike in the models considered above.\n\n\n[248.](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#footnoteref248_ygo9fdo)Technically, if either technology or capital were falling, one of the derivatives of the growth rates could be positive and yet output might still not be growing super-exponentially. In this model, technological growth is always positive and capital can decay at most exponentially, so such scenarios do not occur. In general, we assume that the economy is not shrinking to avoid considering these cases.
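To spell out the arithmetic behind this caveat, one can use a capital-augmenting production function of the kind mentioned in an earlier footnote as an illustrative stand-in (the exact functional form is my assumption here, not necessarily the one used in the model):

```latex
% With Y = (A K)^alpha * L^beta, taking logs and differentiating with respect
% to time gives
%   g_Y = alpha * (g_A + g_K) + beta * g_L,
% and differentiating once more,
%   d(g_Y)/dt = alpha * (d(g_A)/dt + d(g_K)/dt) + beta * d(g_L)/dt.
% Growth is super-exponential only if d(g_Y)/dt > 0, so a positive derivative
% of one growth rate is not enough if the other growth rate is falling fast enough.
Y = (AK)^{\alpha} L^{\beta}
\;\Longrightarrow\;
g_Y = \alpha\,(g_A + g_K) + \beta\,g_L
\;\Longrightarrow\;
\dot{g}_Y = \alpha\,(\dot{g}_A + \dot{g}_K) + \beta\,\dot{g}_L
```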