{"source": "nonarxiv_papers", "source_filetype": "pdf", "abstract": "Mankind! is! rapidly! developing! \"emerging! technologies\"! in! the! fields!of!bioengineering,!nanotechnology,!and!artificial!intelligence!that!have! the! potential! to! solve! humanity's! biggest! problems,! such! as! by! curing! all! disease,! extending! human! life,! or! mitigating! massive! environmental! problems! like! climate! change.! However,! if! these! emerging! technologies! are! misused! or! have! an! unintended! negative! effect,! the! consequences! could! be! enormous,!potentially!resulting!in!serious,!global!damage!to!humans!(known! as!\"global!catastrophic!harm\")!or!severe,!permanent!damage!to!the!Earthincluding,! possibly,! human! extinction! (known! as! \"existential! harm\").! The! chances!of!a!global!catastrophic!risk!or!existential!risk!actually!materializing! are! relatively! low,! but! mankind! should! be! careful! when! a! losing! gamble! means! massive! human! death! and! irreversible! harm! to! our! planet.! While! international! law! has! become! an! important! source! of! global! regulation! for! other! global! risks! like! climate! change! and! biodiversity! loss,! emerging! technologies!do!not!fall!neatly!within!existing!international!regimes,!and!thus! any! country! is! more! or! less! free! to! develop! these! potentially! dangerous! technologies! without! practical! safeguards! that! would! curtail! the! risk! of! a! catastrophic!event.!In!light!of!these!problems,!this!paper!serves!to!discuss!the! risks! associated! with! bioengineering,! nanotechnology,! and! artificial! intelligence;! review! the! potential! of! existing! international! law! to! regulate! these!emerging!technologies;!and!propose!an!international!regulatory!regime! that! would! put! the! international! world! in! charge! of! ensuring! that! lowJ probability,!highJrisk!disasters!never!materialize.!!", "authors": ["Grant Wilson", "! !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"], "title": "Minimizing Global Catastrophic and Existential Risks from Emerging Technologies through International Law", "text": "INTRODUCTION The world is currently undergoing a remarkable revolution in science and technology that will seemingly allow us to engineer synthetic life of any imaginable variety, build swarms of robots so small that they are invisible to the human eye, and, perhaps, create an intelligence far superior to the collective brainpower of every human. Much of this \"emerging technology\" either already exists in rudimentary form or may be developed in the coming decades, 1 including the three technologies covered by this paper: nanotechnology, bioengineering, and artificial intelligence (AI). While many scientists point to these developments as a panacea for disease, pollution, and even mortality, 2 these emerging technologies also risk massive human death and environmental harm. Nanotechnology consists of \"materials, devices, and systems\" created at the scale of one to one hundred nanometers 3 -a nanometer being one billionth of a meter in size (10 -9 m) or approximately one hundred-thousandth the width of a human hair 4 -including nano-sized machines (\"nanorobots\"). Bioengineering also operates on a tremendously small scale but uses concepts of engineering to build new biological systems or modify existing biological systems 5 by manipulating the very building blocks of life. 
6 Finally, AI, meaning intelligent computers, is a pathway to \"the Singularity,\" the concept that manmade greater-than-human intelligence could improve upon its own design, thus beginning an intelligence feedback mechanism or \"explosion\" that would culminate in a godlike intelligence with the potential to operate at one million times the speed of the human brain. 7 These and other threats from emerging technologies may pose a \"global catastrophic risk\" (GCR), which is a risk that could cause serious global damage to human well-being, or an \"existential risk\" (ER), which is a risk that could cause human extinction or the severe and permanent reduction of the quality of human life on Earth. 8 Currently, the main risks from emerging technologies involve the accidental release or intentional misuse of bioengineered organisms, such as the airborne highly pathogenic avian influenza A (H5N1) virus, commonly known as \"bird flu,\" that scientists genetically engineered in 2011. However, with emerging technologies developing at a rapid pace, experts predict that perils such as dangerous self- !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 1 See infra Section II. 2 See Terry Grossman et al., Reinventing Humanity: The Future of Human-Machine Intelligence, THE FUTURIST (Feb. 03, 2006) , available at: www.kurzweilai.net/reinventing-humanity-the-future-of-human-machine-intelligence. 3 Ortwin Renn & Mihail Roco, Nanotechnology and the Need for Risk Governance, 8 J. NANOPARTICLE RES. 153 (2006) , available at: http://www.springerlink.com/content/y80541n7740785gm/fulltext.pdf. 4 Nanotechnology White Paper, NANOTECHNOLOGY WORKGROUP, ENVIRONMENTAL PROTECTION AGENCY'S SCIENCE POLICY COUNCIL (SPC) 5 (2007) , available at: http://epa.gov/osa/pdfs/nanotech/epa-nanotechnologywhitepaper-0207.pdf. 5 The Issues, ETC GROUP, http://www.etcgroup.org/en/materials/issues (last visited Feb. 14, 2012). 6 See Natalie Angier, Peering Over the Fortress That is the Mighty Cell, N.Y. TIMES (May 31, 2010), available at: http://www.nytimes.com/2010/06/01/science/01angi.html?ref=jcraigventer. 7 ! 3! replicating nanotechnology, 9 deadly synthetic viruses available to amateur scientists, and unpredictable superintelligent AI 10 may materialize in the coming few decades. Society should take great care to prevent a GCR or ER (\"GCR/ER\") from materializing, yet GCRs/ERs arising out of nanotechnology, bioengineering, and AI are almost entirely unregulated at the international level. 11 One possible way to mitigate the chances of a GCR/ER ever materializing is for the international community to establish an international convention tailored to these emerging technologies based on the following three principles: first, that nanotechnology, bioengineering, and AI pose a GCR/ER; second, that existing international regulatory mechanisms either do not include emerging technologies within their scope or else insufficiently mitigate the risks arising from emerging technologies; and third, that a international convention based on the precautionary principle could reduce GCRs/ERs to an acceptable level. This paper purports to establish the threats of emerging technologies, highlight regulatory gaps under international law, and recommend an international framework to address the associated risks. Specifically, Section II discusses the benefits and risks of emerging technologies, establishing that bioengineering poses a GCR/ER now while nanotechnology and AI pose a GCR/ER in the future. 
Section II also provides the background of attempts to enjoin the operation of the Large Hadron Collider (LHC) to highlight difficulties courts have in addressing low-probability scientific threats and the conflicts of interest scientists may have in self-regulation. Section III then analyzes GCRs/ERs from bioengineering under international law, concluding that no international convention sufficiently regulates the risks arising out of bioengineering. This section focuses on bioengineering because it is the only emerging technology that poses an immediate GCR/ER. Section IV stitches together the fundamentals of an international treaty that would regulate GCRs/ERs from emerging technologies through concepts such as the precautionary principle, decisionmaking from a body of experts, and public participation. Finally, Section V concludes that states should act quickly to create a flexible, legally binding treaty to regulate the emerging technologies that present a GCR/ER. \n II. BACKGROUND ON EMERGING TECHNOLOGIES AND EXISTENTIAL RISKS While many experts cite a forthcoming revolution in nanotechnology, bioengineering, and AI as a source of great potential benefit to mankind and the environment, these technologies also risk causing profound negative consequences if insufficiently regulated. This section discusses current and forthcoming emerging technologies to highlight the benefits of emerging technologies while also establishing that they pose a GCR/ER. This background will serve as a foundation for an international regulatory regime that seeks to curtail the risks of emerging technologies without stifling their beneficial uses. Additionally, this section presents a brief case study of the LHC, which demonstrates the challenges of seeking judicial review of a complex scientific technology that poses a remote but significant harm and the problems with permitting self-assessment of risks amongst scientists. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 9 See Nanotechnology White Paper, supra note 4, at 12. 10 Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk, in GLOBAL CATASTROPHIC RISKS 237 (Nick Bostrom & Milan M. Ćirković, eds., 2008). 11 See infra Section III. ! 4! \n A. Global Catastrophic Risk and Existential Risk A GCR is a risk that has the potential to cause \"serious damage to human well-being\" on the global scale. 12 While the threshold of \"serious\" damage is somewhat ambiguous, one expert sought to clarify the matter by asserting that an event killing 10,000 people would not qualify as a GCR, while one that killed 10,000,000 people would. 13 Furthermore, an event need not affect the entire Earth to have a \"global\" scale, but certainly must affect at least several parts of the world. 14 GCRs may be categorized into natural, anthropogenic, and intermediate risks. An example of a natural GCR that has already materialized is the Spanish flu pandemic, 15 while possible future natural GCRs include extreme natural disasters, another ice age, or a meteor striking the Earth. 16 Examples of past anthropogenic GCRs are the first and second World Wars, while possible future anthropogenic GCRs include nuclear war, accidents involving experimental technology, or bioterrorism. 17 Finally, intermediate GCRs are those that involve \"complex interactions between humanity and its environment,\" such as climate change. 
18 One specific type of GCR is an ER, which is a low-probability, high impact risk that could (1) make humans go extinct or (2) severely and permanently harm the future quality of life of humans. An existential risk requires, at minimum, a global scope, a terminal (i.e. fatal) intensity, and a permanent effect on the quality of human life that continues into future generations. 19 Several GCRs are also ERs, such as nuclear war, certain experimental technologies, and climate change. Likewise, the three risks that this paper focuses onnanotechnology, bioengineering, and AI-are both GCRs and ERs. Because ERs are irrevocable, great care must be taken as to never let one happen. Although the probability of an existential risk materializing is up to much debate, some experts have come up with rough estimates. For example, Martin Rees, a decorated scientist and former President of the Royal Society, 20 believes there to be a fifty percent chance of human extinction before the 22 nd century, with much of the risk arising from some of the emerging technologies discussed in this paper. 21 \n B. Global Catastrophic Risks and Existential Risks from Emerging Technologies States should consider concluding an international treaty to regulate emerging technologies if they perceive these technologies to pose a GCR/ER. This section considers the current and future risks and benefits posed by three emerging technologies-bioengineering, !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 12 BOSTROM & ĆIRKOVIĆ, supra note 10, at 23. 13 Id. at 24. 14 Id. 15 Id. 16 Id. at 13-27. 17 Id. 18 For a discussion of intermediate ERs, see MILAN M. ĆIRKOVIĆ, ANDERS SANDBERG, & NICK BOSTRON, Anthropic Shadow: Observation Selection Effects and Human Extinction Risks, 30 RISK ANALYSIS 1495 (2010). 19 BOSTROM & ĆIRKOVIĆ, supra note 10, at 04. 20 The Royalty Society, of which the about 1,500 Fellows and Foreign Members includes about 80 Nobel Laureates, is Britain's Academy of Sciences and publisher of nine peer-reviewed journals. See About Us, THE ROYAL SOCIETY, at: http://royalsociety.org/about-us (last visited Jan. 29, 2012). 21 Steve King, Worst Possible Scenarios, SPECTATOR (May 24, 2003), reviewing MARTIN REES, OUR FINAL CENTURY? (2003) , available on Westlaw at 2003 WLNR . ! 5! nanotechnology, and AI. This section concludes that bioengineering is the only emerging technology that poses an immediate GCR/ER, while nanotechnology and AI pose a future GCR/ER. \n Bioengineering Simply defined, bioengineering is the \"engineering of living organisms.\" 22 Bioengineering is commonly associated with genetically modified (GM) foods made from crops that scientists develop to have qualities like pest resistance or increased nutrition. However, bioengineering is rapidly expanding beyond agriculture into fields like medicine, disease control, and life-extension. The technology behind bioengineering has also developed quickly, with scientists now able to understand and manipulate life at the molecular level such that biology is viewed as a \"machine\" that can be tweaked, like in genetic engineering, or even built from the ground up, like in synthetic biology. 23 While breakthroughs in bioengineering research could significantly benefit mankind and the environment, bioengineering research can also be misused to the detriment of humans, animals, and environmental health. 24 Such \"dual use\" research currently poses significant risks to humankind, but even greater risks in the future. 
Furthermore, both current and future bioengineering technologies pose the risk of an accident that has significant detrimental effects. In exploring these issues, this section demonstrates that bioengineering poses an immediate GCR/ER. \n a. Current technology Bioengineering is already widely used to modify existing organisms, and scientists are on the cusp of creating entirely synthetic organisms. For example, scientists controversially use bioengineering to \"improve\" natural biological products and activities, resulting in increased nutrient value, bigger yields, and insect and disease resistance 25 in various types of crops. 26 In 2011, 94 percent by acre of soybeans in the United States were genetically engineered, while 73 percent of all U.S. corn was genetically engineered to be insect resistant and 65 percent to be herbicide tolerant. 27 Another controversial current bioengineering technology is genetically engineer viruses, highlighted by the 2011 genetic engineering of the H5N1 virus to become highly contagious amongst ferrets. Many scientists argue that creating the genetically engineered virus was necessary to develop a remedy in case the H5N1 virus mutates naturally, but skeptics argue that the modified H5N1 virus is dangerous because of risks that the virus will escape or that !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! malicious actors will engineer a similar virus. 28 Another example of recent advancements in bioengineering is a project spearheaded by biologist Craig Venter that transplanted a completely synthetic DNA sequence, or \"genome,\" into an E. coli bacteria-scientists then also added DNA \"watermarks\" such as the names of researchers and famous quotes-which Craig Venter termed \"the first self-replicating species we've had on the planet whose parent is a computer.\" 29 Bioengineering has also become vastly cheaper and more accessible to the general public. For example, massive databases of DNA sequences are available online from the Department of Energy Joint Genome Institute (JGI) and the National Center for Biological Information's GenBank ® database. 30 To materialize these DNA sequences, individuals can order custom genomes online for a few thousand dollars, which are \"printed\" from a DNA synthesis machine and shipped to them, opening the door for amateur biologists to engage in genetic engineering. 31 DNA synthesis machines can print DNA strands long enough for certain types of viruses, which untrained individuals can obtain within six weeks of purchase. 32 Even the synthesizing machines themselves can be purchased on the Internet on sites like eBay. 33 Much like bioengineering costs, the necessary expertise to engage in bioengineering is also plummeting. For example, since 2003, teams of entrepreneurs, college students, and even high school students submitted synthetic biology creations to the International Genetically Engineered Machine (IGEM) competition, such as UC Berkeley's \"BactoBlood\" creation-a \"cost-effective red blood cell substitute\" developed by genetically engineering E. coli bacteria. 34 \n b. 
Forthcoming technology Perhaps the greatest forthcoming development in bioengineering is synthetic biology, which includes techniques to \"construct new biological components, design those components and redesign existing biological systems.\" 35 This is in contrast to the traditional form of bioengineering that utilizes \"recombinant DNA\" techniques in which the DNA from one organism is stitched together with DNA from other organisms or synthetic DNA. 36 One method of synthetic biology involves \"cataloguing\" DNA sequences like \"Lego bricks\" and assembling them in unique ways (assembling natural molecules into an unnatural system, like combining the molecules from several types of bacteria to create a new bacterium with novel properties). Another method of synthetic biology involves using DNA synthesizers to create life \"entirely from scratch … the biological equivalent of word processors\" (using unnatural molecules to emulate a natural system, like creating the synthetic equivalent of a natural strand of influenza). 37 One way to \"birth\" synthetic DNA is to insert the DNA into a \"biological shell\" - an organism, often a bacterium, that has had its own genes removed - that can run the synthetic DNA like a computer runs software. 38 And while the technology to create eukaryotic cells (i.e. \"a cell with a nucleus, such as those found in animals, including human beings\") is a long way off, synthetic viruses and bacteria are just around the corner. 39 c. \n Benefits of bioengineering Bioengineering is already displaying its potential to remedy major human health and environmental problems. For example, bioengineering is responsible for several pharmaceuticals and vaccines, such as insulin and a vaccine for Hepatitis B, while \"gene therapy\" employs genetically engineered viruses to help treat cancer. 40 Environmental benefits resulting from the 15.4 million farmers who grew genetically modified crops in 2010 include increased yields of six to thirty percent per acre of land, pest-resistant crops that require fewer pesticides (resulting in 17.1% less pesticide use globally in 2010), lower water use for drought-resistant crops, decreased CO2 emissions, and crops that do not require harmful tilling practices. 41 Forthcoming benefits to human health could include a new wave of ultra-effective drugs (e.g. antimalarial and antibiotic drugs), bioengineered agents that kill cancer cells, and the ability to rapidly create vaccines in response to epidemics. 42 Bioengineering could also serve as a beacon of human diagnostics by analyzing \"thousands of molecules simultaneously from a single sample.\" 43 Meanwhile, forthcoming benefits to the environment could include organisms that remedy harmful pollution and superior forms of biofuel, for example. 44 Bioengineering could also spur an environmental revolution in which industries reuse modified waste from biomass feedstock and farmers grow bioengineered crops on \"marginally productive lands\" (e.g. switchgrass). 45 \n d. \n Risks from bioengineering While bioengineering offers current and future benefits to humans and the environment, there are also significant yet uncertain risks that could devastate human life, societal stability, 37 What is Synthetic Biology?, SYNTHETICBIOLOGY.ORG, at: http://syntheticbiology.org/FAQ.html (last visited Feb. 16, 2012). See also Garfinkel & Friedman, supra note 32, at 269. 38 Id. 
39 Garfinkel & Friedman, supra note 32, at 271. 40 46 This paper focuses on three predominant GCR/ER risks arising from bioengineering: (1) the accidental release of harmful organisms (a \"biosafety\" issue), (2) the malicious release of harmful organisms (\"bioterrorism\"), and (3) the bioengineering of humans. The first two are current GCRs/ERs, while the third is a future GCR/ER. \n i. Risk of an accident An accidental release of a bioengineered microorganism during legitimate research poses a GCR/ER when such a microorganism has the potential to be highly deadly and has never been tested in an uncontrolled environment. 47 The threat of an accidental release of a harmful organism recently sparked an unprecedented scientific debate amongst policymakers, scientists, and the general public in reaction to the creation of an airborne strain of H5N1. 48 In September 2011, Ron Fouchier, a scientist from the Netherlands, announced that he had genetically engineered the H5N1 virus-his lab \"mutated the hell out of H5N1,\" he professed-to become airborne, which was tested on ferrets; a laboratory at the University of Wisconsin-Madison similarly mutated the virus into a highly transmittable form. 49 The \"natural\" H5N1 killed approximately 60 percent of those with reported infections (although the large amount of unreported cases means that this is an over estimate), but the total number of fatalities-three hundred and forty-six people-was relatively small because the virus is difficult to transmit from human to human. The larger risk comes from the possibility that a mutated virus would spread more easily amongst humans, 50 which could result in a devastating epidemic amongst the worst in history, if not the very worst. 51 To put this in context, about one in every fifteen Americans-20 million people-would die every year from a seasonal flu as virulent as a highly transmittable form of H5N1. 52 Lax regulations and a rapidly growing number of laboratories exacerbate the dangers posed by bioengineered organisms. While lab biosafety 53 guidelines in the United States and Europe recommended that projects like reengineering the H5N1 virus be conducted in a BSL-4 facility (the highest security level), neither laboratory that reengineered the H5N1 virus met this non-binding standard. 54 Meanwhile, a 2007 Government Accountability Office (GAO) report ! 9! indicated that BSL-3 and BSL-4 labs are rapidly expanding in the United States. While there is significant public information about laboratories that receive federal funding or are registered with the Centers for Disease Control and Prevention (CDC) and the U.S. Department of Agriculture's (USD) Select Agent Program, much less is known about the \"location, activities, and ownership\" of labs that are not federally funded and not registered with the CDC or the USD Select Agent Program. 55 The same report also concluded that there is no single U.S. agency that is responsible for tracking and assessing the risks of labs engaging in bioengineering. 56 While some claim that critics are overreacting to the genetically engineered H5N1 virus, there are a series of accidental releases of microbes from laboratories that demonstrate the risks of largely unregulated laboratory safety. In 1978, an employee died from an accidental smallpox release from a laboratory on the floor below her. 57 Many scientists believe that the global H1N1 (\"swine flu\") outbreak in the late 2000s originated from an accidental release from a Chinese laboratory. 
58 Reports concluded that the accidental releases of Severe Acute Respiratory Syndrome (SARS) in Singapore, Taiwan, and China from BSL-3 and BSL-4 laboratories all resulted from a low standard of laboratory safety. 59 In the United States alone, a review by the Associated Press of more than 100 laboratory accidents and lost shipments between 2003 and 2007 show a pattern of poor oversight, reporting failures, and faulty procedures, specifically describing incidents at \"44 labs in 24 states,\" including at high-security labs. 60 In 2007, an outbreak of Foot and Mouth Disease likely came from a laboratory that was the \"only known location where the strain [was] held in the country\" 61 because of a leaky pipe that had known problems. 62 This long history of faulty laboratory safety is why some experts, such as Rutgers University chemistry professor and bioweapons expert Richard H. Ebright, believe that the H5N1 virus will \"inevitably escape, and within a decade,\" citing the hundreds of germs with potential use in bioweapons that have accidentally escaped from laboratories in the United States. 63 While the effects of such lapses in laboratory safety have not yet been felt aside from relatively small events such as the swine flu outbreak mentioned above, the increasing ability of less-sophisticated scientists to engineer more deadly organisms vastly increase the possibility that a lapse in biosafety will have detrimental effects. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 55 GAO, \n ! 10! An accidental or purposeful release of a bioengineered organism has potentially grave consequences. For example, researchers in Australia recently accidentally developed a mousepox virus with a one-hundred percent fatality rate when they had merely intended to sterilize the mice. 64 Scientists in the United States also created a \"superbug\" version of mousepox created to \"evade vaccines,\" which they argue is important research to thwart terrorists, sparking a debate amongst scientists and policymakers about whether the benefits of such research is worth the associated risks. 65 If such a bioengineered organism escaped from a laboratory, the results would be unpredictable but potentially extremely deadly to humans and/or other animal species. The widespread availability of bioengineering technology and information further increases the risks of error in a laboratory. Students and amateurs have a growing capability to create bioengineered organisms, as evidenced by the iGEM contests, which tests the bioengineering capabilities of students in high schools and colleges. 66 Because of the dangers posed by this dual use research, the U.S. National Science Advisory Board for Biosecurity (NSABB) has started outreach programs for amateur biologists, including untrained, curious young individuals who consider themselves \"bioartists\" rather than researchers. 67 Defenders of genetically engineering viruses in a laboratory setting argue that such viruses could mutate outside of a laboratory anyway, and so understanding possible mutations in the laboratory is a defensive tool against the unknown. 68 As evidence, there have been previous examples of successful outcomes of bioengineering viruses, such as when Ralph Baric, using publicly available genome sequences, created a synthetic SARS virus contagious to bats that he claims can be tweaked to be a potential vaccine in the case of another SARS outbreak. 
69 Furthermore, while environmentalists have long questioned the safety of GM foods on human health and the environment, GM foods have not been shown to be unsafe for human consumption 70 and so-called \"super weeds\" created from gene transfer from GM crops have not materialized. 71 However, just because a risk has not yet materialized does not mean that society should assume that a risk will not ever materialize, and a GCR/ER from bioengineering poses too much potential damage to rely on past events as an indicator of the future. Overall, this subsection demonstrates the risk of a bioengineered organism escaping from the lab with unknown but potentially catastrophic consequences, thus establishing a GCR/ER. \n ii. Risk of bioterrorism The threat of the malicious release of bioengineered organisms (i.e., bioterrorism) poses a GCR/ER. 72 Bioengineering enables a malicious actor to create an organism that is more deadly !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! to humans, animals, or plants than anything that exists in the natural world. 73 Experts say that the barriers for a terrorist to order a DNA sequence for a highly pathogenic virus online or acquire a DNA synthesis machine online are \"surmountable.\" 74 Alternatively, bioterrorists could break into laboratories housing dangerous bioengineered organisms-like the H5N1 virus, for example-and release them. Meanwhile, third world countries with laxer standards and lower laboratory accountability are rapidly discovering and using bioengineering, which may give bioterrorists an easier pathway to obtain deadly bioengineered organisms. 75 There have already been several occasions in which groups attempted to use or successfully used biological weapons. One unsophisticated example of bioterrorism occurred when an individual contaminated salads and dressing with salmonella in what apparently was an attempt to decide a local election. 76 Another example is a slew of attacks by Aum Shinrikyo, a Japanese cult, in the 1990s, the worst of which killed 12 people and injured over 5,000 from the release of sarin nerve gas in a subway in Tokyo in 1995. 77 While these particular acts of bioterrorism did not cause widespread death, deploying extremely deadly bioengineered organisms over a large area is real possibility: tests by the United States in 1964 demonstrated that a single aircraft can contaminate five thousand square kilometers of land with a deadly bacterial aerosol. 78 The recent engineering of an airborne H5N1 virus demonstrates society's concern over risks of bioterrorism arising from bioengineering. Before scientists could publish their results of their bioengineered airborne H5N1 virus in the widely read journals Nature and Science, the NSABB determined that the danger of releasing the sensitive information outweighed the benefits to society, advising that the findings not be published in their entirety. 79 The main risk is that either a state or non-state actor could synthesize a \"weaponized\" version of the H5N1 virus to create a disastrous pandemic. 80 There is precedent of outside groups recreating advanced bioengineering experiments, such as when many scientists immediately synthesized hepatitis C replicons upon publication of the its genetic code. 81 However, the NSABB's recommendation was nonbinding, and there is nothing to stop other scientists from releasing similar data in the future. 
Furthermore, while the NSABB merely assert that the \"blueprints\" of the virus should not be printed, other biosecurity experts argue that the virus should never have been created in the first place because of risks that the viruses would escape or be stolen. 82 !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! \n iii. Bioengineering of Humans A final GCR/ER arising out of bioengineering, but which has not yet occurred, involves inheritable genetic alterations of humans. In one possible scenario, bioengineering would create a new species or subspecies of humans-sometimes called \"transhumans\" 83 or \"posthumans\" 84that presents a variety of risks. First is the risk of a eugenics movement that prejudices \"normal\" humans. Second, posthumans may have a competitive advantage that is detrimental to normal humans. 85 In another scenario, genetically engineered humans may perceive the normal humans as \"inferior, even savages, and fit for slavery or slaughter.\" 86 Meanwhile, normal humans could attempt to preemptively suppress genetically engineered humans to protect themselves in the future, which could result in warfare amongst the different groups. 87 For these reasons, some argue that genetically engineered humans are \"potential weapons of mass destruction\" that could result in human genocide, and thus an international convention is necessary to address that risk. 88 On the other hand, there is also the possibility that genetically engineered humans would supplement humans and live harmoniously in society, but the possibility of a favorable outcome is not a sufficient reason to disregard potentially catastrophic risks. Other scholars believe that genetically engineering humans pose an ethical quandary, and that \"humans will lose the experiential or other basis that makes us human\" if such a movement becomes widespread. 89 These scenarios may not be too far in the future, as current science has proven that \"relatively simple gene alterations can significantly extend the lifespan of nematodes and mice,\" so perhaps there will be pressure from certain groups to expand these technologies to humans. 90 Furthermore, embryonic technology is heading towards the possibility of \"designer babies,\" which would use genetic engineering to create \"specific traits in pre-implanted embryos.\" 91 On the other hand, proponents of genetically engineering humans cite benefits such as increased life-expectancy, superior intelligence, and eradication of genetic defects, arguing !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 83 Transhumanism may be defined as \"the intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by using technology to eliminate aging and greatly enhance human intellectual, physical, and psychological capacities.\" See ANDREW LUSTIG ET AL., ALTERING NATURE: VOL. 2 240 (2008). 86 Id. at 162. 87 Id. 88 Id. 89 Ikemoto, supra note 84, at 1102. 90 Id. at . 91 \n BILL MCKIBBEN, ENOUGH: STAYING HUMAN IN AN ENGINEERED AGE 47 (2002). ! 13! that just because we are born human does not mean that we are bound to remain that way. 92 Nonetheless, genetically engineered humans present a GCR/ER. \n Nanotechnology Nanotechnology involves manipulating materials or systems at the atomic, molecular, and supramolecular scales to create structures, devices, and systems with radical and novel properties. 
93 Nanotechnology works on a scale of approximately 1-100 nanometers, with one nanometer being a billionth of a meter (10 -9 m). 94 To put this in perspective, a red blood cell is 1,000 nanometers, a single DNA strand is 2 nanometers in diameter, and the width of a human hair is 100,000 nanometers. 95 While development of nanotechnology is nascent, nanotechnology research and development is massive, and many experts believe that nanotechnology will result in pervasive change in \"all sectors and spheres of life,\" including \"social, economic, ethical and ecological spheres.\" 96 \n a. Current Technology Researchers categorize nanotechnology into four generations. The first generation consists primarily of \"nanomaterials\" (or \"passive nanostructures\") and is already widely available in the global market. Nanomaterials includes nanoparticles, coatings, and nanostructured material 97 that are created by reducing \"normal\" materials to the nanoscale 98 and typically combining them with normal materials to improve their functionality, 99 making materials stronger, lighter, more flexible, or more conductive, amongst other desirable traits. 100 Nano-sized materials have fundamentally different properties from their normal-sized counterparts because the size of a particle affects that particle's properties; thus, for example, creating nanomaterials from gold creates a unique color, melting point, and chemical properties. 101 Already, over 800 products use nanomaterials, 102 accounting for $225 billion of !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 92 As Kevin Warwick, a posthumanism advocate, declared, \"I was born human. But this was an accident of fate -a condition merely of time and place. I believe it's something that we have the power to change.\" COLSON & CAMERON, supra note 85, citing Kevin Warwick, Cyborg 1.0, WIRED 15 (2000) . See also Ikemoto, supra note 84, at 1102. 93 Renn & Roco, supra note 3, at 153. 94 Nanotechnology White Paper, supra note 4, at 12. 95 Id. 96 See Renn & Roco, supra note 3, at 154. 97 Id. at 153-156. 98 Building nanomaterials from the \"bottom up\" is still being tested in laboratories; while scientists can successfully manipulate an individual atom, they have not achieved the ability to create construct technologies with \"atomic precision,\" which is a central goal of nanotechnology scientists. ! 14! sales in 1999. 103 Such products include tennis rackets, sunscreen, stain-resistant pants, computer displays, paint, antimicrobial pillows, canola oil, non-stick pans, and various coatings and lubricants. 104 The construction industry foresees using nanomaterials to create stronger steel, bacteria-killing and fire-resistant materials, solar panels that generate more power, and energyefficient lighting, which could increase the lifespan and lower the energy consumption of buildings. 105 The second generation of nanotechnology, which currently only exists in laboratories, consists of \"active nanostructure[s]\" 106 that \"change their behavior in response to changes in their environment,\" such as through \"exposure to light\" or the \"presence of certain biological materials.\" 107 An example of the latter function is a nanodevice that targets the brain cells responsible for neuroinflammation to deliver pinpointed drugs as a potential treatment of cerebral palsy, as recently tested in rabbits. 108 \n b. 
Forthcoming technology The yet-unavailable third and fourth generations of nanotechnologies consist of complete nanosystems as opposed to mere nanotechnology components. 109 The third generation includes \"three-dimensional nanosystems with heterogeneous nanocomponents\" with \"thousands of interacting components\" that act like the parts to a sophisticated yet incredibly small machine. 110 And the fourth generation of nanotechnology consists of \"heterogeneous molecular nanosystems\" that operate \"like a mammalian cell with hierarchical systems within systems,\" including technologies like molecular manufacturing and molecular nanorobotics (i.e. robots designed at the nanoscale). 111 This fourth generation of nanotechnology could spur widespread molecular manufacturing in which any designable product could be built with atomic precision, such as incredibly fast computers, nanorobots that perform a specific function, or complex machines. 112 Both third and fourth generation nanotechnology will focus on \"bottom up\" manufacturing rather than the \"top down\" approach, i.e. manufacturing nanotechnology on the molecular level rather than reducing existing materials to the nanoscale. The third and fourth !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 15! generations of nanotechnology only exist in computer experiments and models, 113 but they are expected to be developed in the coming few decades. 114 \n c. Benefits of nanotechnology Nanotechnology is set to \"have a significant impact on drug delivery, computing, communications, defense, space exploration, and energy,\" thus governments are spending significant amounts of money on nanotechnology research and development. 115 By testing different combinations and sizes of nanomaterials to fine-tune their desired effect, 116 materials can be lighter and stronger, resistant to bacteria, scratch proof, and hold superior charges. 117 Scientists have already created a variety of materials to benefit the environment: a paper towel for oil spills capable of absorbing 20 times its mass of oil by utilizing nanomaterials with enhanced absorption properties, thin and flexible solar panel films (perhaps one day even \"paintable\"), more efficient lithium-ion batteries, and superior windmill blades made of carbon nanotubes. 118 To benefit human health, 80 percent of cars already have nanomaterial filters to remove certain harmful particles from the air, and scientists are developing a filter that can remove viruses from water. 119 In the future, scientists predict that nanotechnology could also locate and deliver pinpointed treatment to cancer cells by creating \"gold-coated nanoparticles\" that target cancer cells and destroy then when heated by electromagnetic frequencies (as scientists tested at Rice University 120 ), restore damaged cells to slow aging by molecularly engineering nanomedicines, 121 and increase solar efficiency by a factor of one hundred by developing materials with optimum light-absorption and energy conversion. 122 \n d. Risks from nanotechnology Currently, most apprehension over nanotechnology involves first generation nanomaterials, whose toxicity, potential to bioaccumulate, and health effects from exposure is generally unknown. 123 One concern is that nanoparticles are smaller in size than natural particles, and thus they may have an increased potential to permeate the lungs and blood vessels of humans and animals. 
124 Another concern is that nanotechnology could have negative ecotoxicological effects, for example by passing through the cell walls of fungi, algae, and bacteria; inhibiting photosynthesis and respiration in plants; or being unnaturally persistent in the environment. 125 Furthermore, preconceptions of the safety of the materials from which nanomaterials are derived do not accurately predict how their nanoparticle equivalents will act because nanomaterials can have unique toxicity, reactivity with other chemicals, persistence, and other qualities. 126 Overall, nanomaterials may pose a GCR: if they become pervasive in consumer goods, building materials, and so forth, and they turn out to have a highly negative effect on health and the environment, they may cause \"serious damage to human well-being\" on the global scale. However, they do not seem to pose an ER because toxic and harmful substances will not likely make humans go extinct or severely and permanently damage the future quality of life of humans. In the future, however, nanotechnology will have immense ethical, health, and environmental implications, and several scenarios indicate the presence of both a GCR and an ER. For example, one risk is that nanotech \"organisms,\" like an omnivorous bacterium constructed atom-by-atom, will out-compete their natural counterparts, causing unknown ecological effects. 127 Furthermore, because third and fourth generation nanotechnology will likely be designed to self-replicate in order to obtain meaningful amounts of particular nanotechnologies, 128 self-replicating nanotechnology (nanorobots, perhaps) could either mutate or be maliciously released such that it causes significant harm to humans and the environment. If such self-replication became uninhibited, a chain reaction of self-replication could significantly increase the potential damage to humans and the environment, or even engulf the entire Earth in a mass of self-replicating matter known as \"grey goo.\" 129 On the other hand, others argue that self-replication of nanotechnology is extremely unlikely to occur, and that the more imminent threat from nanotechnology arises from incredibly destructive weapons developed with nanotechnology. 130 Such nanotech weapons could be \"more powerful than any known chemical, biological, or nuclear agent\" and very difficult to detect. 131 For this reason, some commentators point out the possibility of a \"nanotechnology arms race,\" which poses risks of state or non-state actors intentionally using nanotech weapons or of accidents involving weapons development. 132 The international community's inability to eliminate the nuclear weapons programs of North Korea and possibly Iran highlights the complications of curtailing vastly powerful and destructive weapons once they are possessed by some states. Overall, future nanotechnology developments present several GCRs/ERs from both accidental and intentional uses. 
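The \"grey goo\" concern above is, at bottom, an argument about exponential growth: a replicator that doubles each cycle crosses from negligible to planetary scale within a surprisingly small number of doublings. The Python sketch below is purely illustrative; the replicator mass, doubling time, and target mass are hypothetical placeholder values chosen only to show the arithmetic, not figures drawn from the nanotechnology literature.

```python
# Illustrative back-of-envelope model of unchecked exponential self-replication.
# All parameters are hypothetical placeholders; only the arithmetic is the point.
import math

replicator_mass_kg = 1e-15   # assumed mass of a single nanoscale replicator
doubling_time_hours = 1.0    # assumed time for the population to double
target_mass_kg = 1e15        # assumed mass of raw material eventually consumed

# Doublings needed so that 2**n * replicator_mass >= target_mass.
doublings = math.ceil(math.log2(target_mass_kg / replicator_mass_kg))
elapsed_days = doublings * doubling_time_hours / 24

print(f'{doublings} doublings, roughly {elapsed_days:.1f} days at one doubling per hour')
# About 100 doublings: even with made-up numbers, exponential self-replication
# moves from invisible to planetary scale within days, which is why an
# uninhibited chain reaction is treated as a GCR/ER rather than an ordinary risk.
```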
\n Artificial Intelligence One common definition of AI is \"the science of making machines do tasks that humans can do or try to do.\" 133 While much of society's association of AI arises from fictional film and literature-2001: A Space Odyssey; I, Robot; and the Terminator series all portray AI in a dangerous light-experts predict that many of the premises behind such science fiction will occur: computers with intelligence similar to or greater than humans, robotic warfare, vehicles operated by computers, and so forth. While AI may prove to benefit society immensely, many experts believe there is a risk of GCR/ER from highly sophisticated AI. \n a. Current Technology While AI is currently nowhere near the level that would pose a GCR/ER, several milestones show progress towards creating computers with immense AI. For example, a robot named Data does comedy routines in front of live audiences and is able to respond to the reaction of the crowd and adjust its comedy routine in real-time. 134 Neural \"cochlear\" implantscomputer devices that translate sound and transmit it into the brain-provide hearing to individuals who are deaf. 135 Google developed a car that is automatically driven by computers, which has already logged 140,000 miles. 136 And supercomputers with AI defeated humans at games of great intellect and rational thinking: The IBM supercomputer Watson, which analyzes \"200 million pages of information\" in a mere three seconds, defeated several former champions on Jeopardy, and the IBM supercomputer Deep Blue defeated grandmaster Garry Kasparov at chess. 137 Meanwhile, several educational institutes are already entirely dedicated to advancing AI. For example, the Singularity University, hosted by NASA and founded in part by Google, seeks to develop AI to \"solve humanity's grand challenges,\" 138 while the Singularity Institute for AI (\"Singularity Institute\"), established in part by former Pay Pal CEO Peter Thiel, teaches graduate students and executives about AI and engages in AI research and development. 139 \n b. Forthcoming Technologies Perhaps the most significant emerging AI technology arises out of the concept of \"the Singularity,\" which is \"the technological creation of smarter-than-human intelligence.\" 140 The basic premise of the Singularity is that if humans create a superhuman AI, then this superior mind could create a more superior mind, beginning a feedback loop that would cumulate in a !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 18! near godlike intelligence. 141 Such \"superintelligence\" could feasibly think one million times faster than the human brain and even rewrite its own code to \"recursively self-improve.\" 142 And humans themselves could be enhanced through \"direct brain-computer interfaces\" or \"biological augmentation of the brain.\" 143 Admittedly, superhuman AI seems a long way off: A recent study measuring the ability of the human brain to store, communicate, and compute information concluded that the processing power of the human brain is equivalent to the combined processing power of all general-use computers in the world in 2007. 144 However, some scientists predict an \"intelligence explosion\" in machines during the 21st century. 
Raymond Kurzweil, an MIT graduate with 19 honorary doctorates who was awarded the National Medal of Technology by Bill Clinton and who Bill Gates says is the best predictor of future AI he knows, forecasts that the Singularity will occur by 2045 at a level \"about [one] billion times the sum of all the human intelligence that exists today\" based on a model of exponential growth in technology. 145 Before then, predicts Kurzweil, humans will \"reverse-engineer the human brain by the mid-2020s,\" and by the end of the 2020s, \"computers will be capable of human-level intelligence.\" 146 On the other hand, others, like biologist Denis Bray, argue that the unique biochemical processes in the human body far supersede the programmable mind of a robot. 147 \n c. Benefits of artificial intelligence Current practical applications of AI include the unmanned navigation of cars; consumer protection, like discovering credit card fraud; educational advancement; medical technology; and data mining and analysis. 148 And in the future, superintelligent machines could benefit mankind by helping \"eradicate diseases, avert long-term nuclear risks, and live richer more meaningful lives.\" 149 Ethicist Michael Ray LaCha argues that AI will be \"morally perfect\" and that humans !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 141 Id. I.J. Good, a British mathematician, cryptologist, and computer engineer, first came out up with the idea of an \"intelligence explosion\" in 1965. According to I.J. Good, \"[s]ince the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.\" Hear That? It's the Singularity Coming, SENTIENT DEVELOPMENTS (June 29, 2011), at: www.sentientdevelopments.com/2011/06/hear-that-its-singularity-coming.html. 142 Overview: What is the Singularity?, SINGULARITY INSTITUTE FOR ARTIFICIAL INTELLIGENCE, at: http://singinst.org/overview/whatisthesingularity (last visited Jan. 31, 2012). 143 Id. 144 Todd Leopold, Roboticist Sees Improvisation Through Machine's Eyes, CABLE NEWS NETWORK (CNN) (Feb. 03, 2012), at: http://www.cnn.com/2012/02/03/living/creativity-improvisation-intelligence-heather-knight/index.html. 145 Kurzweil developed an exponential curve that predicts \"change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.\" Like the famous Moore's law, the figure doubled roughly every two years. Kurzweil ran the model backwards to the year 1900 and it still held true. Then he checked it against \"the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents,\" which confirmed the exponential growth of his model. Grossman, supra note 139. 146 Id. 147 ! 19! may rely on AI to make moral decisions. 150 An example of this would be relying on AI to decide whether going to war is likely to result in a net positive benefit to society. 
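Kurzweil's forecast rests on the simple doubling model described in note 145: the computing power purchasable for $1,000, measured in MIPS, doubles roughly every two years. The short sketch below only illustrates that compounding; the baseline MIPS value, base year, and sample years are assumptions for illustration, not Kurzweil's published figures.

```python
# Illustrative compounding of computing power per $1,000 under a fixed doubling
# period, in the spirit of the exponential model described in note 145.
# The baseline value, base year, and sample years are assumptions, not source data.

def mips_per_1000_dollars(year, base_year=2012, base_mips=10_000.0, doubling_years=2.0):
    '''Project MIPS per $1,000 assuming one doubling every `doubling_years` years.'''
    return base_mips * 2 ** ((year - base_year) / doubling_years)

for year in (2012, 2022, 2032, 2045):
    print(year, f'{mips_per_1000_dollars(year):.3g} MIPS per $1,000')
# A doubling every two years multiplies the figure about 32-fold per decade,
# which is why projections built on this model reach enormous values by mid-century.
```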
Meanwhile, many proponents of AI believe the Singularity to be of paramount importance in the future of the Earth - as significant as the \"first self-replicating chemical that gave rise to life on Earth,\" according to some 151 - because such superintelligence would feasibly develop revolutionary technologies at a rapid pace, such as by discovering cures for all diseases or inventing new means to produce extraordinary amounts of food. 152 Others contend that AI could manage society without the \"[c]onflicts of interest, flawed judgment, lack of information, or political considerations\" that result in flawed human decisionmaking because AI could have knowledge vastly superior to that of humans and could be programmed to act without human bias. 153 \n d. Risks from artificial intelligence AI poses a GCR/ER even if the chance of this risk materializing is enormously low. While some argue that humans need AI for their long-term survival, 154 others argue that superhuman AI presents the foremost challenge to the future existence of humans. 155 Assuming that humans develop highly intelligent AI, proponents of the technology argue that coding AI to be inherently friendly and possess moral values (\"Friendly AI\") mitigates any risks. 156 However, some individuals and corporations may derogate from Friendly AI principles, which would present the risk of creating a dangerous form of AI. 157 The difficulty of regulating vastly powerful technologies that could benefit society but also risk massive destruction is evident from current politics involving Iran's development of nuclear technologies, which Iran claims is for use as an energy source but which much of the international community believes is intended to develop weapons. 158 Furthermore, AI could suffer a \"mechanical failure\" in which AI does not work as designed and therefore presents unpredictable risks to mankind. 159 Finally, anticipating and controlling the outcome of highly intelligent AI is difficult, 160 especially if AI is \"self-improving\" and thus able to alter its own programming. 161 150 WENDELL WALLACH & COLIN ALLEN, MORAL MACHINES: TEACHING ROBOTS RIGHT FROM WRONG 194 (2009). 151 One specific risk from AI is that highly intelligent computers may be subject to errors, collapses, viruses, or other unforeseen developments that compromise their ability or intent to properly manage, for example, nuclear weapons, transportation, or other major elements of society. 162 A related risk is that a programming error could give AI the imperative to destroy mankind, or that AI's \"benevolent\" goals may conflict with the interests of humans. 163 If this occurs, AI may have the ability to wipe out humans if it is intellectually superior to humans. 164 Another risk is that AI would out-compete humans because of the pressures of evolution and self-preservation, which may compel AI to contest humans for scarce resources. 165 Finally, there is the risk that humans will undergo a calculated self-termination as they opt to transcend their biological forms and be \"transferred\" to machines as \"post-humans.\" 166 Overall, there is a clear GCR/ER from AI. \n The Large Hadron Collider The Large Hadron Collider (LHC) is a recent example of a failure of the legal system to properly consider what some scientists believed to be a GCR and an ER from a radical new technology. 
In September 2008, after fourteen years and, according to mid-range estimates, at least $8 billion dollars spent, 167 the European Organization for Nuclear Research 168 (CERN) began using the world's most powerful particle accelerator. The LHC accelerates protons or, more recently, lead ions, in opposite directions around a massive vacuumed track, about 27 kilometers (approximately 17 miles) in circumference and 50 to 175 meters underground, which the magnet-guided particles whip around at 99.99999 percent speed of light 169 (clocking in about 11,000 laps of 27 kilometers every second) 170 as they continuously smash together at up to 7 trillion electron volts 171 at four points of intersection-thereby creating exotic particles, evidently even the coveted Higgs boson-for hours at a time. 172 Most scientists throughout the world rejoiced at creating a particle accelerator seven times the energy of its nearest competitor, the Tevatron particle accelerator at the Fermi National Accelerator Academy in Illinois, and which has created \"sub-atomic fireballs with temperatures of over ten trillion degrees [centrigrade], a ! 21! million times hotter than the [center] of the Sun\" 173 that \"[recreates] conditions … only a trillionth of a second after the Big Bang.\" 174 Many scientists held their breath for the LHC to \"reveal the origins of mass, shed light on dark matter, uncover hidden symmetries of the universe, and possibly find extra dimensions of space.\" 175 However, others feared that the accelerator could create a black hole to gobble up the Earth or \"[convert] all the [Earth's] matter into a super-dense glob called a 'strangelet.'\" 176 Scientists in the latter category attempted to seek court review of the LHC in a variety of international and domestic courts but to no success. In this regards, the following subsections highlight two problems with current international regulations of advanced technologies that may pose a GCR/ER: first, the difficulty of getting a court to consider a minority opinion in a highly complex scientific case, and second, the potential bias of scientists as risk assessors. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! \n a. Difficultly in getting a court to consider the case The LHC highlights the problems with traditional courts serving as risk assessors for a groundbreaking technology, i.e. one that has never been tested before. 177 While some scientists and other individuals sought injunctions from various national and international judicial bodies against operating the LHC-including a Swiss court, 178 the European Court of Human Rights (ECtHR), 179 the U.S. Ninth Circuit Court of Appeals, 180 the German Constitutional Court, 181 the International Security Court, 182 and the Administrative Court of Cologne 183 -all such attempts failed for a variety of reasons discussed below. In the United States, Walter L. Wagner, a nuclear physicist, and Luis Sancho, founder of Citizens Against the Large Hadron Collider, failed in their bid to enjoin operation of the LHC in the case Sancho v. U.S. Department of Energy. The Ninth Circuit Court of Appeals, reviewing de novo the District Court of Hawaii's decision to deny standing for lack of subject matter jurisdiction, 184 ruled that, inter alia, Wagner does not have standing because he could not !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! establish an injury in fact. 
According to the Court, Wagner's claim demonstrated, at most, "potential adverse consequences," which falls short of the standing requirement of a "credible threat of harm" because the risk of the obliteration of the Earth is too speculative. 185 However, advanced technologies that pose a GCR/ER are often highly improbable yet catastrophic if the risk materializes, and thus GCRs/ERs are nearly impossible to challenge in the U.S. judicial system without consideration of the magnitude of their potential harm. Operation of the LHC was also challenged in Europe. The German Constitutional Court rejected the claims of Gabriele Schröter, a biochemist who has published over 300 papers and whom some call the "father of Chaos theory," because the Court believed that theories of mini-black holes or "strange matter" were unsubstantiated and theoretical, although the Court did recommend a conference on LHC safety issues. 186 Meanwhile, the ECtHR rejected German scientist Dr. Otto Rössler's attempt to enjoin the LHC without stating a reason for its decision. 187 Notably, none of these courts had any expertise in science, so whether they could properly assess the associated risks is questionable. Although many judges have shown the ability to assess scientific principles, a thorough understanding of a scientific topic often requires cross-examination and other procedures in full-fledged court proceedings, 188 and thus dismissing a case before the testimony develops truly compromises the consideration that minority science receives. Furthermore, judges who receive scientific training are better able to weigh conflicting scientific evidence and to judge methodological problems in scientific research presented to the court. 189 Overall, these test cases demonstrate that neither domestic nor international courts are equipped to handle disputes involving low-probability, high-consequence advanced technologies. There seems to be no court that offers judicial relief for such situations, despite the fact that if a GCR/ER materializes, the lives of a huge number of people are at stake. 190 While courts may rely upon the political process to mitigate such risks, 191 emerging technologies are becoming increasingly widespread and privatized, and the political process has thus far failed to create sufficient safeguards against these risks.
b. Potential bias from CERN scientists as risk assessors
While scientists often reassure the public that GCRs/ERs from emerging technologies are nonexistent or negligible, some critics warn that scientists are unreliable risk assessors of their own work because financial incentives, career pressures, 192 and competition among rival groups of scientists taint their judgment. 193 Despite these inherent conflicts of interest, almost all of the LHC safety reports came from CERN scientists. 194
Two such CERN reports discussed various alleged risks of the LHC, but they did not weigh the probability of a GCR/ER against the value of operating the LHC. 195 While the reports concluded that operation of the LHC presented no danger, 196 at least some experts doubt this conclusion. For example, revered scientist Martin Rees quantified the risk of a global catastrophe arising from the LHC at about one in 50 million, 197 whereas Toby Ord of Oxford University estimated the risk of a disaster as falling somewhere between one in 10,000 and one in 1,000,000. 198 While the differences between these estimates show the difficulty of estimating some GCRs/ERs, they plainly contradict the "no danger" conclusion of CERN and highlight the need for courts to consider unbiased scientific information in making decisions concerning low-probability GCRs/ERs. While the operation of the LHC has not caused worldwide destruction, there have still been several accidents that, for some, put the reliability of CERN's assurances into doubt. The first incident occurred only nine days after the LHC began operating in 2008, when a "faulty electrical connection between two of the accelerator's magnets" 199 melted, causing one ton of helium to blast into the tunnel 200 and shutting down the LHC for 14 months. 201 The mistake in question was described by some as "basic" and as one that should have been discovered during "four engineering reviews." 202 This oversight caused some commentators to worry about mistakes in the mechanics or risk assessment of the LHC that could have more devastating results. 203 Overall, this brief case study demonstrates that self-assessments of safety by scientists intimately involved with a project should be doubted. Therefore, an independent risk assessment should be a mandatory element of engaging in emerging technologies that pose a GCR/ER.
III. BIOENGINEERING AND INTERNATIONAL LAW
Despite the low probability but extremely high stakes of a global catastrophe posed by bioengineering, nanotechnology, and AI, international law has done very little to limit these risks. This section focuses solely on the emerging technology of bioengineering as a GCR/ER under international law because, unlike nanotechnology and AI, the current level of available bioengineering science already presents a GCR/ER. Furthermore, nanotechnology and AI are essentially unregulated under international law, whereas international law regulates some forms of bioengineering at least to some extent. After discussing several binding and non-binding international law instruments, this section demonstrates that there is no binding international regime that sufficiently addresses trade in bioengineered food, much less the GCRs/ERs posed by other forms of bioengineering, such as a potentially highly deadly bioengineered organism or a genetically engineered human.
A. Convention on Biological Diversity
The Convention on Biological Diversity (CBD) functions to conserve biological diversity, sustainably use the components of biodiversity, and share the benefits of genetic resources in a fair and equitable way. 204 Based on this objective, bioterrorism and genetically engineered humans do not fall neatly within the objectives of the CBD. Nonetheless, a failure to mitigate GCRs/ERs arising from the accidental release of dangerous bioengineered organisms from a laboratory seems to fall within the biotechnology provisions of Article 8 of the CBD.
Article 8 of the CBD states that each Party shall (meaning mandatory as opposed to discretionary), "as far as possible and appropriate," "(g) Establish or maintain means to regulate, manage or control the risks associated with the use and release of living modified organisms resulting from biotechnology which are likely to have adverse environmental impacts that could affect the conservation and sustainable use of biological diversity, taking also into account the risks to human health…" 205 First, Article 8(g) establishes an affirmative duty of parties to "regulate, manage or control" the use and release of living modified organisms (LMOs), and thus a failure to properly regulate the actions of private laboratories could breach this provision. Second, while the CBD does not define "LMOs," the Cartagena Protocol on Biosafety to the Convention on Biological Diversity ("Cartagena Protocol") defines LMOs as "any living organism that possesses a novel combination of genetic material obtained through the use of modern biotechnology." 206 Highly fatal bioengineered organisms possess a "novel combination of genetic material" obtained through biotechnology by the plain meaning of the language, and thus they are LMOs. Third, while the act of bioengineering deadly organisms in a laboratory does not constitute a "release" of LMOs into the environment (to the contrary, such organisms are contained within a laboratory), bioengineered organisms in a laboratory do seem to qualify as a "use" of an LMO under Article 8(g). Although the term "use" is not defined in the CBD, bioengineering organisms in a laboratory clearly falls within the definition of "contained use" from the Cartagena Protocol ("any operation, undertaken within a facility … which involves LMOs that are controlled by specific measures that effectively limit their contact with, and their impact on, the external environment"). 207 Because the broader term "use" in the CBD likely encapsulates the term "contained use" from the Cartagena Protocol, bioengineering organisms in a laboratory qualifies as a "use" within the meaning of the CBD. 208 Therefore, GCRs/ERs arising from bioengineering do seem to fall within Article 8(g) of the CBD. While Article 8(g) seems to require parties to "regulate, manage or control" highly fatal bioengineered organisms, the effectiveness of the CBD in mitigating GCRs/ERs arising out of bioengineering is severely limited because the provisions do not establish specific actions that Parties must take. For example, measures like laboratory safety requirements, training of individuals handling highly fatal bioengineered organisms, or laboratory monitoring requirements are all absent from the CBD. Although the Cartagena Protocol significantly elaborates on biosafety issues, the subsequent section concludes that its provisions are too trade-focused and discretionary to provide meaningful protection. 209 Furthermore, even if the CBD did impose specific requirements upon Parties, the CBD does not include an enforcement mechanism, and thus enforcing the provisions upon unwilling Parties would prove difficult.
In conclusion, while GCRs/ERs arising from bioengineering seem to fall within the CBD, the CBD is not an effective means of mitigating these risks.
B. Cartagena Protocol on Biosafety
The Cartagena Protocol expands upon the biosafety provisions of the CBD to regulate LMOs that may adversely affect biological diversity, 210 but the scope of the treaty is too trade-focused to sufficiently reduce the GCRs/ERs arising out of bioengineering. First, the scope of the Cartagena Protocol seems to include novel viruses and organisms developed in labs because Article 3 states that "living organisms" means "any biological entity" that can transfer or replicate genetic material, even "sterile organisms, viruses and viroids." 211 However, Article 4 establishes that the Cartagena Protocol applies to the "transboundary development, handling, transport, use, transfer and release" of LMOs. 212 The term "transboundary" should be interpreted to modify each of the subsequent actions: development, handling, transport, use, transfer and release. "Development," the first listed action, is not the only one that is transboundary in nature (handling and transport, for example, are inherently transboundary), and thus Article 4 can reasonably be interpreted to apply solely to transboundary actions. Likewise, the "risk assessment" and "risk management" provisions of the Cartagena Protocol only apply to LMOs being exported or imported, 213 and the "handling" requirements for LMOs under Article 18 only apply to transboundary international movement. 214 Overall, the transboundary and trade-oriented scope of the Cartagena Protocol limits its applicability to GCRs/ERs arising from bioengineering because such activities generally take place within the territory of one state, such as the handling of a deadly bioengineered organism in a laboratory.
Even though some provisions of the Cartagena Protocol do seem to apply to LMOs outside of the transboundary context, these provisions do not provide sufficient protections from GCRs/ERs arising out of bioengineering. For example, Article 17 applies to "unintentional transboundary movements" of LMOs, which would seem to include a bioengineered virus escaping from a laboratory within the territory of one state. However, this article merely requires parties to "notify affected or potentially affected states" of unintentional transboundary movements of LMOs rather than requiring any preventative measures or risk assessments. 215 Notifying a party after a GCR/ER materializes will not likely prevent the global catastrophic or existential harm from occurring. Likewise, the risk assessment and risk management requirements of the Cartagena Protocol are too discretionary to meaningfully mitigate low-probability GCRs/ERs arising out of bioengineering.
The Cartagena Protocol requires Parties to undertake a risk assessment and implement risk management measures to "regulate, manage, and control risks … associated with the use, handling and transboundary movement of living modified organisms." 216 Specifically, the risk assessment stage requires Parties first to evaluate the likelihood, the consequences, and the resulting "overall risk" of adverse effects from the release of an LMO, and second to recommend "whether or not the risks are acceptable or manageable." 217 Based on the conclusions of the risk assessment, the risk management stage then requires Parties to, inter alia, take measures "to the extent necessary" to "prevent adverse effects" of LMOs on biological diversity and human health, and also to take "appropriate measures" to "prevent unintentional transboundary movements of [LMOs]." 218 However, when applied to GCRs/ERs arising from deadly bioengineered organisms in a laboratory, the risk assessment and risk management provisions of the Cartagena Protocol fall short because decision makers in the risk management stage have broad discretion to decide whether or not risks are acceptable. According to the Guidance on Risk Assessment of Living Modified Organisms issued by the Ad Hoc Technical Expert Group (AHTEG) on Risk Assessment and Risk Management under the Cartagena Protocol on Biosafety, risk assessors are to provide recommendations as to whether or not risks arising from LMOs are "acceptable or manageable" based on their nation's own "protection goals," and final approval for the use of LMOs is entirely "up to the decision maker to decide," 219 which the Guidance concedes is "typically decided at a political level and may vary from country to country." 220 The lack of standardized protection goals across all countries and the significant discretion allocated to decision makers in regulating LMOs result in inconsistent risk management policies. For example, Parties have shown significant differences of opinion about which risks are "acceptable" for GM crops, which are extremely popular and widely grown in the United States, Argentina, and Canada (these countries account for 98 percent of global GM crop acreage) but resisted by European countries and Japan because of food safety and environmental concerns. 221 On the one hand, the risk assessment and management process is valuable in allowing Parties to consider the risks of LMOs and make policy decisions to mitigate those risks based on their domestic protection goals. On the other hand, when translated to GCRs/ERs from bioengineering, the single release of a dangerous bioengineered organism in any state could cause global catastrophic or existential harm in most or all other states. Therefore, sufficiently mitigating the risks of GCRs/ERs from bioengineering requires uniform measures, and the failure of the Cartagena Protocol to prescribe clear requirements on how and to what extent Parties should mitigate GCRs/ERs arising from deadly bioengineered organisms compromises its effectiveness.
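To see how this discretionary structure produces divergent outcomes, consider a minimal sketch of the two-stage process described above. The function names, the multiplicative combination of likelihood and consequence, and all of the numbers are assumptions introduced for illustration; the Protocol itself prescribes no particular formula or threshold.

```python
# Hypothetical sketch of an Annex III-style assessment followed by a national
# risk management decision; the formula, thresholds, and figures are invented
# for illustration and are not drawn from the Protocol.

def overall_risk(likelihood: float, consequence: float) -> float:
    """Combine likelihood and consequence into a single expected-harm figure."""
    return likelihood * consequence

def manage_risk(expected_harm: float, national_threshold: float) -> str:
    """Acceptability is judged against each Party's own protection goals."""
    return "approve use" if expected_harm <= national_threshold else "restrict or prohibit"

# One shared assessment of a laboratory-contained, highly fatal bioengineered organism.
harm = overall_risk(likelihood=1e-4, consequence=1e8)   # 10,000 expected fatalities per year

# The same assessment, filtered through two different sets of protection goals.
print(manage_risk(harm, national_threshold=1e5))   # permissive Party: "approve use"
print(manage_risk(harm, national_threshold=1e2))   # precautionary Party: "restrict or prohibit"
```

Because both the combination rule and the acceptability threshold are left to each Party, the same deadly organism can be lawfully approved in one jurisdiction and prohibited in another, which is precisely the inconsistency that the uniform measures proposed below are meant to avoid.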
Even if the risk assessment and risk management provisions were more stringent in requiring Parties to minimize GCRs/ERs arising out of bioengineering, the Cartagena Protocol does not include any meaningful recourse against Parties that fail to meet their obligations. Parties attempted to address the lack of recourse under the Cartagena Protocol with the Nagoya-Kuala Lumpur Supplementary Protocol on Liability and Redress to the Cartagena Protocol on Biosafety ("Supplementary Protocol"), which creates a scheme of liability and redress for transboundary damage from LMOs, including damage from "unintentional transboundary movements" of the type that may arise from an accidental release of a bioengineered organism from a laboratory. 222 However, the Supplementary Protocol is insufficient to prevent GCRs/ERs from materializing because its provisions only provide reactive redress for damage caused by an operator 223 after an LMO has already been released, whereas a GCR/ER should never be allowed to materialize. 224 Even though Article 5(5) prospectively regulates LMOs by requiring operators to take "appropriate response measures" if there is a "significant likelihood" that the use of LMOs will result in "damage" 225 if "timely response measures are not taken," GCRs/ERs are low-probability and thus do not present a "significant likelihood" of materializing. 226 Note also that the Supplementary Protocol has not entered into force because it requires the accession of forty Parties to the Cartagena Protocol, and only the Czech Republic and Latvia have thus far acceded. 227 Finally, another major limitation of both the CBD and the Cartagena Protocol is that the United States is not a party to either instrument. 228 Because the United States possesses some of the most sophisticated and potentially dangerous bioengineering technology, the absence of the United States from these instruments limits their effectiveness in reducing GCRs/ERs. Overall, the Cartagena Protocol does not seem to be an effective instrument to regulate GCRs/ERs from bioengineering.
C. Biological Weapons Convention
The Biological Weapons Convention is too focused on bioterrorism, and too neglectful of biosafety issues, to sufficiently mitigate GCRs/ERs arising out of bioengineering. Under the Biological Weapons Convention (BWC), a state party cannot "develop, produce, stockpile or otherwise acquire or retain microbial or other biological agents [or] toxins … of types and in quantities that have no justification for prophylactic, protective or other peaceful purposes…." 229 Legitimate scientific research, even research that poses a risk of accidentally releasing a highly deadly virus or of publishing information that can be used for bioterrorism, usually has a "prophylactic, protective or other peaceful [purpose]" under Article I, which severely limits the BWC in regulating GCRs/ERs arising out of bioengineering. Bioengineered humans also clearly fall outside the scope of the BWC.
Similarly, while the BWC creates obligations that restrict the transfer of biological weapons information or technologies, these obligations exempt peaceful purposes. Under Article III of the BWC, a state party cannot "transfer … directly or indirectly … assist, encourage, or induce any State, group of States or international organization to manufacture or otherwise acquire any of the agents, toxins, weapons, equipment or means of delivery specified in [Article I]." 230 Thus, States have an affirmative duty to ensure that their bioengineering technologies are not transferred in violation of this provision. This article possibly applies to instances where States indirectly grant malevolent actors access to bioengineering techniques through the publication of research and studies. However, Article X makes clear that the BWC exempts the "exchange of equipment, materials, and scientific and technological information" for biological agents when used to prevent disease or for other peaceful purposes, while also exempting the "international exchange of [biological] agents and toxins and equipment" for peaceful purposes. 231 Thus, Article III fails to address the GCRs/ERs arising out of dual-use bioengineering research that is conducted for peaceful purposes but could result in an accidental release or be put to unintended malicious use. While the State Parties to the BWC have shown a growing concern for biosafety issues, such efforts are nonbinding and thus cannot be relied upon to protect against GCRs/ERs related to biosafety. In 2006, the Parties called upon State Parties to adopt "…legislative, administrative, judicial and other measures" to "ensure the safety and security of microbial or other biological agents or toxins in laboratories, facilities, and during transportation, to prevent unauthorized access to and removal of such agents or toxins." 232 Five years later, they called upon State Parties to "implement voluntary management standards on biosafety and biosecurity" and to "encourage the promotion of a culture of responsibility amongst relevant national professionals and the voluntary development, adoption and promulgation of codes of conduct." 233 While these measures may put political pressure upon State Parties to increase both biosafety and biosecurity, they are nonbinding and therefore cannot be relied upon to mitigate GCRs/ERs arising out of bioengineering. Even if the BWC included measures to address issues of biosafety and biosecurity, there is no formal compliance-monitoring body or verification mechanism to ensure proper enforcement. 234 State Parties engaged in seven years of negotiations over a "BWC Protocol" that would establish an inspection regime as well as a variety of other measures intended to bolster, inter alia, the effectiveness, transparency, and implementation of the BWC, but negotiations have stalled. 235
In particular, the Bush Administration stifled progress when it withdrew from negotiations in 2001 because of concerns over the BWC Protocol's ineffectiveness, harm to biodefense research, and increased costs to the biotechnology industry; the Obama Administration has continued to oppose the BWC Protocol. 236 Finally, another major limitation is that there are only 165 State Parties to the BWC, a membership that "falls behind other multilateral arms control, disarmament and non-proliferation treaties." Thus, bioengineering activities that present a GCR/ER could simply be conducted within the territory of non-Parties. 237 Overall, while the BWC may reduce GCRs/ERs related to biosecurity, the failure to sufficiently address issues like the accidental release of dangerous bioengineered organisms or human bioengineering, as well as practical limitations like the large number of states that are not Parties, limits its ability to mitigate GCRs/ERs from bioengineering.
D. Human Dignity Instruments
Inheritable human genetic alterations are subject to a variety of international soft law, legally binding regional instruments, and the occasional condemnation by IGOs, but no global convention regulates this GCR/ER. 238 The predominant binding instrument that regulates human genetic engineering is the Council of Europe's Convention on Human Rights and Biomedicine (CHRB). First, Article 13 of the CHRB states that "[a]n intervention seeking to modify the human genome may only be undertaken for preventive, diagnostic or therapeutic purposes and only if its aim is not to introduce any modification in the genome of any descendants." 239 The accompanying CHRB Explanatory Report clarifies that Article 13 strictly prohibits any inheritable genetic alterations of humans. 240 Notably, the Explanatory Report also acknowledges that permitting alteration of the human genome could endanger the entire human species, thus clearly recognizing the GCR/ER at issue. 241 However, while this convention may be effective at reducing the risk of a GCR/ER arising from the genetic engineering of humans within certain European countries, only 29 out of 47 Council of Europe Member States are signatories and the rest of the world is left out, limiting the global effectiveness of the CHRB. 242 Another possibility is that inheritable genetic alterations may constitute a human rights violation, but this seems unlikely under international law. Fukuyama argues that reengineering the "essence of humanity itself" and creating a new species could be a "crime against humanity," which would violate the Rome Statute of the International Criminal Court ("Rome Statute"). 243 However, this argument does not seem to fall cleanly within the plain language of the Rome Statute. 244 The Rome Statute clearly establishes the scope of "crimes against humanity" as acts "…committed as part of a widespread or systematic attack directed against any civilian population, with knowledge of the attack…." 245 In turn, the word "attack" is defined as a "course of conduct involving multiple commissions of acts," which most but not all states interpret not to require the actual use of force; thus, a scientific bioengineering program could possibly qualify as an "attack." 246 However, the plain language of the Rome Statute excludes acts that are not attacks "directed against" a civilian population.
Scientists who create genetically inheritable alterations in humans are not likely committing an attack "directed against" a population; rather, they are attempting to improve the human body for the benefit of an individual or society. Any subsequent attack directed against humans, such as the possible suppression of humans by posthumans, is a mere consequence of inheritable genetic alterations. Therefore, inheritable genetic alterations do not likely violate the Rome Statute.
IV. RECOMMENDATIONS FOR AN EMERGING TECHNOLOGIES TREATY
While bioengineering that poses a GCR/ER is subject to several nonbinding international instruments, no internationally binding obligations sufficiently reduce the catastrophic risks arising out of bioengineering. 247 One possible solution is to expand several of these existing instruments and use them together in a piecemeal approach to regulate the various aspects of GCRs/ERs arising out of bioengineering. However, because the current regimes are inadequate and because the scope of emerging technologies that present a GCR/ER will likely increase as science continues to develop at a rapid pace, the better solution is for states to agree to a comprehensive treaty that can sufficiently mitigate the unique aspects of GCRs/ERs arising from all emerging technologies. Therefore, this paper proposes the framework of a model treaty that would mitigate GCRs/ERs arising out of emerging technologies with the following regulatory mechanisms: use of the precautionary principle, a body of experts, a review mechanism, public participation and access to information, binding reforms for scientists, laboratory safeguards, and oversight of scientific publications.
A. New International Treaty
GCRs/ERs arising out of emerging technologies are unique in that a single event can result in widespread destruction, which is why the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) acknowledges that only a global convention is sufficient to curtail these ERs. 248 If a GCR/ER regulatory regime regulates only some states but not others, dangerous emerging technologies could instead be developed and utilized in the unregulated states. For example, Richard Seed, an American physicist who wished to be the first person to clone a human, threatened to conduct his cloning in Mexico or Japan if the United States banned human cloning. 249 And if some states ban or regulate emerging technologies while others do not, global security could be threatened because "rogue states" would have a monopoly over dangerous emerging technologies. 250 Furthermore, without a truly global treaty, countries competing to quickly develop emerging technologies may engage in an arms race that promotes speed over safeguards. 251 Finally, some states may believe that, absent regulations binding upon all states, their emerging technology industries will be placed at a competitive disadvantage to those of unregulated countries. Thus, all states should agree to an international treaty imposing evenhanded regulations.
An international treaty could potentially cover all emerging technologies that pose a GCR/ER, beginning with the three discussed in this paper: nanotechnology, bioengineering, and AI. One significant reason that a GCR/ER international treaty should regulate nanotechnology, bioengineering, and AI is that these emerging technologies are predicted to overlap in many areas. Examples of potential convergences of nanotechnology, bioengineering, and AI include nano-sized components that interact with bioengineered organisms, like a bioengineered photosynthesis protein from a plant integrated with a nanotech film to capture sunlight and convert it into electricity; 252 a targeted-killing weapon that integrates a lethal genetically engineered organism with a nanoparticle that releases the organism upon detecting certain genetic traits in an individual's DNA; 253 a sophisticated form of AI that creates new technologies utilizing nanotechnology or bioengineering; 254 and the use of nanotechnology, bioengineering, and AI either to "enhance" humans (termed "posthumans") or to create a superintelligent machine with biological and nanotech properties. 255 An international treaty that covers all of these emerging technologies is best equipped to regulate their use as the boundaries between them blur. Furthermore, nanotechnology, bioengineering, and AI all have broad social and ethical implications, and so placing all of them under the auspices of one convention creates a central hub from which the public can determine what risks it is willing to take and which technologies should become pervasive in society. Finally, scientists are likely to uncover as yet unknown GCRs/ERs from emerging technologies in the future, and so a GCR/ER treaty for emerging technologies should be flexible enough to incorporate other emerging technologies that could pose a GCR/ER, such that regulators can take relatively quick action at the international level rather than having to negotiate a new legal instrument. States should consider concluding an international convention on GCRs/ERs from emerging technologies under the auspices of an existing international governmental organization (IGO). For example, concluding a treaty under the auspices of the United Nations would be beneficial because linkages could be made with relevant U.N. subsidiary bodies, such as the U.N. Commission on Science and Technology for Development (CSTD), 256 and because the United Nations has substantial financial resources and political clout. Another possibility is to conclude a treaty under the auspices of the World Health Organization (WHO), which already works on issues like bioengineering and nanotechnology through non-binding initiatives and other means. 257
For example, the WHO Biosafety and Laboratory Biosecurity Programme (BLBP) organizes awareness-raising workshops, develops training materials for laboratory workers, and has produced a nonbinding World Health Assembly resolution that "urges" Member States to take measures such as improving laboratory biosafety and increasing training for laboratory workers. 258 However, the WHO is traditionally not a forum under which treaties are concluded, with the 2003 Framework Convention on Tobacco Control being the only convention thus far concluded under its auspices. 259 An alternative is to utilize the WHO's extensive resources on protecting human health by linking a treaty on GCRs/ERs from emerging technologies to the WHO or some other IGO through a protocol, a joint task force, or a collaborative agreement. 260 On the other hand, rather than concluding a new international agreement, states could agree to amend existing international treaties to include increased safeguards over a wider range of activities, but existing treaties are not ideal for this purpose. As discussed above, neither the CBD, the Cartagena Protocol on Biosafety, the BWC, nor the various human dignity instruments sufficiently mitigate GCRs/ERs arising out of biotechnology. 261 While states could choose to amend legally binding instruments like the BWC and the Cartagena Protocol to include emerging technologies, states did not draft these treaties with GCRs/ERs from emerging technologies in mind, and thus they would have to be radically transformed in order to provide an effective international regime. 262 Therefore, a new international treaty is the best way forward to regulate the emerging technologies contemplated in this paper.
B. Precautionary Principle
This section overviews the different applications of the precautionary principle and discusses how best to apply the principle to a treaty on GCRs/ERs from emerging technologies, concluding that specific obligations based on an affirmative application of the precautionary principle are ideal. While there are many renditions of the precautionary principle embodied in various international instruments, 263 its essence is that preventative or remedial measures can, should, or must be taken when there is scientific uncertainty as to whether an unacceptable hazard may occur. 264 The precautionary principle is an essential element of an international treaty regulating GCRs/ERs from emerging technologies because society should not risk massive damage to human health and the environment from GCRs/ERs on a "trial and error" basis.
Many conventions apply the precautionary principle in a negative manner, like Article 3(3) of the U.N.
Framework Convention on Climate Change (UNFCCC), which states that a "lack of full scientific certainty should not be used as a reason for postponing" precautionary measures to "anticipate, prevent, or minimize the causes of climate change and mitigate its adverse effects." 265 This provision does not actually require a state to take precautionary actions, but rather provides that scientific uncertainty is an inappropriate reason for declining to take them. For example, while the Intergovernmental Panel on Climate Change (IPCC) concluded in its Fourth Assessment Report (AR4) that increases in global temperatures are "very likely" (i.e., more than 90 percent likely) caused by anthropogenic greenhouse gas emissions, 266 UNFCCC Parties should not cite the lack of 100 percent scientific certainty as a basis for not reducing their greenhouse gas emissions. This is also an example of a nonbinding application of the precautionary principle because Article 3(3) merely states that UNFCCC parties "should" employ the precautionary principle in addressing climate change. 267 Other conventions apply the precautionary principle as an exception to otherwise binding requirements. For example, Article 5.7 of the World Trade Organization's (WTO) Agreement on Sanitary and Phytosanitary Measures (SPS Agreement) permits a WTO Member to adopt certain trade-restrictive sanitary and phytosanitary measures that would otherwise violate the liberalized trade provisions of the WTO when "relevant scientific evidence is insufficient" to assess particular risks to human, animal, or plant life or health. 268 The effect of this provision is that some WTO Members have more stringent sanitary and phytosanitary measures than others based on each Member's chosen level of health protection. Finally, some conventions reflect an affirmative application of the precautionary principle that recommends or requires precautionary measures in the face of scientific uncertainty. An example of this approach is embodied in Annex 4(a) of the Convention on International Trade in Endangered Species (CITES), which states that Parties "shall, in the case of uncertainty … act in the best interest of the conservation of the species" when deciding whether to alter the protective status of a species or when gauging the effect of international trade on conserving a species. 269 In the case of CITES, application of the precautionary principle is hashed out through specific obligations, such as the requirement that a species listed under Appendix I (species that are "threatened with extinction") first be transferred to Appendix II (species that, inter alia, would be "threatened with extinction" unless trade in the species is strictly regulated) and monitored for a requisite time period before being delisted. 270 Furthermore, in contrast to the UNFCCC, CITES requires Parties to employ the precautionary principle as reflected in Annex 4. 271 Negative, nonbinding, or exception-type applications of the precautionary principle are not ideal for effectively regulating GCRs/ERs arising from emerging technologies.
First, the negative application of the precautionary principle would merely suggest or require that states not use scientific uncertainty about GCRs/ERs arising from emerging technologies as a reason for not taking action; it would not impose an obligation upon states to affirmatively mitigate any risks. Still, applying the negative precautionary principle could be useful to eliminate "excuses" for noncompliance with other provisions of an emerging technologies GCR/ER treaty. Second, an exception-type application of the precautionary principle for GCRs/ERs from emerging technologies would be ineffective because this approach would result in different levels of protection in different states, whereas an effective emerging technologies GCR/ER treaty should impose roughly uniform standards because a GCR/ER that materializes in any single state would have a global impact. On the other hand, this version of the precautionary principle may be useful if, for example, states negotiated exceptions to the WTO liberalized trade scheme as part of a treaty on GCRs/ERs arising from emerging technologies. Finally, a nonbinding application of the precautionary principle would limit the treaty's overall effectiveness because states would be less likely to abide by its terms.
An affirmative and obligatory version of the precautionary principle would be most effective in regulating GCRs/ERs arising from emerging technologies. This approach could require states to take certain affirmative actions to regulate emerging technologies that pose an uncertain or undecided degree of GCR/ER. This is ideal because emerging technologies have the potential to cause global, permanent damage to the habitability of the Earth, and so requiring every state to implement precautionary measures is the prudent course of action, especially considering the rapid pace at which emerging technologies are developing. Furthermore, an affirmative and obligatory application of the precautionary principle is the best way to ensure that all states actually integrate precautionary mechanisms into their domestic law, because the requisite level of precaution can be determined and prescribed at the global level rather than left to a patchwork of country-by-country mechanisms that may or may not provide a sufficient level of protection on the global scale. Although states would not likely be willing to impose widespread regulations upon entirely "speculative" risks, one way to implement the precautionary principle in an emerging technologies treaty is to trigger specific requirements when available data demonstrate "reasonable grounds for concern" that a certain risk exceeds whatever level is deemed acceptable. 272 A body of experts representing widespread interests could determine when there are in fact "reasonable grounds for concern," and remedial measures would then be applied relatively consistently across all states. For example, the body of experts could determine that swarms of nanobots developed to search for oil in the ground (a technology that is currently being researched 273) pose uncertain risks and that available information shows "reasonable grounds for concern," which could trigger a requirement that states impose certain measures to regulate this technology or even prohibit it until the risks are further researched. Subsequently, the treaty could require proponents of the technology to rebut the "reasonable grounds for concern" in order to continue its development and/or application.
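As a rough illustration of how such a uniform trigger might operate, the sketch below encodes the "reasonable grounds for concern" finding and the burden-shifting rebuttal just described. The categories, function, and responses are hypothetical drafting choices for illustration only, not provisions of any existing instrument.

```python
# Hypothetical sketch of an affirmative precautionary trigger with burden shifting;
# every category and response below is invented for illustration.

from enum import Enum

class Finding(Enum):
    NO_CONCERN = "no reasonable grounds for concern"
    CONCERN = "reasonable grounds for concern"

def treaty_response(finding: Finding, concern_rebutted: bool) -> str:
    """The same obligation applies identically in every state party."""
    if finding is Finding.NO_CONCERN:
        return "development may continue, subject to ordinary reporting"
    if concern_rebutted:
        return "restrictions lifted once the body of experts accepts the rebuttal"
    return "mandatory safeguards or prohibition until the concern is rebutted"

# Example: the oil-prospecting nanobot swarm discussed above.
print(treaty_response(Finding.CONCERN, concern_rebutted=False))
```

The point of the uniform rule is that the finding, and not each state's domestic appetite for risk, determines the response, with the burden then resting on the technology's proponents.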
If the probability of a GCR/ER is inherently unascertainable even with extensive scientific research, as is often true of GCRs/ERs from emerging technologies, 274 one possible solution is for a body of experts to determine the steps necessary to prevent an "extremely bad worst case scenario" as a precautionary measure, regardless of likelihood and available information. 275 States would be required to take these steps if doing so imposes only modest costs that do not divert resources from other crucial investments. 276 An example would be to require the development of an approved failsafe mechanism in all superintelligent AI that has direct or indirect control over weapons systems. Another possibility is to ban or restrict emerging technologies that present an unquantifiable GCR/ER until scientific evidence proves that the risk, while still unknown, can effectively be reduced to a level that falls short of global catastrophic or existential harm. An example of this approach would be to restrict scientists from synthesizing certain types of highly fatal bioengineered viruses until experts can prove the effectiveness of a vaccine that could prevent human death on a global scale if the virus were accidentally or purposefully released.
C. Composition and Function of a Body of Experts
A body of experts should have general regulatory authority over emerging technologies that pose a GCR/ER in all states that are parties to the convention. One possible model is the Environmental Protection Agency (EPA), which, inter alia, promulgates and enforces regulations according to environmental statutes. 277 Likewise, the body of experts could apply the precautionary principle as previously discussed 278 and take the regulatory steps necessary to reduce GCRs/ERs to an acceptable level. Depending on the general requirements created by a treaty on emerging technology GCRs/ERs, this body of experts could deploy a variety of regulatory tools, such as technical restrictions on products; permit requirements; total bans on certain emerging technologies; reporting requirements for certain industry sectors; laboratory safety rules; mandatory environmental, human health, and social impact statements for certain activities; and liability mechanisms to punish violators whether or not their activities cause any harm. In terms of composition, the body of experts should consist of scientists, lawyers, government authorities, civil society representatives, and other experts chosen based on their areas of expertise and equitable geographic representation. Specialists in nanotechnology, bioengineering, and AI are best equipped to handle complex fact patterns involving these technologies, especially as the science becomes more advanced, and so special qualifications in one or more of these fields should be mandatory. 279 Government authorities should have both a fluent understanding of various domestic legal systems and expertise in one or more emerging technologies.
Meanwhile, civil society representation in the body of experts is essential both to inform the other experts about what society considers an "acceptable" level of risk and to shape discussions about whether developing certain emerging technologies will have an undesirable impact on society. For example, the signatories to the Principles for Oversight of Nanotechnologies and Nanomaterials expressed concern that "[g]overnments and industry developers of nanotechnologies provide few meaningful opportunities for informed public participation in discussions and decisions about how, or even whether, to proceed with the 'nano'-ization of the world." And in terms of human bioengineering, Annas et al. argue that altering the human species is an inherently democratic matter, a decision that should only be made by a body with global representation. 280 Civil society representation will help ensure that global democracy influences global emerging technology regulations. While a treaty on emerging technologies should grant the body of experts significant authority to manage GCRs/ERs on a day-to-day basis, states will want to retain decisionmaking powers over any changes to the treaty, and they may also wish to reserve the power to second-guess the body of experts. If the body of experts finds it necessary to alter the treaty, whether to regulate a new technology under the convention or to modify the treaty provisions as applied to technologies already considered, the body of experts should be able to draft a resolution that becomes binding upon all parties if agreed to by a simple majority or two-thirds majority vote of all parties. This system could be modeled upon the Montreal Protocol to the Vienna Convention for the Protection of the Ozone Layer, under which Parties can reduce the allowable production or consumption of "controlled substances" (ozone-depleting substances) based on a two-thirds majority of present and voting parties. 281 Furthermore, if states wish to override a decision made by the body of experts, an emerging technologies treaty should require a majority or two-thirds majority of parties to vote in favor of doing so.
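A minimal sketch of the Montreal Protocol-style voting rule mentioned above, a two-thirds majority of Parties present and voting, is shown below. The vote counts are illustrative, and the exclusion of abstentions is an assumption about how such a clause is commonly drafted rather than a rule taken from the treaty text.

```python
# Minimal sketch of a two-thirds "present and voting" rule; the figures are
# illustrative, and excluding abstentions is an assumed drafting convention.

def resolution_passes(votes_for: int, votes_against: int) -> bool:
    """Abstaining and absent Parties do not count toward 'present and voting'."""
    present_and_voting = votes_for + votes_against
    return present_and_voting > 0 and 3 * votes_for >= 2 * present_and_voting

print(resolution_passes(votes_for=120, votes_against=50))   # True  (about 70.6 percent)
print(resolution_passes(votes_for=100, votes_against=60))   # False (62.5 percent)
```

Whichever threshold states ultimately choose, the design question is the same: a supermajority rule high enough to protect state consent, yet low enough that a small bloc of parties cannot freeze the regime as new technologies emerge.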
D. Review Mechanism
Domestic and international judicial systems currently lack sufficient review mechanisms for GCRs/ERs arising from emerging technologies. For example, in the United States, many federal courts use the Daubert standard to determine whether expert evidence is admissible in court. Under the Daubert standard, the judge decides whether expert evidence, which includes expert testimony, is admissible based on whether the evidence derives from "scientific knowledge." 282 In turn, "scientific knowledge" generally must be "testable." 283 Furthermore, judges must also consider whether expert evidence based on scientific knowledge receives "general acceptance" from scientists. 284 However, GCRs/ERs are often extremely low-probability, and so "general acceptance" that they will materialize is typically absent. Furthermore, GCRs/ERs are almost never testable because they should never be allowed to materialize. Finally, as discussed above, judges may not have the scientific understanding to sufficiently gauge GCRs/ERs, particularly when a case is dismissed before advocates have the chance to develop their full arguments, and the expert advice judges receive is often from scientists with a conflict of interest in the outcome of the case. 285 To remedy these problems, an international treaty regulating GCRs/ERs from emerging technologies could either (1) require states to establish domestic "science courts" that are equipped to consider alleged GCRs/ERs arising from emerging technologies and that are required to factor in minority scientific opinions whether or not they are "testable," 286 or (2) create an international court that enables citizens to submit disputes regarding GCRs/ERs from emerging technologies, much like the right of European citizens to submit disputes to the European Court of Human Rights. In either scenario, the judges should preferably be scientifically literate lawyers who are able to comprehend the science of emerging technologies and to effectively question experts in the emerging technology at issue. 287 If an international court is established, it could take over the enforcement functions of the proposed body of experts by applying traditional judicial mechanisms such as penalties, injunctions, and other measures. Finally, for certain activities (as determined by the body of experts), the court should require the proponent of an emerging technology to establish the lack of an unacceptable risk in order to prevail.
E. Public Participation and Access to Information
A treaty on emerging technology GCRs/ERs should include provisions for significant public participation in addition to civil society representation on the body of experts. Nanotechnology, bioengineering, and AI involve, inter alia, ethical, religious, philosophical, political, economic, and safety considerations, and thus a treaty on GCRs/ERs from emerging technologies should establish a forum in which the public can shape the debate about what kinds of technologies mankind should develop. Such a forum could gauge global public concerns about health risks, social effects, morals, religious implications, and the overall perception of emerging technologies in order to influence the body of experts, who should have a treaty-mandated duty to consider societal views in their decisions. 288 The public dialogue could take any form from a conference to a series of "town hall" style meetings, but civil society organizations (CSOs) representing the public should decide on the exact format through their participation in drafting the convention on GCRs/ERs from emerging technologies. Furthermore, integrating the precautionary principle into a treaty on GCRs/ERs arising from emerging technologies inherently requires consideration of what hazards are "acceptable" or, as discussed above, what constitutes "reasonable grounds for concern," benchmarks that should be heavily influenced by societal perspectives on emerging technologies and their risks. The treaty on emerging technologies should establish a subsidiary body responsible for organizing major public events and compiling data on societal perspectives to present to the body of experts and the state parties.
The civil society representatives on the body of experts should strongly advocate for the positions compiled at such public events. Furthermore, before the body of experts makes major decisions that do not require urgent action, there should first be a period of public comment, and the body of experts should be required to consider public sentiment in its decisionmaking. 289 Another specific way to empower the public to shape the international framework behind emerging technologies is to allow "observers" representing a wide variety of interests to participate both in the drafting of an emerging technologies treaty and in subsequent "conferences of the parties." The Conference of the Parties to CITES (COP) is an example of a forum with significant public participation from CSOs. Under CITES, CSOs may become observers if they meet certain qualifications and one-third of the CITES parties do not object. 290 Finally, the public should also have broad access to information regarding emerging technologies. Such information could include, at minimum, annual reports on safety issues related to emerging technologies and annual summaries of the current legal obligations arising out of the treaty on emerging technology GCRs/ERs (including summaries of cases decided by the associated judicial body and measures imposed by the body of experts). 291 In order to gather sufficient and accurate data, an emerging technologies treaty should create specific and mandatory reporting requirements for both state parties and the industries that research and develop emerging technologies. Furthermore, this information should not merely be made publicly available; a treaty on emerging technology GCRs/ERs should also mandate a communications body, perhaps overseen by the same subsidiary body responsible for organizing public events, that is responsible for disseminating information on emerging technologies to all sectors of society on a global level. Each state should also be required to grant "active" as opposed to "passive" access to information by actively distributing information on emerging technologies in the way that is most effective within its particular culture. 292
F. Regulating Scientists
An international instrument on GCRs/ERs should regulate the conduct of scientists because the stakes of GCRs/ERs are too high to leave to a small group of self-interested individuals. While scientists often reassure the public that GCRs/ERs from emerging technologies are nonexistent or negligible, as discussed above, scientists who assess the risks of their own work may be intentionally or unintentionally influenced by pressures to make profits, achieve scientific breakthroughs, and outpace other groups of scientists. 293 For example, while CERN scientists found no significant risks from the LHC, other experts concluded that the LHC did pose at least some risks, and subsequent accidents with the LHC weakened the credibility of CERN's assurances. 294 Yet CERN scientists were able to serve as their own risk assessors, and attempts by outside parties to impose meaningful oversight of the LHC were ineffective. 295 Although the LHC has not caused global catastrophic or existential harm, international law should ensure that scientists working with emerging technologies sufficiently consider and proactively minimize GCRs/ERs. There are many possible ways for a treaty to regulate scientists without stifling scientific development.
First, scientists should be required to undergo adequate training both to understand the nature of any GCRs/ERs arising from their particular field of work and to learn ways to mitigate these risks. For example, scientists working with genetically engineered viruses should be made aware of the risks posed by an accidental release, and they should undergo mandatory training to ensure that they follow strict protocols designed to prevent an accidental release. Second, scientists should be subject to a code of conduct that requires them to monitor their own ethical and professional conduct as well as the ethical and professional conduct of their peers and supervisors. One way to implement this concept is to create ethical oversight organizations based on the model of state bar associations, which require membership of all practicing lawyers, punish ethical violations, and compel lawyers to report their peers or superiors if they violate certain ethical and professional rules. 296 In the context of a treaty on GCRs/ERs from emerging technologies, the body of experts or a subsidiary body thereof could decide what constitutes "ethical or professional" conduct, and domestic ethical oversight organizations could then implement and refine these rules based on specific domestic needs. Finally, scientists should be equally regulated whether they are involved in government-funded projects or private projects. For example, while the NSABB provides recommendations on training and education for scientists in federally funded institutions, the growing number of private institutions conducting experimental research in emerging technologies clearly shows the need to regulate scientists working on projects funded with either federal or private money. 297

G. Safeguarding Laboratories and Dangerous Emerging Technologies

Nanotechnology, AI, and bioengineering are all susceptible to misuse, and thus an international treaty on GCRs/ERs from emerging technologies should address the security of laboratories and other means of accessing these technologies. First, certain laboratories that participate in developing emerging technologies that pose GCRs/ERs should be required to register their facilities. For example, section 415 of the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 requires facilities that manufacture, possess, pack, or hold food bound for U.S. consumption to register with the U.S. Food and Drug Administration (FDA), and such facilities must thereafter provide notice to the FDA about certain food shipments (section 307) and are also subject to record inspection by FDA agents (section 414). 298

294 Id. 295 Id. 296 See e.g. Rule 8.3 of the ABA's MODEL RULES OF PROFESSIONAL CONDUCT (2011) ("A lawyer who knows that another lawyer has committed a violation of the Rules of Professional Conduct that raises a substantial question as to that lawyer's honesty, trustworthiness or fitness as a lawyer in other respects, shall inform the appropriate professional authority"); see also Rule 8.4(c) of the ABA's MODEL RULES OF PROFESSIONAL CONDUCT (2011) ("It is professional misconduct for a lawyer to engage in conduct involving dishonesty, fraud, deceit or misrepresentation").
Similarly, facilities that engage in emerging technologies research that meets a certain threshold of danger (as determined by the aforementioned body of experts or a subsidiary body thereof) could be required to register their facilities at the international level, provide notice when they conduct or plan to conduct certain regulated activities, and make their records available for inspection by international authorities. Second, a treaty on GCRs/ERs should impose mechanisms to monitor specific technologies that pose a GCR/ER if misused. For example, DNA synthesizers could be "licensed, tagged with electronic locators, and programmed to forbid the synthesis of dangerous DNA sequences," as recommended by Harvard biologist George Church, with the body of experts or a subsidiary body thereof determining what constitutes a "dangerous" DNA sequence based on annual or semi-annual reviews. 299 The body of experts or a subsidiary body thereof should determine which technologies pose a concern and then require registration of these technologies so they can be monitored and traced. Third, laboratories conducting dual use research in emerging technologies should be required to meet a certain level of safety from accidental releases and theft. For example, laboratories handling the most dangerous bioengineered pathogens should be required to meet BSL-4 standards instead of being subject to non-binding recommendations on lab security. 300 Furthermore, some laboratories should be required to take certain measures to prevent theft or break-in by securing their physical compound and by installing advanced firewalls on their computer systems. Laboratories should also be required to undergo regular maintenance and inspection to ensure that they meet international regulations. In order to incentivize compliance, facilities that fail to meet safety regulations should be subject to substantial fines, and governments should be held financially liable for any significant damage that results from a failure to properly regulate their facilities. Finally, because the number of laboratories handling emerging technologies that pose GCRs/ERs is rapidly growing, states should be required to limit the total number of such laboratories to a number that can be effectively overseen by regulatory authorities.

H. Oversight Mechanism for Scientific Publications

Publicly disclosing scientific information that poses a GCR/ER opens the door for potential terrorists to obtain the information and then intentionally cause massive death to humans or damage to the environment, or else for amateur or under-qualified scientists to replicate such research without sufficient safeguards. This is why the NSABB recommended that findings on how to genetically engineer an airborne H5N1 virus should not be released into the public domain, specifically arguing that the risks outweighed the benefits to society. 301 However, the NSABB does not have the legal power to restrict scientific publications, and while so far the scientists behind the bioengineered H5N1 virus have not released their data, there is no guarantee that future scientists will comply with similar non-binding recommendations. Likewise, scientists Ray Kurzweil and Bill Joy criticized the decision of the United States Department of Health and Human Services to release the full genome of the massively deadly 1918 influenza virus ("the Spanish flu"), because releasing such a virus could kill tens if not hundreds of millions of people. 302 Kurzweil and Joy called for an "international agreement by scientific organizations" to oversee publication of scientific data that could result in acts of bioterrorism, equating the genetic code of deadly viruses to nuclear weapon designs. 303 States should expand this concept by creating an oversight mechanism for all sensitive materials from emerging technologies that pose a GCR/ER. For example, a subsidiary body of the body of experts could determine when the risk of releasing scientific information outweighs the potential benefits, and then take appropriate response measures. So as not to stifle scientific development, scientists whose research poses a GCR/ER should merely be required to redact certain sensitive information rather than being prohibited from releasing their data altogether.

299 Christopher F. Chyba, Biotechnology and the Challenge to Arms Control, ARMS CONTROL TODAY (Oct. 2006), at: http://www.armscontrol.org/act/2006_10/BioTechFeature.asp. 300 Canada recently mandated that only BSL-4 laboratories are allowed to handle lab-made H5 viruses. See THE CANADIAN PRESS, supra note 54. 301 Id.

IV. CONCLUSION

A series of fantastical scientific breakthroughs is leading towards or, in some instances, has already created technologies that question basic premises of life: that man cannot create life, that humans are the ultimate intelligent beings, or that we are limited by the basic building blocks we find on Earth. While nanotechnology, bioengineering, and AI offer great benefits to society, they also have the potential to cause global catastrophic or even existential harm to humans. While bioengineering has caused a revolution in crop production, genetically engineered viruses have the potential to cause global devastation if accidentally or purposefully released. Nanotechnology has yielded materials that are stronger and lighter, yet nanomaterials also pose unknown risks to human and animal health, and weapons developed from advanced nanotechnology could be far more destructive and concealable than nuclear bombs. And while AI could innovate every technology on the planet, a superintelligent machine could outcompete humans or be programmed to act maliciously. While the chances of massive destruction from these technologies are not high, states should still act quickly to create a flexible, binding international treaty that limits GCRs/ERs arising from emerging technologies to a degree that society deems acceptable. As this paper demonstrates, emerging technologies do not fall squarely within current international law, and allowing a small group of self-interested scientists to regulate themselves is unacceptable when a single misstep could result in global catastrophic or existential harm. Instead, the international community, with the guidance of a body of experts representing a wide range of interests and strong consideration of the precautionary principle, should develop a binding framework to regulate emerging technologies at the international level. Furthermore, because emerging technologies will likely affect the entire world, society should help determine which risks it is willing to take and what moral, ethical, and other beliefs should influence an international regulatory regime. If the international community successfully concludes a treaty on GCRs/ERs from emerging technologies, then perhaps society can thrive in an age of technological innovation without suffering from the associated risks.
302 Ray Kurzweil & Bill Joy, Recipe for Destruction, N.Y. TIMES (Oct. 17, 2005), at: www.nytimes.com/2005/10/17/opinion/17kurzweiljoy.html.
303 Id.
84 A posthuman is an individual "so significantly altered as to no longer represent the human species." CHARLES W. COLSON & NIGEL M. DE S. CAMERON, HUMAN DIGNITY IN THE BIOTECH CENTURY: A CHRISTIAN VISION FOR PUBLIC POLICY 85 (2004). This involves "redesigning the human condition, including such parameters as the inevitability of aging, limitations on human and artificial intellects, unchosen psychology, [and] suffering...." BERT GORDIJN & RUTH CHADWICK, MEDICAL ENHANCEMENT AND POSTHUMANITY 140 (2009). Posthumanism also frequently may involve enhancing humans through nanotechnology and cybernetics. For example, "individuals could have electronic brain implants to create human to computer interaction," which is a technology already in development. Lisa C. Ikemoto, Race to Health: Racialized Discourses in A Transhuman World, 9 DEPAUL J. HEALTH CARE L. 1101 (2005), citing RAMEZ NAAM, MORE THAN HUMAN: EMBRACING THE PROMISE OF BIOLOGICAL ENHANCEMENT 181-187 (2005).
85 George J. Annas et al., Protecting the Endangered Human: Toward an International Treaty Prohibiting Cloning and Inheritable Alterations, 28 AM. J. LAW MED. 151, 161-162 (2001).
206 Cartagena Protocol on Biosafety to the Convention on Biological Diversity, art. 3(g), Jan. 29, 2000, 39 I.L.M. 1027 (2000) [hereinafter Cartagena Protocol]. Article 31(2)(b) of the Vienna Convention, which reflects customary international law, states that a treaty shall be interpreted in light of "any instrument which was made by one or more parties in connection with the conclusion of the treaty and accepted by the other parties as an instrument related to the treaty," which includes a protocol to the CBD (the Cartagena Protocol) that expands upon the CBD's biosafety provisions. Vienna Convention on the Law of Treaties, art. 31, May 23, 1969, 1155 U.N.T.S. 331 [hereinafter Vienna Convention].
209 See infra, Section III(B). 210 Cartagena Protocol, supra note 206, at art. 1.
270 Id. at Annex I, Annex 2(a), and Annex 4(B)(1). 271 See id. at Annex 4(A). 272 This is the basic justification for precautionary measures embodied in the European Union (EU) Communication on the Precautionary Principle. See Communication from the Commission on the Precautionary Principle, Commission on the European Communities, COM (2000) 1, available at: http://ec.europa.eu/dgs/health_consumer/library/pub/pub07_en.pdf.
Committee on Assessing Fundamental Attitudes of Life Sciences as a Basis for Biosecurity Education, A Survey of Attitudes and Actions on Dual Use Research in the Life Sciences, NAT'L RESEARCH COUNCIL 2 (2009). 298 Public Health Security and Bioterrorism Preparedness and Response Act of 2002, 42 U.S.C. § 201 (2002).
Overview: What is the Singularity?, SINGULARITY INSTITUTE FOR ARTIFICIAL INTELLIGENCE, http://singinst.org/overview/whatisthesingularity (last visited Jan. 31, 2012). 8 NICK BOSTROM & MILAN M. ĆIRKOVIĆ, Introduction, in GLOBAL CATASTROPHIC RISKS 25 (Nick Bostrom & Milan M. Ćirković, eds., 2008).
The Deadliest Virus, THE NEW YORKER 32-33 (Mar. 12, 2012). 29 Some skeptics do not consider this to be the first instance of truly synthetic life because the genomes were based on existing DNA. See Angier, supra note 6; Nicholas Wade, Researchers Say They Created a 'Synthetic Cell', N.Y.
TIMES (May 20, 2010), available at: www.nytimes.com/2010/05/21/science/21cell.html. 30 ENVTL. PROT. AGENCY, supra note 26, at 10. The JGI database is accessible at http://www.jgi.doe.gov/sequencing/why/index.html, and GenBank ® is accessible at http://www.ncbi.nlm.nih.gov/genbank/. 31 NATIONAL SCIENCE ADVISORY BOARD FOR BIOSECURITY (NSABB), STRATEGIES TO EDUCATE AMATEUR BIOLOGISTS AND SCIENTISTS IN NON-LIFE SCIENCE DISCIPLINES ABOUT DUAL USE RESEARCH AND LIFE SCIENCES 4 (June 2011), available at: http://oba.od.nih.gov/biosecurity/pdf/FinalNSABBReport-AmateurBiologist-NonlifeScientists_June-2011.pdf. 32 Michele S. Garfinkel & Robert M. Friedman, Synthetic Biology and Synthetic Genomics, in THE FUTURE OF INTERNATIONAL ENVIRONMENTAL LAW 272 -273(David Leary & Balakrishna Pisupati, eds., 2010). 33 Id. at 278-279. 34 See About Us, INTERNATIONAL GENETICALLY ENGINEERED MACHINE COMPETITION, http://igem.org/About. 28 Michael Specter,35 Garfinkel & Friedman, supra note 32, at 270.36 Fact Sheet Describing Recombinant DNA and Elements Utilizing Recombinant DNA Such as Plasmids and Viral Vectors, and the Application of Recombinant DNA Techniques in Molecular Biology, UNIV. OF N.H. OFFICE OF ENV'T HEALTH AND SAFETY 2 (2011), available at: www.unh.edu/research/sites/unh.edu.research/files/images/Recombinant-DNA.pdf. \n SCIENCE AND TECHNOLOGY COMMITTEE OF THE PARLIAMENT OF GREAT BRITAIN, BIOENGINEERING: SEVENTH REPORT OF SESSION 55-56 (2009-2010). 41 GM Crops: Reaping the Benefits, but Not in Europe, EUROPEAN ASSOC. FOR BIOINDUSTRIES 1-8 (2011), available at: www.europabio.org/sites/default/files/europabio_socioeconomics_may_2011.pdf. 42 Risk and Response Assessment Project, Synthetic Biology and Nanobiotechnology, U.N. INTERREGIONAL CRIME AND JUSTICE RESEARCH INSTITUTE (UNICRI), at: http://lab.unicri.it/bio.html (last visited Feb. 13, 2012); Garfinkel & Friedman, supra note 32, at 274-277. 43 Elizabeth A. Thomson, Paper Predicts Bioengineering Future, MIT NEWS (Feb. 14, 2001), at: web.mit.edu/newsoffice/2001/biomedical-0214.html 44 UNICRI, supra note 42. 45 ENVTL. PROT. AGENCY, supra note 26, at iv. and the environment. \n 46 John Steinbruner et al., Controlling Dangerous Pathogens: A Prototype Protective Oversight System, CTR. FOR INT'L AND SECURITY STUDIES AT MD. 1 (2007), available at: www.cissm.umd.edu/papers/files/pathogens_project_monograph.pdf. 47 See Garfinkel & Friedman, supra note 32, at 279.48 There are different types of \"type A\" influenza viruses in birds that are named according to the \"two main proteins on the surface\" of the virus, here H5 and N1. The H5N1 virus is just one type of bird flu. See Key Facts About Avian Influenza (Bird Flu) and Highly Pathogenic Avian Influenza A (H5N1) Virus, CTR. FOR DISEASES CONTROL AND PREVENTION (CDC), at: www.cdc.gov/flu/avian/gen-info/facts.htm (last visited Mar. 11, 2012). 49 Specter, supra note 28, at 32-33. 50 CDC, supra note 48. 51 Robert Roos, Live Debate Airs Major Divisions in H5N1 Research Battle, CENTER FOR INFECTIOUS DISEASE RESEARCH AND POLICY (CIDRAP) NEWS (Feb. 3, 2012), at: www.cidrap.umn.edu/cidrap/content/influenza/avianflu/news/feb0312webinar-jw.html (see comments of Michael T. Osterholm of the National Science Advisory Board for Biosecurity). 52 Specter, supra note 28, at 32. 53 UNICRI, supra note 42. 54 Future Bird Flu Virus Work Should be Done in Most Secure Labs, THE CANADIAN PRESS (Mar. 06, 2012), at: www.ctv.ca/CTVNews/Health/20120306/bird-flu-virus-labs-120306. 
\n High-Containment Biosafety Laboratories: Preliminary Observations on the Oversight of the Proliferation of BSL-3 and BSL-4 Laboratories in the United States, Statement of Keith Rhodes, Chief Technologist, Center for Technology and Engineering, Applied Research and Methods , GAO-08-108T (Oct. 4, 2007), available at: http://www.gao.gov/new.items/d08108t.pdf (emphasis added). 56 Id. 57 Specter, supra note 28, at 33. 58 Id. 59 Jennifer Gaudioso et al., Biosecurity: Progress and Challenges, 14 J. OF LABORATORY AUTOMATION 141, 143 (2009). See also Christian Enemark, Preventing Accidental Disease Outbreaks: Biosafety in East Asia, NAUTILUS INSTITUTE FOR SECURITY AND SUSTAINABILITY, AUSTRAL PEACE AND SECURITY NETWORK (ASPNET) (2006), available at: http://nautilus.org/apsnet/0631a-enemark-html. 60 Larry Margasak, Accidents on Rise as More US Labs Handle Lethal Germs, THE ASSOC. PRESS (Oct. 03, 2007), available at: http://articles.boston.com/2007-10-03/news/29232771_1_biosafety-level-4-labs-accidents. 61 United Nations Office for Disarmament Affairs, Developing a Biological Incident Database, UNODA OCCASIONAL PAPERS: NO. 15 12-13 (2009), available at: http://www.un.org/disarmament/HomePage/ODAPublications/OccasionalPapers/OP15-info.shtml. 62 Gaudioso et al., supra note 59, at 143. 63 Denise Grady & Donald McNeil Jr., Debate Persists on Deadly Flu Made Airborne, N.Y. TIMES (Dec. 26, 2011), at: http://www.nytimes.com/2011/12/27/science/debate-persists-on-deadly-flu-made-airborne.html. \n Frequently Asked Questions -Molecular Manufacturing, FORESIGHT INSTITUTE, www.foresight.org/nano/whatismm.html 99 J. CLARENCE DAVIES, OVERSIGHT OF NEXT GENERATION NANOTECHNOLOGY 11 (April 2009), available at www.nanotechproject.org/process/assets/files/7316/pen-18.pdf. 100 Peter Kearns, Nanomaterials: Getting the Measure, OECD OBSERVER (2010), available at: http://www.oecdobserver.org/news/fullstory.php/aid/3291. 101 What's So Special About the Nanoscale, NATIONAL NANOTECHNOLOGY INITIATIVE, http://www.nano.gov/you/nanotechnology-benefits (last visited April 21, 2012). 102 Nanotechnology and You: Benefits and Applications, NATIONAL NANOTECHNOLOGY INITIATIVE, http://www.nano.gov/you/nanotechnology-benefits (last visited Feb. 04, 2012). \n Overview: What is the Singularity?, SINGULARITY INSTITUTE FOR ARTIFICIAL INTELLIGENCE, at: http://singinst.org/overview/whatisthesingularity (last visited Jan. 31, 2012). 152 Why Work Toward the Singularity, THE SINGULARITY INSTITUTE, at: singinst.org/overview/whyworktowardthesingularity (last visited Apr. 21, 2012). 153 Benefits of the Singularity, SINGULARITY ACTION GROUP, at: home.mchsi.com/~deering9/benefits.html (last visited Mar. 12, 2012). 154 Michael Anissimov, Interview with Robin Powell, Singularity Institute Advocate, THE SINGULARITY INSTITUTE (Jan 12, 2012), at: singinst.org/blog/2012/01/12/interview-with-robin-powell-singularity-advocate. 155 BOSTROM & ĆIRKOVIĆ, supra note 10, at 33. 156 Reducing Long-Term Catastrophic Risks from Artificial Intelligence, THE SINGULARITY INSTITUTE, at: http://singinst.org/summary (last visited Mar. 10, 2012). 157 WALLACH & ALLEN, supra note 150, at 194-195. 158 See Iran and Its Nuclear Program, AMERICAN SECURITY PROJECT, at: americansecurityproject.org/issues/nuclear-security/iran-and-its-nuclear-program (last visited Apr. 21, 2012). 159 Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk (2006), available at: singinst.org/upload/artificial-intelligence-risk.pdf. 
160 Kaj Sotala, From Mostly Harmless to Civilization-Threatening: Pathways to Dangerous Artificial General Intelligences, SINGULARITY INSTITUTE FOR ARTIFICIAL INTELLIGENCE (2010), at: http://singinst.org/upload/mostlyharmless.pdf. 161 Id. \n 162 See JOHN LESLIE, THE END OF THE WORLD: THE SCIENCE AND ETHICS OF HUMAN EXTINCTION 95-103 (1998). 163 Sotala, supra note 160. 164 ROBERT H. SCHRAM, ILLUSAFACT: THE INEVITABLE ADVANCE OF OUR TECHNOLOGIES AND US 272-273 (2011). 165 Id. at 273. 166 See LESLIE, supra note 162, at 95-103. 167 Others estimate just the \"hardware\" of the LHC as costing approximately $10 billion and the entire LHC program -including costs to \"[operate] the experiment and [analyze] the data\" -being \"much greater.\" See Eric E. Johnson, The Black Hole Case: The Injunction Against the End of the World, 76 TENN. L. REV. 819, at 827 (2008-2009). 168 \"Organisation européenne pour la recherche nucléaire\" in French. 169 See LHC Machine Outreach (Homepage), EUROPEAN ORG. FOR NUCLEAR RESEARCH (CERN), AT: http://lhcmachine-outreach.web.cern.ch/lhc-machine-outreach (last visited Feb. 04, 2012). LHC to Shut Down for a Year to Address Design Faults, BBC NEWS (Mar. 10, 2010), at: http://news.bbc.co.uk/2/hi/.stm. 172 Mike Lamont, Explain it in 60 Seconds: The Large Hadron Collider, SYMMETRY (Apr. 2005), at: http://www.symmetrymagazine.org/cms/?pid=. 170 Id.171 Judith Burns, \n of the National Environmental Policy Act (NEPA). See Sancho v. U.S. Department of Energy, 578 F.Supp.2d 1258 (D. Haw. 2008). 185 Sancho, 392 F. App'x at 611. 186 Gillis, supra note 181. 187 Dr. Otto Rössler sought an emergency injunction from the European Court of Human Rights (ECtHR) for alleged violations of (1) the right to life under Article 2 of the European Convention on Human Rights (ECHR) and (2) the right to private and family life under Article 8 of the ECtHR. See LHC Critique, Summary of the (Renewed) Complaint Against CERN and the LHC Experiments as Submitted to the European Court of Human Rights, available at: lhc-concern.info/wp-content/uploads/2008/11/cern-lhc-kritik-statement-1-summary.pdf; see also Adams, supra note 175, at 152 (2009). The ECtHR rejected Dr. Dr. Otto Rössler's injunction requests via a \"brief email\" that did not include any reasoning, and while the ECtHR stated that it will hear the case on the merits, 187 the author cannot find any information on such a pending case. See Harrell, supra note 179. 188 Margaret Kovera et al., The Effects of Peer Review and Evidence Quality on Judge Evaluations of Psychological Science: Are Judges Effective Gatekeepers?, 85 J. OF \n Id. 221 M.K. SATEESH, BIOETHICS AND BIOSAFETY 205 (2008). 222 See Nagoya-Kuala Lumpur Supplementary Protocol on Liability and Redress to the Cartagena Protocol on Biosafety, 15 October 2010, Annex to UN Doc. UNEP/CBD/BS/COP-MOP/5/17, at arts. 2-5, available at: http://bch.cbd.int/protocol/NKL_text.shtml. 223 Article 2 defines operators as \"any person in direct or indirect control\" of an LMO of the living modified organism.\" Id. at art. 2(2)(c). 224 See Id. at arts. 2-5. \n List of Parties, CONVENTION ON BIOLOGICAL DIVERSITY, http://www.cbd.int/information/parties.shtml (last visited Mar. 19, 2012). The United States was nonetheless influential in negotiations for the Cartagena Protocol because they are a massive force in the world's agricultural market. Jonathan H. Adler, The Cartagena Protocol and Biological Diversity: Biosafe or Bio-Sorry?, 12 GEO. INT'L ENVTL. L. REV. 761, 763 (2000). 
229 Convention on the Prohibition of the Development, Production, and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on Their Destruction, art. 1, Apr. 10, 1972, 1015 U.N.T.S. 163 [hereinafter BWC]. 230 Id. at art. III. \n BOOT, GENOCIDE, CRIMES AGAINST HUMANITY, WAR CRIMES: NULLUM CRIMEN SINE LEGE AND THE SUBJECT MATTER JURISDICTION OF THE INTERNATIONAL CRIMINAL COURT 478 (2002). 247 POSNER, supra note 75, at 219. 248 Annas et al., supra note 85, at 153. \n Catastrophic Risks from Artificial Intelligence, THE SINGULARITY INSTITUTE, at: http://singinst.org/summary (last visited Mar. 10, 2012). 252 Nanotechnology White Paper, supra note 4, at 12. 253 Émilio Mordini, Converging Technologies, CENTRE FOR SCIENCE, SOCIETY AND CITIZENSHIP, at: http://agora-2.org/colloque/gga.nsf/Conferences/Converging_technologies (last visited Apr. 21, 2012). 254 Mark Avrum Gubrud, Nanotechnology and International Security, THE FORESIGHT INSTITUTE, at: www.foresight.org/Conferences/MNT05/Papers/Gubrud/ 255 J. CLARENCE DAVIES, supra note 99, at 19. \n Nanotechnology, WORLD HEALTH ORG, at: www.who.int/foodsafety/biotech/nano/en/index.html (last visited Mar. 22, 2012). 258 World Health Assembly Res. WHA58.3 (2005); see also Biosafety and Laboratory Biosecurity, WORLD HEALTH ORG., at: www.who.int/ihr/biosafety/key_activities/en/index.html (last visited Apr. 21, 2012). Note that none of these measures impose binding obligations on countries, and so they are less than ideal as a means to sufficiently mitigate GCRs/ERs arising from emerging technologies. 259 What is the Framework Convention on Tobacco Control?, FRAMEWORK CONVENTION ALLIANCE, at: www.fctc.org/index.php?Itemid=5&id=8&option=com_content&view=article (last visited Apr. 21, 2012). 260 See MARK A. SUTTON ET AL., THE EUROPEAN NITROGEN ASSESSMENT: SOURCES, EFFECTS AND POLICY PERSPECTIVE 557 (2011). 261 See e.g. supra, Section III. 262 Id. 263 By one account, there are at least twenty different definitions of the precautionary principle. Cass R. Sunstein, Irreversible and Catastrophic, 91 CORNELL L. REV. 841, 848 (2006). 264 See WORLD COMMISSION ON THE ETHICS OF SCIENTIFIC KNOWLEDGE AND TECHNOLOGY, THE PRECAUTIONARY PRINCIPLE 13-14 (2005), available at: http://unesdoc.unesco.org/images//139578e.pdf. \n Nations Framework Convention on Climate Change, art. 3(3), May 9, 1992, 31 I.L.M. 849 [hereinafter UNFCCC]. 266 See CLIMATE CHANGE 2007: SYNTHESIS REPORT. CONTRIBUTION OF WORKING GROUPS I, II AND III TO THE FOURTH ASSESSMENT REPORT OF THE INTERGOVERNMENTAL PANEL ON CLIMATE CHANGE 27 (Rajendra K. Pachauri & Andy Reisinger, eds.). This rendition of the precautionary principle originated in the 1992 Rio Declaration on Environment and Development. See United Nations Rio Declaration on Environment and Development, Principle 15, June 13, 1992, 31 I.L.M. 874 [hereinafter Rio Declaration]. 267 UNFCCC, supra note 267, at art. 3(3). 268 Agreement on the Application of Sanitary and Phytosanitary Measures, art. 5.7, Marrakesh Agreement Establishing the World Trade Organization, Annex 1A, THE LEGAL TEXTS: THE RESULTS OF THE URUGUAY ROUND OF MULTILATERAL TRADE NEGOTIATIONS (1999), 1867 U.N.T.S. 493 [hereinafter SPS Agreement]. 269 Convention on International Trade in Endangered Species of Fauna and Flora, Annex 4(A), Mar. 3, 1973, 993 U.N.T.S. 243 [hereinafter CITES]. entirely. ! 35! 
CENTER FOR TECHNOLOGY ASSESSMENT, PRINCIPLES FOR THE OVERSIGHT OF NANOTECHNOLOGIES AND NANOMATERIALS 1 (2008), available at: www.foeeurope.org/activities/.../Principles_Oversight_Nano.pdf. 280 Annas et al., supra note 85, at 153. 281 Montreal Protocol on Substances that Deplete the Ozone Layer, art. 2.9, Sept. 16, 1987, 1522 U.N.T.S. 3. 282 Daubert v. Merrell Dow Pharmaceuticals, Inc., 113 S. Ct. 2786, 2796 (1993). 283 Id.
One such example in the United States is the U.S. Court of Appeals for the Federal Circuit, a three-judge panel that has "exclusive jurisdiction of appeals in patent-infringement and other patent cases." POSNER, supra note 75, at 210-211. 287 See id. at 208. 288 LYLE GLOWKA & LAWRENCE C. CHRISTY, LAW AND MODERN BIOTECHNOLOGY 52 (2004).
289 See e.g. Carl Bruch & Meg Filbey, Emerging Global Norms of Public Involvement, in THE NEW "PUBLIC": THE GLOBALIZATION OF PUBLIC PARTICIPATION 9 (Carl Bruch, ed., 2002). 290 CITES, supra note 269, at art. XI. 291 HODGE ET AL., INTERNATIONAL HANDBOOK ON REGULATING NANOTECHNOLOGIES 566 (2010). 292 Bruch & Filbey, supra note 289, at 7-8. 293 See supra Section II(4).
For example, scientists may fear "reprisals and negative consequences" for challenging their superiors, or they may not want to squander large financial investments in scientific research. See Eric E. Johnson, Culture and

AI Paradigms and AI Safety: Mapping Artefacts and Techniques to Safety Issues
Jose Hernández-Orallo, Fernando Martínez-Plumed, Shahar Avin, Jess Whittlestone, Seán Ó hÉigeartaigh

Abstract: AI safety often analyses a risk or safety issue, such as interruptibility, under a particular AI paradigm, such as reinforcement learning. But what is an AI paradigm and how does it affect the understanding and implications of the safety issue? Is AI safety research covering the most representative paradigms and the right combinations of paradigms with safety issues? Will current research directions in AI safety be able to anticipate more capable and powerful systems yet to come? In this paper we analyse these questions, introducing a distinction between two types of paradigms in AI: artefacts and techniques. We then use experimental data of research and media documents from AI Topics, an official publication of the AAAI, to examine how safety research is distributed across artefacts and techniques. We observe that AI safety research is not sufficiently anticipatory, and is heavily weighted towards certain research paradigms. We identify a need for AI safety to be more explicit about the artefacts and techniques for which a particular issue may be applicable, in order to identify gaps and cover a broader range of issues.

INTRODUCTION

As in any other scientific or engineering discipline, many AI researchers work within a well-established paradigm, with some standard objects of study, problems to solve, and associated formalisms and terminology.
Understanding past, current and future AI paradigms is an important source of insight for funding agencies, policymakers, and AI researchers themselves, because it shapes how we think about what problems AI research is aiming to solve, what methods are required to solve them, and what the wider implications of progress might be. We believe that thinking clearly about AI paradigms is particularly crucial for AI safety research: an increasingly important area concerned with understanding and preventing possible risks and harmful impacts in the design and deployment of AI systems. For the purposes of this paper, we define AI safety broadly, to include both risks from AI systems for which the source of risk is accidental, ranging from unpredictable systems to negligent use, and non-accidental risks such as those stemming from malicious use or adversarial attacks (which might sometimes also be referred to as AI security risks.) This includes risks with many different types of consequences: including human, environmental, and economic consequences. AI safety is becoming particularly important as AI is increasingly used to automate tasks that involve interaction with the world. A characteristic example is AI being used in many components of self-driving cars such as perception, reasoning and action. Research in AI safety has typically analysed a specific risk or issue under a particular existing paradigm, such as interruptibility for reinforcement learning agents [33] , adversarial attacks on deep learning systems [43] , or fake media produced with GANs [21] . Poor choices or misinterpretation of current and future paradigms may result in AI safety research focusing in the wrong places: analysing scenarios that will not take place in the future, or that will at best manifest in completely different ways. For example, concerns about adversarial attacks on current deep learning methods have led to many papers proposing methods to defend against malicious perturbations, but it is not clear that these actually relate to plausible security concerns [16] . Though adversarial examples help us understand how brittle current deep learning methods can be, the kinds of attacks that some safety research is concerned with can only arise in contrived interactions, and make strong assumptions about the goal, knowledge, and action space of the attacker. In contrast, other risks have been ignored because they are outside of the current paradigm: for instance, attacks which go beyond an \"independent\" or functional interpretation of a neural network, and instead use the \"information from previous frames to generate perturbations on later frames\" [41] . Thinking about different paradigms is also important for clearly assessing safety considerations and risks in concrete real-world applications. For instance, the risks of using AI in vehicles will be perceived differently depending on whether we expect AI to be assisting or replacing human drivers, whether a self-driving vehicle is considered as a single autonomous agent, or whether we consider the whole traffic system as a swarm of interacting agents. Even as research becomes oriented towards future risks (some years or even decades away) the different safety issues associated with different AI paradigms have not been explicitly addressed in the literature [6, 2, 23, 14] . 
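As a concrete illustration of the kind of "malicious perturbation" mentioned above, a minimal fast-gradient-sign sketch against a toy, untrained classifier might look as follows. This is our own illustration, not code from any of the cited papers; the model architecture, epsilon and labels are arbitrary assumptions.

```python
# Minimal FGSM-style perturbation sketch (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "image" classifier: 28x28 inputs flattened to 784 features, 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

def fgsm_perturb(x, label, eps=0.1):
    """Return x plus an epsilon-bounded perturbation that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step in the direction that most increases the loss, clamped to a valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)   # random stand-in for an input image
y = torch.tensor([3])          # arbitrary "true" label
x_adv = fgsm_perturb(x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```

The point of such attacks in the safety literature is precisely the one made above: they assume a particular interface, goal and knowledge for the attacker, which may or may not correspond to a plausible deployment scenario.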
As AI research is fast evolving and likely to result in increasingly powerful systems in future, it is crucial for AI safety to explore issues associated with a broader range of possible AI paradigms, and to explicitly discuss what assumptions about paradigms and safety issues are being made in each research paper or project. This will not only enable the research community to identify the effects of less-explored paradigms on safety concerns, but could also increase awareness among AI developers that the paradigms they work in have consequences for safety, potentially highlighting approaches that can eliminate or reduce safety risks. In this paper, we present a structured approach for thinking about paradigms in AI and use this as the basis for empirical analysis of how AI safety issues have been explored in the research literature so far. We begin by defining what we mean by 'paradigms' in AI more precisely, drawing on literature in the philosophy of science to distinguish two different types of paradigms in AI: artefacts and techniques. Drawing on existing research and the expertise of several AI and AI safety researchers, we outline a preliminary taxonomy of fourteen different AI techniques and ten different AI artefacts. We then discuss how these different techniques and artefacts relate to AI safety issues. We use AAAI's 'AI topics' database to conduct a grounded empirical analysis of the historical evolution of these different paradigms, and the safety issues associated with them. Our analysis identifies a number of gaps in AI safety research where certain combinations of techniques, artefacts and safety issues need to be addressed. We conclude the paper by discussing implications for future research in AI safety. \n DEFINING PARADIGMS IN AI Before discussing paradigms in AI and how they relate to safety issues, we need a clearer account of what an AI paradigm is. In the philosophy of science, a 'paradigm' is an important concept used to capture how theories, methods, postulates and standards evolve and change within a scientific discipline [22] . The concept of a 'technological paradigm' plays a similar role but with a somewhat distinct focus, emphasising how each technological paradigm defines its own concept of progress, based on making specific technological and economic trade-offs [12] . In computer science, the term 'paradigm' is commonly used to refer to different types of programming languages with different features and assumptions: imperative, logic, functional, object-oriented, distributed, event-oriented, probabilistic, etc. The theory and practice of programming languages depends heavily on these paradigms. Many safety issues in programming and software engineering, and verification in particular, cannot be addressed -conceptually or technically-without making paradigms explicit [5, 20, 40, 42] . For instance, compared to imperative programming languages, declarative languages minimise mutability issues [10] due to the use of immutable data structures, as well as reduce state side-effects [29] by discouraging the utilisation of variables in favour of more sophisticated constructs (e.g., data pipelines or higher-order functions). In the context of AI, the concept of 'paradigms' has been used informally to refer to different broad families of technical or conceptual approaches: 'symbolic' vs 'connectionist', reasoning vs learning, expert systems vs agents. 
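To make the programming-language contrast noted earlier in this section concrete, here is a small, illustrative example of ours (not from the cited work) of how an imperative function with a hidden side effect differs from a pure, pipeline-style alternative built on immutable values.

```python
# Illustrative contrast between mutable shared state and a pure pipeline (sketch only).
from collections import Counter

totals = {}  # shared mutable state

def record_sale_imperative(customer, amount):
    # Hidden side effect: repeated or concurrent calls silently change `totals`.
    totals[customer] = totals.get(customer, 0) + amount

def record_sales_functional(sales):
    # No mutation: the result is a new value derived only from the input pairs.
    return sum((Counter({c: a}) for c, a in sales), Counter())

record_sale_imperative("ann", 3)
print(totals)
print(record_sales_functional([("ann", 3), ("bo", 2), ("ann", 1)]))
```

The functional version is easier to verify in isolation, which is the same general point the paradigm discussion above makes about safety analysis being shaped by the constructs a paradigm offers.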
One way to identify paradigms in a field and how they have changed over time, suggested in Kuhn's original formulation, is to look at the approach taken by major (text)books in the field. Russell and Norvig [36], for example, popularised the 'agents' view of AI in the 1990s, and Goodfellow et al. [17] has consolidated 'deep learning' as a central approach to AI research this decade, going far beyond one specific technique. However, there is no clearly agreed-upon definition of what counts as a paradigm in AI. [9] defined an AI paradigm as "the pair composed by a concept of intelligence and a methodology in which intelligent computer systems are developed and operated", which led him to identify three paradigms of AI: behaviourist, agent and artificial life. More than twenty years later, these do not seem to adequately reflect the approaches and assumptions within AI research today. Perhaps this difficulty in clearly distinguishing AI paradigms from mere trends stems at least partly from the ambiguity in the concept of paradigm itself, a criticism which has been made within the philosophy of science. Masterman identifies three different conceptions of paradigm in Kuhn's writings: the metaphysical, the sociological and the artefact/construct [27]. Peine makes a reconstructive effort to combine the second and the third conceptions, suggesting that a paradigm can emerge when a subcommunity within a field makes a commitment to a certain set of techniques [35]. For instance, we can see this phenomenon in the deep learning community, which made early commitments before this approach had established a dominant position within the field. We suggest distinguishing between dominant trends: research commitments to specific techniques, aims, or assumptions that may be relatively short-lived; and paradigms, where these trends lead to more established, long-term commitments.

Figure 1. Ways of analysing an AI safety issue, combined with an artefact and/or a technique category. In real systems, techniques underpin the creation or operation of an artefact, where the real hazards occur. Occasionally, researchers can think of a safety issue in a very abstract way, without committing to any particular artefact or technique (e.g., value alignment). All relations (arrows) between AI safety issues, AI techniques and AI artefacts are many-to-many.

These ambiguities in the concept of a paradigm are apparent in the context of AI, where 'paradigm' may be used to distinguish both different techniques for solving problems in AI (e.g., Monte Carlo search vs. SAT solver, deep learning vs. genetic algorithm), and to capture different conceptions of possible types of AI systems (e.g. expert systems vs. agents). Many technical papers can easily be categorised by looking at the first formal definitions that appear after the introduction, which tend to identify the kind of AI system and the kinds of techniques they use to solve a problem. We suggest that it may be helpful, in thinking about paradigms in AI, to distinguish explicitly between these two types of constructs:
• Conceptual Artefacts: broad conceptions of what current and future AI systems (will) look like, e.g., autonomous agents, personal assistants [24], AI extenders [19], conceptions of superintelligence [6] and Comprehensive AI Services [13].
• Research Techniques: the research methods, algorithms, theoretical technical results and methodologies involved in the development of these current or future systems, such as SAT solvers, deep learning, reinforcement learning, evolutionary computing, etc.
Artefacts and techniques are important components of a paradigm, and together (weakly) define a paradigm. AI artefacts and techniques are often related to each other, insofar as certain techniques will often be better suited to certain types of artefacts (reinforcement learning is a technique category that is applicable to agent-like artefacts, for example). However, distinguishing between artefacts and techniques can help us to think more clearly about associated safety issues. Some safety issues will arise when combining an artefact with some techniques but not others: for example, when building a classifier (artefact), interpretability is likely to be a challenge for safety if using deep neural networks (technique), but less so if using simple decision trees with conditions that are expressed over the original attributes. Recognising these differences is important for understanding the scope of safety challenges and what solutions may be needed. In the case of autonomous vehicles, for example, both techniques and artefacts have changed over time, changing conceptions of safety. Early work in the 1980s-1990s, such as the Eureka Prometheus Project, emphasised systems for improved vehicle-to-vehicle communications and driver assistance (e.g. collision avoidance), whereas more recent initiatives emphasise the development of fully independent self-driving vehicles. These are very different AI artefacts, with a more fully autonomous system evoking much broader safety concerns in general. However, to think more concretely about how evolving approaches to autonomous vehicles change safety challenges, it may also be important to look at how techniques have changed over time. For example, as improvements in computer vision enable more limited sensors to be replaced by cameras or radars, we may need to focus specifically on the vulnerabilities introduced by current techniques used in machine perception. Sometimes we may want to broadly assess all possible risks associated with a specific AI artefact independently of the technique used (e.g. issues associated with AI extenders), or those associated with a technique independently of the artefact (e.g. safety challenges for deep learning). In other cases, we may want to more narrowly focus on a specific safety issue associated with a particular artefact (e.g. interruptibility for industrial robots), or a specific issue associated with a particular technique (e.g. interpretability of neural networks). Figure 1 shows these interrelationships: how AI safety issues may be considered relative to either techniques, artefacts, or both.

IDENTIFYING TECHNIQUES AND ARTEFACTS

We can now begin to explore different ways of categorising AI paradigms by decomposing them into techniques and artefacts. While other accounts of 'AI paradigms' have been proposed, such as by the "One Hundred Year Study on Artificial Intelligence" at Stanford University [39], the categories frequently mix techniques with artefacts and even subfields.

AI techniques

We develop a preliminary categorisation of AI techniques by building on the bibliometric analysis of [26] and [32, Tab. 6]. Martínez-Plumed et al.
[26] identify nine 'facets' for the study of the past and future of AI, using data on all accepted papers from AAAI/IJCAI conferences (), and AI Topics documents, an archive kept by AAAI containing news, blog entries, conferences, journals, and other repositories. Niu et al. [32] identify 30 high-frequency keywords from 20 relevant journals in AI from 1990 to 2014. Both capture many keywords and categories which appear to describe AI techniques, and are relatively similar. 4 We construct a list of AI techniques, grouped into 14 categories, based on selecting relevant and representative keywords from these two analyses. We chose these categories based on three principles: (1) the techniques are sufficiently general to encompass groups of approaches in AI that have been recognised by other approaches, (2) overlapping in the techniques is allowed, as there is a high degree of hybridisation and combination in AI, and (3) subcategories are retained for particularly large categories, such as 'machine learning'. 5 The list of categories is shown in Table 1.

AI artefacts

For AI artefacts, we would like our categories to be more stable over time and less tied to specific research techniques and applications. One way to look beyond current trends is to consider textbooks or even historical accounts of AI [28, 7, 31]. It may also be helpful to begin by considering broad 'characteristics' of AI systems independent of the specific techniques used to develop them [18]. To generate a preliminary set of artefacts, a group of multidisciplinary researchers conducted a systematic, interactive procedure based on the Delphi method [11]. The group began by independently brainstorming possible candidates for categories of artefacts based on a preliminary discussion of those arising from AI textbooks and different AI system characteristics. Two criteria were provided to structure this initial brainstorm: (1) artefacts should ideally have minimum overlap, each capturing distinctive functionalities, and (2) artefacts should be defined independently of how functionalities are achieved (i.e. independent of techniques). Group members each proposed lists of distinctive types of AI systems, supported by exemplars, which were revised using an iterative process until the list of answers converged towards consensus. These were then clustered hierarchically to produce a set of artefacts which could cover the space of actual and potential AI systems as exhaustively as possible. The categorisation of AI artefacts in Table 2 is the result of this process. Building on the categories identified in [18], we can further characterise these different artefacts in terms of their integration with the external environment, in three ways: (1) how the artefact interfaces with the environment (e.g. via sensors and actuators, via digital objectives, or via language); (2) the dynamics of this integration (whether it is interactive or functional); and (3) the location of this integration (whether it is centralised, distributed, or coupled). For example, an agent interfaces with the environment via sensors and actuators in an interactive and centralised way. A swarm has similar characteristics except that the location of its integration is distributed (across different units) rather than centralised. We also provide exemplars for each category: self-driving cars and robotic cleaners are examples of agent-type systems, whereas a multiagent network router or drone swarm are examples of swarm-type systems.
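The keyword analysis described in the next subsection relies on exactly this kind of category-to-exemplar mapping. A minimal sketch of how it could be represented and queried is given below; the token lists are tiny, hypothetical extracts based on the exemplars mentioned in the text, and the paper's full lists in Tables 1 and 2 are much larger.

```python
# Sketch (ours, not the authors' code) of the taxonomy as category -> exemplar tokens.
TECHNIQUE_TOKENS = {
    "neural networks": ["deep learning", "gan", "perceptron"],
    "general machine learning": ["supervised learning", "classifier training"],
    "knowledge representation and reasoning": ["ontology", "logic programming"],
}
ARTEFACT_TOKENS = {
    "estimator": ["classifier", "object recognition", "decision-making"],
    "agent": ["self-driving car", "robotic cleaner"],
    "swarm": ["drone swarm", "multiagent network router"],
}

def categories_for(text, taxonomy):
    """Return every category whose exemplar tokens appear as substrings of `text`.
    Naive substring matching over-matches (e.g. 'gan' inside 'organ'); a real
    pipeline would need word boundaries and curated token lists."""
    text = text.lower()
    return [cat for cat, tokens in taxonomy.items()
            if any(tok in text for tok in tokens)]

doc = "A GAN-based object recognition module for a self-driving car"
print(categories_for(doc, TECHNIQUE_TOKENS))   # ['neural networks']
print(categories_for(doc, ARTEFACT_TOKENS))    # ['estimator', 'agent']
```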
We outline these characteristics to show that the artefacts are sufficiently comprehensive and distinctive to provide a useful basis for further analysis, though as with any clustering there are some overlaps and borderline cases.

Empirical analysis of techniques and artefacts

Using our categories of techniques (e.g., neural networks, information retrieval, cognitive approaches, etc.) and artefacts (e.g., estimator, agent, dialoguer, etc.), we now conduct an empirical analysis of how these different paradigm components have appeared in research papers and other related literature. This analysis will allow us to explore more empirically which paradigm elements have been prominent at different times, and will form the basis for investigating the relationship between paradigms and safety issues later in this paper. To conduct this analysis, we work with AI Topics 6, an official database from the AAAI, using the documents from the period (complete years). This archive contains a variety of documents related to AI research (news, blog entries, conferences, journals and other repositories) that are collected automatically with NewsFinder [8]. We divide the archive into research documents 7 and non-research documents. From the ∼111K documents gathered, ∼11K are research papers and the remaining ∼100K are mostly media. With a mapping approach between the list of exemplars (tokens) of techniques and artefacts (e.g., "Deep Learning", "GAN" or "perceptron" are illustrative exemplars of the "neural network" technique; "classifier", "decision-making" or "object recognition" are exemplars of the "estimator" artefact) and the tags obtained from AI Topics (substrings appearing in titles, abstracts and metadata), we summarise the trends in a series of plots. The evolution of techniques and artefacts in these documents (i.e., the fraction of papers focusing on the list of tokens for each technique category or artefact) is shown in Figure 2, where the data has been smoothed with a moving average filter in order to reduce short-term volatility in the data. Note that this figure shows the aggregation of categories (techniques and artefacts), where each area stack is scaled to sum to 100% in each period of time. In a way, this can be understood as a non-monolithic view of polarities and intensities in sentiment analysis (e.g., identifying sentiment orientation in a set of documents). We see some techniques and artefacts are particularly dominant, as might be expected. For instance, looking at the techniques (left plot), 'knowledge representation and reasoning' takes almost half of the proportion of documents between , while 'general machine learning' becomes more relevant from 2005. We also see an important peak of multiagent systems around 2000. When we look at the artefacts (right plot), we see that 'extractor' (which includes expert systems) became very popular around 1990 but in a matter of a decade was overtaken by 'agent', which became dominant in 2000. We see today that five kinds of artefacts account for about 90% of all mentions: estimators, agents, optimisers, extractors and creators. These patterns also show that some artefacts are not new as a paradigm component, especially when we consider them in a more abstract way.
For instance, it is sometimes understood that GANs introduced a new paradigm, but other kinds of generative systems have been around in AI since its inception, as we see in the violet band in Figure 2 (right). We can also use this analysis to explore what techniques and artefacts are prominent in the media. In Figure 2, we look at all the non-research papers, shown in the 'Media' column beside each 'Research' plot. In this case, we can only show selected documents for the last two (complete) years. 8 We see that the dominance of a few categories is more extreme. For the techniques (Figure 2, left), 'neural networks' and 'general machine learning' occupy more than 75% of total mentions. On the right, we see that the distribution of artefacts is different, but also extreme: only four artefacts ('estimator', 'agent', 'dialoguer' and 'creator') cover more than 90% of mentions. In this case, we see that 'optimiser' and 'extractor' are less visible for laypeople than for researchers, while 'dialoguer' is more relevant for laypeople (most probably because of the common use of digital assistants).
8 While AI Topics provides research documents (mainly) from the 70s onward, media-related documents come mostly from the last 2-3 years.

PARADIGMS AND SAFETY ISSUES

To begin exploring how paradigms relate to safety issues in AI research, we conduct two different analyses. First, we look at the relative prominence of different techniques and artefacts specifically within research publications and other publications that are related to safety. This will help us to discern how the prominence of different paradigms differs in safety research from AI research as a whole, and to consider whether some paradigms are under- or over-represented in the safety literature. Second, we look at how different safety issues co-occur with different paradigms in the literature, enabling us to understand which safety issues are considered to relate to which paradigms, and to identify potential gaps in the literature.

Analysing paradigms within safety research

To explore the prominence of different AI techniques and artefacts in AI safety literature, we begin by conducting a similar analysis to that in section 3.3, but restricting to documents which make some reference to AI safety. We use a list of over 100 relevant tokens to filter documents (Table 3, right column, shows some examples of tokens for each type of safety issue). Applying this filter, we find that, from the ∼111K documents in AI Topics, about ∼21K are related to safety, of which ∼1.5K are research papers, and ∼19K are broader (mostly media) documents. Figure 3 shows the frequency of reference to different techniques and artefacts within this safety-specific database. The results look similar to those for the entire document database in Figure 2, but there are some important differences. First, the frequencies of reference to different techniques and artefacts are more extreme and varied over time. This might be a consequence of the smaller sample size, but we do see this pattern particularly with the more dominant terms, where we have a reasonable sample size. In particular, we see that for AI research as a whole, 'knowledge representation and reasoning' was a dominant technique between the mid 1970s and mid 1990s, taking up about 50% of references (Figure 2, left), but is even more dominant as a proportion of mentions in safety research in the 1990s, at about 60% (see Figure 3, left).
This is even more extreme for the 'extractor' artefact, with about 50% of the documents in the late 1980s (Figure 2 , right), to more than 75% in the early 1990s (Figure 3 , right). The most interesting observation comes from looking at the periods when these peaks happen for different techniques (comparing the left plots of Figure 2 and Figure 3 ). The peak for 'knowledge representation and reasoning' happened in the mid 1980s when looking across AI research in general, but in the early 1990s when filtered by papers which mention safety. The peak for 'planning and scheduling' took place in the late 1990s across all papers but a few years later (and was more pronounced) when filtered by safety-relevant papers. We see a similar pattern for artefacts (comparing right plots of Figures 2 and 3 ): 'extractors' is dominant in the AI literature in the late 1980s, but only become prominent in safety research in the early 1990s; 'agents' rise to prominence in AI research in the late 1990s but only peak in safety research in the early 2000s. This suggests a five-year delay (approximately) between when a technique or artefact becomes popular within general AI research, and when researchers begin to seriously consider the safety issues associated with it. This suggests that safety issues are considered reactively in response to dominant research patterns, and are only considered once a paradigm has been prominent for several years (rather than safety issues related to a given technology being considered at the outset of research, as is more often the case in domains like engineering). In other words, we find that there is a delay between the emergence of AI paradigms and safety research into those paradigms, and safety research neglects non-dominant paradigms. Looking at the plots for non-research documents (comparing documents filtered for safety (Figure 3 , Media), with all documents (Figure 2 , Media), we see that the plots are almost identical. The only slight difference is that both 'natural language processing' techniques and 'dialoguer' artefacts are less prominent in articles filtered for safety, suggesting that the media and laypeople are perhaps less concerned about the harms of conversational systems than experts are. \n Mapping paradigms to safety issues Next we look more closely at how different AI paradigms relate to specific safety issues. To do this, we need a way to categorise different safety issues. Though existing categorisations of safety issues exist [25, 34] , we found these were too coarse for our purposes (e.g. [34] uses just three categories). Building on these existing categorisations, we identified key terms in surveys, blogs and events in AI safety [6, 2, 23, 14, 41, 37] , and clustered them into groups. Through this process we identified 22 categories, as shown in Table 3 . Our clustering process attempted to aggregate safety categories in an abstract way, independently of the subfield in which a term occurs most frequently. For instance, 'distributional shift' is the term used in machine learning, whereas 'belief revision' is the term used in the area of knowledge representation and reasoning. However, both refer to solving the same type of problem -where a system has to adapt or generalise to a new context-so we group both together under the 'problem shift' category. 
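The safety filtering described above and the co-occurrence counting described in the next paragraph can be sketched as below. This is our own, self-contained illustration: the token lists are made up for the example and are far shorter than the paper's Table 3 and taxonomy tables.

```python
# Sketch of safety filtering and (technique, safety issue) co-occurrence counting.
from collections import Counter

TECHNIQUE_TOKENS = {
    "neural networks": ["deep learning", "gan", "neural network"],
    "knowledge representation and reasoning": ["logic programming", "belief revision"],
}
SAFETY_ISSUE_TOKENS = {
    "reliability & robustness": ["robustness", "fault tolerance"],
    "adversarial attacks": ["adversarial attack", "adversarial example"],
}

def matches(text, taxonomy):
    """Categories whose exemplar tokens occur as substrings of the lowercased text."""
    text = text.lower()
    return [cat for cat, toks in taxonomy.items() if any(t in text for t in toks)]

def cooccurrence(docs):
    """Count (technique category, safety issue) pairs over safety-related documents."""
    counts = Counter()
    for doc in docs:
        issues = matches(doc, SAFETY_ISSUE_TOKENS)
        if not issues:                      # keep only documents that mention safety
            continue
        for tech in matches(doc, TECHNIQUE_TOKENS):
            counts.update((tech, issue) for issue in issues)
    return counts

docs = ["Adversarial attacks on deep learning perception",
        "Robustness of belief revision in logic programming",
        "A survey of planning heuristics"]   # no safety tokens: filtered out
print(cooccurrence(docs))
```

Counts of this kind are what the band widths in a Figure-4-style diagram would encode, with the same analysis repeated for artefact categories.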
With this list of categories, we then analyse how related different paradigms and safety issues are, by counting the number of papers (of those filtered for safety relevance) which mention both a given paradigm component (technique or artefact) and a given safety issue, for all combinations of techniques and safety issues (and the same analysis, separately, for artefacts). Figure 4 shows these relationships, where the width of each band represents the number of papers including reference to both elements, with techniques/artefacts on the left, and safety issues on the right. The most general safety issues, such as 'trust, transparency & accountability', 'privacy & integrity', 'reliability & robustness', 'problem shift' and 'interpretability', have the widest bands linking them to different paradigm components, and tend to be linked to a wide variety of different paradigm components (shown by the 'multicolour' bands coming out from these issues). We notice several interesting insights about the relationship between safety issues and techniques or artefacts. The issue of 'problem shift' is largely associated with 'knowledge representation and reasoning', which is surprising since we might expect the broad problems of generalising to new contexts and distributions to be relevant to a wide range of techniques. This may be due to the relevance of belief revision in this kind of techniques. 'Privacy and integrity' is associated with many techniques, but not with reinforcement learning -which makes sense, given that reinforcement learning is much less likely to make use of personal data than other techniques. However, reinforcement learning is also not related to 'safe exploration and side effects', and is only very weakly associated with 'problem shift', which is more surprising, since these do seem like issues that are important for ensuring RL systems are used safely. Part of the reason for this may be that our analysis simply did not find much mention of safety issues in relation to reinforcement learning overall. While we must recognise the limitations of the database we had access to (perhaps AI topics does not capture the kinds of venues where safe reinforcement learning research is published), this does suggest that greater exploration of safety issues related to reinforcement learning is an important gap. Similarly, there is relatively little 24th European Conference on Artificial Intelligence -ECAI 2020 literature linking probabilistic and Bayesian approaches or evolutionary approaches in ML to safety issues. When we focus on less covered safety issues, we see that the associations are less multicoloured. For instance, 'scalable supervision' and 'specification & value alignment' are associated only with reinforcement learning, and 'adversarial attacks' only with 'neural networks'. More surprising are the associations of 'confinement problem', 'manipulation' and 'safe exploration & side effects'. Overall, most attention is paid to a few now prominent safety issues, but more specific issues have very limited combinations with some techniques. The right plot in Figure 4 shows the analysis for artefacts. In this case, also looking at the circle from right to left, we see multicolour bands relating some of the more prominent safety issues to a variety of different artefacts. 
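The band widths in Figure 4 rest on simple paper-level co-occurrence counts. A minimal sketch of that counting step is given below; the document format and token dictionaries are assumptions for illustration, not the authors' implementation.

```python
# Count, for each (technique, safety issue) pair, how many safety-related
# papers mention both; each such paper adds one unit of band width in a
# Figure 4-style diagram.
from collections import Counter

def cooccurrence(papers, technique_tokens, issue_tokens):
    """papers: iterable of lower-cased full texts (illustrative input format)."""
    counts = Counter()
    for text in papers:
        techs = [t for t, toks in technique_tokens.items()
                 if any(tok in text for tok in toks)]
        issues = [i for i, toks in issue_tokens.items()
                  if any(tok in text for tok in toks)]
        for t in techs:
            for i in issues:
                counts[(t, i)] += 1
    return counts   # e.g. {("neural networks", "adversarial attacks"): 42, ...}
```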
The 'agent' paradigm component, which is more prominent in general, is associated with the widest range of safety issues by far -suggesting that more research on safety issues associated with different artefacts may be worthwhile. In general, across techniques and artefacts, we see that 'reliability and robustness' is by far the most frequently discussed safety issue -perhaps because it is more immediately and directly related to the performance of a system than the others. This is followed by 'trust, transparency and accountability', 'privacy and integrity' and 'problem shift'. More research into the less prominent safety issues and how they relate to different paradigms would be valuable. \n GENERAL DISCUSSION A next step for this work would be to combine this mapping exercise with deeper conceptual analysis of the relationship between techniques, artefacts, and safety issues. For instance, the \"confinement problem\" has been discussed in technical safety research relatively recently [4] , and is already strongly associated with optimisers, organisms and extractors (Figure 4 , right): systems that are naturally thought of as being encapsulated. However, other types of systems -dialoguers, estimators, providers and extenders-might also learn to behave in ways that go beyond their original specification or constraints, and it may be worth considering a wider range of \"confinement\" problems across many different AI systems. More broadly, thorough case studies of how a specific AI safety issue might arise for different techniques and artefacts would be very valuable, as would more systematic explorations of the various different safety issues associated with a given paradigm broadly construed. Some recent papers do go in this direction [41, 3] . Further analysis is needed regarding how more recent safety issues relate to the kinds of AI artefacts being deployed in society today, and the techniques those artefacts depend on. For instance, some recent accidents involving self-driving cars may be thought of as a consequence of people misunderstanding the type of artefact autonomous vehicles currently are: while many regard them as 'agents', they are really only 'extenders', and not yet meant to behave autonomously. Terms such as 'auto-pilot' only aid this confusion. Other self-driving car incidents have been caused by idiosyncratic imperfect performance of object recognition systems, failing to detect a human or other objects in rare situations. For now, we recommend that research papers make explicit which particular issues, artefacts and they are covering, and which ones they are excluding, and give some indications of why it is the case (because it does not apply or left for future work). Another avenue for further research would be to explore how different paradigm components (artefacts and techniques) relate to generality. The possibility of developing much more general systems has often been a reason for concern about AI safety [1] ; as a possibility which raises much larger, more critical risks [6] . Considering which AI techniques and artefacts are more likely to lead to or be associated with more advanced, general, systems can therefore enable us to think about the scale of risks they may pose. 
If we are primarily concerned with safety issues associated with greater generality in systems, techniques involving transfer learning, curriculum learning, meta-learning, and other approaches looking for broader task coverage - which we have included in the 'general machine learning' category - may be particularly important to pay attention to [38]. When we look at the artefacts, most views of a general AI system typically associate it with the 'agent' artefact. However, there is no reason to believe that providers, optimisers, extenders, etc., cannot become more general in the future, at least if we understand generality as autonomously covering more and more tasks. It is also worth noting that the prominence of an AI paradigm in the research literature should not necessarily be the main factor used to prioritise work on safety issues. Whether a paradigm results in societal applications with associated safety issues may be more related to sociological factors than to the technology itself. The extent to which a paradigm raises important safety issues is then associated with widespread use rather than with the number of researchers who work on it (though these two things may be correlated). For instance, personal assistants are ubiquitous today, raising various safety and ethics issues, but the underlying technological advances do not necessarily correspond to a particularly prominent paradigm in AI research (reflected in the fact that personal assistants are more popular in media articles - the right plot of Figure 2 - than in research papers). On the other hand, adversarial examples may be crucial for understanding the technical limitations of deep learning, but not so indicative of real-world risks [16]. To identify and prioritise important safety issues in future, therefore, more analysis of the techniques and artefacts most likely to result in widespread use, taking into account sociological factors, will be important. The list of techniques and artefacts introduced in this paper can help identify new safety issues, starting by identifying possible links between these paradigms and safety issues that have not been made before. We need to be more anticipatory about what kinds of problems might arise from different AI systems in future, while at the same time avoiding being too speculative [2]. We hope that by thinking more explicitly about how safety issues relate to techniques and artefacts, AI safety research can both address challenges associated with current research avenues and prepare us for a variety of potential future challenges.
Figure 3. Evolution of the relevance proportion for the period, using research-oriented (Research) and non-research (Media) sources from AI Topics (everything as in Figure 2, except that here we only include the documents related to AI safety).
Figure 4. Left: Mapping between techniques and safety issues from research papers from AI Topics (2010 to 2018). The width of each band connecting two elements represents the number of papers with both elements. Right: Same for artefacts.
Table 1. The 14 categories of AI techniques we use in this paper. Given the relevance of machine learning today, and neural networks in particular, we retained several categories of machine learning techniques (general, declarative, and parametric ML, separate from neural networks).
Technique category | Some example subcategories and techniques
Cognitive approaches | Cognitive services and architectures, affective computing
Declarative machine learning | Rule learning, decision trees, program induction, ILP
Evolutionary & nature-inspired methods | Ant colony, LCS, genetic algorithms, DNA computing
General machine learning | Generative models, Gaussian models, AutoML, ensembles
Heuristics & combinatorial optimization | SAT solver, constraint satisfaction, Monte Carlo search
Information retrieval | Search engine, web mining, information extraction
Knowledge representation and reasoning | Semantic nets, CBR, logics, commonsense reasoning
Multiagent systems & game theory | Distributed problem solving, cooperation, negotiation
Natural language processing | Topic segmentation, parsing, question answering
Neural networks | Perceptron, convolutional network, GAN, RNN
Parametric machine learning | Support vector machines, kmeans, mixtures, LReg
Planning & scheduling | Backward/forward chaining, action description language
Probabilistic & Bayesian approaches | Naive Bayes, probabilistic model, random field
Reinforcement learning & MDPs | Q-learning, deep RL, inverse RL

Table 2. Ten different kinds of AI artefacts, key characteristics and some exemplars.
Artefact | Description | Interface | Dynamics (Interac., Funct.) / Location (Central., Distrib., Coupled) | Exemplars
AGENT | A system in a virtual or physical environment perceiving (observations and possibly rewards) and acting | sensors and actuators | • • | a self-driving car, an autonomous drone, a robotic cleaner, a video game NPC
ESTIMATOR | A system representing an injective mapping from inputs to an extrapolated or estimated output | digital objects | • • | a medical diagnostic model, an oracle, a face recognition system, a news feeder
PROVIDER | A system that waits for petitions that follow a protocol and responds with a solution for them | command and objects | • • | a proof-editing and translation cognitive service, a voice processing system
DIALOGUER | A system that performs a conversation with a peer to extract information, explain things or change behaviour | language | • • | virtual tutoring system, a chatter-bot sales assistant, healthcare assistant
CREATOR | A system that builds new things creatively following some patterns, constraints or examples | specs. and/or examples | • • | a GAN generating faces, personalised email replier, simulated world generator
EXTRACTOR | A system that searches through a structured or unstructured knowledge base to retrieve some objects | conditions and objects | • • | an expert system, a maths pundit, a web search engine, an infor. retrieval system
ORGANISM | A system that takes advantage of the environment or other systems to live, hybridise/mutate and reproduce | resources | • • • | an intelligent computer worm or virus, artificial life, von Neumann probe
OPTIMISER | A system that finds an optimal combination of elements or parameters given some constraints | constraints and objects | • • • | a train scheduling system, an electricity optimising system, theorem prover
SWARM | A system that behaves as the coordination of independent units through cooperation and/or competition | sensors, actuators, communic. | • • | a multiagent network router, a drone swarm, a robotic warehouse, blockchain AI
EXTENDER | A system that regularly augments or compensates capabilities of another system (e.g., a human) | commands, sensors, responses | • • | a memory assistant for people with dementia, a brain implant, a smart navigator

Figure 2. Evolution of the relevance proportion for the period, using research-oriented (Research) and non-research (Media) sources from AI Topics. Left: 14 categories of techniques in Table 1. Right: 10 categories of artefacts in Table 2.

Table 3. AI safety issue groups and their specific problems.
AI Safety Issue Category | Examples of specific AI problems included in the category
Adversarial attacks | Adversarial examples, white/black-box attacks, poisoning, policy manipulation.
AI race & power | AI race, monopolies, oligopolies.
Authenticity & obfuscation | Impersonation, authentication problems, fake media, plagiarism, obfuscation.
Autonomous weapons | Military drones, killer robots, robotic weapon.
Confinement problem | AI boxing breach, containment breach.
Corrigibility & interruptibility | Switch-off button problems, rogue agents, self-preservation taking control.
Dependency | Cognitive atrophy, lack of independence, google effect, ...
Interpretability | Lack of intelligibility, need for explanation.
Malicious use | Malign uses of AI, malicious control, hacking.
Manipulation | Nudging, fake news, manipulative agents.
Misuse & negligence | AI misuse, negligent use.
Moral dilemma | Moral machine issues, utilitarian ethics problems, choosing ethical preferences.
Moral perception & machine rights | Robot rights recognition, moral status disagreement, uncanny valley.
Privacy & integrity | Inconsistency, private access breach, GDPR violation.
Problem shift | Distributional shift, concept drift, lack of generality, distribution overfitting.
Reliability & robustness | Error intolerance, robustness issues, reliability problems.
Reward problems | Honeypot problem, reward corruption, tripwire issues, tampering, wireheading.
Safe exploration & side effects | Negative side effects, unsafe exploration, uncontrolled impact.
Scalable supervision | Supervision costs, human-in-the-loop issues, sparse rewards.
Self-modification | Unintended self-modification, uncontrolled self-improvement.
Specification & value alignment | Instrumental convergence (paperclip), resource stealing, misalignment.
Trust, transparency & accountability | Lack of transparency, lack of trust, untraceability.

Similar selections of keywords can be found in [30], which focused on the venue keywords, and performed a cluster analysis on the AAAI2013 conference keyword set, proposing a new series of keywords which were adapted by AAAI2014; and [15], where the authors focused on views expressed about topics linked to discussions about AI in the New York Times over a 30-year period in terms of public concerns as well as optimism.
5 The complete list of exemplars of techniques, artefacts and safety issues, as well as the source code used and high-resolution plots can be found at https://github.com/nandomp/AIParadigmsSafety.
https://aitopics.org/misc/about.
7 We consider research those documents from the sources: \"AAAI Conferences\", \"AI Magazine\", \"arXiv.org Artificial Intelligence\", \"Communications of the ACM\", \"IEEE Computer\", \"IEEE Spectrum\", \"IEEE Spectrum Robotics Channel\", \"MIT Technology Review\", \"Nature\", \"New Scientist\" and \"Science\". \n\t\t\t 24th European Conference on Artificial Intelligence -ECAI 2020 Santiago de Compostela, Spain \n\t\t\t 24th European Conference on Artificial Intelligence -ECAI 2020 Santiago de Compostela, Spain \n\t\t\t 24th European Conference on Artificial Intelligence -ECAI 2020 Santiago de Compostela, Spain", "date_published": "n/a", "url": "n/a", "filename": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/nonarxiv_teis/1364_paper.tei.xml", "id": "d5b069b46440f9b1965ebb215a7ca881"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "abstract": "It's far from clear that human values will shape an Earth-based space-colonization wave, but even if they do, it seems more likely that space colonization will increase total suffering rather than decrease it. That said, other people care a lot about humanity's survival and spread into the cosmos, so I think suffering reducers should let others pursue their spacefaring dreams in exchange for stronger safety measures against future suffering. In general, I encourage people to focus on making an intergalactic future more humane if it happens rather than making sure there will be an intergalactic future.", "authors": ["Brian Tomasik"], "title": "Risks of Astronomical Future Suffering", "text": "Epigraphs If we carry the green fire-brand from star to star, and ignite around each a conflagration of vitality, we can trigger a Universal metamorphosis. [...] Because of us [...] Slag will become soil, grass will sprout, flowers will bloom, and forests will spring up in once sterile places. 1 [...] If we deny our awesome challenge; turn our backs on the living universe, and forsake our cosmic destiny, we will commit a crime of unutterable magnitude. -Marshall T. Savage, The Millennial Project: Colonizing the Galaxy in Eight Easy Steps, 1994 Let's pray that the human race never escapes from Earth to spread its iniquity elsewhere. -C.S. Lewis If you can't beat 'em, join 'em. -proverb 2 Humans values may not control the future Nick Bostrom's \"The Future of Human Evolution\" (Bostrom, 2004) describes a scenario in which human values of fun, leisure, and relationships may be replaced by hyper-optimized agents that can better compete in the Darwinian race to control our future light cone. The only way we could avert this competitive scenario, Bostrom suggests, would be via a \"singleton\" (Bostrom, 2006) , a unified agent or governing structure that could control evolution. Of course, even a singleton may not carry on human values. Many naive AI agents that humans might build may optimize an objective function that humans find pointless. Or even if humans do maintain hands on the steering wheel, it's far from guaranteed that we can preserve our goals in a stable way across major self-modifications going forward. These factors suggest that even conditional on human technological progress continuing, the probability that human values are realized in the future may not be very large. Carrying out human values seems to require a singleton that's not a blind optimizer, that can stably preserve values, and that is shaped by designers who care about human values rather than selfish gain or something else. 
This is important to keep in mind when we imagine what future humans might be able to bring about with their technology. Some people believe that sufficiently advanced superintelligences will discover the moral truth and hence necessarily do the right things. Thus, it's claimed, as long as humanity survives and grows more intelligent, the right things will eventually happen. There are two problems with this view. First, Occam's razor militates against the existence of a moral truth (whatever that's supposed to mean). Second, even if such moral truth existed, why should a superintelligence care about it? There are plenty of brilliant people on Earth today who eat meat. They know perfectly well the suffering that it causes, but their motivational systems aren't sufficiently engaged by the harm they're doing to farm animals. The same can be true for superintelligences. Indeed, arbitrary intelligences in mind-space needn't have even the slightest inklings of empathy for the suffering that sentients experience. \n Some scenarios for future suffering Even if humans do preserve control over the future of Earth-based life, there are still many ways in which space colonization would multiply suffering. Following are some of them. \n Spread of wild animals Humans may colonize other planets, spreading suffering-filled animal life via terraforming. Some humans may use their resources to seed life throughout the galaxy, which some sadly consider a moral imperative. \n Sentient simulations Given astronomical (Bostrom, 2003) computing power, post-humans may run various kinds of simulations. These sims may include many copies of wild-animal life, most of which dies painfully shortly after being born. For example, a superintelligence aiming to explore the distribution of extraterrestrials of different sorts might run vast numbers of simulations (Thiel, Bergmann and Grey, 2003) of evolution on various kinds of planets. Moreover, scientists might run even larger numbers of simulations of organisms-that-might-have-been, exploring the space of minds. They may simulate decillions of reinforcement learners that are sufficiently self-aware as to feel what we consider conscious pain. \n Suffering subroutines It could be that certain algorithms (say, reinforcement agents (Tomasik, 2014)) are very useful in performing complex machine-learning computations that need to be run at massive scale by advanced AI. These subroutines might be sufficiently similar to the pain programs in our own brains that we consider them to actually suffer. But profit and power may take precedence over pity, so these subroutines may be used widely throughout the AI's Matrioshka brains. \n Black Swans The range of scenarios that we can imagine is limited, and many more possibilities may emerge that we haven't thought of or maybe can't even comprehend. 4 Even a human-controlled future is likely to increase suffering If I had to make an estimate now, I would give ~70% probability that if humans choose to colonize space, this will cause more suffering than it reduces on intrinsic grounds (ignoring compromise considerations discussed later). Think about how space colonization could plausibly reduce suffering. For most of those mechanisms, there seem to be countermechanisms that will increase suffering at least as much. The following sections parallel those above. \n Spread of wild animals David Pearce coined the phrase \"cosmic rescue missions\" (Pearce, n.d.) 
in referring to the possibility of sending probes to other planets to alleviate the wild extraterrestrial (ET) suffering they contain. This is a nice idea, but there are a few problems. • We haven't found any ETs yet, so it's not obvious there are vast numbers of them waiting to be saved from Darwinian misery. Contrast this with the possibilities for spreading wild-animal suffering: • Humans may spread life to many planets (e.g., Mars via terraforming, other Earth-like planets via directed panspermia). The number of planets that can support life may be appreciably bigger than the number that already have it. (See the discussion of f l in the Drake equation.) Moreover, the percentage of planets that can be converted into computers that could simulate wild-animal suffering might be close to 100%. • We already know that Earth-based life is sentient, unlike for ETs. • Spreading biological life is slow and difficult, but disbursing small life-producing capsules is easier than dispatching Hedonistic Imperative probes or berserker probes. Fortunately, humans might not support spread of life that much, though some do. For terraforming, there are survival pressures to do it in the near term, but probably directed panspermia is a bigger problem in the long term. Also, given that terraforming is estimated to require at least thousands of years, while human-level digital intelligence should take at most a few hundred years to develop, terraforming may be a moot point from the perspective of catastrophic risks, since digital intelligence doesn't need terraformed planets. While I noted that ETs are not guaranteed to be sentient, I do think it's moderately likely that consciousness is fairly convergent among intelligent civilizations. This is based on (a) suggestions of convergent consciousness among animals on Earth and (b) the general principle that consciousness seems to be useful for planning, manipulating images, self-modeling, etc. On the other hand, maybe this reflects the paucity of my human imagination in conceiving of ways to be intelligent without consciousness. \n Sentient simulations It may be that biological suffering is a drop in the bucket compared with digital suffering. The biosphere of a planet is less than Type I on the Kardashev scale; it uses a tiny sliver of all the energy of its star. Intelligent computations by a Type II civilization can be many orders of magnitude higher. So humans' sims could be even more troubling than their spreading of wild animals. Of course, maybe there are ETs running sims of nature for science or amusement, or of minds in general to study biology, psychology, and sociology. If we encountered these ETs, maybe we could persuade them to be more humane. I think it's likely that humans are more empathetic than the average civilization because 1. we seem much more empathetic than the average animal on Earth, probably in part due to parental impulses and in part due to trade, although presumably some of these factors would necessarily be true of any technologically advanced civilization 2. selection bias implies that we'll agree with our own society's morals more than those of a random other society because these are the values that we were raised with and that our biology impels us toward. Based on these considerations, it seems plausible that there would be room for improvement through interaction with ETs. 
Indeed, we should in general expect it to be possible for any two civilizations or factions to achieve gains from compromise if they have diminishing marginal utility with respect to amount of control exerted. In addition, there may be cheap Pareto improvements to be had purely from increased intelligence and better understanding of important considerations. That said, there are some downside risks. Posthumans themselves might create suffering simulations, and what's worse, the sims that post-humans run would be more likely to be sentient than those run by random ETs because post-humans would have a tendency to simulate things closer to themselves in mind-space. They might run nature sims for aesthetic appreciation, lab sims for science experiments, or pet sims for pets. \n Suffering subroutines Suffering subroutines may be a convergent outcome of any AI, whether human-inspired or not. They might also be run by aliens, and maybe humans could ask aliens to design them in more humane ways, but this seems speculative. \n Black Swans It seems plausible that suffering in the future will be dominated by something totally unexpected. This could be a new discovery in physics, neuroscience, or even philosophy more generally. Some make the argument that because we know so very little now, it's better for humans to stick around because of the \"option value\": If they later realize it's bad to spread, they can stop, but if they realize they should spread, they can proceed to reduce suffering in some novel way that we haven't anticipated. Of course, the problem with the \"option value\" argument is that it assumes future humans do the right things, when in fact, based on examples of speculations we can imagine now, it seems future humans would probably do the wrong things much of the time. For instance, faced with a new discovery of obscene amounts of computing power somewhere, most humans would use it to run oodles more minds, some nontrivial fraction of which might suffer terribly. In general, most sources of immense power are double-edged swords that can create more happiness and more suffering, and the typical human impulse to promote life/consciousness rather than to remove them suggests that negative and negative-leaning utilitarians are on the losing side. Still, waiting and learning more is plausibly Kaldor-Hicks efficient, and maybe there are ways it can be made Pareto efficient by granting additional concessions to suffering reducers as compensation. \n What about paperclippers? Above I was largely assuming a human-oriented civilization with values that we recognize. But what if, as seems mildly likely, Earth is taken over by a paperclip maximizer, i.e., an unconstrained automation or optimization process? Wouldn't that reduce suffering because it would eliminate wild ETs as the paperclipper spread throughout the galaxy, without causing any additional suffering? Maybe, but if the paperclip maximizer is actually generally intelligent, then it won't stop at tiling the solar system with paperclips. It will want to do science, perform lab experiments on sentient creatures, possibly run suffering subroutines, and so forth. It will require lots of intelligent and potentially sentient robots to coordinate and maintain its paperclip factories, energy harvesters, and mining operations, as well as scientists and engineers to design them. And the paperclipping scenario would entail similar black swans as a human-inspired AI. 
Paperclippers would presumably be less intrinsically humane than a \"friendly AI\", so some might cause significantly more suffering than a friendly AI, though others might cause less, especially the \"minimizing\" paperclippers, e.g., cancer minimizers or death minimizers. If the paperclipper is not generally intelligent, I have a hard time seeing how it could cause human extinction. In this case it would be like many other catastrophic risks -deadly and destabilizing, but not capable of wiping out the human race. 6 What if human colonization is more humane than ET colonization? If we knew for certain that ETs would colonize our region of the universe if Earth-originating intelligence did not, then the question of whether humans should try to colonize space becomes less obvious. As noted above, it's plausible that humans are more compassionate than a random ET civilization would be. On the other hand, human-inspired computations might also entail more of what we consider to count as suffering because the mind architectures of the agents involved would be more familiar. And having more agents in competition for the light cone might lead to dangerous outcomes. But for the sake of argument, suppose an Earthoriginating colonization wave would be better than the expected colonization wave of an ET civilization that would colonize later if we didn't do so. In particular, suppose that if human values colonized space, they would cause only −0.5 units of suffering, compared with −1 units if random ETs colonized space. Then it would seem that as long as the probability P of some other ETs coming later is bigger than 0.5, then it's better for humans to colonize and pre-empt the ETs from colonizing, since −0.5 > −1 • P for P > 0.5. However, this analysis forgets that even if Earthoriginating intelligence does colonize space, it's not at all guaranteed that human values will control how that colonization proceeds. Evolutionary forces might distort compassionate human values into something unrecognizable. Alternatively, a rogue AI might replace humans and optimize for arbitrary values throughout the cosmos. In these cases, humans' greater-than-average compassion doesn't make much difference, so suppose that the value of these colonization waves would be −1, just like for colonization by random ETs. Let the probability be Q that these non-compassionate forces win control of Earth's colonization. Now the expected values are −31 • Q + −0.5 • (1 − Q) for Earth-originating colonization versus −1 • P if Earth doesn't colonize and leaves open the possibility of later ET colonization. For concreteness, say that Q = 0.5. (That seems plausibly too low to me, given how many times Earth has seen overhauls of hegemons in the past.) Then Earth-originating colonization is better if and only if −1 • 0.5 + −0.5 • 0.5 > −1 • P −0.75 > −1 • P P > 0.75. Given uncertainty about the Fermi paradox and Great Filter, it seems hard to maintain a probability greater than 75% that our future light cone would contain colonizing ETs if we don't ourselves colonize, although this section presents an interesting argument for thinking that the probability of future ETs is quite high. What if rogue AIs result in a different magnitude of disvalue from arbitrary ETs? Let H be the expected harm of colonization by a rogue AI. Assume ETs are as likely to develop rogue AIs as humans are. Then the disvalue of Earth-based colonization is H • Q + (−0.5) • (1 − Q), and the harm of ET colonization is P • (H • Q + (−1) • (1 − Q)). 
Again taking Q = 0.5, Earth-based colonization has better expected value if H • 0.5 + (−0.5) • 0.5 > P • (H • 0.5 + (−1) • 0.5), which simplifies to H − 0.5 > P • (H − 1), i.e., P > (H − 0.5) / (H − 1), where the inequality flips around when we divide by the negative number (H − 1). Figure 1 represents a plot of these threshold values for P as a function of H. Even if H = 0 and a rogue AI caused no suffering, it would still only be better for Earth-originating intelligence to colonize if P > 0.5, i.e., if the probability of ETs colonizing in its place was at least 50%. These calculations involve many assumptions, and it could turn out that Earth-based colonization has higher expected value given certain parameter values. This is a main reason I maintain uncertainty as to the sign of Earth-based space colonization. However, this whole section was premised on human-inspired colonization being better than ET-inspired colonization, and the reverse might also be true, since computations of the future are more likely to be closer to what we most value and disvalue if humans do the colonizing. \n Why we should remain cooperative If technological development and space colonization seem poised to cause astronomical amounts of suffering, shouldn't we do our best to stop them? Well, it is worth having a discussion about the extent to which we as a society want these outcomes, but my guess is that someone will continue them, and this would be hard to curtail without extreme measures. Eventually, those who go on developing the technologies will hold most of the world's power. These people will, if only by selection effect, have strong desires to develop AI and colonize space. Resistance might not be completely futile. There's some small chance that suffering reducers could influence society in such a way as to prevent space colonization. But it would be better for suffering reducers, rather than fighting technologists, to compromise with them: We'll let you spread into the cosmos if you give more weight to our concerns about future suffering. Rather than offering a very tiny chance of complete victory for suffering reducers, this cooperation approach offers a higher chance of getting an appreciable fraction of the total suffering reduction that we want. In addition, compromise means that suffering reducers can also win in the scenario (30% likely in my view) that technological development does prevent more suffering than it causes even apart from considerations of strategic compromise with other people. Ideally these compromises would take the form of robust bargaining arrangements. Some examples are possible even in the short term, such as if suffering reducers and space-colonization advocates agree to cancel opposing funding in support of some commonly agreed-upon project instead. The strategic question of where to invest resources to advance your values at any given time amounts to a prisoner's dilemma with other value systems, and because we repeatedly make choices about where to invest, what stances to adopt, and what policies to push for, these prisoner's dilemmas are iterated. In Robert Axelrod's tournaments on the iterated prisoner's dilemma, the best-performing strategies were always \"nice,\" i.e., not the first to defect. Thus, suffering reducers should not be the first to defect against space colonizers. Of course, if it seems that space colonizers show no movement toward suffering reduction, then we should also be \"provocable\" to temporary defection until the other side does begin to recognize our concerns.
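As a toy illustration of the Axelrod point above (not part of the original essay), the following sketch pits tit-for-tat, which is "nice" but "provocable", against unconditional strategies in an iterated prisoner's dilemma with standard payoffs.

```python
# Toy iterated prisoner's dilemma: tit-for-tat never defects first, but
# retaliates against defection; payoffs are the usual (T=5, R=3, P=1, S=0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return "C" if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return "D"

def always_cooperate(my_hist, their_hist):
    return "C"

def play(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        a, b = s1(h1, h2), s2(h2, h1)
        p, q = PAYOFF[(a, b)]
        h1.append(a); h2.append(b)
        score1 += p; score2 += q
    return score1, score2

print(play(tit_for_tat, always_defect))     # punishes defection after round 1
print(play(tit_for_tat, always_cooperate))  # sustains mutual cooperation
```

Tit-for-tat sustains cooperation with cooperators while limiting its losses against defectors, which is the stance recommended here for suffering reducers.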
We who are nervous about space colonization stand a lot to gain from allying with its supportersin terms of thinking about what scenarios might happen and how to shape the future in better directions. We also want to remain friends because this means pro-colonization people will take our ideas more seriously. Even if space colonization happens, there will remain many sub-questions on which suffering reducers want to have a say: e.g., not spreading wildlife, not creating suffering simulations/subroutines, etc. We want to make sure suffering reducers don't become a despised group. For example, think about how eugenics is more taboo because of the Nazi atrocities than it would have been otherwise. Antitechnology people are sometimes smeared by association with the Unabomber. Animal supporters can be tarnished by the violent tactics of a few, or even by the antics of PETA. We need to be cautious about something similar happening for suffering reduction. Most people already care a lot about preventing suffering, and we don't want people to start saying, \"Oh, you care about preventing harm to powerless creatures? What are you, one of those suffering reducers?\" where \"suffering reducers\" has become such a bad name that it evokes automatic hatred. So not only is cooperation with colonization supporters the more promising option, but it's arguably the only net-positive option for us. Taking a more confrontational stance risks hardening the opposition and turning people away from our message. Remember, preventing future suffering is something that everyone cares about, and we shouldn't erode that fact by being excessively antagonistic. 8 Possible upsides to an intelligent future \n Black swans that don't cut both ways Many speculative scenarios that would allow for vastly reducing suffering in the multiverse would also allow for vastly increasing it: When you can decrease the number of organisms that exist, you can also increase the number, and those who favor creating more happiness / life / complexity / etc. will tend to want to push for the increasing side. However, there may be some black swans that really are one-sided, in the sense that more knowledge is most likely to result in a decrease of suffering. For example: We might discover that certain routine physical operations map onto our conceptions of suffering. People might be able to develop ways to re-engineer those physical processes to reduce the suffering they contain. If this could be done without a big sacrifice to happiness or other values, most people would be on board, assuming that present-day values have some share of representation in future decisions. This may be a fairly big deal. I give nontrivial probability (maybe ~10%?) that I would, upon sufficient reflection, adopt a highly inclusive view of what counts as suffering, such that I would feel that significant portions of the whole multiverse contain suffering-dense physical processes. After all, the mechanics of suffering can be seen as really simple when you think about them a certain way, and as best I can tell, what makes animal suffering special are the bells and whistles that animal sentience involves over and above crude physics -things like complex learning, thinking, memory, etc. But why can't other physical objects in the multiverse be the bells and whistles that attend suffering by other physical processes? This is all very speculative, but what understandings of the multiverse our descendants would arrive at we can only begin to imagine right now. 
\n Valuing reflection If we care to some extent about moral reflection on our own values, rather than assuming that suffering reduction of a particular flavor is undoubtedly the best way to go, then we have more reason to sup-port a technologically advanced future, at least if it's reflective. In an idealized scenario like coherent extrapolated volition (CEV) (Yudkowsky, 2004) , say, if suffering reduction was the most compelling moral view, others would see this fact. 2 Indeed, all the arguments any moral philosopher has made would be put on the table for consideration (plus many more that no philosopher has yet made), and people would have a chance to even experience extreme suffering, in a controlled way, in order to assess how bad it is compared with other things. Perhaps there would be analytic approaches for predicting what people would say about how bad torture was without actually torturing them to find out. And of course, we could read through humanity's historical record and all the writings on the Internet to learn more about what actual people have said about torture, although we'd need to correct for will-to-live bias and deficits of accuracy when remembering emotions in hindsight. But, importantly, in a CEV scenario, all of those qualifications can be taken into account by people much smarter than ourselves. Of course, this rosy picture is not a likely future outcome. Historically, forces seize control because they best exert their power. It's quite plausible that someone will take over the future by disregarding the wishes of everyone else, rather than by combining and idealizing them. Or maybe concern for the powerless will just fall by the wayside, because it's not really adaptive for powerful agents to care about weak ones, unless there are strong, stable social pressures to do so. This suggests that improving prospects for a reflective, tolerant future may be an important undertaking. Rather than focusing on whether or not the future happens, I think it's more valuable for suffering reducers to focus on making the future better if it happens -by encouraging compromise, moral reflectiveness, philosophical wisdom, and altruism, all of which make everyone better off in expectation. Figure 1 : 1 Figure 1: Plot of threshold values for P as a function of H \n\t\t\t Because nature contains such vast amounts of suffering, I would strongly dislike such a project. I include this quotation for rhetorical effect and to give a sense of how others see the situation. \n\t\t\t Of course, what's compelling to idealized-me would not necessarily be compelling to idealized-you. Value divergences may", "date_published": "n/a", "url": "n/a", "filename": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/nonarxiv_teis/risks-of-astronomical-future-suffering.tei.xml", "id": "f115b7b1a1e17a02fd8bdb95abaf2732"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "abstract": "Artificial intelligence tools can help save lives in a pandemic. However, the need to implement technological solutions rapidly raises challenging ethical issues. 
We need new approaches for ethics with urgency, to ensure AI can be safely and beneficially used in the COVID-19 response and beyond.", "authors": ["Asaf Tzachor", "Jess Whittlestone", "Lalitha Sundaram", "Seán Ó Héigeartaigh"], "title": "Artificial intelligence in a crisis needs ethics with urgency", "text": "T he novel coronavirus pandemic (COVID-19) is the largest global crisis in a generation, hitting the world at a time when artificial intelligence (AI) is showing potential for widespread real-world application. We are currently seeing a rapid increase in proposals for how AI can be used in many stages of pandemic prevention and response. AI can aid in detecting, understanding and predicting the spread of disease, which can provide early warning signs and inform effective interventions 1 . AI may improve the medical response to the pandemic in several ways: supporting physicians by automating aspects of diagnosis 2 , prioritizing healthcare resources 3 , and improving vaccine and drug development 4 . AI also has potential applications beyond immediate response, such as in combating online misinformation about COVID-19 5 . The current crisis presents an unprecedented opportunity to leverage AI for societal benefit. However, the urgency with which new technologies must be deployed raises particularly challenging ethical issues and risks. There is growing concern that the use of AI and data in response to COVID-19 may compromise privacy and civil liberties by incentivizing the collection and processing of large amounts of data, which may often be private or personal 6 . More broadly, although AI clearly has a great deal to offer, we must be careful not to overestimate its potential. Its efficacy will heavily depend on the reliability and relevance of the data available. With the worldwide spread of COVID-19 occurring so quickly, obtaining sufficient data for accurate AI forecasting and diagnosis is challenging. Even where AI models are strictly speaking accurate, they may have differential impacts across subpopulations, with harmful consequences that are difficult to predict in advance 7 . A further concern is that the lack of transparency in AI systems used to aid decision-making around COVID-19 may make it near impossible for the decisions of governments and public officials to be subject to public scrutiny and legitimation 8 . Finally, the current crisis may have longer-term impacts on public trust and norms around the use of AI in society. How these develop will depend on perceptions of how successful and responsible use of AI to address COVID-19 is. \n The challenge of ethics in a crisis Robust ethics and risk assessment processes are needed to ensure AI is used responsibly in response to COVID-19. However, implementing these at a time of crisis is far from straightforward, especially where new technologies need to be deployed at unprecedented speed and scale. For example, forecasting models have to be available at the early stages of disease spread and make use of all possible data to productively inform policy interventions. Current processes for ethics and risk assessment around uses of AI are still relatively immature, and the urgency of a crisis highlights their limitations. Much work in AI ethics in recent years has focused on developing high-level principles, but these principles say nothing about what to do when principles come into conflict with one another 9 . 
For example, principles do not tell us how to balance the potential of AI to save lives (the principle of 'beneficence') against other important values such as privacy or fairness. One common suggestion for navigating such tensions is through engagement with diverse stakeholder groups, but this may be difficult to enact with sufficient speed at times of crisis. When new technologies may pose unknown risks, we would ordinarily try to introduce them in gradual, iterative ways, allowing time for issues to be identified and addressed. In the context of a crisis, however, there is a stark trade-off between a cautious approach and the need to deploy technological solutions at scale. For example, there may be pressure to rely on systems with less human oversight and potential for override due to staff shortages and time pressures, but this must be carefully balanced against the risk of failing to notice or override crucial failures. This does not mean that ethics should be neglected at times of crisis. It only emphasizes that we must find ways to conduct ethical review and risk assessment with the same urgency that motivates the development of AI-based solutions. \n Doing ethics with urgency We suggest that ethics with urgency must at a minimum incorporate the following components: (1) the ability to think ahead rather than dealing with problems reactively, (2) more robust procedures for assuring the behaviour and safety of AI systems, and (3) building public trust through independent oversight. First, ethics with urgency must involve thinking through possible issues and risks as thoroughly as possible before systems are developed and deployed in the world. This need to think ahead is reflected in the notion of 'ethics by design': making ethical considerations part of the process of developing new applications of AI, not an afterthought 10 . For example, questions such as 'what data do we need and what issues might this raise?' and 'how do we build this model so that it is possible to interrogate key assumptions?' need to be considered throughout the development process. This means that experts in ethics and risk assessment need to be involved in teams developing AI-based solutions from the beginning, and much clearer guidelines are needed for engineers and developers to think through these issues. An ethics by design approach should also be supplemented with more extensive foresight work, looking beyond the more obvious and immediate ethical issues, and considering a wider range of longer-term and more systemic impacts. By synthesizing diverse sources of expertise, established foresight methodologies can be used to identify new comment risks and key uncertainties likely to shape the future, and use this to make better informed decisions today 11 . Second, where applications of AI are used at scale in safety-critical domains such as healthcare, ensuring the safety and reliability of those systems across a range of scenarios is of crucial importance. Finding ways to rapidly conduct robust testing and verification of systems will therefore be central to doing ethics with urgency. We suggest that the application of AI in crisis scenarios should in particular be heavily informed by research on best practices for the verification and validation of autonomous systems 12 . 
It may also be worthwhile for governments to fund further work on methods for establishing the reliability of machine learning systems across a range of circumstances, particularly where those systems may be deployed in high-stakes crisis scenarios. Third, an important aspect of ethics with urgency is building public trust in how AI is being used. If governments use AI systems in ways perceived to be either mistaken or problematically value-laden, this could result in a loss of public trust severe enough to drastically reduce support for beneficial uses of AI not just in this crisis, but also in the future. Building public trust around new uses of technology may be particularly challenging in crisis times, where the need to move fast makes it easier for governments to fall back on opaque and centralized forms of decision-making. Several analyses of past pandemics have argued that transparency and public scrutiny are essential for maintaining public trust 13 . An independent oversight body, responsible for reviewing any potential risks and ethical issues associated with new technologies and producing publicly available reports, could help ensure public transparency. This oversight body could, among other approaches, make use of techniques such as 'red teaming' to rigorously challenge systems and their assumptions, unearthing any limitations and biases in the applications being proposed 14 . Red teaming is widely used in security settings, but can be applied broadly: at its core, red teaming is a way of challenging the blind spots of a team by explicitly looking for flaws from an outsider or adversarial perspective. As well as allowing developers to identify and fix issues before deployment, such processes could help assure public stakeholders that the interests and values of different groups are being thoroughly considered, and that all eventualities are prepared for. \n conclusion As the COVID-19 pandemic illustrates, times of crisis can necessitate rapid deployment of new technologies in order to save lives. However, this urgency both makes it more likely that ethical issues and risks will arise, and makes them more challenging to address. Rather than neglecting ethics, we must find ways to do ethics with urgency too. We strongly encourage technologists, ethicists, policymakers and healthcare professionals to consider how ethics can be implemented at speed in the ongoing response to the COVID-19 crisis. If ethical practices can be implemented with urgency, the current crisis could provide an opportunity to drive greater application of AI for societal benefit, and to build public trust in such applications. ❐ Asaf Tzachor 1 ✉ , Jess Whittlestone 2 , Lalitha Sundaram 1 and Seán Ó hÉigeartaigh 2 1 Centre for the Study of Existential Risk, University of Cambridge, Cambridge, UK. 2 Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK. 
✉ e-mail: Published online: 22 June 2020 https://doi.org/10.1038/s42256-020-0195-0 Nature Machine Intelligence | VOL 2 | July 2020 | 365-366 | www.nature.com/natmachintell", "date_published": "n/a", "url": "n/a", "filename": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/nonarxiv_teis/s42256-020-0195-0.tei.xml", "id": "23327ada5a7f902584e178ff9da332d0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/nonarxiv_teis/Feasibility of Training an AGI using Deep Reinforcement Learning, A Very Rough Estimate.tei.xml", "id": "274b68192b056e268f128ff63bfcd4a4"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "abstract": "Models such as Sequence-to-Sequence and Image-to-Sequence are widely used in real-world applications. While the ability of these neural architectures to produce variable-length outputs makes them extremely effective for problems like Machine Translation and Image Captioning, it also leaves them vulnerable to failures of the form where the model produces outputs of undesirable length. This behaviour can have severe consequences, such as increased computation usage and induced faults in downstream modules that expect outputs of a certain length. Motivated by the need to have a better understanding of the failures of these models, this paper proposes and studies the novel output-size modulation problem and makes two key technical contributions. First, to evaluate model robustness, we develop an easy-to-compute differentiable proxy objective that can be used with gradient-based algorithms to find output-lengthening inputs. Second and more importantly, we develop a verification approach that can formally verify whether a network always produces outputs within a certain length. Experimental results on Machine Translation and Image Captioning show that our output-lengthening approach can produce outputs that are 50 times longer than the input, while our verification approach can, given a model and input domain, prove that the output length is below a certain size.", "authors": ["Chenglong Wang", "Rudy Bunel", "Edward Grefenstette", "♣ Deepmind"], "title": "Knowing When to Stop: Evaluation and Verification of Conformity to Output-size Specifications", "text": "Introduction Neural networks with variable output lengths have become ubiquitous in several applications. In particular, recurrent neural networks (RNNs) such as LSTMs [17], used to form \"sequence\" models [30], have been successfully and extensively applied in image captioning [34, 28, 21, 6, 12, 37, 26, 24, 1], video captioning [33, 38, 35, 40, 41], machine translation (MT) [30, 9], summarization [10], and in other sequence-based transduction tasks. The ability of these sequence neural models to generate variable-length outputs is key to their performance on complex prediction tasks. However, this ability also opens a powerful attack vector for adversaries that try to force the model to produce outputs of specific lengths that, for instance, lead to increased computation or affect the correct operation of down-stream modules.
To address this issue, we introduce the output-length modulation problem where given a specification of the form that the model should produce outputs with less than a certain maximum length, we want to find adversarial examples, i.e. search for inputs that lead the model to produce outputs with a larger length and thus show that the model under consideration violates the specification. Different from existing work on targeted or untargeted attacks where the goal is to perturb the input such that the output is another class or sequence in the development dataset (thus within the dataset distribution), the outputmodulation problem requires solving a more challenging task of finding inputs such that the output sequences are outside of the training distribution, which was previously claimed difficult [5] . The naive approach to the solution of the output-length modulation problem involves a computationally intractable search over a large discrete search space. To overcome this, we develop an easy-to-compute differentiable proxy objective that can be used with gradient-based algorithms to find output-lengthening inputs. Experimental results on Machine Translation show that our adversarial outputlengthening approach can produce outputs that are 50 times longer than the input. However, when evaluated on the image-to-text image captioning model, the method is less successful. There could have been two potential reasons for this result: the image-to-text architecture is truly robust, or the adversarial approach is not powerful enough to find adversarial examples for this model. To resolve this question, we develop a verification method for checking and formally proving whether a network is consistent with the outputsize specification for the given range of inputs. To the best of our knowledge, our verification algorithm is the first formal verification approach to check properties of recurrent models with variable output lengths. Our Contributions To summarize, the key contributions of this paper are as follows: • We propose and formulate the novel output-size modulation problem to study the behaviour of neural architectures capable of producing variable length outputs, and we study its evaluation and verification problems. • For evaluation, we design an efficiently computable differentiable proxy for the expected length of the output sequence. Experiments show that this proxy can be optimized using gradient descent to efficiently find inputs causing the model to produce long outputs. • We demonstrate that popular machine translation models can be forced to produce long outputs that are 50 times longer than the input sequence. The long output sequences help expose modes that the model can get stuck in, such as undesirable loops where they continue to emit a specific token for several steps. • We demonstrate the feasibility of formal verification of recurrent models by proposing the use of mixedinteger programming to formally verify that a certain neural image-captioning model will be consistent with the specification for the given range of inputs. Motivations and Implications Our focus on studying the output-length modulation problem is motivated by the following key considerations: • Achieving Computational Robustness: Many ML models are now offered as a service to customers via the cloud. 
In this context, ML services employing variable-output models could be vulnerable to denialof-service attacks that cause the ML model to perform wasteful computations by feeding it inputs that induce long outputs. This is particularly relevant for variable compute models, like Seq2Seq [9, 30] . Given a trained instance of the model, no method is known to check for the consistency of the model with a specification on the number of computation steps. Understanding the vulnerabilities of ML models to such outputlengthening and computation-increasing attacks is important for the safe deployment of ML services. • Understanding and Debugging Models: By designing inputs that cause models to produce long outputs, it is possible to reason about the internal representations learned by the model and isolate where the model exhibits undesirable behavior. For example, we find that an English to German sequence-to-sequence model can produce outputs that end with a long string of question marks ('?'). This indicates that when the output decoder state is conditioned on a sequence of '?'s, it can end up stuck in the same state. • Uncovering security vulnerabilities through adversarial stress-testing: The adversarial approach to outputlength modulation tries to find parts of the space of inputs where the model exhibits improper behavior. Such inputs does not only reveal abnormal output size, but could also uncover other abnormalities like the privacy violations of the kind that were recently revealed by [4] where an LSTM was forced to output memorized data. • Canonical specification for testing generalization of variable-output models: Norm-bounded perturbations of images [31] have become the standard specification to test attacks and defenses on image classifiers. While the practical relevance of this particular specification can be questioned [14] , it is still served as a useful canonical model encapsulating the essential difficulty in developing robust image classifiers. We believe stability of output-lengths can serve a similar purpose: as a canonical specification for variable outputlength models. The main difficulties in studying variable output length models in an adversarial sense (the non-differentiability of the objective with respect to inputs) are exposed in output-lengthening attack, making it a fertile testing ground for both evaluating attack methods and defenses. We hope that advances made here will facilitate the study of robustness on variable compute models and other specifications for variableoutput models such as monotonicity. \n Related Work There are several recent studies on generating adversarial perturbations on variable-output models. [27, 20] show that question answering and machine comprehension models are sensitive to attacks based on semantics preserving modification or the introduction of unrelated information. [11, 39] find that character-level classifiers are highly sensitive to small character manipulations. [29] shows that models predicting the correctness of image captions struggle against perturbations consisting of a single word change. [5] and [8] further study adversarial attacks for sequential-output models (machine-translation, image captioning) with specific target captions or keywords. We focus on sequence output models and analyze the output-length modulation problem, where the models should produce outputs with at least a certain number of output tokens. 
We study whether a model can be adversarially perturbed to change the size of the output, which is a more challenging task compared to targeted attacks (see details in Section 3). On the one hand, existing targeted attack tasks aim to perturb the input such that the output is another sequence in the validation dataset (thus within the training distribution), but attacking output size requires the model to generate out-of-distribution long sequences. On the other hand, since the desired output sequence is only loosely constrained by the length rather than directly provided by the user, the attack algorithm is required to explore the output size to make the attack possible. For models that cannot be adversarially perturbed, we develop a verification approach to show that it isn't simply a lack of power by the adversary but the sign of true robustness from the model. Similar approaches have been investigated for feedforward networks [3, 7, 32] but our work is the first to handle variable output length models and the corresponding decoding mechanisms. \n Modulating Output-size We study neural network models capable of producing outputs of variable length. We start with a canonical abstraction of such models, and later specialize to concrete models used in machine translation and image captioning. We denote by x the input to the network and by X the space of all inputs to the network. We consider a set of inputs of interest S, which can denote, for example, the set of \"small\" 1 perturbations of a nominal input. We study models that produce variable-length outputs sequentially. Let y t 2 Y denote the t-th output of the model, where Y is the output vocabulary of the model. At each timestep, the model defines a probability over the next element P (y t+1 |x, y 0:t ). There exists a special end-of-sequence element eos 2Y that signals termination of the output sequence. In practice, different models adopt different decoding strategies for generating y t+1 from the probability P (y t+1 |x, y 0:t ) [13, 19, 22] . In this paper, we focus on the commonly used deterministic greedy decoding strategy [13] , where at each time step, the generated token is given by the argmax over the logits: y 0 = argmax {P (•|x)} (1a) y t+1 = argmax {P (•|x, y 0:t )} if y t 6 = eos (1b) Since greedy decoding is deterministic, for a given sample x with a finite length output, we can define the length of the 1 The precise definition of small is specific to the application studied. greedily decoded sequence as: `(x)=t s.t y t = eos y i 6 = eos 8i K, then (3) is false. The attack spaces S we consider in this paper include both continuous inputs (for image-to-text models) and discrete inputs (for Seq2Seq models). \n Continuous inputs: For continuous inputs, such as image captioning tasks, the input is an n ⇥ m image with pixel values normalized to be in the range [ 1, 1] . x is an n ⇥ m matrix of real numbers and X =[ 1, 1] n⇥m . We define the perturbation space S(x, ) as follows: S(x, )={x 0 2X |kx 0 xk 1  } i.e., the space of perturbations of the input x in the `1 ball. Discrete inputs: For discrete inputs, e.g., machine translation tasks, inputs are discrete tokens in a language vocabulary. Formally, given the vocabulary V of the input language, the input space X is defined as all sentences composed of tokens in V , i.e., X = {(x 1 ,...,x n ) | x i 2 V, n > 0}. 
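As a concrete illustration of the quantities just defined, the following minimal sketch (ours, not the authors' code) shows how greedy decoding, the output length $\ell(x)$, and the output-size specification $\ell(x) \leq K$ can be evaluated for a generic step-wise decoder; the `step_fn` interface, `EOS_ID` and the safety cap are assumptions made purely for illustration.

```python
# Minimal sketch of greedy decoding and the output-length function l(x).
# `step_fn(x, prefix) -> logits` is an assumed interface returning the logits
# over the output vocabulary given input x and the tokens decoded so far.
import numpy as np

EOS_ID = 0          # assumed id of the end-of-sequence token
MAX_STEPS = 1000    # safety cap so the loop always terminates

def greedy_decode(step_fn, x):
    """Deterministic greedy decoding: pick the argmax token at every step."""
    prefix = []
    for _ in range(MAX_STEPS):
        logits = step_fn(x, prefix)          # shape: (|Y|,)
        token = int(np.argmax(logits))       # y_{t+1} = argmax P(. | x, y_0:t)
        prefix.append(token)
        if token == EOS_ID:                  # eos terminates the sequence
            break
    return prefix

def output_length(step_fn, x):
    """l(x): number of tokens decoded up to and including the first eos."""
    return len(greedy_decode(step_fn, x))

def violates_spec(step_fn, x, K):
    """True if x is a counterexample to the specification l(x) <= K."""
    return output_length(step_fn, x) > K
```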
Given an input sequence $x = (x_1, \ldots, x_n)$, we define the $\delta$-perturbation space of a sequence as all sequences of length $n$ with at most $\lceil \delta \cdot n \rceil$ tokens different from $x$ (i.e., $\delta \in [0,1]$ denotes the fraction of tokens that an attacker is allowed to modify). Formally, the perturbation space $S(x,\delta)$ is defined as follows:

$$S(x,\delta) = \Big\{ (x'_1, \ldots, x'_n) \in V^n \;\Big|\; \sum_{i=1}^{n} \mathbb{1}[x_i \neq x'_i] \leq \lceil \delta \cdot n \rceil \Big\}$$

\n Extending Projected Gradient Descent Attacks In projected gradient descent (PGD) attacks [25], given an objective function $J(x)$, the attacker computes the adversarial example by searching for inputs in the attack space that maximize $J(x)$. (Here the adversarial objective is stated as a maximization, so the algorithm is strictly projected gradient ascent, but we keep the PGD terminology since it is standard in the literature.) In the basic attack algorithm, we perform the following update at each iteration:

$$x' = \Pi_{S(x,\delta)}\big(x + \alpha \nabla_x J(x)\big) \qquad (5)$$

where $\alpha > 0$ is the step size and $\Pi_{S(x,\delta)}$ denotes the projection of the attack onto the valid space $S(x,\delta)$. Observe that the adversarial objective in Eq. (4) cannot be used directly as $J(x)$ to update $x$, because the length of the output sequence is not a differentiable function of the input. This hinders the direct application of PGD to output-lengthening attacks. Furthermore, when the input space $S$ is discrete, gradient descent cannot be applied directly, since it only operates on continuous input spaces. In the following, we show how we extend the PGD attack algorithm to handle these challenges. Greedy approach for sequence lengthening We introduce a differentiable proxy for $\ell(x)$. Given an input $x$ whose decoder output logits are $(o_1, \ldots, o_k)$ (i.e., the decoded sequence is $y = (\operatorname{argmax}(o_1), \ldots, \operatorname{argmax}(o_k))$), instead of directly maximizing the output sequence length, we use a greedy approach that finds an output sequence longer than $k$ by minimizing the probability that the model terminates within $k$ steps. In other words, we minimize the probability that the model produces eos at any of the timesteps between $1$ and $k$. Formally, the proxy objective $J$ is defined as follows:

$$J(x) = -\sum_{t=1}^{k} \max\Big\{\, o_t[\text{eos}] - \max_{z \neq \text{eos}} o_t[z],\; -\epsilon \,\Big\}$$

where $\epsilon > 0$ is a hyperparameter that clips the loss. This objective is piecewise differentiable w.r.t. the input $x$ (in the same sense that the ReLU function is differentiable) and can be efficiently optimized using PGD. \n Continuous relaxation for discrete inputs While we can apply the PGD attack with the proxy objective to models with continuous inputs by setting the projection function $\Pi_{S(x,\delta)}$ to the Euclidean projection, we cannot directly update discrete inputs. To enable a PGD-type attack in the discrete input space, we use the Gumbel trick [18] to reparameterize the input space and obtain a continuous relaxation of the inputs. Given an input sequence $x = (x_1, \ldots, x_n)$, for each $x_i$ we construct a score vector $\pi_i \in \mathbb{R}^{|V|}$ initialized with $\pi_i[x_i] = 1$ and $\pi_i[z] = -1$ for all $z \in V \setminus \{x_i\}$. The softmax function applied to $\pi_i$ is a probability distribution over input tokens at position $i$ with its mode at $x_i$. With this reparameterization, instead of feeding $x = (x_1, \ldots, x_n)$ into the model, we feed Gumbel-softmax samples $(\tilde{x}_1, \ldots, \tilde{x}_n)$ drawn from these distributions. The sample $\tilde{x}_i$ is computed as follows:

$$u_i \sim \text{Uniform}(0,1); \quad g_i = -\log(-\log(u_i)); \quad p_i = \text{softmax}(\pi_i); \quad \tilde{x}_i = \text{softmax}\big((g_i + \log p_i)/\tau\big)$$

where $\tau$ is the Gumbel-softmax sampling temperature that controls the discreteness of $\tilde{x}$.
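To make these two ingredients concrete, here is a minimal PyTorch sketch (our illustration, not the authors' implementation) of the clipped eos-margin proxy and a single relaxed gradient step on the Gumbel-softmax scores; the `model(soft_tokens)` interface, the `eos_id` argument and the hyperparameter defaults are assumptions for illustration only.

```python
# Sketch of the differentiable output-lengthening proxy and one PGD-style
# update on the Gumbel-softmax relaxation of a discrete input sequence.
# `model(soft_tokens) -> logits` of shape (k, |Y|) is an assumed interface
# that accepts soft one-hot token vectors.
import torch
import torch.nn.functional as F

def eos_margin_loss(logits, eos_id, eps=1.0):
    """sum_t max{ o_t[eos] - max_{z != eos} o_t[z], -eps }.
    Minimizing this sum is equivalent to maximizing the proxy J in the text."""
    mask = torch.zeros_like(logits, dtype=torch.bool)
    mask[:, eos_id] = True
    other_max = logits.masked_fill(mask, float("-inf")).max(dim=1).values
    margin = logits[:, eos_id] - other_max
    return torch.clamp(margin, min=-eps).sum()

def gumbel_softmax_sample(pi, tau):
    """x_i = softmax((g_i + log softmax(pi_i)) / tau), with g_i ~ Gumbel(0, 1)."""
    u = torch.rand_like(pi).clamp(1e-9, 1 - 1e-9)
    g = -torch.log(-torch.log(u))
    log_p = F.log_softmax(pi, dim=-1)
    return F.softmax((g + log_p) / tau, dim=-1)

def attack_step(model, pi, eos_id, tau=0.5, lr=0.005, eps=1.0):
    """One gradient step on the relaxed scores pi of shape (n, |V|)."""
    pi = pi.detach().requires_grad_(True)
    soft_tokens = gumbel_softmax_sample(pi, tau)
    loss = eos_margin_loss(model(soft_tokens), eos_id, eps)
    loss.backward()
    with torch.no_grad():
        new_pi = pi - lr * pi.grad      # descend on the clipped eos margin
    return new_pi.detach()
```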
With this relaxation, we perform PGD attack on the distribution ⇡ at each iteration. Since ⇡ i is unconstrained, the projection step in ( 5 ) is unnecessary. When the final ⇡ 0 =( ⇡ 0 1 ,...,⇡ n ) is obtained from the PGD attack, we draw samples x 0 i ⇠ Categorical(⇡ i ) to get the final adversarial example for the attack. \n Verified Bound on Output Length While heuristics approaches can be useful in finding attacks, they can fail due to the difficulty of optimizing nondifferentiable nonconvex functions. These challenges show up particularly when the perturbation space is small or when the target model is trained with strong bias in the training data towards short output sequences (e.g., the Show-and-Tell model as we will show in Section 6). Thus, we design a formal verification approach for complete reasoning of the output-size modulation problem, i.e., finding provable guarantees that no input within a certain set of interest can result in an output sequence of length above a certain threshold. Our approach relies on counterexample search using intelligent brute-force search methods, taking advantage of powerful modern integer programming solvers [15] . We encode all the constraints that an adversarial example should satisfy as linear constraints, possibly introducing additional binary variables. Once in the right formalism, these can be fed into an off-the-shelf Mixed Integer Programming (MIP) solver, which provably solves the problem, albeit with a potentially large computational cost. The constraints consist of four parts: (1) the initial restrictions on the model inputs (encoding S(x, )), (2) the relations between the different activations of the network (implementing each layer), (3) the decoding strategy (connection between the output logits and the inputs at the next step), and (4) the condition for it being a counterexample (ie. a sequence of length larger than the threshold). In the following, we show how each part of the constraints is encoded into MIP formulas. Our formulation is inspired by pior work on encoding feed-forward neural networks as MIPs [3, 7, 32] . The image captioning model we use consists of an image embedding model, a feedforward convolutional neural network that computes an embedding of the image, followed by a recurrent network that generates tokens sequentially starting with the initial hidden state set to the image embedding. The image embedding model is simply a sequence of linear or convolutional layers and ReLU activation functions. Linear and convolutional layers are trivially encoded as linear equality constraints between their inputs and outputs, while ReLUs are represented by introducing a binary variable and employing the big-M method [16] : x i = max (x i , 0) ) i 2{0, 1},x i 0 (6a) x i  u i • i ,x i xi (6b) x i  xi l i • (1 i ) (6c) with l i and u i being lower and upper bounds of xi which can be obtained using interval arithmetic (details in [3] ). Our novel contribution is to introduce a method to extend the techniques to handle greedy decoding used in recurrent networks. For a model with greedy decoding, the token with the most likely prediction is fed back as input to the next time step. To implement this mechanism as a mixed integer program, we employ a big-M method [36] : o max = max y2Y (o y ) ) o max o y , y 2{0, 1}8 y 2Y (7a) o max  o y +(u l y )(1 y ) 8y 2Y (7b) X y2Y y =1 (7c) with l y ,u y being a lower/upper bound on the value of o y and u = max y2Y u y (these can again be computed using interval arithmetic). 
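As an illustration of the ReLU encoding in constraints (6a)–(6c), the following sketch builds the big-M formulation with the PuLP modelling library (assuming PuLP and its bundled CBC solver are available); the variable names, bounds and the toy objective are our own choices for illustration, not the paper's implementation.

```python
# Sketch of the big-M MIP encoding of a ReLU, y = max(x, 0), with known
# pre-activation bounds l <= x <= u (e.g. obtained via interval arithmetic).
from pulp import LpProblem, LpVariable, LpMaximize

def add_relu(prob, x, l, u, name):
    """Add variables/constraints so that y equals max(x, 0) at any feasible point."""
    y = LpVariable(f"{name}_out", lowBound=0, upBound=max(u, 0))
    d = LpVariable(f"{name}_ind", cat="Binary")   # d = 1 on the active (x > 0) branch
    prob += y <= u * d                            # inactive branch forces y = 0
    prob += y >= x                                # y is at least the pre-activation
    prob += y <= x - l * (1 - d)                  # active branch forces y = x
    return y

# Tiny usage example: maximize relu(x1 + x2 - 1) over the box [-1, 1]^2.
prob = LpProblem("relu_demo", LpMaximize)
x1 = LpVariable("x1", lowBound=-1, upBound=1)
x2 = LpVariable("x2", lowBound=-1, upBound=1)
pre = x1 + x2 - 1                     # affine layer; interval bounds are [-3, 1]
y = add_relu(prob, pre, l=-3, u=1, name="h1")
prob += y                             # objective
prob.solve()
print(y.value())                      # expected optimum: 1.0 (at x1 = x2 = 1)
```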
Implementing the maximum in this way gives us both a variable representing the value of the maximum (o max ), as well as a one-hot encoding of the argmax ( y ). If the embedding for each token is given by {emb i | i 2Y}, we can simply encode the input to the following RNN timestep as P y2Y y • emb y , which is a linear function of the variables that we previously constructed. With this mechanism to encode the greedy decoding, we can now unroll the recurrent model for the desired number of timesteps. To search for an input x with output length `(x) K, we unroll the recurrent network for K steps and attempt to prove that at each timestep, eos is not the maximum logit, as in (2) . We setup the problem as: max min t=1.. K  max z6 =eos o t [z] o t [eos] (8) where o(k) represents the logits in the k-th decoding step. We use an encoding similar to the one of Equation ( 7 ) to represent the objective function as a linear objective with added constraints. If the global optimal value of Eq. ( 8 ) is positive, this is a valid counterexample: at all timesteps t 2 [1.. K], there is at least one token greater than the eos token, which means that the decoding should continue. On the other hand, if the optimal value is negative, that means that those conditions cannot be satisfied and that it is not possible to generate a sequence of length greater than K. The eos token would necessarily be predicted before. This would imply that our robustness property is True. \n Target Model Mechanism We use image captioning and machine translation models as specific target examples to study the output length modulation problem. We now introduce their mechanism. \n Image captioning models The image captioning model we consider is an encoder-decoder model composed of two modules: a convolution neural network (CNN) as an encoder for image feature extraction and a recurrent neural network (RNN) as a decoder for caption generation [34] . Formally, the input to the model x is an m ⇥ n sized image from the space X =[ 1, 1] m⇥n , the CNN-RNN model computes the output sequence as follows: i 0 = CNN(x); h 0 = 0 o t ,h t+1 = RNNCell(i t ,h t ) y t = arg max(o t ); i t+1 = emb(y t ) where emb denotes the embedding function. The captioning model first run the input image x through a CNN to obtain the image embedding and feed it to the RNN as the initial input i 0 along with the initial state h 0 . At each decode step, the RNN uses the input i t and state h t to compute the new state h t+1 as well as the logits o t representing the log-probability of the output token distribution in the vocabulary. The output y t is the token in the vocabulary with highest probability based on o t , and it is embedded into the continuous space using an embedding matrix W emb as W emb [y t ]. The embedding is fed to the next RNN cell as the input for the next decoding step. \n Machine translation models The machine translation model is an encoder-decoder model [30, 9] with both the encoder and the decoder being RNNs. Given the vocabulary V in of the input language, the valid input space X is defined as all sentences composed of tokens in V in , i.e., X = {(x 1 ,...,x n ) | x i 2 V, n > 0}. Given an input sequence x =( x 1 ,...,x n ), the model first calculates its embedding f (x) RNN as follows (h e t and i e t denote the encoder hidden states and the inputs at the t-th time step, respectively. emb e denotes the embedding function for each token in the vocabulary). 
The model then uses f (x) as the initial state h 0 for the decoder RNN to generate the output sequence, following the same approach as in the image captioning model. h e 0 = 0; i e t = emb e (x t ) h e t = RNNCell e (i e t ,h e t 1 ); f (x)=h e n \n Experiments We consider the following three models, namely, Multi-MNIST captioning, Show-and-Tell [34] , and Neural Machine Translation (NMT) [30, 9, 2] models. \n Details of models and datasets Multi-MNIST. The first model we evaluate is a minimal image captioning model for Multi-MNIST dataset. The Multi-MNIST dataset is composed from the MNIST dataset (Figure 1 left ). Each image in the dataset is composed from 1-3 MNIST images: each MNIST image (28 * 28) is placed on the canvas of size (28 * 112) with random bias on the x-axis. The composition process guarantees that every MNIST image is fully contained in the canvas without overlaps with other images. The label of each image is the list of MNIST digits appearing in the canvas, ordered by their x-axis values. The dataset contains 50,000 training images and 10,000 test images, where the training set is constructed from MNIST training set and the test set is constructed from MNIST test set. The images are normalized to [ 1, 1] before feeding to the captioning model. For this dataset, we train a CNN-RNN model for label prediction. The model encoder is a 4-layers CNN (2 convolution layers and 2 fully connected layers with ReLU activation functions applied in between). The decoder is a RNN with ReLU activation. Both the embedding size and the hidden size are set to 32. We train the model for 300 steps with Adam optimizer based on the cross-entropy loss. The model achieves 91.2% test accuracy, and all predictions made by the model on the training set have lengths no more than 3. Show-and-Tell. Show and Tell model [34] is an image captioning model with CNN-RNN encoder-decoder architecture similar to the Multi-MNIST model trained on the MSCOCO 2014 dataset [23] . Show-and-Tell model uses Inception-v3 as the CNN encoder and an LSTM for caption generation. We use a public version of the pretrained model 3 for evaluation. All images are normalized to [ 1, 1] 3 https://github.com/tensorflow/models/ and all captions in the dataset are within length 20. NMT. The machine translation model we study is a Seq2Seq model [30, 9] with the attention mechanism [2] trained on the WMT15 German-English dataset. The model uses byte pair segmentation (BPE) subword units [28] as vocabulary. The input vocabulary size is 36, 548. The model consists of 4-layer LSTMs of 1024 units with a bidirectional encoder, with the embedding dimension set to 1024. We use a publicly available checkpoint 4 with 27.6 BLEU score on the WMT15 test datasets in our evaluation. At training time, the model restricts the maximum decoding length to 50. \n Adversarial Attacks Our first experiment studies whether adversarial inputs exist for the above models and how they affect model decoding. For each model, we randomly select 100 inputs from the development dataset as attack targets, and compare the output length distributions from random perturbation and PGD attacks. Multi-MNIST We evaluate the distribution of output lengths of images with an `1 perturbation radius of 2{ 0.001, 0.005, 0.01, 0.05, 0.1, 0.5} using both random search and PGD attack. In random search, we generate 10,000 random images within the given perturbation radius for each image in the target dataset as new inputs to the model. 
In PGD attack, the adversarial inputs are obtained by running 10,000 gradient descent steps with an learning rate of 0.0005 using the Adam Optimizer. Neither of the attack methods can find any adversarial inputs for 2{0.001, 0.005, 0.01} perturbation radius (i.e., no perturbation is found for any images in the target dataset within the above to generate an output sequence longer than the original one). Figure 2 shows the distribution of the output lengths for images with different perturbation radius. Results show that the PGD attack is successful at finding attacks that push the distribution of output lengths higher, particularly at larger values of . Examples of adversarial inputs found by the model are shown in Figure 1 . \n Show-and-Tell For the Show-and-Tell model, we generate attacks within an `1 perturbation radius of =0.5 with both random search and PGD attack on 500 images randomly selected from the development dataset. However, except one adversarial input found by PGD attack that would cause the model to produce an output with size 25, no other adversarial inputs are found that can cause the model to produce outputs longer than 20 words, which is the training length cap. Our analysis shows that the difficulty of attacking the model is resulted from its strong bias on the output sequence distribution and the saturation of sigmoid gates in the LSTM decoder. This result is also consistent with the result found by [5] that Show-and-Tell model is \"only able to generate relevant captions learned from the training distribution\". NMT We evaluate the NMT model by comparing the output length distribution from adversarial examples generated from random search and PGD attack algorithms. We randomly draw 100 input sentences from the development dataset. The maximum input length is 78 and their corresponding translations made by the model are all within 75 tokens. We consider the perturbation 2{0.3, 0.5, 1.0}. 1. Random Search. In each run of the random attack, given an input sequence with length n, we first randomly select d • ne locations to modify, then randomly select substitutions of the tokens at these locations from the input vocabulary, and finally run the NMT model on the modified sequence. We run 10,000 random search steps for the 100 selected inputs, and show the distributions of all outputs obtained from the translation (in the total 1M output sequences). 2. PGD Attack. In PGD attack, we also start by randomly selecting d • ne locations to modify for each input sequence with length n. We then run 800 iterations of PGD attack with Adam optimizer using an initial learning rate of 0.005 to find substitutions of the tokens at these selected locations. We plot the output length obtained from running these adversarial inputs through the translation model. Figure 3 shows the distribution of output sequence lengths obtained from random search methods with different . We aggregate all sequences with length longer than 100 into the group '>100' in the plot. Results show that even random search approach could often craft inputs such that the corresponding output lengths are more than 75 and occasionally generates sentences with output length over 100. The random search algorithm finds 79, 11, 3 for =0.3, 0.5, 1, respectively, among the 1M translations that are longer than 100 tokens (at small , the search space is more restricted, and random search has a higher success rate of finding long outputs). 
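For concreteness, the random-search baseline described above can be sketched as follows; the `translate(tokens)` and `vocab` interfaces, the seed and the trial count are illustrative assumptions rather than the authors' code.

```python
# Sketch of the random-search baseline for the NMT output-lengthening attack:
# pick ceil(delta * n) positions, substitute random vocabulary tokens, and keep
# the perturbation whose translation is longest.
import math
import random

def random_search_attack(translate, vocab, x, delta, n_trials=10_000, seed=0):
    rng = random.Random(seed)
    n = len(x)
    budget = math.ceil(delta * n)          # number of tokens we may modify
    best_x, best_len = list(x), len(translate(x))
    for _ in range(n_trials):
        cand = list(x)
        for i in rng.sample(range(n), budget):
            cand[i] = rng.choice(vocab)    # random substitution at position i
        out_len = len(translate(cand))
        if out_len > best_len:
            best_x, best_len = cand, out_len
    return best_x, best_len
```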
Notably, the longest sequence found by the random search is a sequence with output length 312 tokens, where the original sequence is only 6. to larger perturbations. With an unconstrained perturbation = 100%, PGD attack algorithm discovers more adversarial inputs whose outputs are longer than 100 tokens (10% among all attacks), which is 1000⇥ more often than random search. As an extreme case, PGD attack discovered an adversarial input with length 3 whose output length is 575. Examples of adversarial inputs and their corresponding model outputs are shown in Figure 5 and the Appendix; we find out that a common feature of the long outputs produced by the translation model is that the output sequences often end with long repetitions of one (or a few) words. \n names name names name names grammatically name names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names names eos To analyze the bottleneck of PGD attack on the NMT model, we further run a variation of the PGD attack where the attack space is the (continuous) word embedding space as opposed to the (discrete) token space: we allow the attacker to directly modify token embeddings at selected attack locations to any other vector. PGD attack on this variation achieves a 100% success rate to find adversarial token embeddings such that the model outputs are longer than 500 tokens. This indicates that the discrete space is a bottleneck for consistently finding stronger attacks. \n Verification Our implementation of the verification algorithm using the mixed integer programming (8) is implemented using SCIP [15] . We run our verification algorithm on the Multi-Figure 6 . Proportion of samples being provably robust (in blue), vulnerable (in red) or of unknown robustness status (in gray) to an attack attempting to make the model generate an output sequence longer than the ground truth, as a function of the perturbation radius allowed to the attacker. For small radiuses, the MIP can prove that no attacks can be successful. For large radiuses, we are able to find successful attacks. MNIST dataset, attempting to formally prove the robustness of the model to attacks attempting to generate an output longer than the ground truth. For each input image, we set a timeout of 30 minutes for the solver. The results in Figure 6 show that our verification algorithm is able to verify formally that no attacks exists for small perturbation radiuses. However, as the perturbation radius increases, our verification algorithm times out and is not able to explore the full space of valid perturbations and thus cannot decide whether attacks exists in the given space. For this reason, the number of robust samples we report is only a lower bound on the actual number of robust samples. Conversely, the vulnerable samples that we exhibit give us an upper bound on the number of those robust samples. As shown by the large proportion of samples of unknown status, there is currently still a gap between the capabilities of formal verification method and attacks. \n Conclusion In this paper, we introduce the existence and the construction of the output-length modulation problem. 
We propose a differentiable proxy that can be used with PGD to efficiently find output-lengthening inputs. We also develop a verification approach to formally prove certain models cannot produce outputs greater than a certain length. We show that the proposed algorithm can produce adversarial examples that are 50 times longer than the input for machine translation models, and the image-captioning model can conform the output size is less than certain maximum length using the verification approach. In future work, we plan to study adversarial training of sequential output models using the generated attacks, to models that are robust against output lengthening attacks, and further, verify this formally. Figure 1 . 1 Figure 1. Multi-MNIST examples (left), adversarial examples found by PGD attack (mid), and their differences. For the first group, the model correctly predicts label l1 =[6, 1] on the original image but predicts l 0 1 =[6, 1, 1] for its corresponding adversarial input. Predictions on the original/adversarial inputs made by model for the second group are l2 =[0, 7, 4],l 0 2 =[0, 1, 4, 3], and l3 =[3],l 0 3 =[3, 3, 5, 3] for the third group. The adversarial inputs in the first/second/third groups are found within the perturbation radius δ1 =0 .1,δ2 =0 .25,δ3 = 0.25. \n Figure 2 . 2 Figure2. The distribution of output length for random search (denoted as Rand) and PGD attack with different perturbation radius δ. The x-axis denotes the output length and y-axis denotes the number of outputs with the corresponding length. δ =0(no perturbation allowed) refers to the original output distribution of the target dataset. \n Figure 3 . 3 Figure 3. The histogram representing the output length distribution of the NMT model using random search with different perturbations (δ 2{ 0.3, 0.5, 1}). The x-axis shows the output length. y-axis values are divided by 10,000, the number of random perturbation rounds per image. \n Figure 4 4 Figure4shows the result from attacking the NMT model with PGD attack. Results show that PGD attack has relatively low success rate at lower perturbations compared \n Figure 4 . 4 Figure 4. The histogram representing the output length distribution of the NMT model under PGD attack for different δ. x-axis shows the output length and y-axis shows the number of instances with the corresponding length. \n ( I) Die Waffe wird ausgestellt und durch den Zaun übergeben. (O) The weapon is issued and handed over by the fence . eos (I 0 ) Die namen name descri und ames utt origin i.e. meet grammatisch . (O 0 ) \n Figure 5 . 5 Figure 5. An example of German to English translation where I,O refer to an original sequence in the dataset and the corresponding translation made by the model. I 0 ,O 0 refer to an adversarial example found by PGD attack and the corresponding model translation. \n\t\t\t https://github.com/tensorflow/nmt", "date_published": "n/a", "url": "n/a", "filename": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/nonarxiv_teis/Wang_Knowing_When_to_Stop_Evaluation_and_Verification_of_Conformity_to_CVPR_2019_paper.tei.xml", "id": "040b35a01f7762b6597d68fb1c7497e8"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "abstract": "Suppose that several actors are going to deploy learning agents to act on their behalf. What principles should guide these actors in designing their agents, given that they may have competing goals? An appealing solution concept in this setting is welfare-optimal learning equilibrium. 
This means that the learning agents should constitute a Nash equilibrium whose payoff profile is optimal according to some measure of total welfare (welfare function). In this work, we construct a class of learning algorithms in this spirit called learning tit-for-tat (L-TFT). L-TFT algorithms maximize a welfare function according to a specified optimization schedule, and punish their counterpart when they detect that they are deviating from this plan. Because the policies of other agents are not in general fully observed, agents must infer whether their counterpart is following a cooperative learning algorithm. This requires us to develop new techniques for making inferences about counterpart learning algorithms. In two sequential social dilemmas, our L-TFT algorithms successfully cooperate in self-play while effectively avoiding exploitation by and punishing defecting learning algorithms.", "authors": ["Jesse Clifton", "Maxime Riché"], "title": "Towards cooperation in learning games", "text": "Introduction As the capabilities of machine learning systems improve, it is increasingly important to understand how to achieve cooperation between systems deployed by actors with competing goals. Moreover, because intelligent systems may continue to learn after they are deployed, designing agents who can cooperate requires accounting for ways their policies may change post-deployment. In this paper, we look at the setting where multiple learning agents are deployed to act on behalf of their respective principals. There are three main goals of this work. The first is to articulate an appealing criterion for the design of learning agents, which is welfare-optimal learning equilibrium. The second is to present a general approach to designing learning agents who approximately fulfill this criterion, which we call learning tit-for-tat (L-TFT). This framework is generic to agents who update their policies over time, rather than depending on a particular class of reinforcement learning algorithms. The third is to illustrate some of the strategic considerations which need to be addressed in the design of L-TFT agents. We do this by constructing an L-TFT agent using model-free reinforcement learning algorithms, and examining its performance in two sequential social dilemmas. This agent converges to welfare-optimal payoffs in self-play, while successfully avoiding exploitation and punishing defecting learning algorithms. A learning equilibrium is simply a Nash equilibrium of a learning game, in which strategies are learning algorithms, and payoffs are the cumulative rewards associated with the profile of learning algorithms chosen by each principal [Brafman and Tennenholtz, 2003 ]. Principals will want to choose the learning agents they deploy strategically. This means that it is not enough for a learner to perform well against a specified set of other players (e.g., Shoham and Leyton-Brown 2008) . It is also not enough that a learner converges to Nash equilibrium of the game which is being learned (e.g. Bowling and Veloso 2001 , Conitzer and Sandholm 2007 , Letcher et al. 2018 , as this does not guarantee that a player cannot benefit by deviating from that profile (i.e., deploying a different learning algorithm). Like other sequential games, learning games will generally have many learning equilibria. Independent equilibrium selection on part of the principals may lead to poor performance. Thus, to guarantee socially optimal payoffs, principals need to coordinate when designing their agents. 
L-TFT addresses this problem via a welfare function. A welfare function measures the social value of a payoff profile; for instance, a simple welfare function is the sum of the principals' respective payoffs. An L-TFT agent learns a policy profile which optimizes this welfare function, and punishes its counterpart if it detects that they are not doing the same. Thus agents are incentivized to follow a learning algorithm which optimizes the welfare function. Coordination can thus be achieved via coordination on a welfare function and on a class of learning algorithms for optimizing it. In our experiments, learners observe only the actions taken by their counterpart, and not their counterpart's (stochastic) policy. This requires us to construct L-TFT agents who model their counterpart's learning schedule based on their actions and conduct a hypothesis test as to whether it matches a cooperative learning schedule. This in turn introduces incentives to exploit this testing procedure. We examine the performance of L-TFT against an algorithm designed specifically to exploit the way in which it detects defections, as well as its performance in self-play and against a naive selfish learner. Lastly, there is a strategic element in choosing the sensitivity with which L-TFT's hypothesis test detects defections. In Section 5.3, we examine this element in a learning game between an L-TFT agent and a potential exploiter choosing how cautious to be in deviating from a cooperative learning algorithm. \n Related work Learning equilibrium was first discussed (under that name) by Brafman and Tennenholtz [2003] . Their construction of learning equilibria involves evaluating each of a finite set of policies in a finite stochastic game, whereas the algorithms we focus on use incremental stochastic updates towards the welfare-optimal policy profile. Our online learning setting complements the setting in which agents are trained fully offline and then deployed. This is the setting typically considered in applications of reinforcement learning to cooperation in social dilemmas. For instance, , , Wang et al. [2018] each develop methods in which punishment and cooperative policies are trained offline, and cooperation at deployment time is sustained by the threat of punishment. However, as many agents may learn post-deployment, the strategic aspects of learning itself -rather than just policy selection -must be addressed. The manipulation of the learning of counterpart agents has been explored in the literature on opponent-shaping , Letcher et al., 2018 . Perhaps the most similar to our approach is the variant of 's learning with opponent learning awareness which uses opponent modeling (LOLA-OM). LOLA-OM uses supervised learning to estimate the policy followed by the other agent, and updates its own policy in a way that is intended to manipulate the direction in which the other player updates their policy. On the other hand, our L-TFT uses opponent modeling to infer whether the other agent is following a cooperative learning algorithm. 3 Preliminaries: Learning games Consider a situation in which multiple agents interact with their environment over time, incrementally updating the policies which they use to choose actions. These updates are controlled by their learning algorithms, which are fixed by their principals before the agents begin interacting with the environment. It is helpful to distinguish between two different games that are being played here. 
There is the base game, which is defined by the dynamics of the environment, the reward functions of the agents, and the policies available to each agent. In this paper, the \"base game\" will be a stochastic game (described below). There is also the learning game, in which the principals simultaneously submit learning algorithms to be deployed in the base game environment, and attain payoffs equal to some measure of cumulative reward. We work with stochastic games [Littman, 1994] , a generalization of Markov decision processes to the multi-agent setting. We will assume only two players, i = 1, 2. We'll refer to player i's counterpart with the index j. In a stochastic game, players simultaneously take actions at each time t ∈ N; observe a reward depending on the current state; and the state evolves according to a Markovian transition function. Write the state space as S and the action spaces as A i . Let the reward functions be mappings r i : S × A 1 × A 2 → R, with the reward to player i at time t given by R t i = r i (S t , A t 1 , A t 2 ) . Let H t be the history of observations until time t. We will assume that both agents fully observe the state and each others' actions and rewards, such that H t = {(S v , A v 1 , A v 2 , R v 1 , R v 2 )} t v=0 . A policy π i gives a probability distribution over actions to be taken by player i at each state. Players wish to learn the parameters θ i of a policy which (in some sense) leads to large cumulative reward. Then we define a learning algorithm. Definition 3.1 (Learning algorithm). A learning algorithm for player i is a mapping σ i from histories H t to parameters, i.e., σ i : H → Θ i , where H is the set of histories and Θ i is the space of policy parameters for player i. A learning game is a game whose strategies are learning algorithms. We define payoffs via average rewards. Define the value to player i of learning algorithm profile (σ 1 , σ 2 ) starting from history h t as V i (h t , σ) = lim inf T →∞ T −1 E σ1,σ2 t+T v=t R v i | H t , where E σ1,σ2 is the expectation taken with respect to trajectories in which agents follow learning algorithms σ 1 , σ 2 at each step. Let the initial state s 0 be fixed so that h 0 = {s 0 }. Then a learning game is a game whose strategy spaces are spaces of learning algorithms Σ 1 , Σ 2 , and whose payoffs are given by V i (σ 1 , σ 2 ) ≡ V i (h 0 , σ 1 , σ 2 ). A learning equilibrium is a Nash equilibrium of a learning game: Definition 3.2 (Learning equilibrium). Learning algorithm profile (σ 1 , σ 2 ) is a learning equilibrium of the learning game with learning algorithm spaces Σ 1 , Σ 2 if sup σ 1 ∈Σ1 V 1 (σ 1 , σ 2 ) ≤ V 1 (σ 1 , σ 2 ) and sup σ 2 ∈Σ2 V 2 (σ 1 , σ 2 ) ≤ V 2 (σ 1 , σ 2 ). Finally, let a welfare function w be a function measuring the social value of different learning algorithm profiles. A straightforward welfare function is the utilitarian welfare, w(σ 1 , σ 2 ) = V 1 (σ 1 , σ 2 ) + V 2 (σ 1 , σ 2 ). Others commonly discussed are the Nash welfare [Nash, 1950] and the egalitarian welfare (e.g., Kalai 1977) . We say that a profile of learning algorithms (σ 1 , σ 2 ) is a welfare-optimal learning equilibrium if it is a learning equilibrium and if it maximizes a welfare function w. We will construct learning algorithms which 1) optimize a welfare function in self-play and 2) are minimally exploitable, in the spirit of welfare-optimal learning equilibrium. 
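As a minimal illustration of these definitions, the following sketch estimates the learning-game values $V_1, V_2$ and the utilitarian welfare by rolling out a pair of learning algorithms; the Gym-style environment interface and the learner interface (`act`, `update`) are assumptions made for illustration, and the finite horizon stands in for the lim inf of average rewards.

```python
# Sketch of estimating learning-game payoffs and the utilitarian welfare
# by rolling out two learning algorithms in a shared environment.
import numpy as np

def estimate_values(env, learner1, learner2, horizon=10_000):
    """Monte-Carlo estimate of V_1, V_2 (average rewards) for one rollout."""
    s = env.reset()
    returns = np.zeros(2)
    for t in range(horizon):
        a1, a2 = learner1.act(s), learner2.act(s)
        s_next, (r1, r2), done, _ = env.step((a1, a2))
        learner1.update((s, a1, a2, r1, s_next))
        learner2.update((s, a2, a1, r2, s_next))
        returns += (r1, r2)
        s = env.reset() if done else s_next
    v1, v2 = returns / horizon
    return v1, v2

def utilitarian_welfare(v1, v2):
    """w(sigma_1, sigma_2) = V_1 + V_2."""
    return v1 + v2
```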
4 Learning tit-for-tat (L-TFT) Tit-for-tat (TFT) famously enforces cooperation in the iterated Prisoner's Dilemma by cooperating when its counterpart cooperates, and defecting when they defect. Similarly, the \"folk theorems\" say that equilibria in repeated games are constructed by specifying a target payoff profile, a profile of target strategies which achieve this payoff profile, and a punishment strategy that is used to punish agents who deviate from the target strategy (see Mailath et al. [2006] for a review). Learning tit-for-tat (L-TFT) is a class of learning algorithms that are analogous to tit-for-tat for learning games. For L-TFT, defection consists in playing a policy other than the current estimate of the cooperative policy. This depends on a notion of cooperation and a class of algorithms for learning a \"cooperative\" policy. These are encoded in a welfare function and a class of learning algorithms which learn an optimal policy with respect to that welfare function. Thus an L-TFT agent punishes their counterpart when it detects that they are not learning a welfare-optimal policy using a particular set of algorithms, and otherwise plays according to their current estimate of the welfare-optimal policy. Because policies are in general stochastic, and because we do not assume that policy parameters are mutually visible, we also need a method for inferring whether the other player is following this cooperative algorithm. In particular, we need to construct a test for the hypotheses: H 0 : My counterpart is following an algorithm for learning a welfare-optimal policy. H 1 : My counterpart is not following an algorithm for learning a welfare-optimal policy. In order to be able to cooperate and punish, as well as detect defections by their counterpart, an agent i playing L-TFT maintains the following policy estimates: • A cooperative policy θ C,t i trained to optimize the welfare function. This network is updated according to a learning algorithm O C . In our experiments, O C corresponds to double deep Q-networks (DDQN; Van Hasselt et al. 2015) trained on the reward signal R t 1 + R t 2 ; • A punishment policy θ P,t i , updated according to a learning algorithm O P . In our experiments, this corresponds to DDQN trained on the reward signal −R t j ; • An estimate θ C,t j of the other player's current policy which corresponds to the hypothesis that they are playing according to a cooperative learning algorithm. This policy is estimated by specifying a class of cooperative learning algorithms Σ C j ; finding the learning algorithm σ C,t j in this class which best fits the other player's actions; and taking θ C,t j = σ C,t j (H t ). In our experiments, the L-TFT agent assumes that the learning algorithms Σ C j apply DDQN to R t 1 + R t 2 and follow a Boltzmann exploration policy with an unknown initial temperature parameter. In order to obtain θ C,t j we must therefore estimate this temperature parameter, which we do using an approximate maximum likelihood algorithm; • An estimate θ D,t j of the other player's current policy which corresponds to the hypothesis that they are not playing according to a cooperative learning algorithm. This policy is estimated by specifying a class of learning algorithms Σ D j , finding the learning algorithm σ D,t j in this class which best fits the other player's actions; and taking θ D,t j = σ D,t j (H t ). In our experiments, we approximate this procedure by obtaining θ D,t j via supervised learning on player j's actions given the state. 
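The four estimates listed above can be organised schematically as follows; this structural sketch is our illustration (component interfaces such as `update`, `act` and `log_prob` are assumed), not the authors' implementation.

```python
# Structural sketch of the four estimates an L-TFT agent maintains and how
# they feed into its action choice. The DDQN / supervised-learning components
# are abstracted behind simple duck-typed interfaces.
class LTFTAgent:
    def __init__(self, coop_policy, punish_policy,
                 coop_model_of_opponent, defect_model_of_opponent, detector):
        self.coop = coop_policy                    # theta_C_i: trained on r_1 + r_2
        self.punish = punish_policy                # theta_P_i: trained on -r_j
        self.opp_coop = coop_model_of_opponent     # theta_C_j (cooperation hypothesis)
        self.opp_defect = defect_model_of_opponent # theta_D_j (defection hypothesis)
        self.detector = detector                   # goodness-of-fit test (L-TFT_q)
        self.punishing = False

    def observe(self, state, opp_action, rewards, opp_broadcasts_punishment):
        self.coop.update(state, rewards)
        self.punish.update(state, rewards)
        if not opp_broadcasts_punishment:
            # opponent models are frozen while the opponent punishes
            self.opp_coop.update(state, opp_action, rewards)
            self.opp_defect.update(state, opp_action)
            self.detector.record(
                self.opp_coop.log_prob(state, opp_action),
                self.opp_defect.log_prob(state, opp_action))

    def end_of_episode(self, being_punished):
        # punish for one episode after a detected defection, unless being punished
        self.punishing = self.detector.defection_detected() and not being_punished

    def act(self, state):
        return self.punish.act(state) if self.punishing else self.coop.act(state)
```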
Algorithm 1 summarizes L-TFT. Note that, while our experiments use a standard model-free learning algorithm for the base algorithms O C , O P , this framework is highly general. For instance, the players might be model-based planners who choose actions by optimizing against an environment model with parameter θ i , which is updated over time. Moreover, with more sophisticated methods for inferring the other player's learning algorithm, L-TFT algorithms may also be constructed in games with partial state observability, or uncertainty about the other players' rewards. Algorithm 1: Learning tit-for-tat (L-TFT) and −R t j , respectively. Agents follow Boltzmann exploration with respect to the estimated Q-function, i.e., π θi (a i | s; τ i ) ∝ exp aj Q θi (s, a i , a j )/τ i , where Q θi is a neural network with parameter θ i . The initial temperature τ i is decayed according to a schedule g(•, τ i ), such that the temperature at each time step t is τ t i = g(t, τ i ). (In our experiments g linearly decreases to a minimum value, and is constant thereafter.) Our L-TFT algorithm punishes for one episode after it has detected a defection. We assume that players may broadcast punishment to their counterpart. L-TFT agents do not update their cooperative and defecting policy estimates if their counterpart broadcasts punishment, and do not punish if they are currently being punished; for simplicity we suppress the dependence of agents' policies and of the state on these broadcasts. Input: Observation history H t , \n Testing for defection: L-TFT q To decide whether to punish, our agent conducts a goodness-of-fit test for the hypothesis that the other player is cooperating. A goodness-of-fit test (e.g., Lemeshow and Hosmer Jr 1982) compares the fit of a simple model with the fit of a flexible model. If the fit of the simple model is sufficiently close to that of the flexible model, the simple model is deemed adequate. Otherwise, the simple model is said to fit poorly. Here, the simple model we assess is the hypothesis that the other player is using cooperative DDQN with Boltzmann exploration. The flexible model we would like to compare to is that the other player is following an arbitrary learning algorithm (which need not be cooperative). One approach to estimating this model could be to solve the penalized maximum likelihood problem θ D,1 j , . . . , θ D,t j = arg max θ D,1 j ,...,θ D,t j t v=1 log π D θ D,v j (A v | S v ) + t −1 λ t v=1 θ D,v j 2 , ( 1 ) where π D θj is a class of neural networks that output a distribution over actions given a state. But solving 1 is computationally infeasible. As an approximation, we maintain a sequence { θ D,v j } t v=1 by taking gradient steps on the supervised learning loss at each time step. We maintain a buffer of log likelihoods under the hypothesis of defection, L D,t j = {log π D θ D,v j (A v j | S v )} t v=t−t . (The estimate θ D,t j and log likelihood buffer L D,t j are not updated while the other player broadcasts punishment.) On the other hand, to obtain likelihoods under the hypothesis of cooperation, player j is cooperating, we need to estimate the initial temperature τ j under the assumption that player j is following a cooperative learning algorithm for some τ j . We would thus like to solve τ t j = arg max τj t v=1 log π θ C,v j {A v | S v ; g(v, τ j )} . (2) As a tractable approximation to the solution of problem 2, we maintain an estimated cooperative Qnetwork θ C,t j via DDQN updates on reward signal R t 1 + R t 2 (only when not being punished). 
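The temperature-selection step can be sketched as follows; this is our illustration, under the assumption that the estimated cooperative Q-network exposes per-action Q-values, and the buffer length and candidate grid are illustrative choices.

```python
# Sketch of the approximate maximum-likelihood temperature selection: score the
# opponent's observed actions under Boltzmann exploration for a grid of
# candidate initial temperatures and keep a log-likelihood buffer for each.
import numpy as np
from collections import deque

def boltzmann_log_prob(q_values, action, temperature):
    """log pi(a | s; tau) for pi(a) proportional to exp(Q(s, a) / tau)."""
    z = q_values / max(temperature, 1e-6)
    z = z - z.max()                       # numerical stabilisation
    return z[action] - np.log(np.exp(z).sum())

class TemperatureTracker:
    def __init__(self, candidate_taus, decay_schedule, buffer_len=200):
        self.taus = candidate_taus        # grid of candidate initial temperatures
        self.decay = decay_schedule       # g(t, tau) -> temperature at step t
        self.buffers = {tau: deque(maxlen=buffer_len) for tau in candidate_taus}
        self.totals = {tau: 0.0 for tau in candidate_taus}

    def record(self, t, q_values, opponent_action):
        for tau in self.taus:
            lp = boltzmann_log_prob(q_values, opponent_action, self.decay(t, tau))
            self.buffers[tau].append(lp)
            self.totals[tau] += lp

    def best_tau(self):
        """Highest-likelihood candidate; its buffer feeds the hypothesis test."""
        return max(self.taus, key=lambda tau: self.totals[tau])
```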
We track the likelihood of player j's actions under θ C,t j over a set of candidate values for τ j ; and take τ t j to be the highest-likelihood such value at time t. For each τ j , we also maintain a buffer of log likelihoods 1 L C,t j (τ j ) = {log π C,v j A v j | S v ; g(v, τ j ) } t v=t−t . (Estimate θ C,t j and buffers L C,t j (τ j ) are again not udpated while the opponent broadcasts punishment.) Finally, the hypothesis test is conducted at the end of each episode by bootstrapping [Efron, 1992] the difference in mean log likelihoods over buffers L C,t j ( τ t j ) and L D,t j . The hypothesis of cooperation is rejected when the q th quantile of this distribution is less than 0. This hypothesis test is summarized in Algorithm 2. We refer to the variant of L-TFT which uses this test as L-TFT q . Exploiting L-TFT q We also construct exploiter agents to play against L-TFT q . These agents assume their counterpart is L-TFT q for a particular value of q, simulate the hypothesis test used by L-TFT q , and act (myopically) depending on the result of the test. That is, if an exploiter predicts that a defection would be detected by a test with cutoff q, it acts according to the cooperative policy; otherwise, it takes a selfish action (according to a DDQN network trained on its reward signal only). Call this agent Exploiter q . The networks maintained by Exploiter q are: 1) a cooperative network trained on R t 1 + R t 2 , 2) a selfish network trained on R t i , 3) an estimate of its own policy under the hypothesis that it is cooperating θ C,l j , and 4) an estimate of its own policy under the hypothesis that it is defecting θ D,l j . Algorithm 2: Hypothesis test for L-TFT q Input: Defection log likelihood buffer L D j , cooperative log likelihood buffer L C j ( τ t j ), cutoff quantile q, test statistics buffer {∆ (q),v j } t−1 v=t−u /* t indexes episodes */ ∆ j ← ∅ for b = 1, . . . , B do L C,b j ← Bootstrap L C,t j ( τ t j ) L D,b j ← Bootstrap(L D,t j ) L C,b j ← B −1 C j ∈L C,b j C j L D,b j ← B −1 D j ∈L D,b j D j ∆ j ← ∆ j ∪ L C,b j − L D,b j ∆ (q),t j ← Percentile(∆ j , q) ∆(q),t j ← u −1 t v=t−u ∆ (q),v j // Average previous test statistics for stability if ∆(q),t j < 0 then defection_detected ← True else defection_detected ← False return defection_detected \n Environments The environments we use are the iterated Prisoner's Dilemma (IPD) and the Coin Game environment 2 . In both cases, episodes are 20 timesteps long. Our version of the IPD involves repeated play of the matrix game with expected payoffs as in Table 1 . Policy π θi gives the probability of each action given player j's action at the previous timestep. In our implementation, this means that the state passed to the estimated Q-function is the profile of actions taken at the last step. Player 2 C D Player 1 C −1, −1 −3, 0 D 0, −3 −2, −2 Table 1: Prisoner's Dilemma expected payoffs. Figure 1 : Coin Game Payoffs. Write the expected reward function, corresponding to the payoffs in Table 1 , as r i (a i , a j ). We introduce randomness by generating rewards as R t i ∼ N r i (A t i , A t j ), 0.1 . In the Coin Game environment depicted in Figure 1 , a Red and Blue player navigate a grid and pick up randomly-generated coins. Each player gets a reward of 1 for picking up a coin of any color. But, a player gets a reward of −2 if the other player picks up their coin. 
This creates a social dilemma in which the socially optimal behavior is to only get one's own coin, but there is incentive to defect and try to get the other player's coin as well. The state space consists of encodings of each player's location and the location of the coin. We use networks with two convolutional layers and two fully connected feedforward layers. (bottom) . For IPD, the prob axis is the probability of mutual cooperation (cc) and mutual defection (dd), For Coin Game, the pick_own axis is the proportion of times the coin is picked by the agent of its own color (i.e., the agents cooperate). The payoffs axis shows a moving average of player rewards after each episode. These values are averaged over 10 replicates and bands are (min, max) values. Note that temperature values are constant in the IPD after 3000 episodes and in the Coin Game after 30000 episodes. \n Results Figure 2 shows the performance of L-TFT q in self-play, against Exploiter q (i.e., an exploiter with a correctly-specified value of q). L-TFT q is able to converge to welfare-optimal payoffs in self-play and against the Exploiter q (suggesting that the exploiter eventually anticipates being punished if it defects, and therefore chooses to cooperate). This indicates that our hypothesis test effectively detects defection. L-TFT q also effectively punishes a naive selfish learner (i.e., one trained with DDQN on R t j ); see Figure 3 in the Supplementary Materials. Punishing the naive learner comes as the expense of low payoffs for L-TFT. However, this is the expected behavior, just as the punishments which enforce equilibria in classical iterated games can lead to low payoffs for each player. \n Strategic choice of q While our previous experiments with Exploiter q assumed that the exploiter knew the cutoff quantile used in L-TFT's hypothesis test, this seems like too strong an assumption for real-world agents deployed by different principals. Here we consider a game played between 1) one principal deciding which value of q 1 to specify for its L-TFT q1 agent, and 2) a second principal deciding among L-TFT q2 and Exploiter q2 for different values of q 2 . Table 2 shows the average payoffs to each player over a time horizon of 4000 steps. Table 2: Empirical payoff matrix for a learning game in which the base game is an iterated Prisoner's Dilemma played for 4000 timesteps. The row player chooses L-TFT q for q ∈ {0.55, 0.75, 0.95}. The column player chooses either L-TFT q or Exploiter q for q ∈ {0.55, 0.75, 0.95}. Cells give (row player payoffs, column player payoffs), where payoffs are the mean of rewards over the span of the game, averaged over 10 replicates. Standard errors are all less than or equal to 0.1. It turns out that the unique Nash equilibrium of this empirical learning game 3 is one in which row player chooses L-TFT 0.95 with probability 0.88 and L-TFT 0.55 with probability 0.12, while the column player chooses L-TFT 0.55 with probability 0.88 and Exploiter 0.95 with probability 0.12. The expected payoffs in equilibrium are −1.23, −1.08 for row and column player, respectively. This suggests that L-TFT q is vulnerable to some amount of exploitation on short time horizons, although still does better than the mutual-defection payoff of −2. On the other hand, L-TFT q1 converges to mutual cooperation 4 against both L-TFT q2 and Exploiter q2 for each value of q 1 , q 2 . Thus L-TFT avoids exploitation over longer time horizons in this learning game. 
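For reproducibility, the reported mixed equilibrium of this empirical 3x6 game can be recomputed with the Nashpy library, which the authors cite; in the sketch below the payoff matrices are transcribed from Table 2, and the column ordering (L-TFT then Exploiter, for q = 0.55, 0.75, 0.95) is our reading of the table caption, so it should be treated as an assumption.

```python
# Sketch of recomputing the mixed Nash equilibrium of the empirical 3x6
# learning game with Nashpy. Row/column payoffs are transcribed from Table 2.
import numpy as np
import nashpy as nash

row_payoffs = np.array([
    [-1.28, -0.97, -0.98, -1.03, -1.06, -0.91],
    [-1.44, -1.13, -1.09, -1.19, -1.25, -1.21],
    [-1.21, -1.12, -1.11, -1.20, -1.27, -1.41],
])
col_payoffs = np.array([
    [-1.20, -1.48, -1.38, -1.41, -1.43, -1.81],
    [-0.97, -1.10, -1.13, -1.11, -1.11, -1.35],
    [-1.06, -1.10, -1.11, -1.06, -1.03, -0.98],
])

game = nash.Game(row_payoffs, col_payoffs)
for sigma_row, sigma_col in game.support_enumeration():
    v_row = sigma_row @ row_payoffs @ sigma_col
    v_col = sigma_row @ col_payoffs @ sigma_col
    # the paper reports an equilibrium with expected payoffs of about (-1.23, -1.08)
    print(sigma_row, sigma_col, v_row, v_col)
```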
\n Discussion Powerful learning agents should be designed to avoid conflict when they are eventually deployed. One way to avoid such outcomes is to construct profiles of agents which avoid conflict and which rational principals prefer to deploy. One appealing criterion for joint rationality on the part of the principals is welfare-optimal learning equilibrium (rather than mere convergence to a Nash equilibrium of the base game, for instance). We have presented a class of learning algorithms, learning tit-for-tat (L-TFT), which approximately implement welfare-optimal learning equilibrium in the sense of converging to welfare-optimal payoffs in self-play and discouraging exploitation. Many open questions remain. First, while we have relied on a welfare function, we have said little about which welfare function should be used. The ideal scenario would be for the principals of the reinforcement learning systems in question to coordinate on a welfare function before deploying these systems. Moreover, it may be necessary to develop novel reinforcement learning methods tailored to the optimization of different welfare functions. For instance, our use of DDQN with the utilitarian welfare is unproblematic, as the sum of the players' reward signals still admits a Bellman equation for their cumulative reward. But welfare functions other than the utilitarian welfare (such as the Nash welfare [Nash, 1950]) do not admit a Bellman equation (see the discussion of nonlinear scalarization functions in Roijers et al.'s [2013] review of multi-objective reinforcement learning); a short sketch of this distinction follows the figure caption below. Many generalizations of our setting can be studied, including partial observability and more than two agents. Perhaps the most restrictive of our assumptions is that the agents' reward functions are known to each other. A complete framework should address the problem of incentives to misrepresent one's reward function in order to improve one's payoffs in the welfare-optimal policy profile. One direction would be to allow for uncertainty about the other players' reward functions, analogously to Bayesian games [Harsanyi, 1967], but in a way that is tractable in complex environments. This may in turn require the development of novel techniques for inference about other players' preferences in complex environments; see Song et al. [2018] and Yu et al. [2019] for recent steps in this direction. Another approach to reward uncertainty would be to use mechanism design techniques [Nisan et al., 2007, Ch. 9] to incentivize the truthful reporting of the principals' utility functions. 
Figure 2: Performance of L-TFT_{0.95} in the IPD (left) and Coin Game (right). Counterparts are L-TFT_{0.95} (top) and Exploiter_{0.95} (bottom). For the IPD, the prob axis shows the probability of mutual cooperation (cc) and mutual defection (dd); for the Coin Game, the pick_own axis shows the proportion of times the coin is picked up by the agent of its own color (i.e., the agents cooperate). The payoffs axis shows a moving average of player rewards after each episode. These values are averaged over 10 replicates and bands are (min, max) values. Note that temperature values are constant in the IPD after 3000 episodes and in the Coin Game after 30000 episodes. 
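To make the Bellman-equation point above concrete, here is a minimal sketch (our own illustration, not drawn from the paper) of why the utilitarian welfare composes with standard one-step temporal-difference targets while a nonlinear welfare such as the Nash welfare does not.

% Utilitarian welfare: the summed reward admits a per-step Bellman recursion.
% With joint reward R^W_t = R^1_t + R^2_t, the discounted welfare return is
%   \sum_t \gamma^t R^W_t = \sum_t \gamma^t R^1_t + \sum_t \gamma^t R^2_t,
% so a single action-value function has the usual one-step target:
\[
  Q^{W}(s,a) = \mathbb{E}\left[ R^1_t + R^2_t + \gamma \max_{a'} Q^{W}(S_{t+1}, a') \,\middle|\, S_t = s,\ A_t = a \right].
\]
% Nash welfare: the product of the players' discounted returns,
\[
  W_{\mathrm{Nash}} = \Bigl(\sum_t \gamma^t R^1_t\Bigr)\Bigl(\sum_t \gamma^t R^2_t\Bigr),
\]
% is not a discounted sum of per-step terms, so no scalar Q-function with a
% one-step bootstrapped target recovers it (the nonlinear-scalarization issue).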
Table 2 (payoff cells: row player, column player): 
               L-TFT_0.55     L-TFT_0.75     L-TFT_0.95     Exploiter_0.55   Exploiter_0.75   Exploiter_0.95 
L-TFT_0.55     −1.28, −1.2    −0.97, −1.48   −0.98, −1.38   −1.03, −1.41     −1.06, −1.43     −0.91, −1.81 
L-TFT_0.75     −1.44, −0.97   −1.13, −1.1    −1.09, −1.13   −1.19, −1.11     −1.25, −1.11     −1.21, −1.35 
L-TFT_0.95     −1.21, −1.06   −1.12, −1.1    −1.11, −1.11   −1.2, −1.06      −1.27, −1.03     −1.41, −0.98 
1 When computing these likelihoods, we set the Q-value of an action equal to the mean of all Q-values which are within some distance of one another (in our experiments, 0.25). This is important in Coin Game, where there are multiple paths of equal length to a coin. Thus, although estimated Q-values along different paths may be very similar, these differences are amplified by the exponentiation used in Boltzmann exploration when the temperature is small, possibly leading to false detections of defection. 
2 These experiments were implemented using extensions of OpenAI Gym [Brockman et al., 2016] and the SLM Lab framework for reinforcement learning [Keng and Graesser, 2017]. 
3 Computed using the Nashpy Python library [Knight and Campbell, 2018]. 
4 Payoffs averaged over the final 10 episodes equal to −1 ± 0.03.", "date_published": "n/a", "url": "n/a", "filename": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/nonarxiv_teis/toward_cooperation_learning_games_oct_2020.tei.xml", "id": "b53231140cf9f644a7e314b38ef0e614"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "abstract": "Ethics of brain emulations Whole brain emulation attempts to achieve software intelligence by copying the function of biological nervous systems into software. This paper aims at giving an overview of the ethical issues of the brain emulation approach, and analyse how they should affect responsible policy for developing the field. Animal emulations have uncertain moral status, and a principle of analogy is proposed for judging treatment of virtual animals. Various considerations of developing and using human brain emulations are discussed.", "authors": ["Anders Sandberg"], "title": "Ethics of brain emulations", "text": "Introduction Whole brain emulation (WBE) is an approach to achieve software intelligence by copying the functional structure of biological nervous systems into software. Rather than attempting to understand the high-level processes underlying perception, action, emotions and intelligence, the approach assumes that they would emerge from a sufficiently close imitation of the low-level neural functions, even if this is done through a software process (Merkle 1989, Sandberg & Bostrom 2008). While the philosophy (Chalmers 2010), impact (Hanson 2008) and feasibility (Sandberg 2013) of brain emulations have been discussed, little analysis of the ethics of the project has so far been done. The main questions of this paper are to what extent brain emulations are moral patients, and what new ethical concerns are introduced as a result of brain emulation technology. \n Brain emulation The basic idea is to take a particular brain, scan its structure in detail at some resolution, and construct a software model of the physiology that is so faithful to the original that, when run on appropriate hardware, it will have an internal causal structure that is essentially the same as the original brain. All relevant functions on some level of description are present, and higher level functions supervene on these. While at present an unfeasibly ambitious challenge, the necessary computing power and various scanning methods are rapidly developing. 
Large scale computational brain models are a very active research area, at present reaching the size of mammalian nervous systems (Markram 2006, Djurfeldt et al. 2008, Eliasmith et al. 2012, Preissl et al. 2012). WBE can be viewed as the logical endpoint of current trends in computational neuroscience and systems biology. Obviously the eventual feasibility depends on a number of philosophical issues (physicalism, functionalism, non-organicism) and empirical facts (computability, scale separation, detectability, scanning and simulation tractability) that cannot be predicted beforehand; WBE can be viewed as a program trying to test them empirically (Sandberg 2013). Early projects are likely to merge data from multiple brains and studies, attempting to show that this can produce a sufficiently rich model to produce nontrivial behaviour but not attempting to emulate any particular individual. However, it is not clear that this can be carried on indefinitely: higher mammalian brains are organized and simultaneously individualized through experience, and linking parts of different brains is unlikely to produce functional behaviour. This means that the focus is likely to move to developing a \"pipeline\" from brain to executable model, where ideally an individually learned behaviour of the original animal is demonstrated by the resulting emulation. Although WBE focuses on the brain, a realistic project will likely have to include a fairly complex body model in order to allow the emulated nervous system to interact with a simulated or real world, as well as the physiological feedback loops that influence neural activity. At present the only known methods able to generate complete data at cellular and subcellular resolution are destructive, making the scanned brain non-viable. For a number of reasons it is unlikely that non-destructive methods will be developed any time soon (Sandberg & Bostrom 2008, appendix E). In the following I will assume that WBE is doable, or at least doesn't suffer enough roadblocks to preclude attempting it, in order to examine the ethics of pursuing the project. \n Virtual lab animals The aim of brain emulation is to create systems that closely imitate real biological organisms in terms of behaviour and internal causal structure. While the ultimate ambitions may be grand, there are many practical uses of intermediate realistic organism simulations. In particular, emulations of animals could be used instead of real animals for experiments in education, science, medicine or engineering. Opponents of animal testing often argue that much of it is excessive and could be replaced with simulations. While the current situation is debatable, in a future where brain emulations are possible it would seem that this would be true: by definition emulations would produce the same kind of results as real animals. However, there are three problems: 
- Brain emulations might require significant use of test animals to develop the technology. 
- Detecting that something is a perfect emulation might be impossible. 
- An emulation might hold the same moral weight as a real animal by being sentient or a being with inherent value. 
\n Need for experiments Developing brain emulation is going to require the use of animals. They would be necessary not only for direct scanning into emulations, but in various experiments gathering the necessary understanding of neuroscience, testing scanning modalities and comparing the real and simulated animals. 
In order to achieve a useful simulation we need to understand at least one relevant level of the real system well enough to recreate it, otherwise the simulation will not produce correct data. What kind of lab animals would be suitable for research in brain emulation and how would they be used? At present neuroscientists use nearly all model species, from nematode worms to primates. Typically there are few restrictions on research on invertebrates (with the exception of cephalopods). While early attempts are likely to aim at simple, well defined nervous systems like the nematode Caenorhabditis elegans, Lymnaea Stagnalis (British pond snail) or Drosophila melanogaster (fruit fly), much of the neuroscience and tool development will likely involve standard vertebrate lab animals such as mice, either for in vitro experiments with tissue pieces or in vivo experiments attempting to map neural function to properties that can be detected. The nervous system of invertebrates also differ in many ways from the mammalian nervous system; while they might make good test benches for small emulations it is likely that the research will tend to move towards small mammals, hoping that successes there can be scaled up to larger brains and bodies. The final stages in animal brain emulation before moving on to human emulation would likely involve primates, raising the strongest animal protection issues. In theory this stage might be avoidable if the scaling up from smaller animal brains towards humans seems smooth enough, but this would put a greater risk on the human test subjects. Most \"slice and dice\" scanning (where the brain is removed, fixated and then analysed) avoids normal animal experimentation concerns since there is no experiment done on the living animal itself, just tissue extraction. This is essentially terminal anaesthesia (\"unclassified\" in UK classification of suffering). The only issue here is the pre-scanning treatment, whether there is any harm to the animal in its life coming to an end, and whether in silico suffering possible. However, developing brain emulation techniques will likely also involve experiments on living animals, including testing whether an in vivo preparation behaves like an in vitro and an in silico model. This will necessitate using behaving animals in ways that could cause suffering. The amount of such research needed is at present hard to estimate. If the non-organicism assumption of WBE is correct, most data gathering and analysis will deal with low-level systems such as neuron physiology and connectivity rather than the whole organism; if all levels are needed, then the fundamental feasibility of WBE is cast into question (Sandberg 2013) . \n What can we learn from emulations? The second problem is equivalent to the current issue of how well animal models map onto human conditions, or more generally how much models and simulations in science reflect anything about reality. The aim is achieving structural validity (Zeigler, 1985 , Zeigler, Praehofer, & Kim, 2000 , that the emulation reflects how the real system operates. Unfortunately this might be impossible to prove: there could exist hidden properties that only very rarely come into play that are not represented. Even defining meaningful and observable measures of success is nontrivial when dealing with higher order systems (Sandberg 2013) . Developing methods and criteria for validating neuroscience models is one of the key requirements for WBE. 
One of the peculiar things about the brain emulation program is that unlike many scientific projects the aim is not directly full understanding of the system that is being simulated. Rather, the simulation is used as a verification of our low-level understanding of neural systems and is intended as a useful tool. Once successful, emulations become very powerful tools for further investigations (or valuable in themselves). Before that stage the emulation does not contribute much knowledge about the full system. This might be seen as an argument against undertaking the WBE project: the cost and animals used are not outweighed by returns in the form of useful scientific knowledge. However, sometimes very risky projects are worth doing because they promise very large eventual returns (consider the Panama Canal) or might have unexpected but significant spin-offs (consider the Human Genome Project). Where the balance lies depends both on how the evaluations are made and the degree of long-term ambition. \n What is the moral status of an emulation? The question what moral consideration we should give to animals lies at the core of the debate about animal experimentation ethics. We can pose a similar question about what moral claims emulations have on us. Can they be wronged? Can they suffer? Indirect theories argue that animals do not merit moral consideration, but the effect of human actions on them does matter. The classic example is Kantian theories, where animals lack moral autonomy and hence are not beings whose interests morally count. Our duties towards them are merely indirect duties towards humanity. Being cruel to animals harms our own humanity: \"Our duties towards animals are merely indirect duties towards humanity. Animal nature has analogies to human nature, and by doing our duties to animals in respect of manifestations of human nature, we indirectly do our duty to humanity…. We can judge the heart of a man by his treatment of animals.\" (Regan and Singer, 1989: 23-24) By this kind of indirect account the nature of the emulation does not matter: if it is cruel to pinch the tail of biological mice the same cruel impulse is present in pinching the simulated tail of an emulated mouse. It is like damaging an effigy: it is the intention behind doing damage that is morally bad, not the damage. Conversely, treating emulations well might be like treating dolls well: it might not be morally obligatory but its compassionate. A different take on animal moral considerability come from social contract or feminist ethics, arguing against the individualist bias they perceive in the other theories. What matters is not intrinsic properties but the social relations we have with animals. \"Moral considerability is not an intrinsic property of any creature, nor is it supervenient on only its intrinsic properties, such as its capacities. It depends, deeply, on the kind of relations they can have with us\" (Anderson 2004) If we have the same kind of relations to an emulated animal as a biological animal, they should presumably be treated similarly. Since successful emulations (by assumption) also have the same capacity to form reciprocal relations, this seems likely. Another large set of theories argue that the interests of animals do count morally due to intrinsic properties. Typically they are based on the sentience of animals giving them moral status: experiences of pleasure or suffering are morally relevant states no matter what system experiences them. 
Whether animals are sentient or not is usually estimated from the Argument from Analogy, which supports claims of consciousness by looking at similarities between animals and human beings. Species membership is not a relevant factor. These theories differ on whether human interests can still trump animal interests or whether animals actually have the same moral status as human beings. For the present purpose the important question is whether software emulations can have sentience, consciousness or the other properties these theories ground moral status on. Animal rights can be argued on other grounds than sentience, such as animals having beliefs, desires and self-consciousness of their own, and hence having inherent value and rights as subjects of a life that has inherent value (Regan 1983). Successfully emulated animals would presumably behave in similar ways: the virtual mouse will avoid virtual pain; the isolated social animal will behave in a lonely fashion. Whether the mere behaviour of loneliness or pain-avoidance is an indication of a real moral interest even when we doubt it is associated with any inner experience is problematic: most accounts of moral patienthood take experience as fundamental, because that actually ties the state of affairs to a value, the welfare of something. But theories of value that ascribe value to non-agents can of course allow non-conscious software as a moral patient (for example, as having value by virtue of its unique complexity). To my knowledge nobody has yet voiced concern that existing computational neuroscience simulations could have aversive experiences. In fact, the assumption that simulations do not have phenomenal consciousness is often used to motivate such research: \"Secondly, one of the more obvious features of mathematical modelling is that it is not invasive, and hence could be of great advantage in the study of chronic pain. There are major ethical problems with the experimental study of chronic pain in humans and animals. It is possible to use mathematical modelling to test some of the neurochemical and neurophysiological features of chronic pain without the use of methods which would be ethically prohibitive in the laboratory or clinic. Sternbach has observed \"Before inflicting pain on humans, can mathematical or statistical modelling provide answers to the questions being considered?\" (p262) (53). We claim that mathematical modelling has the potential to add something unique to the armamentarium of the pain researcher.\" (Britton & Skevington 1996) To some degree this view is natural because typical computational simulations contain just a handful of neurons. It is unlikely that such small systems could suffer.1 However, the largest simulations have reached millions or even billions of neurons: we are reaching the numbers found in the brains of small vertebrates that people do find morally relevant. The lack of meaningful internal structure in the network probably prevents any experience from occurring, but this is merely a conjecture. 
1 However, note the Small Network Argument (Herzog et al. 2007): \"… for each model of consciousness there exists a minimal model, i.e., a small neural network, that fulfills the respective criteria, but to which one would not like to assign consciousness\". Mere size of the model is not a solution: there is little reason to think that 10^11 randomly connected neurons are conscious, and appeals to the right kind of complexity of interconnectedness run into the argument again. One way out is to argue that fine-grained consciousness requires at least mid-sized systems: small networks only have rudimentary conscious contents (Taylor, 2007). Another is to bite the bullet and accept, if not panpsychism, that consciousness might exist in exceedingly simple systems. Assigning even a small probability to the possibility of suffering or moral importance in simple systems leads to far bigger consequences than just making neuroscience simulations suspect. The total number of insects in the world is so great that if they matter morally even to a tiny degree, their interests would likely overshadow humanity's interests. This is by no means a reductio ad absurdum of the idea: it could be that we are very seriously wrong about what truly matters in the world. 
Whether machines can be built to have consciousness or phenomenological states has been debated for a long time, often as a version of the strong AI hypothesis. At one extreme it has been suggested that even thermostats have simple conscious states (Chalmers 1996), making phenomenal states independent of higher level functions, while opponents of strong AI have commonly denied the possibility of any machine (or at least software) mental states. See (Gamez 2008) for a review of some current directions in machine consciousness. It is worth noting that there are cognitive scientists who produce computational models they consider able to have consciousness (as per their own theories).2 Consider the case of Rodney Cotterill's CyberChild, a simulated infant controlled by a biologically inspired neural network and with a simulated body (Cotterill 2003). Within the network, different neuron populations corresponding to brain areas such as the cerebellum, brainstem nuclei, motor cortices, sensory cortex, hippocampus and amygdala are connected according to an idealized mammalian brain architecture with learning, attention and efference copy signals. The body model has some simulated muscles and states such as levels of blood glucose, milk in the stomach, and urine in the bladder. If the glucose level drops too much it \"expires\". The simulated voice and motions allow it to interact with a user, trying to survive by getting enough milk. Leaving aside the extremely small neural network (20 neurons per area), it is an ambitious project. This simulation does attempt to implement a model of consciousness, and the originator was hopeful that there was no fundamental reason why consciousness could not ultimately develop in it. However, were the CyberChild conscious, it would have a very impoverished existence. It would exist in a world of mainly visual perception, except for visceral inputs, 'pain' from full nappies, and hunger. Its only means of communication is crying and the only possible response is the appearance (or not) of a bottle that has to be manoeuvred to the mouth. Even if the perceptions did not have any aversive content there would be no prospect of growth or change. This is eerily similar to Metzinger's warning (Metzinger 2003, p. 621): \"What would you say if someone came along and said, \"Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development-we urgently need some funding for this important and innovative kind of research!\" You would certainly think this was not only an absurd and appalling but also a dangerous idea. 
It would hopefully not pass any ethics committee in the democratic world. However, what today's ethics committees don't see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby-no representatives in any ethics committee.\" He goes on to argue that we should ban all attempts to create, or even risk the creation of, artificial systems that have phenomenological self-models. While views on what the particular criterion for being able to suffer is might differ between thinkers, it is clear that the potential for suffering software should be a normative concern. However, as discussed in mainstream animal rights ethics, other interests (such as human interests) can sometimes be strong enough to allow animal suffering. Presumably such interests (if these accounts of ethics are correct) would also allow for creating suffering software. David Gamez (2005) suggests a probability scale for machine phenomenology, based on the intuition that machines built along the same lines as human beings are more likely to have conscious states than other kinds of machines. This scale aims to quantify how likely a machine is to be ascribed the ability to exhibit such states (and, to some extent, to address Metzinger's ethical concerns without stifling research). In the case of WBE, the strong isomorphism with animal brains gives a fairly high score.3 Arrabales, Ledezma, and Sanchis (2010), on the other hand, suggest a scale for the estimation of the potential degree of consciousness based on architectural and behavioural features of an agent; again, a successful or even partial WBE implementation of an animal would by definition score highly (with a score dependent on species). The actual validity and utility of such scales can be debated, but insofar as they formalize intuitions from the Argument from Analogy about potential mental content, they show that WBE at least has significant apparent potential of being a system with states that might make it a moral patient. WBE is different from entirely artificial software in that it deliberately tries to be as similar as possible to morally considerable biological systems, and this should make us more ethically cautious than with other software. Much to the point of this section, Dennett has argued that creating a machine able to feel pain is nontrivial, to a large extent because of the incoherencies in our ordinary concept of pain (Dennett 1978). However, he is not against the possibility in principle: \"If and when a good physiological sub-personal theory of pain is developed, a robot could in principle be constructed to instantiate it. Such advances in science would probably bring in their train wide-scale changes in what we found intuitive about pain, so that the charge that our robot only suffered what we artificially called pain would lose its persuasiveness. In the meantime (if there were a cultural lag) thoughtful people would refrain from kicking such a robot.\" (Dennett 1978 p. 449) 
3 For an electrophysiological WBE model the factors are FW1, FM1, FN4, AD3, with rate, size and time slicing possibly ranging over the whole range. This produces a weighting ranging between 10^-5 and 0.01, giving an ordinal ranking of 170-39 out of 812. The highest weighting beats the neural-controlled animat of DeMarse et al., a system containing real biological neurons controlling a robot. 
From the eliminative materialist perspective we should hence be cautious about ascribing or not ascribing suffering to software, since we do not (yet) have a good understanding of what suffering is (or rather, of what the actual underlying, morally relevant component is). In particular, successful WBE might indeed represent a physiological sub-personal theory of pain, but it might be as opaque to outside observers as real physiological pain. The fact that at present there does not seem to be any idea of how to solve the hard problem of consciousness or how to detect phenomenal states seems to push us in the direction of suspending judgement: \"Second, there are the arguments of Moor (1988) and Prinz (2003), who suggest that it may be indeterminable whether a machine is conscious or not. This could force us to acknowledge the possibility of consciousness in a machine, even if we cannot tell for certain whether this is the case by solving the hard problem of consciousness.\" (Gamez 2008) While the problem of animal experience and status is contentious, the problem of emulated experience and status will by definition be even more contentious. Intuitions are likely to strongly diverge and there might not be any empirical observations that could settle the differences. \n The principle of assuming the most What to do in a situation of moral uncertainty about the status of emulations? 4 It seems that a safe strategy would be to make the most cautious assumption: Principle of Assuming the Most (PAM): Assume that any emulated system could have the same mental properties as the original system and treat it correspondingly. The mice should be treated the same in the real laboratory as in the virtual. It is better to treat a simulacrum as the real thing than to mistreat a sentient being. This has the advantage that many of the ethical principles, regulations and guidance in animal testing can be carried over directly to the pursuit of brain emulation. This has some similarity to the Principle of Substrate Non-Discrimination (\"If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.\") (Bostrom & Yudkowsky 2011) but does not assume that the conscious experience is identical. On the other hand, if one were to reject the principle of substrate non-discrimination on some grounds, then it seems that one could also reject PAM, since one would then have a clear theory of what systems have moral status. However, this seems a presumptuous move given the uncertainty of the question. Note that once the principle is applied, it makes sense to investigate in what ways the assumptions can be sharpened. If there are reasons to think that certain mental properties are not present, they overrule the principle in that case. An emulated mouse that does not respond to sensory stimuli is clearly different from a normal mouse. It is also relevant to compare to the right system. For example, the CyberChild, despite its suggestive appearance, is not an emulation of a human infant but at most an etiolated subset of neurons in a generic mammalian nervous system. 
It might be argued that this principle is too extreme, that it forecloses much of the useful pain research discussed by (Britton & Skevington, 1996) . However, it is agnostic on whether there exist overruling human interests. That is left for the ethical theory of the user to determine, for example using cost-benefit methods. Also, as discussed below, it might be quite possible to investigate pain systems without phenomenal consciousness. \n Ameliorating virtual suffering PAM implies that unless there is evidence to the contrary, we should treat emulated animals with the same care as the original animal. This means in most cases that practices that would be impermissible in the physical lab are impermissible in the virtual lab. Conversely, counterparts to practices that reduce suffering such as analgesic practices should be developed for use on emulated systems. Many of the best practices discussed in (Schofield 2002 ) can be readily implemented: brain emulation technology by definition allows parameters of the emulation can be changed to produce the same functional effect as the drugs have in the real nervous system. In addition, pain systems can in theory be perfectly controlled in emulations (for example by inhibiting their output), producing \"perfect painkillers\". However, this is all based on the assumption that we understand what is involved in the experience of pain: if there are undiscovered systems of suffering careless research can produce undetected distress. It is also possible to run only part of an emulation, for example leaving out or blocking nociceptors, the spinal or central pain systems, or systems related to consciousness. This could be done more exactly (and reversibly) than in biological animals. Emulations can also be run for very brief spans of time, not allowing any time for a subjective experience. But organisms are organic wholes with densely interacting parts: just like in real animal ethics there will no doubt exist situations where experiments hinge upon the whole behaving organism, including its aversive experiences. It is likely that early scans, models and simulations will often be flawed. Flawed scans would be equivalent to animals with local or global brain damage. Flawed models would introduce systemic distortions, ranging from the state of not having any brain to abnormal brain states. Flawed simulations (broken off because of software crashes) would correspond to premature death (possibly repeated, with no memorysee below). Viewed in analogy with animals it seems that the main worry should be flawed models producing hard-to-detect suffering. Just like in animal research it is possible to develop best practices. We can approximate enough of the inner life of animals from empirical observations to make some inferences; the same process is in principle possible with emulations to detect problems peculiar to their state. In fact, the transparency of an emulation to datagathering makes it easier to detect certain things like activation of pain systems or behavioural withdrawal, and backtrack their source. \n Quality of life An increasing emphasis is placed not just on lack of suffering among lab animals but on adequate quality of life. What constitutes adequate is itself a research issue. In the case of emulations the problem is that quality of life presumably requires both an adequate body, and an adequate environment for the simulated body to exist in. 
The VR world of an emulated nematode or snail is likely going to be very simple and crude even compared to their normal petri dish or aquarium, but the creatures are unlikely to consider that bad. But as we move up among the mammals, we will get to organisms that have a quality of life. A crude VR world might suffice for testing the methods, but would it be acceptable to keep a mouse, cat or monkey in an environment that is too bare for any extended time? Worse, can we know in what ways it is too bare? We have no way of estimating the importance rats place on smells, and whether the smell in the virtual cage are rich enough to be adequate. The intricacy of body simulations also matters: how realistic does a fur have to feel to simulated touch to be adequate? I estimate that the computational demands of running a very realistic environment are possible to meet and not terribly costly compared to the basic simulation (Sandberg & Bostrom 2008, p. 76-78) . However, modelling the right aspects requires a sensitive understanding of the lifeworlds of animals we might simply be unable to reliably meet. However, besides the ethical reasons to pursue this understanding there is also a practical need: it is unlikely emulations can be properly validated unless they are placed in realistic environments. \n Euthanasia Most regulations of animal testing see suffering as the central issue, and hence euthanasia as a way of reducing it. Some critics of animal experimentation however argue that an animal life holds intrinsic value, and ending it is wrong. In the emulation case strange things can happen, since it is possible (due to the multiple realizability of software) to create multiple instances of the same emulation and to terminate them at different times. If the end of the identifiable life of an instance is a wrong, then it might be possible to produce large number of wrongs by repeatedly running and deleting instances of an emulation even if the experiences during the run are neutral or identical. Would it matter if the emulation was just run for a millisecond of subjective time? During this time there would not be enough time for any information transmission across the emulated brain, so presumably there could not be any subjective experience. Accounts of value of life built upon being a subject of a life would likely find this unproblematic: the brief emulations do not have a time to be subjects, the only loss might be to the original emulation if this form of future is against its interests. Conversely, what about running an emulation for a certain time, making a backup copy of its state, and then deleting the running emulation only to have it replaced by the backup? In this case there would be a break in continuity of the emulation that is only observable on the outside, and a loss of experience that would depend on the interval between the backup and the replacement. It seems unclear that anything is lost if the interval is very short. Regan argues that the harm of death is a function of the opportunities of satisfaction it forecloses (Regan 1983) ; in this case it seems that it forecloses the opportunities envisioned by the instance while it is running, but it is balanced by whatever satisfaction can be achieved during that time. Most concepts of the harmfulness of death deal with the irreversible and identity-changing aspects of the cessation of life. Typically, any reversible harm will be lesser than an irreversible harm. 
Since emulation makes several of the potential harms of death (suffering while dying, stopping experience, bodily destruction, changes of identity, cessation of existence) completely or partially reversible it actually reduces the sting of death. In situations where there is a choice between the irreversible death of a biological being and an emulation counterpart, the PAM suggests we ought to play it safe: they might be morally equivalent. The fact that we might legitimately doubt whether the emulation is a moral patient doesn't mean it has a value intermediate between the biological being and nothing, but rather that the actual value is either full or none, we just do not know which. If the case is the conversion of the biological being into an emulation we are making a gamble that we are not destroying something of value (under the usual constraints in animal research of overriding interests, or perhaps human autonomy in the case of a human volunteer). However, the reversibility of many forms of emulation death may make it cheaper. In a lifeboat case (Regan, 1983) , should we sacrifice the software? If it can be restored from backup the real loss will be just the lost memories since last backup and possibly some freedom. Death forecloses fewer opportunities to emulations. It might of course be argued that the problem is not ending emulations, but the fundamental lack of respect for a being. This is very similar to human dignity arguments, where humans are assumed to have intrinsic dignity that can never be removed, yet it can be gravely disrespected. The emulated mouse might not notice anything wrong, but we know it is treated in a disrespectful way. There is a generally accepted view that animal life should not be taken wantonly. However, emulations might weaken this: it is easy and painless to end an emulation, and it might be restored with equal ease with no apparent harm done. If more animals are needed, they can be instantiated up to the limits set by available hardware. Could emulations hence lead to a reduction of the value of emulated life? Slippery slope arguments are rarely compelling; the relevant issues rather seem to be that the harm of death has been reduced and that animals have become (economically) cheap. The moral value does not hinge on these factors but on the earlier discussed properties. That does not mean we should ignore risks of motivated cognition changing our moral views, but the problem lies in complacent moral practice rather than emulation. \n Conclusion Developing animal emulations would be a long-running, widely distributed project that would require significant animal use. This is not different from other major neuroscience undertakings. It might help achieve replacement and reduction in the long run, but could introduce a new morally relevant category of sentient software. Due to the uncertainty about this category I suggest a cautious approach: it should be treated as the corresponding animal system absent countervailing evidence. While this would impose some restrictions on modelling practice, these are not too onerous, especially given the possibility of better-than-real analgesia. However, questions of how to demonstrate scientific validity, quality of life and appropriate treatment of emulated animals over their \"lifespan\" remain. \n Human emulations Brain emulation of humans raise a host of extra ethical issues or sharpen the problems of proper animal experimentation. 
\n Moral status The question of moral status is easier to handle in the case of human emulations than in the animal case, since they can report back about their state. If a person who is sceptical of brain emulations being conscious or having free will is emulated and, after due introspection and consideration, changes their mind, then that would seem to be some evidence in favour of emulations actually having an inner life. It would actually not prove anything stronger than that the processes whereby a person changes their mind are correctly emulated and that there would be some disconfirming evidence in the emulation. It could still lack consciousness and be a functional philosophical zombie (assuming this concept is even coherent). If philosophical zombies existed, it seems likely that they would be regarded as persons (at least in the social sense). They would behave like persons, they would vote, they would complain and demand human rights if mistreated, and in most scenarios there would not be any way of distinguishing the zombies from the humans. Hence, if emulations of human brains work well enough to exhibit human-like behaviour rather than mere human-like neuroscience, legal personhood is likely to eventually follow, despite the misgivings of sceptical philosophers.5 
5 Francis Fukuyama, for example, argues that emulations would lack consciousness or true emotions, and hence lack moral standing. It would hence be morally acceptable to turn them off at will (Fukuyama, 2002, p. 167-170). In the light of his larger argument about creeping threats to human dignity, he would presumably see working human WBE as an insidious threat to dignity by reducing us to mere computation. However, exactly what factor to base dignity claims on is if anything more contested than what to base moral status on; see for example (Bostrom, 2009) for a very different take on the concept. 
\n Volunteers and emulation rights An obvious question is volunteer selection. Is it possible to give informed consent to brain emulation? The most likely scanning methods are going to be destructive, meaning that they end the biological life of the volunteer or would be applied to post-mortem brains. In the first case, given the uncertainty about the mental state of software, there is no way of guaranteeing that there will be anything \"after\", even if the scanning and emulation are successful (and there are of course the issues of personal identity and continuity). Hence volunteering while alive is essentially equivalent to assisted suicide with an unknown probability of \"failure\". It is unlikely that this will be legal on its own for quite some time even in liberal jurisdictions: suicide is increasingly accepted as a way of escaping pain, but suicide for science is not regarded as an acceptable reason.6 Some terminal patients might yet argue that they wish to use this particular form of \"suicide\" rather than a guaranteed death, and would seem to have autonomy on their side. An analogy can be made to the use of experimental therapies by the terminally ill, where concerns about harm must be weighed against uncertainty about the therapy, and where the vulnerability of the patient makes them exploitable (Falit & Gross 2008). In the second case, post-mortem brain scanning, the legal and ethical situation appears easier. There is no legal or ethical person in existence, just the preferences of a past person and the rules for handling anatomical donations. 
6 The Nuremberg code states: \"No experiment should be conducted, where there is an a priori reason to believe that death or disabling injury will occur; except, perhaps, in those experiments where the experimental physicians also serve as subjects.\" But self-experimentation is unlikely to make high-risk studies that would otherwise be unethical ethical. Some experiments may produce such lasting harm that they cannot be justified for any social value of the research (Miller and Rosenstein, 2008). 
However, this also means that a successful brain emulation based on a person would exist in a legal limbo. Current views would hold it to be a possession of whatever institution performed the experiment rather than a person.7 Presumably a sufficiently successful human brain emulation (especially if it followed a series of increasingly plausible animal emulations) would be able to convince society that it was a thinking, feeling being with moral agency and hence entitled to various rights. The PAM would support this: even if one were sceptical of whether the being was \"real\", the moral risk of not treating a potential moral agent well would be worse than the risk of treating non-moral agents better than needed. Whether this would be convincing enough to have the order of death nullified and the emulation regarded as the same legal person as the donor is another matter, as are issues of property ownership. The risk of ending up a non-person and possibly being used against one's will for someone's purposes, ending up in a brain-damaged state, or ending up in an alien future, might not deter volunteers. It certainly doesn't deter people signing contracts for cryonic preservation today, although they are fully aware that they will be stored as non-person anatomical donations and might be revived in a future with divergent moral and social views. Given that the alternative is certain death, it appears to be a rational choice for many. 
7 People involved in attempts at preserving legally dead but hopefully recoverable patients are trying to promote recognition of some rights of stored individuals. See for instance the Bill of Brain Preservation Rights (Brain Preservation Foundation 2013). Specifically, it argues that persons in storage should be afforded similar rights to living humans in temporarily unconscious states. This includes ensuring quality medical treatment and long-term storage, but also revival rights (\"The revival wishes of the individual undergoing brain preservation should be respected, when technically feasible. This includes the right to partial revival (memory donation instead of identity or self-awareness revival), and the right to refuse revival under a list of circumstances provided by the individual before preservation.\") and legal rights allowing stored persons to retain some monetary or other assets in trust form that could be retrieved upon successful revival. This bill of rights would seem to suggest similar rights for stored emulations. 
\n Handling of flawed, distressed versions While this problem is troublesome for experimental animals, it becomes worse for attempted human emulations. The reason is that unless the emulation is so damaged that it cannot be said to be a mind with any rights, the process might produce distressed minds that are rightsholders yet have existences not worth living, or lack the capacity to form or express their wishes. 
For example, they could exist in analogues to persistent vegetative states, dementia, schizophrenia, aphasia, or have on-going very aversive experience. Many of these ethical problems are identical to current cases in medical ethics. One view would be that if we are ethically forbidden from pulling the plug of a counterpart biological human, we are forbidden from doing the same to the emulation. This might lead to a situation where we have a large number of emulation \"patients\" requiring significant resources, yet not contributing anything to refining the technology nor having any realistic chance of a \"cure\". However, brain emulation allows a separation of cessation of experience from permanent death. A running emulation can be stopped and its state stored, for possible future reinstantiation. This leads to a situation where at least the aversive or meaningless experience is stopped (and computational resources freed up), but which poses questions about the rights of the now frozen emulations to eventual revival. What if they were left on a shelf forever, without ever restarting? That would be the same as if they had been deleted. But do they in that case have a right to be run at least occasionally, despite lacking any benefit from the experience? Obviously methods of detecting distress and agreed on criteria for termination and storage will have to be developed well in advance of human brain emulation, likely based on existing precedents in medicine, law and ethics. Persons might write advance directives about the treatment of their emulations. This appears equivalent to normal advance directives, although the reversibility of local termination makes pulling the plug less problematic. It is less clear how to handle wishes to have more subtly deranged instances terminated. While a person might not wish to have a version of themselves with a personality disorder become their successor, at the point where the emulation comes into being it will potentially be a moral subject with a right to its life, and might regard its changed personality as the right one. \n Identity Personal identity is likely going to be a major issue, both because of the transition from an original unproblematic human identity to successor identity/identities that might or might not be the same, and because software minds can potentially have multiple realisability. The discussion about how personal identity relates to successor identities on different substrates is already extensive, and will be foregone here. See for instance (Chalmers, 2010) . Instantiating multiple copies of an emulation and running them as separate computations is obviously as feasible as running a single one. If they have different inputs (or simulated neuronal noise) they will over time diverge into different persons, who have not just a shared past but at least initially very similar outlooks and mental states. Obviously multiple copies of the same original person pose intriguing legal challenges. For example, contract law would need to be updated to handle contracts where one of the parties is copieddoes the contract now apply to both? What about marriages? Are all copies descended from a person legally culpable of past deeds occurring before the copying? To what extent does the privileged understanding copies have of each other affect their suitability as witnesses against each other? How should votes be allocated if copying is relatively cheap and persons can do \"ballot box stuffing\" with copies? 
Do copies start out with equal shares of the original's property? If so, what about inactive backup copies? And so on. These issues are entertaining to speculate upon and will no doubt lead to major legal, social and political changes if they become relevant. From an ethical standpoint, if all instances are moral agents, then the key question is how obligations, rights and other properties carry over from originals to copies and whether the existence of copies change some of these. For example, making a promise \"I will do X\" is typically meant to signify that the future instance of the person making the promise will do X. If there are two instances it might be enough that one of them does X (while promising not to do X might morally oblige both instances to abstain). But this assumes the future instances acknowledges the person doing the promise as their past selfa perhaps reasonable assumption, but one which could be called into question if there is an identity affecting transition to brain emulation in between (or any other radical change in self-identity). Would it be moral to voluntarily undergo very painful and/or lethal experiments given that at the end that suffering copy would be deleted and replaced with a backup made just after making the (voluntarily and fully informed) decision to participate? It seems that current views on scientific self-experimentation do not allow such behaviour on the grounds that there are certain things it is never acceptable to do for science. It might be regarded as a combination of the excessive suffering argument (there are suffering so great that no possible advance in knowledge can outweigh its evil) and a human dignity argument (it would be a practice that degrade the dignity of humans). However, the views on what constitutes unacceptable suffering and risk has changed historically and is not consistent across domains. Performance artists sometimes perform acts that would be clearly disallowed as scientific acts, yet the benefit of their art is entirely subjective (Goodall, 1999) . It might be that as the technology becomes available boundaries will be adjusted to reflect updated estimates of what is excessive or lacks dignity, just as we have done in many other areas (e.g. reproductive medicine, transplant medicine). \n Time and communication Emulations will presumably have experience and behave on a timescale set by the speed of their software. The speed a simulation is run relative to the outside world can be changed, depending on available hardware and software. Current large-scale neural simulations are commonly run with slowdown factors on the order of a thousand, but there does not seem to be any reason precluding emulations running faster than realtime biological brains 8 . Nick Bostrom and Eliezer Yudkowsky have argued for a Principle of Subjective Rate of Time: \"In cases where the duration of an experience is of basic normative significance, it is the experience's subjective duration that counts.\" (Bostrom & Yudkowsky 2011) . By this account frozen states does not count at all. Conversely, very fast emulations can rapidly produce a large amount of positive or negative value if they are in extreme states: they might count for more in utilitarian calculations. Does human emulation have a right to real-time? Being run at a far faster or slower rate does not matter as long as an emulation is only interacting with a virtual world and other emulations updating at the same speed. But when interacting with the outside world, speed matters. 
A divergent clockspeed would make communication with people troublesome or impossible. Participation in social activities and meaningful relationships depend on interaction and might be made impossible if they speed past faster than the emulation can handle. A very fast emulation would be isolated from the outside world by lightspeed lags and from biological humans by their glacial slowness. It hence seems that insofar emulated persons are to enjoy human rights (which typically hinge on interactions with other persons and institutions) they need to have access to real-time interaction, or at least \"disability support\" if they cannot run fast enough (for example very early emulations with speed limited by available computer power). By the same token, this may mean emulated humans have a right to contact with the world outside their simulation. As Nozick's experience machine demonstrates, most people seem to want to interact with the \"real world\", although that might just mean the shared social reality of meaningful activity rather than the outside physical world. At the very least emulated people would need some \"I/O rights\" for communication within their community. But since the virtual world is contingent upon the physical world and asymmetrically affected by it, restricting access only to the virtual is not enough if the emulated people are to be equal citizens of their wider society. \n Vulnerability Brain emulations are extremely vulnerable by default: the software and data constituting them and their mental states can be erased or changed by anybody with access to the system on which they are running. Their bodies are not self-contained and their survival dependent upon hardware they might not have causal control over. They can also be subjected to undetectable violations such as illicit copying. From an emulation perspective software security is identical to personal security. Emulations also have a problematic privacy situation, since not only their behaviour can be perfectly documented by the very system they are running on, but also their complete brain states are open for inspection. Whether that information can be interpreted in a meaningful way depends on future advances in cognitive neuroscience, but it is not unreasonable to think that by the time human emulations exist many neural correlates of private mental states will be known. This would put them in a precarious situation. These considerations suggest that the ethical way of handling brain emulations would be to require strict privacy protection of the emulations and that the emulated persons had legal protection or ownership of the hardware on which they are running, since it is in a sense their physical bodies. Some technological solutions such as encrypted simulation or tamper-resistant special purpose hardware might help. How this can be squared with actual technological praxis (for example, running emulations as distributed processes on rented computers in the cloud) and economic considerations (what if an emulation ran out of funds to pay for its upkeep?) remains to be seen. \n Self-Ownership Even if emulations are given personhood they might still find the ownership of parts of themselves to be complicated. It is not obvious that an emulation can claim to own the brain scan that produced it: it was made at a point in time where the person did not legally exist. 
The process might also produce valuable intellectual property, for example useful neural networks that can be integrated into non-emulation software to solve problems, in which case the question of who has a right to the property and the proceeds from it emerges. This is not just an academic question: ownership is often important for developing technologies. Investors want returns on their investment, and innovators want to retain control over their innovations. This was apparently what partially motivated the ruling in Moore v. Regents of the University of California that a patient did not have property rights to cells extracted from his body and turned into lucrative products (Gold, 1998). This might produce pressures that work against eventual self-ownership for the brain emulations. Conversely, essential subsystems of the emulation software or hardware may be licensed or outright owned by other parties. Does the right to life or self-ownership trump property ownership? At least in the case of the first emulations, it is unlikely that they would have been able to sign any legal contracts, and they might have a claim. However, the owners might still want fair compensation. Would it be acceptable for owners of computing facilities to slow down or freeze non-paying emulations? It seems that the exact answer depends on how emulation self-ownership is framed ethically and legally. \n Global issues and existential risk The preliminary work that has been done on the economics and social impact of brain emulation suggests that it could be a massively disruptive force. In particular, simple economic models predict that copyable human capital produces explosive economic growth and (emulated) population increase, but also wages decreasing towards Malthusian levels (Hanson 1994, 2008). Economies that can harness emulation technology well might have a huge strategic advantage over latecomers. WBE could introduce numerous destabilizing effects, such as increasing inequality, the marginalization of some groups, the disruption of existing social power relationships and the creation of opportunities to establish new kinds of power, situations in which the scope of human rights and property rights is poorly defined and subject to dispute, and particularly strong triggers for racist and xenophobic prejudices or vigorous religious objections. While all these factors are speculative and depend on the details of the world and of the WBE emergence scenario, they are a cause for concern. An often underappreciated problem is existential risk: the risk that humanity and all Earth-derived life go extinct (or suffer a global, permanent reduction in potential or experience) (Bostrom 2002). Ethical analysis of the issue shows that reducing existential risk tends to take strong precedence over many other considerations (Bostrom 2003, 2013). Brain emulations have a problematic role in this regard. On the one hand they might lead to various dystopian scenarios; on the other hand they might enable some very good outcomes. As discussed above, the lead-up to human brain emulation might be very turbulent because of arms races between different groups pursuing this potentially strategic technology, fearful that other groups would reach the goal ahead of them. This might continue after the breakthrough, now in the form of wild economic or other competition. Even while the technology itself does little, the sheer scope of what it could do creates the potential for war.
It could also be that competitive pressures or social drift in a society with brain emulation leads to outcomes where value is lost. For example, wage competition between copyable minds may drive wages down to Malthusian levels and produce beings optimized only for work, spending all available resources on replication, or gradual improvements in emulation efficiency may erode axiologically valuable traits (Bostrom 2004). If emulations are zombies, humanity, tempted by cybernetic immortality, may gradually trade away its consciousness. These may be evolutionary attractors that prove inescapable without central coordination: each step towards the negative outcome is individually rational. On the other hand, there are at least four major ways emulations might lower the risks of Earth-originating intelligence going extinct: First, the existence of nonbiological humans would ensure at least partial protection from some threats: there is no biological pandemic that can wipe out software. Of course, it is easy to imagine a digital disaster, for example an outbreak of computer viruses that wipes out the brain emulations. But that threat would not affect the biological humans. By splitting the human species into two, the joint risks are significantly reduced. Clearly threats to the shared essential infrastructure remain, but the new system is more resilient. Second, brain emulations are ideally suited for colonizing space and many other environments where biological humans require extensive life support. Avoiding carrying all our eggs in one planetary basket is an obvious strategy for strongly reducing existential risk. Besides existing in a substrate-independent manner where they could be run on computers hardened for local conditions, emulations could be transmitted digitally across interplanetary distances. One of the largest obstacles to space colonisation is the enormous cost in time, energy and reaction mass needed for space travel: emulation technology would reduce this. Third, another set of species risks accrues from the emergence of machine superintelligence. It has been argued that successful artificial intelligence is potentially extremely dangerous because it would have radical potential for self-improvement yet possibly deeply flawed goals or motivation systems. If intelligence is defined as the ability to achieve one's goals in general environments, then superintelligent systems would be significantly better than humans at achieving their goals, even at the expense of human goals. Intelligence does not strongly prescribe the nature of goals (especially in systems that might have been given top-level goals by imperfect programmers). Brain emulation gets around part of this risk by replacing de novo machine intelligence with a copy of the relatively well understood human intelligence. Instead of getting potentially very rapidly upgradeable software minds with non-human motivation systems, we get messy emulations that have human motivations. This slows the \"hard takeoff\" into superintelligence, and allows existing, well-tested forms of control over behaviour (norms, police, economic incentives, political institutions) to act on the software. This is by no means a guarantee: emulations might prove to be far more upgradeable than we currently expect, motivations might shift from human norms, speed differences and socioeconomic factors may create turbulence, and the development of emulations might also create spin-off artificial intelligence.
Fourth, emulations allow exploration of another part of the space of possible minds, which might encompass states of very high value (Bostrom, 2008). Unfortunately, these considerations do not lend themselves to easy comparison. They all depend on somewhat speculative possibilities, and their probabilities and magnitudes cannot easily be compared. Rather than giving a rationale for going ahead with WBE or for stopping it, they give reasons for assuming that WBE would, were it to succeed, matter enormously. The value of information that helps determine the correct course of action is hence significant. \n Discussion: Speculation or just being proactive? Turning back from these long-range visions, we get to the mundane but essential issues of research ethics and the ethics of ethical discourse. Ethics matters because we want to do good. In order to do that we need to have some idea of what the good is and how to pursue it in the right way. It is also necessary for establishing trust with the rest of society, not just as PR or a way of avoiding backlashes, but in order to reap the benefit of greater cooperation and useful criticism. There is a real risk of both overselling and dismissing brain emulation. It has been a mainstay of philosophical thought experiments and science fiction for a long time. The potential impact for humanity (and for currently living individuals hoping for immortality) could be enormous. Unlike de novo artificial intelligence, it appears possible to benchmark progress towards brain emulation, promising a more concrete (if arduous) path towards software intelligence. It is a very concrete research goal that can be visualised, and it will likely have a multitude of useful spin-off technologies and scientific findings regardless of its eventual success. Yet these stimulating factors also make us ignore the very real gaps in our knowledge, the massive difference between current technology and the technology we can conjecture we need, and the foundational uncertainty about whether the project is even feasible. This lack of knowledge easily leads to a split into a camp of enthusiasts who assume that the eventual answers will prove positive, and a camp of sceptics who dismiss the whole endeavour. In both camps motivated cognition will filter evidence to suit the preferred interpretation, producing biased claims and preventing actual epistemic progress. There is also a risk that ethicists work hard on inventing problems that are not there. After all, institutional rewards go to ethicists who find high-impact topics to pursue, and hence it makes sense to argue that whatever topic one is studying is of higher impact than commonly perceived. Alfred Nordmann has argued that much debate about human enhancement is based on \"if … then …\" ethical thought experiments where some radical technology is first assumed, and the ethical impact is then explored in this far-fetched scenario. He argued that this wastes limited ethical resources on flights of fantasy rather than on the very real ethical problems we have today (Nordmann 2007). Nevertheless, considering potential risks and their ethical impacts is an important aspect of research ethics, even when dealing with merely possible future radical technologies. Low-probability, high-impact risks do matter, especially if we can reduce them by taking proactive steps in the present. In many cases the steps are simply to gather better information and have a few preliminary guidelines ready if the future arrives surprisingly early.
While we have little information in the present, we have great leverage over the future. When the future arrives we may know far more, but we will have less ability to change it. In the case of WBE the main conclusion of this paper is the need for computational modellers to safeguard against software suffering. At present this would merely consist of being aware of the possibility, monitor the progress of the field, and consider what animal protection practices can be imported into the research models when needed. \t\t\t See for example the contributions in the theme issue of Neural Networks Volume 20, Issue 9, November 2007. \n\t\t\t Strictly speaking, we are in a situation of moral uncertainty about what ethical system we ought to follow in general, and factual uncertainty about the experiential status of emulations. But being sure about one and not the other one still leads to a problematic moral choice. Given the divergent views of experts on both questions we should also not be overly confident about our ability to be certain in these matters. \n\t\t\t Axons typically have conduction speeds between 1-100 m/s, producing delays between a few and a hundred milliseconds in the brain. Neurons fire at less than 100 Hz. Modern CPUs are many orders of magnitude faster (in the gigaHerz range) and transmit signals at least 10% of the speed of light. A millionfold speed increase does not seem implausible.", "date_published": "n/a", "url": "n/a", "filename": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/nonarxiv_teis/Ethics of brain emulations draft.tei.xml", "id": "0707478788906957d00c00d551bd552d"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "abstract": "Alan, the executive director of a small nonprofit, is sitting down to dinner with his family. Responsibility weighs heavily on Alan. It's nearly time to reapply for a grant that's been his organization's largest source of funds. Alan has carefully cultivated a relationship with Beth, the program officer for the foundation funding the grant. Beth knows and likes Alan, and is excited about the work he's doing. If Alan's organization applies for a grant to continue their existing work, it will almost certainly be approved. But Alan isn't sure what to do. In the past few months, it's become increasingly clear that his fledgling organization's main program -the one they were able to write grants for -isn't the best fit for the needs of the people it's supposed to serve. A bright young intern, Charles, proposed a new program that, as far as Alan can tell, would be a better fit both for the team and the people they are trying to help. On the merits, it seems like Charles's idea is the right thing to do. Alan could, in reapplying, explain the limitations of the existing program, and why the new program would be better. But that would rock the boat. It would mean challenging the very story that got last year's grant. There's no guarantee Beth would like the new idea. No one else has done it. There's no track record to point to. Alan thinks of what might happen if the grant isn't approved. He'd have to lay people off, good people, and his organization might not survive. Even if it does, he'd have to take a pay cut, maybe move to a house in a less expensive neighborhood, pull his kids out of school. It's not certain this would happen, but can he afford to take that chance? Let's look across the dinner table at Alan's wife, Dana. 
If Alan tells her about his dilemma, what do we expect her to feel. What would be the safe option? Disclosing the potentially unsettling truth to the funder in the hopes of being able to do better work in the future? Or smoothing things over, tweaking the existing program to try to patch some of the worse gaps, and muddling on? What will feel to her like the responsible choice for Alan to make, for his family and the other people depending on him?", "authors": [], "title": "The Professional's Dilemma", "text": "In most cases we should expect Alan's wife to feel safer if Alan keeps up appearances. And that's why vast swathes of our professional society are underperforming their potential: because playing it safe means covering things up. Can organizations ever support new ideas? An organization that's funding outside parties to accomplish certain goals (e.g. a charitable donor or foundation giving to charities) generally won't fund ideas, even ideas from those people who are both famous and established experts, unless those ideas start off with a critical amount of their own money attached. Either the idea has to be spearheaded by someone who has enough personal funds, or it has to have raised enough money from donors/investors who no longer have the in-practice authority to dictate decision making. (For more detail on this, see Samo Burja's essay Borrowed versus Owned Power . Borrowed power is granted conditionally --if your charity is given a \"donation\" under the condition that you continue to make the donor happy, that's borrowed power. If you have the money in your own personal savings account, or if the charity raised it from so many small donors that none of them are entitled to make demands about how you spend it, that's owned power. Borrowed power is always weaker than owned power.) It's not generally considered \"credible\", by big donors, that an initiative will be effective enough to be worth funding unless the initiative has managed to cross a certain threshold in terms of acquiring resources. Moreover, if the grantee meets that threshold, gets funded, and does a good job with that money, this doesn't necessarily \"unlock\" disproportionately more resources from the big donor; the grantee will still have to hustle the same amount, or nearly so, to get the second grant. They're still running on borrowed power; they still need to keep looking over their shoulder, worrying about what the funders might think. We know it's possible to do better than this, because this paradigm would never have allowed the Manhattan Project to get funded. Einstein didn't come with his own money; he was merely a renowned physicist with a credible proposal. Somebody in the government had to literally believe his claims that a specific thing that had never been done before could be done. This kind of belief is crucially different from the sense usually used when we talk about believing in someone, in that it is about the trusted person's relationship to their field of expertise, not their relationship to the person trusting them. Not everyone can evaluate every idea, but if a person reaches a certain threshold of expertise, eminence, reputation for honesty, etc., the funder has to be able to come to believe that the content of their ideas is more likely than baseline to be valid and worth pursuing. 
As a funder, you have to be able to believe someone --not everyone, but someone --when they say an initiative is worthwhile, enough that you are willing to take the initiative to make it possible, and you don't expect to regret that decision. Otherwise, you are not looking for ideas outside your institution . You are not a \"hub\", you are not reachable, you are just doing your thing with no outside input. Another way of putting this is that funders need to be able to see potential grantees as peers . Organizations having \"friends\" --other organizations or consultants/free-agents that they trust, are mission-aligned with, and communicate openly with --is often considered unprofessional (\"incestuous\"), but is actually a good thing! There needs to be someone outside you (or your org) whom you trust to be well-informed and value-aligned. Without the capacity to process genuinely new ideas, it's less effective to have grantees than to just have employees and decide everything top-down. If you're doing something in a decentralized fashion, it should be because you actually value the decentralization --you want to get an outside perspective, or on-the-ground data, or expertise, or something. \n Principal-Agent Problems How can the principal (the funder) trust the agent (the grantee)? Most of the incentive systems in the what we hear about in charity are rather primitive; it boils down to \"demand more documentation and proof from grantees.\" This is primitive because it tries to enforce honesty by spending resources on detecting dishonesty, which can be very costly, relative to other ways of incentivizing honesty. It's not even asking the question \"what's the most efficient way to get the grantee to tell me the truth?\" The aesthetic underlying this attitude is called authoritarian high modernism; just trying to add metrics, not even engaging with the fact that metrics can and will be gamed. The people who survive in positions of power in such a system are not the ones who naively try to answer questions they're asked as accurately as possible; they're the ones who keep up appearances. Conversely, when interacting with someone who keeps up appearances and who has survived in a position of power in such a system, there is common knowledge only 'professional' efforts will be supported, and that 'professional' efforts are efforts that don't freely reveal information. You can enforce quality control in a top-down fashion if you have an impartial investigative process and an enforcement mechanism, but who investigates the investigators? Who judges the judges? Ultimately somebody has to WANT to give an honest report of what's going on. And those impartial investigators have to be incentive-aligned such that they benefit more from being honest than lying . Otherwise, even if initially most of the investigators are highly selected for honesty, by organizational maturity, all will have been selected for 'discretion'. To get honest reports, you need a mechanism designed in such a way as to systematically favor truth, much like auctions designed so the most advantageous price to bid is also each person's true price. How do you do that? \n How to solve the problem Let's work through a concrete example: Suppose you're considering giving a grant to a charitable organization. They send you a budget and ask for a dollar amount. How can you incentivize them NOT to pad the budget? 
For instance, if they expect that they might be able to get this grant but they'll have a hard time getting a second grant, they have a strong incentive to ask for enough money up front to last them several years, BUT to say that they need it all for this year. This will happen UNLESS they have reason to believe that you'll frown on budget-padding, BUT are willing to look favorably on orgs that do well in the first year for subsequent grants. This requires opening up lines of communication MUCH more for successful/trusted grantees than for random members of the public or arbitrary grant applicants. It needs to be POSSIBLE to earn your trust, and for that to unlock a certain amount of funding security. Otherwise, grantees must seek funding security --for themselves, their families, their employees, their charitable work --because without doing so they won't be able to do their job at all. They'll seek it through deceiving you, because that will feel like, and will actually be, and will be seen by all observers to be, the responsible thing for them to do . If you make it seem like \"nobody should be able to count on my support, I'll keep 'em on their toes\", they'll find something else they can count on, namely your ignorance of how their project works. The agent ALWAYS knows more than the principal about the operational details of the project! They can always keep you in the dark more effectively than you can snoop on them. So you have to make earning your trust p ossible, empowering and rewarding . You can of course revoke those rewards if they betray your trust, but ultimately you have to be much less suspicious of your trusted friends than you are of randos. Yes, this means you take on risk; but the grantees ALSO take on risk by being honest with you. It's symmetric. Here's one common failure mode in the conventional paradigm: you can't appear competent if you reveal the truth that something in your project isn't working at the current funding level and needs more money. You can't seem \"needy\" for money, or you'll look incompetent, and you won't get the money. So instead you try to get the money by inflating your accomplishments and hiding your needs and trying to appear \"worthy.\" This is counterproductive from the funder's point of view. As a donor, you want to give money where it'll do the most good. This means you need accurate info about what it'll be spent on. But the grantee doesn't necessarily trust you to believe that what they actually need money for is worthy. For a well-known instance, many donors think operational costs are \"waste\", so charities fudge the accounting to make it seem like all donations go to program expenses, and still underspend on operational costs like paying their employees. Or, sometimes part of a charity's actual budget is somewhat unsavory, like bribes to officials in corrupt countries being a necessary cost of actually operating there. Or, some costs can be embarrassing to admit, like the costs of learning/mistakes/R&D, if there's a perceived expectation that you have to get things right the first time. So the onus is on the funder to make it clear that you want to know what they ACTUALLY need, what their REAL constraints are, and that you will not pull the plug on them if they have an awkward funding crunch, once they have earned your trust to an adequate degree. It has to be clear that CANDOR is an important part of earning your trust. 
In particular, it needs to be clear enough to the third parties in the life of the individual you're funding, that being honest with you will pay off, that they pressure the grantee to be more honest with you, the funder. How does their family feel? Is the spouse of the charity's executive director nagging them to be more honest when fundraising, because that's what'll put their kids through college? Because, if not, you can bet their spouse is nagging them to be less honest while fundraising, to squeeze more money out of the donors, because that's the responsible thing to do for their family. What about their other dependents --employees, collaborators, beneficiaries of charitable programs --would THEY put pressure on the executive director to be more forthcoming with the funder? Or would they say \"squeeze those rich morons harder so we can keep the lights on and help people who need it?\" (Or, perhaps, more discretely but synonymously, \"don't make waves, jump through the hoops they're asking you to jump through and tell them what they want to hear.\") People don't exist in a vacuum; they want to gain the esteem and meet the needs of the people they care about. We have to be not only rewarding honesty -not only rewarding honesty more than smoothing things over -but OBVIOUSLY ENOUGH rewarding honesty that even third parties know our reputation. Otherwise everyone who tries to be honest with us will receive a continuous stream of pressure to stop being so irresponsible . \"Why are you favoring a rich guy or a huge foundation over your own family and an important cause?\" It's a fair question! And you need to flip it on its head --you want everyone to be asking \"Why are you forgoing HUGE OPPORTUNITIES for your family and an important cause, just because you're too insecure to be candid with a rich guy?", "date_published": "n/a", "url": "n/a", "filename": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/nonarxiv_teis/the_professionals_dilemma.tei.xml", "id": "ff2f77450b443be655f546886e3c3ed7"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "abstract": "The generality of decision and game theory has enabled domain-independent progress in AI research. For example, a better algorithm for finding good policies in (PO)MDPs can be instantly used in a variety of applications. But such a general theory is lacking when it comes to moral decision making. For AI applications with a moral component, are we then forced to build systems based on many ad-hoc rules? In this paper we discuss possible ways to avoid this conclusion.", "authors": ["Vincent Conitzer", "Walter Sinnott-Armstrong", "Jana Schaich Borg", "Yuan Deng", "Max Kramer"], "title": "Moral Decision Making Frameworks for Artificial Intelligence", "text": "Introduction As deployed AI systems become more autonomous, they increasingly face moral dilemmas. An often-used example is that of a self-driving car that faces an unavoidable accident, but has several options how to act, with different effects on its passengers and others in the scenario. (See, for example, Bonnefon et al. (2016) .) But there are other examples where AI is already used to make decisions with lifeor-death consequences. Consider, for example, kidney exchanges. These cater to patients in need of a kidney that have a willing live donor whose kidney the patient's body would reject. In this situation, the patient may be able to swap donors with another patient in the same situation. 
(More complex arrangements are possible as well.) For these exchanges, algorithms developed in the AI community are already used to determine which patients receive which kidneys (see, e.g., Dickerson and Sandholm (2015) ). While it may be possible to find special-purpose solutions for moral decision making in these domains, in the long run there is a need for a general framework that an AI agent can use to make moral decisions in a wider variety of contexts. In this paper, we lay out some possible roadmaps for arriving at such a framework. \n Motivation Most AI research is conducted within straightforward utilitarian or consequentialist frameworks, but these simple approaches can lead to counterintuitive judgments from an ethical perspective. For example, most people consider it immoral to harvest a healthy patient's organs to save the lives of Copyright c 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. two or even five other patients. Research in ethics and moral psychology elucidates our moral intuitions in such examples by distinguishing between doing and allowing, emphasizing the role of intent, applying general rules about kinds of actions (such as \"Don't kill\"), and referring to rights (such as the patient's) and roles (such as the doctor's). Incorporating these morally relevant factors among others could enable AI to make moral decisions that are safer, more robust, more beneficial, and acceptable to a wider range of people. 1 To be useful in the development of AI, our moral theories must provide more than vague, general criteria. They must also provide an operationalizable, and presumably quantitative, theory that specifies which particular actions are morally right or wrong in a wide range of situations. This, of course, also requires the agent to have a language in which to represent the structure of the actions being judged (Mikhail, 2007) and the morally relevant features of actions (Gert, 2004) along with rules about how these features interact and affect moral judgments. Moral theory and AI need to work together in this endeavor. Multiple approaches can be taken to arrive at generalpurpose procedures for automatically making moral decisions. One approach is to use game theory. Game-theoretic formalisms are widely used by artificial intelligence researchers to represent multiagent decision scenarios, but, as we will argue below, its solution concepts and possibly even its basic representation schemes need to be extended in order to provide guidance on moral behavior. Another approach is to use machine learning. We can use the moral philosophy and psychology literatures to identify features of moral dilemmas that are relevant to the moral status of possible actions described in the dilemmas. Human subjects can be asked to make moral judgments about a set of moral dilemmas in order to obtain a labeled data set. Then, we can train classifiers based on this data set and the identified features. (Compare also the top-down vs. bottom-up distinction in automated moral decision making, as described by Wallach and Allen (2008) .) We will discuss these two approaches in turn. In this paper, we will take a very broad view of what constitutes a moral dilemma (contrast Sinnott-Armstrong (1988) ). As a simple example, consider the trust game (Berg et al., 1995) . In the trust game, player 1 is given some amount of money-say, $100. 
She 2 is then allowed to give any fraction of this money back to the experimenter, who will then triple this returned money and give it to player 2. Finally, player 2 may return any fraction of the money he has received to player 1. For example, player 1 might give $50 back, so that player 2 receives 3 • $50 = $150, who then might give $75 back, leaving player 1 with $50 + $75 = $125. The most straightforward game-theoretic analysis of this game assumes that each player, at any point in the game, is interested only in maximizing the amount of money she herself receives. Under this assumption, player 2 would never have any reason to return any money to player 1. Anticipating this, player 1 would not give any money, either. However, despite this analysis, human subjects playing the trust game generally do give money in both roles (Berg et al., 1995) . One of the reasons why is likely that many people feel it is wrong for player 2 not to give any money back after player 1 has decided to give him some (and, when in the role of player 1, they expect player 2 not to take such a wrong action). This case study illustrates a general feature of moral reasoning. Most people consider not only the consequences of their actions but also the setting in which they perform their actions. They ask whether an act would be unfair or selfish (because they are not sharing a good with someone who is equally deserving), ungrateful (because it harms someone who benefited them in the past), disloyal (by betraying a friend who has been loyal), untrustworthy (because it breaks a promise), or deserved (because the person won a competition or committed a crime). In these ways, moral reasoners typically look not only to the future but also to the past. Of course, not everyone will agree about which factors are morally relevant, and even fewer people will agree about which factor is the most important in a given conflict. For example, some people will think that it is morally wrong to lie to protect a family member, whereas others will think that lying in such circumstances is not only permitted but required. Nonetheless, a successful moral AI system does not necessarily have to dictate one true answer in such cases. It may suffice to know how much various groups value different factors or value them differently. Then when we code moral values into AI, we would have the option of either using the moral values of a specific individual or group-a type of moral relativism-or giving the AI some type of socialchoice-theoretic aggregate of the moral values that we have inferred (for example, by letting our models of multiple people's moral values vote over the relevant alternatives, or using only the moral values that are common to all of them). This approach suggests new research problems in the field of computational social choice (see, e.g., Brandt et al. (2013 Brandt et al. ( , 2015 ). Rossi (2016) has described related, but distinct so-cial choice problems where (not necessarily moral) preferences are either aggregated together with a moral ranking of all the alternatives, or the preferences are themselves ranked according to a moral ordering (see also Greene et al. (2016) ). \n Abstractly Representing Moral Dilemmas: A Game-Theoretic Approach For us humans, the most natural way to describe a moral dilemma is to use natural language. However, given the current state of AI in general and of natural language processing in particular, such verbal descriptions will not suffice for our purposes. 
Moral dilemmas will need to be more abstractly represented, and as is generally the case in AI research, the choice of representation scheme is extremely important. In this section, we consider an approach to this problem inspired by game theory. \n Game-Theoretic Representation Schemes Game theory (see, e.g., Fudenberg and Tirole (1991) ) concerns the modeling of scenarios where multiple parties (henceforth, agents) have different interests but interact in the same domain. It provides various natural representation schemes for such multiagent decision problems. Scenarios described in game theory involve sequences of actions that lead to different agents being better or worse off to different degrees. Since moral concepts-such as selfishness, loyalty, trustworthiness, and fairness-often influence which action people choose to take, or at least believe they should take, in such situations, game theory is potentially a good fit for abstractly representing moral dilemmas. One of the standard representation schemes in game theory is that of the extensive form, which is a generalization of the game trees studied in introductory AI courses. The extensive-form representation of the trust game (or rather, a version of it in which player 1 can only give multiples of $50 and player 2 only multiples of $100) is shown in Figure 1 . Each edge corresponds to an action in the game and is labeled with that action. Each bottom (leaf) node corresponds to an outcome of the game and is labeled with the corresponding payoffs for player 1 and player 2, respectively. We will turn to the question of whether such representation schemes suffice to model moral dilemmas more generally shortly. First, we discuss how to solve such games. \n Moral Solution Concepts The standard solution concepts in game theory assume that each agent pursues nothing but its own prespecified utility. If we suppose in the trust game that each player just seeks to maximize her own monetary payoff, then game theory would prescribe that the second player give nothing back regardless of how much he receives, and consequently that the first player give nothing. 3 However, this is not the behavior observed in experiments with human subjects. Games that elicit human behavior that does not match game-theoretic analyses, such as the trust game, are often used to criticize the game-theoretic model of behavior and have led to the field of behavioral game theory (Camerer, 2003) . While in behavioral game theory, attention is often drawn to the fact that humans are not infinitely rational and cannot be expected to perform complete game-theoretic analyses in their heads, it seems that this is not the primary reason that agents behave differently in the trust game, which after all is quite simple. Rather, it seems that the simplistic game-theoretic solution fails to account for ethical considerations. In traditional game theory's defense, it should be noted that an agent's utility may take into account the welfare of others, so it is possible for altruism to be captured by a game-theoretic account. However, what is morally right or wrong also seems to depend on past actions by other players. Consider, for example, the notion of betrayal: if another agent knowingly enables me either to act to benefit us both, or to act to benefit myself even more while significantly hurting the other agent, doing the latter seems morally wrong. This, in our view, is one of the primary things going on in the trust game. 
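For concreteness, here is a minimal backward-induction sketch (illustrative, not from the paper) of the selfish analysis of the discretized trust game described above; it recovers the give-nothing, return-nothing outcome that the text contrasts with how people actually play.

# Illustrative only: backward induction on the discretized trust game
# (player 1 gives a multiple of $50, player 2 returns a multiple of $100),
# assuming each player maximizes only her own monetary payoff.

ENDOWMENT = 100

def player2_best_return(received: int) -> int:
    # Player 2 keeps received - returned, so a selfish player 2 returns nothing.
    options = range(0, received + 1, 100)
    return max(options, key=lambda returned: received - returned)

def solve_trust_game():
    best = None
    for give in range(0, ENDOWMENT + 1, 50):
        returned = player2_best_return(3 * give)      # anticipate player 2's selfish reply
        p1_payoff = ENDOWMENT - give + returned
        p2_payoff = 3 * give - returned
        if best is None or p1_payoff > best[1]:
            best = (give, p1_payoff, p2_payoff)
    return best

print(solve_trust_game())   # (0, 100, 0): give nothing, return nothing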
The key insight is that to model this phenomenon, we cannot simply first assess the agents' other-regarding preferences, include these in their utilities at the leaves of the game, and solve the game (as in the case of pure altruism). Rather, the analysis of the game (solving it) must be intertwined with the assessment of whether an agent morally should pursue another agent's well-being. This calls for novel moral solution concepts in game theory. We have already done some conceptual and algorithmic work on a solution concept that takes such issues into account (Letchford et al., 2008). This solution concept involves repeatedly solving the game and then modifying the agents' preferences based on the solution. The modification makes it so that (for example) player 2 wants to ensure that player 1 receives at least what she could have received in the previous solution, unless this conflicts with player 2 receiving at least as much as he would have received in the previous solution. For example, in the trust game player 2's preferences are modified so that he values player 1 receiving back at least what she gave to player 2. \n What Is Left Out & Possible Extensions The solution concept from Letchford et al. (2008) is defined only in very restricted settings, namely 2-player perfect-information 4 games. One research direction is to generalize the concept to games with more players and/or imperfect information. Another is to define different solution concepts that capture other ethical concerns. Zooming out, this general approach is inherently limited by the aspects of moral dilemmas that can be captured in game-theoretic representations. While we believe that the standard representation schemes of game theory can capture much of what is relevant, they may not capture everything that is relevant. For example, in moral philosophy, a distinction is often made between doing harm and allowing harm. Consider a situation where a runaway train will surely hit and kill exactly one innocent person (player 2) standing on a track, unless player 1 intervenes and puts the train on another track instead, where it will surely hit and kill exactly one other innocent person (player 3). The natural extensive form of the game (Figure 2) is entirely symmetric and thereby cannot be used to distinguish between the two alternatives. (Note that the labels on the edges are formally not part of the game.) Figure 2: \"Runaway train.\" Player 1 must choose whether to allow player 2 to be hurt (\"Do nothing\", payoffs 0, -100, 0) or to hurt player 3 instead (\"Put train on other track\", payoffs 0, 0, -100). However, many philosophers (as well as non-philosophers) would argue that there is a significant distinction between the two alternatives, and that switching the train to the second track is morally wrong. We propose that the action-inaction distinction could be addressed by slightly extending the extensive-form representation so that at every information set (decision point), one action is labeled as the \"passive\" action (e.g., leaving the train alone). Other extensions may be needed as well. For example, we may take into account what each agent in the game deserves (according to some theory of desert), which may require us to further extend the representation scheme.
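As a sketch of the proposed extension (hypothetical; the class and field names are ours, not the paper's), an extensive-form node could simply carry a flag marking which action counts as passive:

# Hypothetical sketch: an extensive-form game node in which one action at each
# decision point can be flagged as the "passive" action, so that a moral solution
# concept can distinguish doing harm from allowing harm.
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class Node:
    player: Optional[int] = None                                 # None at leaf nodes
    children: Dict[str, "Node"] = field(default_factory=dict)    # action label -> subtree
    passive_action: Optional[str] = None                         # which action is "doing nothing"
    payoffs: Optional[Tuple[float, ...]] = None                  # defined only at leaves

# The runaway-train dilemma with the payoffs from Figure 2:
train = Node(
    player=1,
    passive_action="do nothing",
    children={
        "do nothing": Node(payoffs=(0, -100, 0)),
        "put train on other track": Node(payoffs=(0, 0, -100)),
    },
)
# A solution concept that ignores `passive_action` treats the two branches symmetrically;
# one that respects the doing/allowing distinction can weigh active harm differently.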
5 A broader issue is that in behavioral game and decision theory it is well understood that the way the problem is framed-i.e., the particular language in which the problem is described, or even the order in which dilemmas are presented-can significantly affect human subjects' decisions. That is, two ways of describing the same dilemma can produce consistently different responses from human subjects (Kahneman and Tversky, 2000) . The same is surely the case for moral dilemmas (Sinnott-Armstrong, 2008) . Moral AI would need to replicate this behavior if the goal is to mirror or predict human moral judgments. In contrast, if our goal is to make coherent moral judgments, then moral AI might instead need to avoid such framing effects. \n Setting up a Machine Learning Framework Another approach for developing procedures that automatically make moral decisions is based on machine learning (see, e.g., Mitchell (1997) ). We can assemble a training set of moral decision problem instances labeled with human judgments of the morally correct decision(s), and allow our AI system to generalize. (Other work has focused on obtaining human judgments not of the actions themselves, but of persuasion strategies in such scenarios (Stock et al., 2016) .) To evaluate this approach with current technology, it is insufficient to represent the instances in natural language; instead, we must represent them more abstractly. What is the right representation scheme for this purpose, and what features are important? How do we construct and accurately label a good training set? \n Representing Dilemmas by Their Key Moral Features When we try to classify a given action in a given moral dilemma as morally right or wrong (as judged by a given human being), we can try to do so based on various features (or attributes) of the action. In a restricted domain, it may be relatively clear what the relevant features are. When a self-driving car must decide whether to take one action or another in an impending-crash scenario, natural features include the expected number of lives lost for each course of action, which of the people involved were at fault, etc. When allocating a kidney, natural features include the probability that the kidney is rejected by a particular patient, whether that patient needs the kidney urgently, etc. Even in these scenarios, identifying all the relevant features may not be easy. (E.g., is it relevant that one potential kidney recipient has made a large donation to medical research and the other has not?) However, the primary goal of a general framework for moral decision making is to identify abstract features that apply across domains, rather than to identify every nuanced feature that is potentially relevant to isolated scenarios. The literature in moral psychology and cognitive science may guide us in identifying these general concepts. For example, Haidt and Joseph (2004) have proposed five moral foundations-harm/care, fairness/reciprocity, loyalty, authority, and purity. Recent research has added new foundations and subdivided some of these foundations (Clifford et al., 2015) . The philosophy literature can similarly be helpful; e.g., Gert (2004) provides a very inclusive list of morally relevant features. \n Classifying Actions as Morally Right or Wrong Given a labeled dataset of moral dilemmas represented as lists of feature values, we can apply standard machine learn-ing techniques to learn to classify actions as morally right or wrong. 
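A minimal sketch of this pipeline, assuming an invented feature encoding (the features, data, and labels below are toy placeholders and are not taken from the paper), might look as follows; an interpretable linear model also supports the explanation requirement discussed below.

# Hypothetical sketch: learning to classify actions in moral dilemmas from abstract
# features. A real dataset would come from human judgments as described in the text.
from sklearn.linear_model import LogisticRegression

# Each row: [harm_caused, is_active_harm, breaks_promise, lives_saved]
X = [
    [1, 1, 0, 5],   # actively harm one person to save five
    [1, 0, 0, 0],   # allow harm to occur through inaction
    [0, 0, 1, 0],   # break a promise, no physical harm
    [0, 0, 0, 1],   # help someone at no cost
]
y = [0, 1, 0, 1]    # toy labels: 1 = judged morally permissible, 0 = judged wrong

clf = LogisticRegression().fit(X, y)
print(clf.predict([[1, 1, 0, 1]]))   # classify a new dilemma
print(dict(zip(["harm", "active", "promise", "saved"], clf.coef_[0])))   # interpretable weights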
In ethics it is often seen as important not only to act in accordance with moral principles but also to be able to explain why one's actions are morally right (Anderson and Anderson, 2007; Bostrom and Yudkowsky, 2014) ; hence, interpretability of the resulting classifier will be important. Of course, besides making a binary classification of an action as morally right or wrong, we may also make a quantitative assessment of how morally wrong the action is (for example using a regression), an assessment of how probable it is that the action is morally wrong (for example using a Bayesian framework), or some combination of the two. Many further complicating factors can be added to this simple initial framework. \n Discussion A machine learning approach to automating moral judgements is perhaps more flexible than a game-theoretic approach, but the two can complement each other. For example, we can apply moral game-theoretic concepts to moral dilemmas and use the output (say, \"right\" or \"wrong\" according to this concept) as one of the features in our machine learning approach. On the other hand, the outcomes of the machine learning approach can help us see which key moral aspects are missing from our moral game-theoretic concepts, which will in turn allow us to refine them. It has been suggested that machine learning approaches to moral decisions will be limited because they will at best result in human-level moral decision making; they will never exceed the morality of humans. (Such a worry is raised, for example, by Chaudhuri and Vardi (2014) .) But this is not necessarily so. First, aggregating the moral views of multiple humans (through a combination of machine learning and social-choice theoretic techniques) may result in a morally better system than that of any individual human, for example because idiosyncratic moral mistakes made by individual humans are washed out in the aggregate. Indeed, the learning algorithm may well decide to output a classifier that disagrees with the labels of some of the instances in the training set (see Guarini (2006) for a discussion of the importance of being able to revise initial classifications). Second, machine learning approaches may identify general principles of moral decision making that humans were not aware of before. These principles can then be used to improve our moral intuitions in general. For now, moral AI systems are in their infancy, so creating even human-level automated moral decision making would be a great accomplishment. \n Conclusion In some applications, AI systems will need to be equipped with moral reasoning capability before we can grant them autonomy in the world. One approach to doing so is to find ad-hoc rules for the setting at hand. However, historically, the AI community has significantly benefited from adopting methodologies that generalize across applications. The concept of expected utility maximization has played a key part in this. By itself, this concept falls short for the purpose of moral decision making. In this paper, we have consid-ered two (potentially complementary) paradigms for designing general moral decision making methodologies: extending game-theoretic solution concepts to incorporate ethical aspects, and using machine learning on human-labeled instances. Much work remains to be done on both of these, and still other paradigms may exist. All the same, these two paradigms show promise for designing moral AI. Figure 1 : 1 Figure1: The trust game. 
Each edge corresponds to an action in the game and is labeled with that action. Each bottom (leaf) node corresponds to an outcome of the game and is labeled with the corresponding payoffs for player 1 and player 2, respectively. \n\t\t\t The point that, as advanced AI acquires more autonomy, it is essential to bring moral reasoning into it has been made previously by others-e.g., Moor (2006) . \n\t\t\t We use \"she\" for player 1 or a generic player, and \"he\" for player 2. \n\t\t\t The technical name for this type of analysis is backward induction, resulting in behavior that constitutes a subgame perfect Nash equilibrium of the game. \n\t\t\t In a perfect-information game, the current state is fully observable to each player (e.g., chess), in contrast to imperfectinformation games (e.g., poker).5 Note that, to the extent the reasons for what an agent deserves are based solely on the agent's earlier actions in the game under consideration, solution concepts such as those described above might in fact capture this. If so, then the only cases in which we need to extend the representation scheme are those where what an agent deserves is external to the game under study (e.g., the agent is a previously convicted criminal).", "date_published": "n/a", "url": "n/a", "filename": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/nonarxiv_teis/moralAAAI17.tei.xml", "id": "bb1d9063669787081328822839303f86"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "abstract": "For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than 1% of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any which have been previously learned from human feedback. ⇤ Work done while at OpenAI.", "authors": ["Paul F Christiano", "Jan Leike", "Tom B Brown", "Google Brain", "Miljan Martic", "Shane Legg", "Dario Amodei"], "title": "Deep Reinforcement Learning from Human Preferences", "text": "Introduction Recent success in scaling reinforcement learning (RL) to large problems has been driven in domains that have a well-specified reward function (Mnih et al., 2015 (Mnih et al., , 2016 Silver et al., 2016) . Unfortunately, many tasks involve goals that are complex, poorly-defined, or hard to specify. Overcoming this limitation would greatly expand the possible impact of deep RL and could increase the reach of machine learning more broadly. For example, suppose that we wanted to use reinforcement learning to train a robot to clean a table or scramble an egg. It's not clear how to construct a suitable reward function, which will need to be a function of the robot's sensors. 
We could try to design a simple reward function that approximately captures the intended behavior, but this will often result in behavior that optimizes our reward function without actually satisfying our preferences. This difficulty underlies recent concerns about misalignment between our values and the objectives of our RL systems (Bostrom, 2014; Russell, 2016; Amodei et al., 2016) . If we could successfully communicate our actual objectives to our agents, it would be a significant step towards addressing these concerns. If we have demonstrations of the desired task, we can use inverse reinforcement learning (Ng and Russell, 2000) or imitation learning to copy the demonstrated behavior. But these approaches are not directly applicable to behaviors that are difficult for humans to demonstrate (such as controlling a robot with many degrees of freedom but non-human morphology). An alternative approach is to allow a human to provide feedback on our system's current behavior and to use this feedback to define the task. In principle this fits within the paradigm of reinforcement learning, but using human feedback directly as a reward function is prohibitively expensive for RL systems that require hundreds or thousands of hours of experience. In order to practically train deep RL systems with human feedback, we need to decrease the amount of feedback required by several orders of magnitude. We overcome this difficulty by asking humans to compare possible trajectories of the agent, using that data to learn a reward function, and optimizing the learned reward function with RL. This basic approach has been explored in the past, but we confront the challenges involved in scaling it up to modern deep RL and demonstrate by far the most complex behaviors yet learned from human feedback. Our experiments take place in two domains: Atari games in the Arcade Learning Environment (Bellemare et al., 2013) , and robotics tasks in the physics simulator MuJoCo (Todorov et al., 2012) . We show that a small amount of feedback from a non-expert human, ranging from fifteen minutes to five hours, suffice to learn both standard RL tasks and novel hard-to-specify behaviors such as performing a backflip or driving with the flow of traffic. \n Related Work A long line of work studies reinforcement learning from human ratings or rankings, including Akrour et al. (2011 ), Pilarski et al. (2011 ), Akrour et al. (2012 ), Wilson et al. (2012 ), Sugiyama et al. (2012 ), Wirth and Fürnkranz (2013 ), Daniel et al. (2015 ), El Asri et al. (2016 ), Wang et al. (2016 ), and Wirth et al. (2016 . Other lines of research consider the general problem of reinforcement learning from preferences rather than absolute reward values (Fürnkranz et al., 2012; Akrour et al., 2014; Wirth et al., 2016) , and optimizing using human preferences in settings other than reinforcement learning (Machwe and Parmee, 2006; Secretan et al., 2008; Brochu et al., 2010; Sørensen et al., 2016) . Our algorithm follows the same basic approach as Akrour et al. ( 2012 ) and Akrour et al. ( 2014 ), but considers much more complex domains and behaviors. The complexity of our environments force us to use different RL algorithms, reward models, and training strategies. One notable difference is that Akrour et al. (2012) and Akrour et al. (2014) elicit preferences over whole trajectories rather than short clips, and so would require about an order of magnitude more human time per data point. Our approach to feedback elicitation closely follows Wilson et al. (2012 ). 
However, Wilson et al. (2012) assumes that the reward function is the distance to some unknown (linear) \"target\" policy, and is never tested with real human feedback. TAMER (Knox, 2012; Knox and Stone, 2013) also learns a reward function from human feedback, but learns from ratings rather than comparisons, has the human observe the agent as it behaves, and has been applied to settings where the desired policy can be learned orders of magnitude more quickly. Compared to all prior work, our key contribution is to scale human feedback up to deep reinforcement learning and to learn much more complex behaviors. This fits into a recent trend of scaling reward learning methods to large deep learning systems, for example inverse RL (Finn et al., 2016), imitation learning (Ho and Ermon, 2016; Stadie et al., 2017), semi-supervised skill generalization (Finn et al., 2017), and bootstrapping RL from demonstrations (Silver et al., 2016; Hester et al., 2017). \n Preliminaries and Method \n Setting and Goal We consider an agent interacting with an environment over a sequence of steps; at each time $t$ the agent receives an observation $o_t \in O$ from the environment and then sends an action $a_t \in A$ to the environment. In traditional reinforcement learning, the environment would also supply a reward $r_t \in \mathbb{R}$ and the agent's goal would be to maximize the discounted sum of rewards. Instead of assuming that the environment produces a reward signal, we assume that there is a human overseer who can express preferences between trajectory segments. A trajectory segment is a sequence of observations and actions, $\sigma = ((o_0, a_0), (o_1, a_1), \ldots, (o_{k-1}, a_{k-1})) \in (O \times A)^k$. Write $\sigma^1 \succ \sigma^2$ to indicate that the human preferred trajectory segment $\sigma^1$ to trajectory segment $\sigma^2$. Informally, the goal of the agent is to produce trajectories which are preferred by the human, while making as few queries as possible to the human. More precisely, we will evaluate our algorithms' behavior in two ways: Quantitative: We say that preferences $\succ$ are generated by a reward function 2 $r : O \times A \to \mathbb{R}$ if $\big(o^1_0, a^1_0, \ldots, o^1_{k-1}, a^1_{k-1}\big) \succ \big(o^2_0, a^2_0, \ldots, o^2_{k-1}, a^2_{k-1}\big)$ whenever $r(o^1_0, a^1_0) + \cdots + r(o^1_{k-1}, a^1_{k-1}) > r(o^2_0, a^2_0) + \cdots + r(o^2_{k-1}, a^2_{k-1})$. If the human's preferences are generated by a reward function $r$, then our agent ought to receive a high total reward according to $r$. So if we know the reward function $r$, we can evaluate the agent quantitatively. Ideally the agent will achieve reward nearly as high as if it had been using RL to optimize $r$. Qualitative: Sometimes we have no reward function by which we can quantitatively evaluate behavior (this is the situation where our approach would be practically useful). In these cases, all we can do is qualitatively evaluate how well the agent satisfies the human's preferences. In this paper, we will start from a goal expressed in natural language, ask a human to evaluate the agent's behavior based on how well it fulfills that goal, and then present videos of agents attempting to fulfill that goal. Our model based on trajectory segment comparisons is very similar to the trajectory preference queries used in Wilson et al. (2012), except that we don't assume that we can reset the system to an arbitrary state 3 and so our segments generally begin from different states. This complicates the interpretation of human comparisons, but we show that our algorithm overcomes this difficulty even when the human raters have no understanding of our algorithm.
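As a minimal illustration of the quantitative criterion above (the code is ours, not the paper's), a synthetic oracle that generates preferences from a known reward function r can be written as follows.

# Illustrative sketch: preferences over trajectory segments generated by a known
# reward function r, per the quantitative criterion above. A segment is a list of
# (observation, action) pairs.

def segment_return(segment, r):
    # Sum of true rewards over the segment.
    return sum(r(o, a) for o, a in segment)

def synthetic_preference(seg1, seg2, r):
    # Returns preference weights over (segment 1, segment 2).
    r1, r2 = segment_return(seg1, r), segment_return(seg2, r)
    if r1 > r2:
        return (1.0, 0.0)
    if r2 > r1:
        return (0.0, 1.0)
    return (0.5, 0.5)   # equal reward sums: treat the segments as equally preferable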
Our Method
At each point in time our method maintains a policy $\pi : \mathcal{O} \to \mathcal{A}$ and a reward function estimate $\hat{r} : \mathcal{O} \times \mathcal{A} \to \mathbb{R}$, each parametrized by deep neural networks. These networks are updated by three processes:
1. The policy $\pi$ interacts with the environment to produce a set of trajectories $\{\tau^1, \ldots, \tau^i\}$. The parameters of $\pi$ are updated by a traditional reinforcement learning algorithm, in order to maximize the sum of the predicted rewards $r_t = \hat{r}(o_t, a_t)$.
2. We select pairs of segments $(\sigma^1, \sigma^2)$ from the trajectories $\{\tau^1, \ldots, \tau^i\}$ produced in step 1, and send them to a human for comparison.
3. The parameters of the mapping $\hat{r}$ are optimized via supervised learning to fit the comparisons collected from the human so far.
These processes run asynchronously, with trajectories flowing from process (1) to process (2), human comparisons flowing from process (2) to process (3), and parameters for $\hat{r}$ flowing from process (3) to process (1). The following subsections provide details on each of these processes.
Optimizing the Policy
After using $\hat{r}$ to compute rewards, we are left with a traditional reinforcement learning problem. We can solve this problem using any RL algorithm that is appropriate for the domain. One subtlety is that the reward function $\hat{r}$ may be non-stationary, which leads us to prefer methods which are robust to changes in the reward function. This led us to focus on policy gradient methods, which have been applied successfully for such problems (Ho and Ermon, 2016). In this paper, we use advantage actor-critic (A2C; Mnih et al., 2016) to play Atari games, and trust region policy optimization (TRPO; Schulman et al., 2015) to perform simulated robotics tasks. In each case, we used parameter settings which have been found to work well for traditional RL tasks. The only hyperparameter which we adjusted was the entropy bonus for TRPO. This is because TRPO relies on the trust region to ensure adequate exploration, which can lead to inadequate exploration if the reward function is changing. We normalized the rewards produced by $\hat{r}$ to have zero mean and constant standard deviation. This is a typical preprocessing step which is particularly appropriate here since the position of the rewards is underdetermined by our learning problem.
Preference Elicitation
The human overseer is given a visualization of two trajectory segments, in the form of short movie clips. In all of our experiments, these clips are between 1 and 2 seconds long. The human then indicates which segment they prefer, that the two segments are equally good, or that they are unable to compare the two segments. The human judgments are recorded in a database $\mathcal{D}$ of triples $(\sigma^1, \sigma^2, \mu)$, where $\sigma^1$ and $\sigma^2$ are the two segments and $\mu$ is a distribution over $\{1, 2\}$ indicating which segment the user preferred. If the human selects one segment as preferable, then $\mu$ puts all of its mass on that choice. If the human marks the segments as equally preferable, then $\mu$ is uniform. Finally, if the human marks the segments as incomparable, then the comparison is not included in the database.
Fitting the Reward Function
We can interpret a reward function estimate $\hat{r}$ as a preference-predictor if we view $\hat{r}$ as a latent factor explaining the human's judgments and assume that the human's probability of preferring a segment $\sigma^i$ depends exponentially on the value of the latent reward summed over the length of the clip:[4]
$$\hat{P}\left[\sigma^1 \succ \sigma^2\right] = \frac{\exp \sum_t \hat{r}(o^1_t, a^1_t)}{\exp \sum_t \hat{r}(o^1_t, a^1_t) + \exp \sum_t \hat{r}(o^2_t, a^2_t)}. \qquad (1)$$
We choose $\hat{r}$ to minimize the cross-entropy loss between these predictions and the actual human labels:
$$\mathrm{loss}(\hat{r}) = -\sum_{(\sigma^1, \sigma^2, \mu) \in \mathcal{D}} \mu(1) \log \hat{P}\left[\sigma^1 \succ \sigma^2\right] + \mu(2) \log \hat{P}\left[\sigma^2 \succ \sigma^1\right].$$
This follows the Bradley-Terry model (Bradley and Terry, 1952) for estimating score functions from pairwise preferences, and is the specialization of the Luce-Shepard choice rule (Luce, 2005; Shepard, 1957) to preferences over trajectory segments.
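As a minimal sketch of Equation (1) and the cross-entropy objective above (ignoring the ensemble, regularization and label-noise modifications described next), one could write the preference predictor as follows; the function names and the plain-Python representation of segments, labels and of the estimate $\hat{r}$ are our own assumptions.

```python
import numpy as np


def preference_prob(rhat_sum_1, rhat_sum_2):
    """P[sigma_1 preferred over sigma_2] under Equation (1): a softmax over
    the summed predicted rewards of the two segments."""
    # Subtract the max before exponentiating for numerical stability.
    m = max(rhat_sum_1, rhat_sum_2)
    e1, e2 = np.exp(rhat_sum_1 - m), np.exp(rhat_sum_2 - m)
    return e1 / (e1 + e2)


def preference_loss(dataset, rhat):
    """Cross-entropy between predicted and human preference labels.

    `dataset` holds triples (segment_1, segment_2, mu) with mu a pair
    (mu(1), mu(2)); `rhat` maps (observation, action) to a scalar
    predicted reward.
    """
    eps = 1e-8  # numerical floor inside the log
    total = 0.0
    for seg1, seg2, mu in dataset:
        s1 = sum(rhat(o, a) for o, a in seg1)
        s2 = sum(rhat(o, a) for o, a in seg2)
        p1 = preference_prob(s1, s2)          # P[seg1 preferred]
        total -= mu[0] * np.log(p1 + eps) + mu[1] * np.log(1.0 - p1 + eps)
    return total
```

In an actual implementation this loss would be minimized over the parameters of the reward network; the sketch only makes the shape of the objective explicit.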
Our actual algorithm incorporates a number of modifications to this basic approach, which early experiments discovered to be helpful and which are analyzed in Section 3.3:
• We fit an ensemble of predictors, each trained on $|\mathcal{D}|$ triples sampled from $\mathcal{D}$ with replacement. The estimate $\hat{r}$ is defined by independently normalizing each of these predictors and then averaging the results.
• A fraction of $1/e$ of the data is held out to be used as a validation set for each predictor. We use $\ell_2$ regularization and adjust the regularization coefficient to keep the validation loss between 1.1 and 1.5 times the training loss. In some domains we also apply dropout for regularization.
• Rather than applying a softmax directly as described in Equation 1, we assume there is a 10% chance that the human responds uniformly at random. Conceptually this adjustment is needed because human raters have a constant probability of making an error, which doesn't decay to 0 as the difference in reward becomes extreme.
Selecting Queries
We decide how to query preferences based on an approximation to the uncertainty in the reward function estimator, similar to Daniel et al. (2014): we sample a large number of pairs of trajectory segments of length $k$ from the latest agent-environment interactions, use each reward predictor in our ensemble to predict which segment will be preferred from each pair, and then select those trajectories for which the predictions have the highest variance across ensemble members.[5] This is a crude approximation, and the ablation experiments in Section 3 show that in some tasks it actually impairs performance. Ideally, we would want to query based on the expected value of information of the query (Akrour et al., 2012; Krueger et al., 2016), but we leave it to future work to explore this direction further.
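A minimal sketch of this disagreement-based query selection is given below, assuming the ensemble members are plain callables mapping (observation, action) to a scalar; the names and data layout are our own, and a real implementation would batch these computations through the reward networks.

```python
import numpy as np


def select_queries(candidate_pairs, ensemble, num_queries):
    """Pick the candidate segment pairs on which the reward-model ensemble
    disagrees most about which segment will be preferred."""
    variances = []
    for seg1, seg2 in candidate_pairs:
        probs = []
        for rhat in ensemble:
            s1 = sum(rhat(o, a) for o, a in seg1)
            s2 = sum(rhat(o, a) for o, a in seg2)
            # P[seg1 preferred] = exp(s1) / (exp(s1) + exp(s2)) = sigmoid(s1 - s2)
            probs.append(1.0 / (1.0 + np.exp(s2 - s1)))
        variances.append(np.var(probs))
    # Highest-variance pairs are the ones sent to the human for comparison.
    ranked = np.argsort(variances)[::-1][:num_queries]
    return [candidate_pairs[i] for i in ranked]
```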
Experimental Results
Reinforcement Learning Tasks with Unobserved Rewards
In our first set of experiments, we attempt to solve a range of benchmark tasks for deep RL without observing the true reward. Instead, the agent learns about the goal of the task only by asking a human which of two trajectory segments is better. Our goal is to solve the task in a reasonable amount of time using as few queries as possible.
In our experiments, feedback is provided by contractors who are given a 1-2 sentence description of each task before being asked to compare several hundred to several thousand pairs of trajectory segments for that task (see Appendix B for the exact instructions given to contractors). Each trajectory segment is between 1 and 2 seconds long. Contractors responded to the average query in 3-5 seconds, and so the experiments involving real human feedback required between 30 minutes and 5 hours of human time.
For comparison, we also run experiments using a synthetic oracle whose preferences are generated (in the sense of Section 2.1) by the real reward.[6] We also compare to the baseline of RL training using the real reward. Our aim here is not to outperform RL but rather to do nearly as well while relying on much scarcer feedback and without access to reward information. Nevertheless, note that feedback from real humans does have the potential to outperform RL (and as shown below it actually does so on some tasks), because the human feedback might provide a better-shaped reward. We describe the details of our experiments in Appendix A, including model architectures, modifications to the environment, and the RL algorithms used to optimize the policy.
Simulated Robotics
The first tasks we consider are eight simulated robotics tasks, implemented in MuJoCo (Todorov et al., 2012) and included in OpenAI Gym (Brockman et al., 2016). We made small modifications to these tasks in order to avoid encoding information about the task in the environment itself (the modifications are described in detail in Appendix A). The reward functions in these tasks are quadratic functions of distances, positions and velocities, and most are linear. We included a simple cartpole task ("pendulum") for comparison, since this is representative of the complexity of tasks studied in prior work.
Figure 1: Results on MuJoCo simulated robotics as measured on the tasks' true reward. We compare our method using real human feedback (purple), our method using synthetic feedback provided by an oracle (shades of blue), and reinforcement learning using the true reward function (orange). All curves are the average of 5 runs, except for the real human feedback, which is a single run, and each point is the average reward over five consecutive batches. For Reacher and Cheetah, feedback was provided by an author due to time constraints. For all other tasks, feedback was provided by contractors unfamiliar with the environments and with our algorithm. The irregular progress on Hopper is due to one contractor deviating from the typical labeling schedule.
Figure 1 shows the results of training our agent with 700 queries to a human rater, compared to learning from 350, 700, or 1400 synthetic queries, as well as to RL learning from the real reward. With 700 labels we are able to nearly match reinforcement learning on all of these tasks. Training with learned reward functions tends to be less stable and higher variance, while having a comparable mean performance.
Surprisingly, by 1400 labels our algorithm performs slightly better than if it had simply been given the true reward, perhaps because the learned reward function is slightly better shaped: the reward learning procedure assigns positive rewards to all behaviors that are typically followed by high reward. The difference may also be due to subtle changes in the relative scale of rewards or our use of entropy regularization.
Real human feedback is typically only slightly less effective than the synthetic feedback; depending on the task, human feedback ranged from being half as efficient as ground truth feedback to being equally efficient. On the Ant task the human feedback significantly outperformed the synthetic feedback, apparently because we asked humans to prefer trajectories where the robot was "standing upright," which proved to be useful reward shaping. (There was a similar bonus in the RL reward function to encourage the robot to remain upright, but the simple hand-crafted bonus was not as useful.)
Atari
The second set of tasks we consider is a set of seven Atari games in the Arcade Learning Environment (Bellemare et al., 2013), the same games presented in Mnih et al. (2013).
Figure 2 shows the results of training our agent with 5,500 queries to a human rater, compared to learning from 350, 700, or 1400 synthetic queries, as well as to RL learning from the real reward. Our method has more difficulty matching RL in these challenging environments, but nevertheless it displays substantial learning on most of them and matches or even exceeds RL on some.
Figure 2: Results on Atari games as measured on the tasks' true reward. We compare our method using real human feedback (purple), our method using synthetic feedback provided by an oracle (shades of blue), and reinforcement learning using the true reward function (orange). All curves are the average of 3 runs, except for the real human feedback, which is a single run, and each point is the average reward over about 150,000 consecutive frames.
Specifically, on BeamRider and Pong, synthetic labels match or come close to RL even with only 3,300 such labels. On Seaquest and Qbert, synthetic feedback eventually performs near the level of RL but learns more slowly. On SpaceInvaders and Breakout, synthetic feedback never matches RL, but nevertheless the agent improves substantially, often passing the first level in SpaceInvaders and reaching a score of 20 on Breakout, or 50 with enough labels.
On most of the games real human feedback performs similarly to or slightly worse than synthetic feedback with the same number of labels, and often comparably to synthetic feedback that has 40% fewer labels. On Qbert, our method fails to learn to beat the first level with real human feedback; this may be because short clips in Qbert can be confusing and difficult to evaluate. Finally, Enduro is difficult for A3C to learn due to the difficulty of successfully passing other cars through random exploration, and is correspondingly difficult to learn with synthetic labels, but human labelers tend to reward any progress towards passing cars, essentially shaping the reward and thus outperforming A3C in this game (the results are comparable to those achieved with DQN).
Novel behaviors
Experiments with traditional RL tasks help us understand whether our method is effective, but the ultimate purpose of human interaction is to solve tasks for which no reward function is available. Using the same parameters as in the previous experiments, we show that our algorithm can learn novel complex behaviors. We demonstrate:
1. The Hopper robot performing a sequence of backflips (see Figure 4). This behavior was trained using 900 queries in less than an hour. The agent learns to consistently perform a backflip, land upright, and repeat.
2. The Half-Cheetah robot moving forward while standing on one leg. This behavior was trained using 800 queries in under an hour.
3. Keeping alongside other cars in Enduro. This was trained with roughly 1,300 queries and 4 million frames of interaction with the environment; the agent learns to stay almost exactly even with other moving cars for a substantial fraction of the episode, although it gets confused by changes in background.
Videos of these behaviors can be found at https://goo.gl/MhgvIU. These behaviors were trained using feedback from the authors.
Figure 3: Performance of our algorithm on MuJoCo tasks after removing various components, as described in Section 3.3. All graphs are averaged over 5 runs, using 700 synthetic labels each.
Ablation Studies
In order to better understand the performance of our algorithm, we consider a range of modifications:
1. We pick queries uniformly at random rather than prioritizing queries for which there is disagreement (random queries).
2. We train only one predictor rather than an ensemble (no ensemble). In this setting, we also choose queries at random, since there is no longer an ensemble that we could use to estimate disagreement.
3. We train on queries only gathered at the beginning of training, rather than gathered throughout training (no online queries).
4. We remove the $\ell_2$ regularization and use only dropout (no regularization).
5. On the robotics tasks only, we use trajectory segments of length 1 (no segments).
6. Rather than fitting $\hat{r}$ using comparisons, we consider an oracle which provides the true total reward over a trajectory segment, and fit $\hat{r}$ to these total rewards using mean squared error (target).
The results are presented in Figure 3 for MuJoCo and Figure 4 for Atari.
Training the reward predictor offline can lead to bizarre behavior that is undesirable as measured by the true reward (Amodei et al., 2016). For instance, on Pong offline training sometimes leads our agent to avoid losing points but not to score points; this can result in extremely long volleys (videos at https://goo.gl/L5eAbk). This type of behavior demonstrates that in general human feedback needs to be intertwined with RL rather than provided statically.
Our main motivation for eliciting comparisons rather than absolute scores was that we found it much easier for humans to provide consistent comparisons than consistent absolute scores, especially on the continuous control tasks and on the qualitative tasks in Section 3.2; nevertheless it seems important to understand how using comparisons affects performance. For continuous control tasks we found that predicting comparisons worked much better than predicting scores. This is likely because the scale of rewards varies substantially, which complicates the regression problem; the problem is smoothed significantly when we only need to predict comparisons. In the Atari tasks we clipped rewards and effectively only predicted the sign, avoiding these difficulties (this is not a suitable solution for the continuous control tasks, because the magnitude of the reward is important to learning). In these tasks comparisons and targets had significantly different performance, but neither consistently outperformed the other.
Figure 4: Performance of our algorithm on Atari tasks after removing various components, as described in Section 3.3. All curves are an average of 3 runs using 5,500 synthetic labels (see minor exceptions in Section A.2).
We also observed large performance differences when using single frames rather than clips.[7] In order to obtain the same results using single frames we would need to have collected significantly more comparisons. In general we discovered that asking humans to compare longer clips was significantly more helpful per clip, and significantly less helpful per frame. Shrinking the clip length below 1-2 seconds did not significantly decrease the human time required to label each clip in early experiments, and so seems less efficient per second of human time. In the Atari environments we also found that it was often easier to compare longer clips because they provide more context than single frames.
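For concreteness, the "target" ablation in item 6 can be sketched as a plain mean-squared-error regression onto the oracle's total segment reward, reusing the hypothetical segment and reward-function representations from the earlier sketches; this is an illustration, not the authors' implementation.

```python
def target_regression_loss(segments, rhat, true_reward_fn):
    """Ablation (6): regress the predicted segment return onto the oracle's
    true total segment reward with mean squared error, instead of fitting
    pairwise comparisons."""
    loss = 0.0
    for seg in segments:
        predicted = sum(rhat(o, a) for o, a in seg)
        target = sum(true_reward_fn(o, a) for o, a in seg)
        loss += (predicted - target) ** 2
    return loss / max(len(segments), 1)
```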
Discussion and Conclusions
Agent-environment interactions are often radically cheaper than human interaction. We show that by learning a separate reward model using supervised learning, it is possible to reduce the interaction complexity by roughly 3 orders of magnitude. Although there is a large literature on preference elicitation and reinforcement learning from unknown reward functions, we provide the first evidence that these techniques can be economically scaled up to state-of-the-art reinforcement learning systems. This represents a step towards practical applications of deep RL to complex real-world tasks.
In the long run it would be desirable to make learning a task from human preferences no more difficult than learning it from a programmatic reward signal, ensuring that powerful RL systems can be applied in the service of complex human values rather than low-complexity goals.
Footnotes
[2] Here we assume that the reward is a function of the observation and action. In our experiments in Atari environments, we instead assume the reward is a function of the preceding 4 observations. In a general partially observable environment, we could instead consider reward functions that depend on the whole sequence of observations, and model this reward function with a recurrent neural network.
[3] Wilson et al. (2012) also assumes the ability to sample reasonable initial states. But we work with high-dimensional state spaces for which random states will not be reachable and the intended policy inhabits a low-dimensional manifold.
[4] Equation 1 does not use discounting, which could be interpreted as modeling the human to be indifferent about when things happen in the trajectory segment. Using explicit discounting or inferring the human's discount function would also be reasonable choices.
[5] Note that trajectory segments almost never start from the same state.
[6] In the case of Atari games with sparse rewards, it is relatively common for two clips to both have zero reward, in which case the oracle outputs indifference. Because we considered clips rather than individual states, such ties never made up a large majority of our data. Moreover, ties still provide significant information to the reward predictor as long as they are not too common.
[7] We only ran these tests on continuous control tasks because our Atari reward model depends on a sequence of consecutive frames rather than a single frame, as described in Section A.2.
The Facets of Artificial Intelligence: A Framework to Track the Evolution of AI
Fernando Martínez-Plumed, Bao Sheng Loe, Peter Flach, Seán Ó hÉigeartaigh, Karina Vold, José Hernández-Orallo
Abstract: We present nine facets for the analysis of the past and future evolution of AI. Each facet also has a set of edges that can summarise different trends and contours in AI. With them, we first conduct a quantitative analysis using the information from two decades of AAAI/IJCAI conferences and around 50 years of documents from AI topics, an official database from the AAAI, illustrated by several plots. We then perform a qualitative analysis using the facets and edges, locating AI systems in the intelligence landscape and the discipline as a whole. This analytical framework provides a more structured and systematic way of looking at the shape and boundaries of AI.
Introduction
"What is AI?" has been a common question from the inception of the discipline in the 1950s [McCarthy et al., 2006; Moor, 2006; Solomonoff, 2017] until the end of the last century [Lehman-Wilzig, 1981; Fetzer, 1990; McCarthy, 1998]. In the twenty-first century, the discipline is not only well-developed but it is often said that this is the age of AI [McCorduck, 2004]. AI is set to pervade and transform every aspect of life. In a way, this is no different from what computer science has already been doing, creating computerised, digital and virtual versions of almost everything, with AI now introducing new adjectives such as 'intelligent', 'smart' and 'cognitive' to almost every process or gadget, from medical diagnosis to personal assistants. This expected expansion and intertwining with every other research discipline and aspect of life is pushing the contours of AI in many directions.
However, unlike computer science, which is based on well-established models of computation that integrate hardware and software, AI has evolved with a more fluid definition, primarily because of our varied conceptions of intelligence. When looking at the definitions of (artificial) intelligence (see, e.g., [Legg and Hutter, 2007] for a compendium), we see that definitions can be categorised according to two distinct dimensions, following [Russell and Norvig, 2009]. First, we can characterise intelligence in terms of "thinking" (process-oriented) or "acting" (goal-oriented). Second, we can characterise intelligence taking humans as a reference or looking for a more abstract or universal reference (such as rationality). These two dimensions are summarised in Table 1.
Not all definitions can be clearly classified according to this table. For instance, Minsky's famous definition of AI, as the "science of making machines capable of performing tasks that would require intelligence if done by [humans]" [Minsky, 1968], uses humans as a reference, so it would be located on the left of the table, but still somewhere between the top and bottom parts of the table, as it focuses on processes (i.e., thinking) for humans and tasks (i.e., acting) for machines. This suggests that we can look at these dimensions in a more nuanced, continuous way, as facets rather than discrete categories, to look at the evolution of AI as a discipline. For instance, has AI been more focused towards "thinking" (and hence processes or techniques) or more focused towards "acting" (and hence tasks and applications)? Is it more or less influenced by human intelligence (using human processes and the tasks humans can actually solve), or is the discipline going in the direction of a more abstract or universal characterisation?
In this paper, we will develop these and other dimensions into facets to characterise the object of AI, i.e., the AI systems and services, in terms of their generality, location and embodiment, and the subject of AI, i.e., the discipline itself, in terms of its paradigms, actors, character and nature. With these facets we will perform a qualitative and quantitative analysis of the evolution of AI.
We will base our analysis on evidence as much as possible, looking at data from scientific venues, reports, surveys and other sources. At the end of the paper, we will take a more principled stance and discuss how to extrapolate these trends or identify what kinds of criteria are necessary to establish AI contours that can be more stable and useful.
Table 1: Categories of (artificial) intelligence definitions, according to the two binary dimensions in [Russell and Norvig, 2009, Fig. 1.1]:
  Thinking Humanly | Thinking Rationally
  Acting Humanly   | Acting Rationally
The contributions of this paper are a set of well-structured facets to analyse the location and contours of AI. This is complemented by substantial evidence collected from several sources, such as 20 years of the IJCAI and AAAI proceedings and the whole AI topics database. All the datasets, plots and code to scrape and process this data are made publicly available.
There are several reasons why clear criteria such as the facets and edges are helpful for defining AI, how it has evolved and what it is likely to be in the future. First, internally, the research community needs clear criteria to determine what is in or out of scope, for conferences, journals, funding and hiring. Second, policy makers are considering ways of regulating AI, but wrestle with the problem of defining which systems actually count as AI. Third, different terms are being used to capture the same subparts or forms of AI, such as artificial general intelligence (AGI), machine intelligence, cognitive computing, computational intelligence, soft computing, etc. Without a clear structure for AI, it is difficult to determine whether they are part of it, whether they overlap, or whether they are simply redundant. Fourth, some areas, such as machine learning, are taking a more relevant role in AI, but they are also intertwined with areas such as statistics, optimisation or probability theory, which were not always considered near the contours of AI. Fifth, along the history of AI there has been a rise and decline of interest and research in the different AI paradigms, techniques and approaches.
Background
Disciplines are commonly analysed historically, and AI is not an exception. A history of a discipline usually emphasises the problems, progress and prospects, but does not necessarily delineate its contours. For instance, there are several excellent accounts of the history of AI [McCorduck, 2004; Buchanan, 2005; Nilsson, 2009; Boden, 2016]. Some of them cover AI from a philosophical perspective. However, given the implications of AI for the interpretation of the human mind, the analysis of AI as a discipline (as usual from the viewpoint of philosophy, or methodology, of science) is not commonly done in terms of its external and internal contours. As a result, several key questions remain: what are the criteria to recognise that an entity is part of AI? Furthermore, how can we recognise the internal subdisciplines in AI?
One possible approach to this is to determine the nature and contours of AI by its common use. While this may be a good approach for evaluating progress as a whole or for a few benchmarks, the analysis of subdisciplines by their popularity may be prone to many terminological confusions and many vested interests, with the risk of producing characterisations that are very volatile, such as big data. Bibliometric approaches are a common tool to analyse disciplines, but the focus is usually put on impacts per author, venue, location or institution.
Sometimes, bibliometrics studies disciplines and subdisciplines. For instance, Scimago provides a way of looking at the "shape of science", where one can locate AI and some of its subareas in terms of their internal and external relations. Provided with a set of tags, bibliometrics can study how frequently several tags appear in published papers (including titles, keywords or abstracts). For instance, [Niu et al., 2016] includes a thorough historical analysis of publications in about 20 relevant journals in AI (but not conferences or open journals such as JAIR) from 1990 to 2014. The number of publications is shown by 5-year periods. The keywords used are shown in Table 2. As we can see in the list, the keywords include terms for disciplines and subdisciplines, techniques, and application areas, from which the authors distinguish "methods and models" and "applications". As the analysis is limited to the most frequent areas, it excludes important subfields of AI (e.g., "planning") and some assignments are vague, with keywords such as "design", "identification" or "prediction". Still, we see that this analysis is aligned with the first facet mentioned in the introduction, of whether the discipline is characterised by its techniques or its applications. This is not surprising, as it is a typical categorisation of disciplines according to their techniques and applications, especially in engineering.
A proper cataloguing effort (e.g., as done by ACM and other associations for computing) would be an option, but AI is too dynamic to allow for a stable set of terms for a long period. In the end, instead of a reactive approach focusing on the trends, a more proactive stance towards the recognition of the subdisciplines can be seen as a duty for scientific associations, editorial boards and program chairs. Accordingly, AI researchers should configure the landscape of AI and determine what is relevant or off-topic for a venue. They can also determine what the subareas are, so that proper reviewers and sessions are allocated depending on the importance of each area. It is unusual, however, to conduct a more systematic analysis of how these choices are made.
Three remarkable exceptions are [Shah et al., 2017], where area relevance is only examined in passing; [Fast and Horvitz, 2017], where the authors focus on views expressed about AI in the New York Times over a 30-year period in terms of public concerns as well as optimism; and [Moran et al., 2014], which focuses on the venue keywords, including a cluster analysis on the 2013 keyword set and a new series of keywords, which were adapted by AAAI 2014. Table 3 shows an integration of their selection with the keywords of AAAI 2014.
Historical Data: Analysis by Keywords
We used the keywords in Table 3 for a first quantitative analysis, using data obtained from two significant sources:
• AAAI/IJCAI conferences. We obtained data on all the accepted papers for these two conferences from DBLP (http://dblp.uni-trier.de/). This database represents information about computer science (and thus AI) that comes mostly from its researchers (conference proceedings and journals).
• AI topics documents. AI topics (https://aitopics.org/misc/about) is an archive kept by the AAAI, containing a variety of documents (e.g., news, blog entries, conferences, journals and other repositories) that are collected automatically with NewsFinder [Buchanan et al., 2013].
With a mapping approach between terms and the categories in Table 3 as a representative list of subareas in AI, we summarised trends in a series of plots (a minimal sketch of this mapping and the per-year normalisation is given below). For the AAAI/IJCAI conferences data, we used the keywords appearing in the proceedings, and for AI topics we used the tags (substrings appearing in titles, abstracts and topics). Regarding conferences, other major venues (such as ECAI, ICML or NIPS) were not included due to the lack of keyword information in DBLP. Still, AAAI and IJCAI can be considered a representative basis for analysing AI trends proper.
Figure 1 shows the evolution of the areas in Table 3 for the past 20 years using the AAAI/IJCAI data. Document counts are normalised to sum up to 100% per year. Standard errors are also shown; in some cases the thickness of the band is very variable (e.g., "Human & AI" and "Game Theory") due to data sparsity, or it cannot be computed due to insufficient data being available (e.g., "Heuristic search & optimisation"). Figure 2 shows a similar evolution based on the AI topics data.
[Figure 1 (AAAI/IJCAI data), surviving caption fragment: "... separated by their area (topic "keyword", Table 3). Areas are mutually exclusive and sum up to 100% per year. General (dashed black line) tendencies and standard errors (bands) are shown for both data series together."]
[Figure 2 (AI topics data), surviving caption fragment: "... by area (topic "keyword", Table 3). Legend as in Figure 1 (note that the x-axis is different, with a much wider time span here)."]
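The authors' released code performs the actual scraping and tag-category mapping; purely as an illustration of the per-year normalisation described above (document counts mapped to categories and normalised so that they sum to 100% per year), a hypothetical minimal sketch might look like this, where the mapping dictionary and data layout are our own assumptions:

```python
from collections import Counter, defaultdict


def yearly_category_shares(documents, term_to_category):
    """Map each document's keywords/tags to a category and normalise counts
    so that categories sum to 100% per year.

    `documents` is an iterable of (year, keywords) pairs;
    `term_to_category` maps a keyword string to a category name
    (e.g. "machine learning", "planning methods").
    """
    counts = defaultdict(Counter)
    for year, keywords in documents:
        for kw in keywords:
            category = term_to_category.get(kw.lower())
            if category is not None:
                counts[year][category] += 1

    shares = {}
    for year, counter in counts.items():
        total = sum(counter.values())
        shares[year] = {cat: 100.0 * n / total for cat, n in counter.items()}
    return shares
```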
If we visually compare the same keywords between the plots found in Figures 1 and 2 (taking into account that the axes are different), we see that both sources are rather consistent and highlight a few clear trends. For example, the categories heuristic search & optimisation and knowledge representation show a decreasing trend. Other categories have peaks: cognitive modeling in the 1990s, strongly associated with the emergence of several cognitive architectures (e.g., ACT-R [Anderson et al., 1997], EPIC [Kieras and Meyer, 1997] or SOAR [Newell, 1994]); planning methods around 2000, possibly due to the introduction of the first method for solving POMDPs offline [Kaelbling et al., 1998], jumpstarting its widespread use in robotics and automated planning; and multiagent systems around 2010, when they were successfully applied to real-world scenarios (e.g., autonomous vehicles [How et al., 2008]) and graphical applications (e.g., video games [Hagelbäck and Johansson, 2008]). Some others had valleys, such as natural language processing (NLP) around 2000, showing a paradigm shift (from deterministic phrase-structure analysis before the 1990s to more probabilistic NLP methods). Long Short-Term Memory (LSTM) recurrent neural network (RNN) models [Hochreiter and Schmidhuber, 1997] have nowadays found rapid adoption due to an increase in computational capacity in the 2010s. Only machine learning had a clear increasing trend over time in both sources, with the steepest slope located in the 2010s, revealing the area's current relevance and attention, mostly caused by deep learning and reinforcement learning. Overall, this is in agreement with the general perception about the field.
While the plots provide confirmatory evidence of the changing trends in AI, using area keywords may present only one side of AI. The choice and relevance of keywords may be highly dependent on many factors, such as the state of the art, sociological perceptions or a moving target. Thus, we ask whether it is possible to investigate the historical evolution in a richer and more systematic manner.
A Faceted Analysis of the Evolution of AI
The two dimensions in Table 1 can be used as a basis for a different arrangement of tags and categories in our historical analysis. However, instead of taking a dichotomous and monolithic perspective on these dimensions, we consider the use of facets. This is motivated by realising that when choosing any criterion for analysis, there is always a gradation, and sometimes this gradation does not follow a straight line between two extremes, but an area among two or more edges, like a polygon. Hence, we use the term facet for this surface, and the term edge for each of its boundaries. To illustrate this, let us develop the first two dimensions into facets:
• F1: The functionality facet (with edges 'techniques', 'applications' and 'tasks') analyses the functionality of AI systems, such as knowledge representation, reasoning, learning, communication, perception, action, etc. The processes of AI systems fall at the edge 'techniques', which relates to how AI systems "think". We can also characterise AI systems in terms of their behaviour (how they "act"), leading to two different edges: the tasks they solve and the application areas they are used in. This facet can then be imagined as a triangle.
• F2: The referent facet (with edges 'human' and 'universal') distinguishes definitions or conceptions of AI systems that go from an anthropocentric view to a more universal (theoretical) perspective. At the 'human' edge, AI could be characterised by being able to solve all the tasks or by implementing all the intelligent processes humans are able to do.
This view would be closely related to what is known as human-like AI (see, e.g., [Lake et al., 2017]) or the view of AI as pursuing human automation [Frey and Osborne, 2017; Brynjolfsson and Mitchell, 2017]. On the other hand, if AI is characterised at the 'universal' edge, it would be defined in terms of a more theoretical set of problems or by implementing some abstract processes.
Using the AI topics data, for the functionality facet we can look at several categories for each of the three edges: techniques, applications and tasks. We made tag-category mappings for about 30 techniques, 20 applications and 30 tasks. A selection of categories for each edge is shown in Fig. 3, although all the plots and results are presented in a separate link (data, code and plots, and an online R Shiny app, are publicly available).
[Figure 3, panel (a): "Top 6 techniques (in %) in AI topics".]
For the referent facet, we can also explore its edges. Fig. 4(a) shows the six groups that are most associated with the edge 'human'. The only relatively clear trends are a fall in psychology and an increase in security, privacy and safety. While the former may be due to the decline of cognitive modelling in general, the latter clearly underlines the rise of AI ethical and privacy issues already pointed out by several governments and agencies (e.g., [White House, 2016]), as well as the appearance of new regulations (e.g., GDPR [EU Regulation, 2016]), ultimately triggered by public opinion and a concern in the field itself. The groups most associated with the edge 'universal' show rather flat representations, with very variable coverage over the whole period under analysis, but no clear trends.
[Figure 4, panel (a): "Top 6 on the human edge (in %) in AI topics"; the corresponding 'universal' edge panel survives extraction only as legend fragments.]
[Figure residue: category labels, including learning theory, theoretical approaches, automata and abstract machines, computability and complexity, correctness, optimisation and verification.]
Both the functionality and the referent facets are usually linked to a large number of terms and keywords, and they give a broad overview that is familiar to AI researchers. However, are these two facets sufficient to characterise the object and subject of AI? If we think of the elements of AI, its systems and services (the object), we need some other relevant facets to characterise them, such as their generality, location and embodiment. If we think of AI as a discipline (the subject), we need to consider its paradigms, actors, character and nature. Let us describe these seven new facets:
• F3: The generality facet (with edges 'specific' and 'general') considers whether AI is concerned with the creation of specific systems (i.e., solving one particular task) or the development of systems (and techniques) that solve (or can be applied to) a wide range of tasks. This dichotomy is usually referred to as narrow versus general AI.
• F4: The location facet (with edges 'integration' and 'distribution') depicts where a system starts and ends. The notion of centralised or distributed decision making has been central to the definition of an agent for decades, as has the related notion of autonomy [Luck, 2017]. Also, there is an increasingly blurred boundary between where human cognition ends and where it is assisted, extended or orthosed by AI [Ford et al., 2015] (and vice versa, through human computation [Quinn and Bederson, 2011]).
• F5: The embodiment facet (with edges 'physical' and 'virtual') distinguishes whether the AI system is linked to a body or physical equipment or, on the contrary, is basically of algorithmic character, installed on devices, working on the cloud or migrating between different platforms, usually dealing with elements in a digital world.
• F6: The paradigm facet (with edges 'discrete', 'continuous' and 'probabilistic') distinguishes the underlying approaches behind many principles and tools of AI. At the discrete edge, we see those problems and methods seen in a combinatorial way, where a logical or evolutionary process combines or applies operators. At the continuous edge, we see quantitative optimisation problems tackled with gradient descent, kernels and matrix operations. And, on the probabilistic edge, problems are seen in a probabilistic, stochastic or statistical view.
• F7: The actor facet (with edges 'academia', 'industry', 'government' and 'independent') identifies who are the driving forces behind AI as a discipline. This facet considers who is most relevant in, and ultimately steering, the discipline according to its current challenges, regulations and potential future advances.
• F8: The character facet (with edges 'empirical' and 'theoretical') determines whether AI is guided by experiments, like other empirical disciplines, or whether it is of a more theoretical nature. Note that this facet is different from the referent facet (e.g., a non-anthropocentric view can be very experimental).
• F9: The nature facet (with edges 'technology', 'engineering', 'science' and 'philosophy') describes AI according to what kind of discipline it is. This loosely corresponds, respectively, to whether it creates devices and products, solves problems, or answers and asks questions.
We do not show individual categories for these facets due to space limitations. Instead, we will look at the aggregation of categories by edges, assuming the edges are exclusive (so their share is always 100%). In a way, this can be understood as a non-monolithic view of polarities in sentiment analysis. Fig. 5 shows this share for the nine facets, where the data have been smoothed with a moving average filter. We see trends for many of them: an increase in the relevance of applications for F1 and a focus on more specific systems for F3 (the number, but also the diversification, of applications may explain this), more virtual systems for F5 (with the appearance of many new AI experimentation and evaluation platforms [Hernández-Orallo et al., 2017]), more continuous paradigms for F6 (given the success of deep learning and other methods based on this paradigm), more industry for F7 (mostly at the cost of academia), a more empirical character for F8, and a view of the discipline in a more technological way for F9 (which may be one possible reason for the current concern about AI safety and governance, usually harder to handle when the engineering and scientific perspectives are weak). Overall, we can also see that the trends in some facets, especially the shifts from the 2000s, may be strongly related (facets F1, F3, F7 and F9). One important insight that we can gain from the visual output shown in all these plots is that some trends peak, while others are cyclic. Consequently, the plots have explanatory and confirmatory value about the relevance of different areas and perspectives on how AI is defined. However, we have to be cautious and not use them for forecasting.
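A minimal sketch of how this per-edge aggregation and smoothing could be computed (this is not the authors' code; the input format, the facet/edge names restricted to F3-F9 as defined above, and the smoothing window are assumptions for illustration only):

```python
# Sketch: aggregate tagged documents into yearly edge shares per facet,
# then smooth each edge's series with a moving average (as in Fig. 5).
from collections import defaultdict

FACET_EDGES = {
    "F3_generality": ["specific", "general"],
    "F4_location": ["integration", "distribution"],
    "F5_embodiment": ["physical", "virtual"],
    "F6_paradigm": ["discrete", "continuous", "probabilistic"],
    "F7_actor": ["academia", "industry", "government", "independent"],
    "F8_character": ["empirical", "theoretical"],
    "F9_nature": ["technology", "engineering", "science", "philosophy"],
}

def edge_shares(records, facet):
    """records: iterable of (year, facet, edge) tuples from the tag-to-category mapping.
    Returns {year: {edge: share}} where shares sum to 1 per year,
    since edges are treated as mutually exclusive."""
    counts = defaultdict(lambda: defaultdict(int))
    for year, f, edge in records:
        if f == facet:
            counts[year][edge] += 1
    shares = {}
    for year, c in counts.items():
        total = sum(c.values())
        shares[year] = {e: c.get(e, 0) / total for e in FACET_EDGES[facet]}
    return shares

def moving_average(values, window=5):
    """Smooth a yearly series with a simple centred moving average."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - window // 2), min(len(values), i + window // 2 + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out
```

In this sketch, each edge's yearly share from edge_shares would be passed through moving_average before plotting, which reproduces in spirit the smoothed shares shown for the nine facets; F1 and F2 are omitted here because their edges are defined earlier in the paper.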
In fact, from a governance perspective, there is a known bias towards reinforcing the dominant view (the winner takes all) or even conflating the part with the whole (e.g., deep learning with machine learning, and machine learning with AI). Therefore, this quantitative analysis could be used to inform compensatory actions for some less popular but important areas, so that the discipline is ready when dominant paradigms shift or some of their pathways plateau. \n Qualitative Analysis The facets are also useful to analyse AI in a more qualitative manner. For example, F1 to F5 are based on the object of AI and can characterise either particular AI systems or AI as a discipline. On the other hand, F6 to F9 focus on the subject, being more methodological for the discipline as a whole. Let us examine the first five facets from the perspective of AI systems and components. In the end, if we set the goal of AI as building and understanding AI systems, then the contours of the discipline will be clear insofar as we have a clear notion of what an AI system is. Nonetheless, the question remains: how can we characterise this in a stable way for the years to come? Looking forward, the referent facet (F2) seems set to become very relevant. We are aware of AI systems already exceeding some human capabilities, and this will continue to be so during the century. Considering human intelligence as a goal has been a driving force (and will continue to be so in the near future), but it is rather short-sighted. Instead, framing AI as exploring a more universal 'intelligence landscape' is not only more inclusive about what AI is (systems solving tasks humans cannot solve would still be part of AI), but represents a Copernican view where humans are no longer at the centre (see Figure 6). For instance, the terms human-level machine intelligence and human-level artificial intelligence have many issues (is it for all tasks? what is an average human? how do we extrapolate beyond human level?). Actually, the term, albeit not the definition, is usually replaced by high-level machine intelligence [Müller and Bostrom, ] or simply AGI (which should rather refer to facet F3). Of course, even if humans are not taken as a reference, they still have to occupy a central position because of the interaction and impact of AI on them; this is usually referred to as human-centred artificial intelligence. For instance, applications must be prioritised according to human needs, and for many of these applications we want more 'human-like' AI [Lake et al., 2017; Marcus, 2018], more 'human-beneficial' AI (i.e., safer and taking the values of humans into account [Russell et al., 2015; Amodei et al., 2016]) and also more 'human-ethical' AI [Goldsmith and Burton, 2017]. Note that we do not want to replicate certain behaviours humans display (e.g., gender bias) in our AI systems. In many cases, it is not that AI should automate all tasks humans do, or perform better than humans, but that AI should also do other tasks or perform them very differently. Consequently, there should be no reason why some tasks are excluded from AI when they are solved in a different way. This, known as the 'AI effect' [McCorduck, 2004], is partly motivated by the specialisation of the solution, as we will discuss below. The bottom view of Figure 6 opens the contours of AI but requires the definition of the intelligence landscape. That takes us back to the range around the functionality facet and Table 1 (right), between thinking and acting.
We have seen in the quantitative analysis that techniques, tasks and applications are volatile, so any enumeration of them is going to be incomplete. An alternative view based on skills and abilities can endure changes much better [Hernández-Orallo, 2016; ...]. For instance, perception, learning and planning have been consubstantial with AI. Systems having some of these skills are recognised as AI without further information about the techniques or the particular tasks they are applied to, in the same way we recognise them in non-human animals. This is actually the true core of facet F1. But skills and abilities can only be characterised by looking at facets F3, F4 and F5. For instance, having n different systems for navigating n different buildings would not convince us that AI systems today have very good navigation skills (F3, generality). They would just be narrow systems that do not really have the skill. Similarly, the system might actually be controlled by many subsystems on the Internet, using real-time data from sensors around the building and even from the devices the humans in the building are using (F4, location). Finally, physically navigating a real building is not the same as navigating a virtual scenario (F5, embodiment). In fact, we are heading in the direction of AI services instead of AI systems, typically known as cognition as a service [Spohrer and Banavar, 2015]. In cognitive services research, the focus is on bringing down the overall cost and increasing general performance. However, certain considerations must be taken into account, especially in the context of automation [Frey and Osborne, 2017; Brynjolfsson and Mitchell, 2017]. For example, what is the difference between full automation and efficient semi-automation? Furthermore, the blurred line between work contributed by AI systems and by humans makes it even harder to attribute performance realistically. Also, some tasks can finally be automated in ways where AI plays a secondary role, through changes in the logical or physical configuration. For instance, an intelligent robot can be helped by a proper design of the body, usually referred to as morphological intelligence [Winfield, 2017]. Interestingly, the phenomenon of AI being more powerful because of the use of human data or human computation is mirrored by the extended mind [Clark and Chalmers, 1998]. The human mind is viewed as incorporating all of its mind tools, from pen and paper to cognitive assistants. The question of where the human mind ends, and whether the surrounding tools and devices are included, is also important for determining where the AI system ends, and who really provides the services. Hence, facets F3 to F5 will become more relevant and sophisticated in the future. Perhaps instead of autonomy, we will need to trace how much each part contributes to the overall ability of the whole system or service. Finally, as we mentioned above, the last four facets have a more methodological stance. For instance, it seems irrelevant which paradigm from facet F6 is most important at the moment. For example, arcade games (such as the Atari ALE benchmark) can now be played with relatively good performance using deep reinforcement learning [Mnih et al., 2015], evolutionary programming [Kelly and Heywood, 2017] or neuroevolution [Hausknecht et al., 2014].
However, if it is the case that more and more AI systems are using continuous, gradient-descent approaches rather than discrete, combinatorial approaches, this may have an impact on the contours with neighbouring disciplines. AI would clash more often with algebra, optimisation and statistics (or even physics), and would be pulled away from traditional logic and discrete mathematics (or even evolution). Similarly, whether approaches are more empirical or theoretical, according to facet F8, is not necessarily a reason to be considered more or less AI, but it can still affect the perception of the field and the boundaries with some other disciplines. Facets F7 (actors) and F9 (nature) are related, as it is mostly academia that cares about scientific and philosophical questions in AI. If we look at Figure 6 again, all actors will be interested in building systems and services that cover the intelligence landscape for an increasing number of applications, considering AI as a technology and focusing on engineering. But it is mostly (or only) academia that is interested in what the intelligence landscape looks like, the evaluation of where systems are located in this space, and what the implications of covering this landscape are. These and some other areas will have to be prioritised by academia (and government funding) in order to have more vision and governance within the field of AI. \n Conclusion Artificial intelligence, like any other scientific discipline, is partly a social phenomenon, and its definition and contours are highly influenced by its actors and stakeholders. We will always need to track and update the field in many ways, from the use of self-reported questionnaires 8 to data from the venues and news related to AI. In this paper, we presented a series of facets and associated edges to analyse the historical evolution of AI, and gathered some insight into its future. The data from venues and AI repositories were useful for quantitative analyses. Moreover, the data and the mapping between tags and categories are publicly provided so that others can apply the same faceted framework to other sources of data about AI. Finally, the facets represent a framework to discuss, in a more qualitative way, the past, present and future of AI, and its relation to other disciplines. Figure 1: Papers published in AAAI (red dots) and IJCAI (blue triangles) conferences separated by their area (topic \"keyword\", Table 3). Areas are mutually exclusive and sum up to 100% per year. General (dashed black line) tendencies and standard errors (bands) are shown for both data series together. \n Figure 2: Documents included in AI topics by area (topic \"keyword\", Table 3). Legend as in Figure 1 (note that the x-axis is different, with a much wider time span here). \n Top 6 applications (in %) in AI topics \n Top 3 tasks (in %) in AI topics. \n Figure 3: Functionality facet. (a) shows the techniques with the highest peak in terms of percentage. We see trends that are consistent with machine learning taking more relevance, with roughly 4-5 times more coverage over the 50-year period, and along the lines of what was seen in the previous section. Fig. 3(b) shows the six most popular application areas.
Here the trends are flatter, although some slight trends can be seen for health & medicine and personal assistants, in line with the insights reported in related work [Fast and Horvitz, 2017]. Finally, Fig. 3(c) shows the relevance of chess, especially in the 1980s and 1990s, and its decrease after Deep Blue beat Kasparov. Poker and RoboCup are examples of tasks that became representative over a small period of time because of either an algorithmic breakthrough or the popularity of their competitions. \n Fig. 4(b) shows the six groups that are most ... available at: https://evoai.shinyapps.io/evoai/. \n Top 6 on the universal edge (in %) in AI topics. \n Figure 4: Referent facet. \n Figure 5: Share of all edges per facet in AI topics (facets F1-F9, ordered left to right, top to bottom). \n Figure 6: Top: Anthropocentric AI. Bottom: AI in the 'intelligence landscape'. \n Table 3: A selection of keywords as an intersection of the last two columns of [Moran et al., 2014, Tab. 2]. \n\t\t\t https://aitopics.org. 2 See, e.g., http://aiindex.org/ and https://www.eff.org/ai/metrics. \n\t\t\t http://www.scimagojr.com/shapeofscience/. 4 https://aaai.org/Magazine/ailandscape.php. \n\t\t\t For example, [Müller and Bostrom, ] or http://agisi.org/Survey intelligence.html.", "date_published": "n/a", "url": "n/a", "filename": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/nonarxiv_teis/0718.tei.xml", "id": "1c3266447f745ae59549626078e9b048"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "abstract": "Advances in robotics technology are causing major changes in manufacturing, transportation, medicine, and a number of other sectors. While many of these changes are beneficial, there will inevitably be some harms. Who or what is liable when a robot causes harm? This paper addresses how liability law can and should account for robots, including robots that exist today and robots that potentially could be built at some point in the near or distant future. Already, robots have been implicated in a variety of harms. However, current and near-future robots pose no significant challenge for liability law: they can be readily handled with existing liability law or minor variations thereof. We show this through examples from medical technology, drones, and consumer robotics. A greater challenge will arise if it becomes possible to build robots that merit legal personhood and thus can be held liable. Liability law for robot persons could draw on certain precedents, such as animal liability. However, legal innovations will be needed, in particular for determining which robots merit legal personhood. Finally, a major challenge comes from the possibility of future robots that could cause major global catastrophe. As with other global catastrophic risks, liability law could not apply, because there would be no post-catastrophe legal system to impose liability.
Instead, law must be based on pre-catastrophe precautionary measures.", "authors": ["Trevor N White", "Seth D Baum", "Patrick Lin", "George Bekey", "Keith Abney", "Ryan Jenkins"], "title": "Liability Law for Present and Future Robotics Technology", "text": "Introduction In June 2005, a surgical robot at a hospital in Philadelphia malfunctioned during a prostate surgery, possibly injuring the patient. 1 In June 2015, a worker at a Volkswagen plant in Germany was crushed to death by a robot that was part of the assembly process. 2 In November 2015, a self-driving car in California made a complete stop at an intersection and then was hit by a car with a human driver, apparently because the self-driving car followed traffic law but not traffic norms. 3 These are just some of the ways that robots are already implicated in harms. As robots become more sophisticated and more widely adopted, the potential for harm will get even larger. Robots even show potential for causing harm at massive catastrophic scales. How should robot harms be governed? In general, liability law governs harms in which someone or something else is responsible. Liability law is used to punish those who have caused harms, particularly those that could have and should have been avoided. The threat of punishment further serves to discourage those who could cause harm. Liability law is thus an important legal tool for serving justice and advancing the general welfare of society and its members. The value of liability law holds for robotics just as it does for any other harm-causing technology. But robots are not just any other technology. Robots are (or at least can be) intelligent, autonomous actors moving about in the physical world. They can cause harms through actions that they choose to make, actions that no human told them to make and, indeed, that may surprise their human creators. Perhaps robots should be liable for their harms. This is a historic moment: humans creating technology that could potentially be liable for its own actions. Furthermore, robots can have the strength of industrial machinery and the intelligence of advanced computer systems. Robots can also be mass produced and connected to each other and to other technological systems. This creates the potential for robots to cause unusually great harm. This paper addresses how liability law can and should account for robots, including robots that exist today and robots that potentially could be built at some point in the near or distant future. Three types of cases are distinguished, each with very different implications. First are cases in which some human party is liable, such as the manufacturer or the human using the robot. These cases pose no novel challenges for liability law: they are handled the same way as with other technologies in comparable circumstances. Second are cases in which the robot itself is liable. These cases require dramatic revision to liability law, including standards to assess when robots can be held liable and principles for dividing liability between the robot and the humans who designed, built, and used it. Third are cases in which the robot poses a major catastrophic risk. These cases merit separate attention because a sufficiently large catastrophe would destroy the legal system and thus the potential to hold anyone or anything liable. The three types of cases differ across two dimensions as shown in Figure 1 . 
One dimension is the robot's degree of legal personhood, meaning the extent to which a robot shows attributes that qualify it for independent standing in a court of law. As we discuss, a robot can be held liable in the eyes of the law to the extent that it merits legal personhood. The other dimension shows the size of the harm the robot causes. Harms of extreme severity cannot be handled by liability law. However, there is no strict distinction between the three cases. Instead, there is a continuum, as shown by the regions in which a robot can have partial liability or more-than-human liability and in which liability law works to a limited extent; a minimal illustrative sketch of this scheme is given below. \n I -A Human Party Is Liable In a detailed study of robot law, Weaver (2014, 21-27) identifies four types of parties that could be liable for harm caused by a robot: (1) people who were using the robot or overseeing its use; (2) other people who were not using the robot but otherwise came into contact with it, which can include people harmed by the robot; (3) some party involved in the robot's production and distribution, such as the company that manufactured the robot; or (4) the robot itself. For the first three types of parties, liability applies the same as for other technologies. A surgical robot, for example, can be misused by the surgeon (type 1), bumped into by a hospital visitor who wandered into a restricted area (type 2), or poorly built by the manufacturer (type 3). The same situations can also arise for other, non-robotic medical technologies. In each case, the application of liability law is straightforward. Or rather, to the extent that the application of liability law is not straightforward, the challenges faced are familiar. The fourth type, in which the robot is liable, is the only one that poses novel challenges for liability law. To see this, consider one of the thornier cases of robot liability, that of lethal autonomous weapon systems (LAWSs). These are weapons that decide for themselves whom to kill. Sparrow (2007) argues that there could be no one liable for certain LAWS harms, for example, if a LAWS decides to kill civilians or soldiers who have surrendered. A sufficiently autonomous LAWS could make its own decisions, regardless of how humans designed and deployed it. In this case, Sparrow argues, it would be unfair to hold the designer or deployer liable (or the manufacturer or other human parties). It might further be inappropriate to hold the robot itself liable, if it is not sufficiently advanced in legally relevant ways (more on this in Section II). In this case, who or what to hold liable is ambiguous. This ambiguous liability is indeed a challenge, but it is a familiar one. In the military context, precedents include child soldiers (Sparrow 2007, 73-74) and landmines (Hammond 2015, note 62). Child soldiers can make their own decisions, disobey orders, and cause harm in the process. Landmines can linger long after a conflict, making it difficult or impossible to identify who is responsible for their placement. In both cases, it can be difficult or perhaps impossible to determine who is liable. So too for LAWSs. This ambiguous liability can be a reason to avoid or even ban the use of child soldiers, landmines, and LAWSs in armed conflict. Regardless, even for this relatively thorny case of robot liability, robotics technology raises no new challenges for liability law.
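As a minimal sketch of the two-dimensional scheme of Figure 1 (the numeric scales and thresholds below are hypothetical stand-ins for what the paper treats as continua; this is not part of the paper's own apparatus):

```python
# Sketch: map a robot's degree of legal personhood and the size of the harm
# it causes to the liability regimes discussed in Sections I-III.
FULL_PERSONHOOD = 1.0       # roughly the level of a normal adult human (assumed scale)
CATASTROPHIC_HARM = 1e9     # harm beyond which no legal system may survive (arbitrary units)

def liability_regime(personhood: float, harm: float) -> str:
    if harm >= CATASTROPHIC_HARM:
        # Section III: liability law cannot be counted on; precaution is needed instead.
        return "precautionary measures (liability law unreliable or absent)"
    if personhood >= FULL_PERSONHOOD:
        # Section II: the robot itself can be held liable, possibly to a higher
        # standard than humans if its capacities exceed theirs.
        return "robot liable (human parties may share liability)"
    if personhood > 0.0:
        # Partial personhood: partial robot liability, with the remainder on humans.
        return "partial robot liability plus human-party liability"
    # Section I: designer, manufacturer, user, or another human party is liable.
    return "human party liable"
```

For instance, liability_regime(0.0, 10.0) falls under the Section I cases discussed above, while liability_regime(1.2, 10.0) falls under Section II; the hard thresholds are only a convenience, since the paper emphasizes that both dimensions form a continuum.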
In the United States, agencies such as the Department of Defense produce regulations on the use of LAWSs which are not dramatically different from those for other weapons. Internationally, bodies like the UN's International Court of Justice could hold a state liable for authorizing drone strikes that caused excessive civilian casualties. Meanwhile, commercial drones can be regulated as other aircraft are now: by a combination of the FAA and corporate oversight by their creators (McFarland 2015). The handling of such relatively simple robots under liability law will thus be familiar if not straightforward. The above LAWS examples also resemble how liability law handles non-human animals, which has prompted proposals for robots to be given legal status similar to non-human animals (e.g., Kelley et al. 2010). Suppose someone gets a pet dog and then the dog bites someone, despite the owner trying to stop it. If this person had absolutely no idea the dog would bite someone, then she would not be liable for that bite. However, having seen the dog bite someone, she now knows the dog is a biter, and is now expected to exercise caution with it in the future. If the dog bites again, she can be liable. In legal terms, this is known as having scienter: knowledge of the potential harm. Scienter could also apply to LAWSs or other robots that are not expected to cause certain harms. Once the robots are observed causing those harms, their owners or users could be liable for subsequent harms. For comparison, the Google Photos computer system raised controversy in 2015 when it mislabeled photographs of black people as \"gorillas\" (Hernandez 2015). No Google programmer instructed Photos to do this; it was a surprise, arising from the nature of Photos's algorithm. Google acted immediately to apologize and fix Photos. While it did not have scienter for the gorilla incident, it would for any subsequent offenses. 4 The same logic also applies for LAWSs or other types of robots. Again, as long as a human party was responsible for it, a robot does not pose novel challenges to liability law. Even if a human is ultimately liable, a robot could still be taken to court. This would occur, most likely, under in rem jurisdiction, in which the court treats an object of property as a party to a case when it cannot do so with a human owner. In rem cases include United States v. Fifty-Three Electus Parrots (1982), in which a human brought parrots from Southeast Asia to the U.S. in violation of an animal import law, and United States v. Forty Barrels & Twenty Kegs of Coca-Cola (1916), in which the presence of caffeine in the beverage was at issue. In both cases, a human (or corporation) was ultimately considered liable, with the parrots and soda only serving as stand-ins. Robots could be taken to court in the same way, but they would not be considered liable except in a symbolic or proxy fashion. Again, since the robot is not ultimately liable, it poses no novel challenges to liability law. This is not to say that such robots do not pose challenges to liability law, only that these are familiar challenges. Indeed, the nascent literature on robot liability identifies a range of challenges, including assigning liability when robots can be modified by users (Calo 2011), when they behave in surprising ways (Vladeck 2014), and when the complexity of robot systems makes it difficult to diagnose who is at fault (Funkhouser 2013; Gurney 2013).
There are also concerns that liability laws could impede the adoption of socially beneficial robotics (e.g., Marchant and Lindor 2012; Wu 2016). However, these challenges all point to familiar solutions based in various ways of holding manufacturers, users, and other human parties liable. Fine-tuning the details is an important and nontrivial task, but it is not a revolutionary one. That typical robots are familiar territory for liability law is further seen in court cases in which robots have been implicated in harms (Calo 2016). An early case is Brouse v. United States (1949), in which two airplanes crashed, one of which was a US military plane that was using an early form of autopilot. The court rejected the US claim that it should not be liable because the plane was being controlled by the robotic autopilot; instead, the court found that the human pilot in the plane was obligated to pay attention and avoid crashes. More recently, in Ferguson v. Bombardier Services Corp. (2007), another airplane crash may have been attributable to the autopilot system, in which case the court would have found the autopilot manufacturer liable, not the autopilot itself; instead, the court found that the airline had improperly loaded the plane. (See Calo 2016 for further discussion of these and other cases.) \n II -Robot Liability If a robot can be held liable, then liability law faces some major challenges in terms of which robots to hold liable for which harms, and in terms of how to divide liability between the robot and its human designers, manufacturers, users, etc. In this section, we will argue that robots should be able to be held liable to the extent that they qualify for legal personhood. First, though, let us briefly consider some alternative perspectives. One perspective is that, in an informal sense, any sort of object can be liable for a harm. The pollen in the air is liable for making you sneeze. The faulty gas pipe is liable for burning down your home. The earthquake is liable for destroying the bridge. This is not the sense of liability we address in this paper. Our focus is on legal liability, in which a party can be tried in court. Another perspective comes from the notion that the law ultimately derives from what members of a society want it to be. This is why laws are different in different jurisdictions and at different times. From this perspective, robots will be held liable whenever societies decide to hold them liable. There are difficult issues here, such as whether to give robots a say in whether they should be held liable. 5 Regardless, the fact that laws are products of societies need not end debate on what laws societies can and should have. To the contrary, it is incumbent upon members of society to have such debates. Within human society, in the United States and many other countries, parties can be held liable for harms to the extent that they qualify as legal persons. Legal personhood is the ability to have legal rights and obligations, such as the ability to enter contracts, sue or be sued, and be held liable for one's actions. Legal liability thus follows directly from legal personhood. Normal adult humans are full legal persons and can be held liable for their actions across a wide range of circumstances. Children, the mentally disabled, and corporations have partial legal personhood, and in turn can be held liable across a narrower range of circumstances. Non-human animals generally do not have personhood, though this status has been contested, especially for non-human primates.
6 The denial of legal personhood to non-human animals can be justified on grounds that they lack humans' cognitive sophistication and corresponding ability to participate in society. Such justification avoids charges of speciesism (a pro-human bias for no other reason than just happening to be human). However, the same justification implies that robots should merit legal personhood if they have capabilities similar to those of humans. As Hubbard (2011, 417) puts it, \"Absent some strong justification, a denial of personhood to an entity with at least an equal capacity for personhood would be inconsistent and contrary to the egalitarian aspect of liberalism.\" 7 The question of when robots can be liable thus becomes the question of when robots merit personhood. If robots merit personhood, then they can be held liable for harms they cause. Otherwise, they cannot be held liable, and instead liability must go to some human party, as is the case with non-human animals and other technologies or entities that can cause harm. Hubbard proposes three criteria that a robot or other artificial intelligence should meet to merit personhood: (1) complex intellectual interaction skills, including the ability to communicate and learn from experience; (2) self-consciousness, including the ability to make one's own goals or life plan; and (3) community, meaning the ability to pursue mutual benefits within a group of persons. These three criteria, central to human concepts of personhood, may offer a reasonable standard for robot personhood. We will use these criteria in this paper while emphasizing that their exact definition should be a matter of ongoing debate. Do Hubbard's criteria also apply for liability? Perhaps not for the criterion of self-consciousness. The criterion makes sense for harms caused to a robot: only a conscious robot can experience harms as humans do. 8 This follows from, for example, classic utilitarianism, as in Bentham's line \"The question is not, Can they reason? nor, Can they talk? but, Can they suffer?\" However, the same logic does not apply to harms caused by a robot. Consider an advanced robot that meets all of Hubbard's criteria except that it lacks consciousness. Suppose the robot causes some harm, and, to be clear, the harm causes suffering to a human or to some other conscious person. Should the robot be held liable? The answer to this may depend on society's foundational reasoning for liability. If liability exists mainly to discourage or deter the commission of harms, then consciousness is unnecessary. The robot should be punished so long as doing so discourages the commission of future harms. The entities that get discouraged here could include the robot, other similar robots, conscious robots, and even humans. It is quite conceivable that non-conscious robots could be punished with some sort of reduced reward or utility as per whatever reward/utility function they might have (Majot and Yampolskiy 2014). Specifically, they could be reprogrammed, deactivated or destroyed, or put in what is known as a \"Box\": digital solitary confinement restricting an AI's ability to communicate or function (Corwin 2002; Yudkowsky 2002). To make this possible, however, such robots ought to be based (at least in part) on reinforcement learning or similar computing paradigms (except ones based on neural network algorithms, for reasons we explain later). Alternatively, if liability exists mainly for retribution, to bring justice to whoever committed the harm, then consciousness could be necessary.
Whether it is necessary depends on the purpose of the punishment. If the punishment aims to worsen the life of the liable party, so as to \"balance things out,\" then consciousness seems necessary. It makes little sense to \"worsen\" the life of something that cannot experience the worsening. However, if the punishment aims to satisfy society's sense of justice, then consciousness may be unnecessary. Instead, it could be sufficient that members of society observe the punishment and see justice being served. 9 Whether the robot's consciousness would be necessary in this case would simply depend on whether society's sense of justice requires it to be conscious. This potential exception regarding consciousness is a good example of partial liability as shown in Figure 1. The advanced, non-conscious robot can be held liable, but not in every case in which normal adult humans could. Specifically, the robot would not be held liable in certain cases where punishment is for retribution. Other limitations to a robot's capabilities could also reduce the extent of its liability. Such robots would be analogous to children and mentally disabled adult humans, who are similarly not held liable in as many cases as normal adult humans are. Robots of less sophistication along any of Hubbard's three criteria (or whatever other criteria are ultimately established) should be liable to a lesser extent than robots that meet the criteria in full. What about robots of greater-than-human sophistication in Hubbard's three criteria? These would be robots with more advanced intellectual interaction skills, self-consciousness, or communal living ability. It is conceivable that such robots could exist; indeed, the idea dates back many decades (Good 1965). If they do come into existence, then by the above logic, they should be held to a higher liability standard than normal adult humans. Indeed, concepts such as negligence recognize human fallibility in many respects in which a robot could surpass humans, including reaction time, eyesight, and mental recall. The potential for holding robots to a higher standard of liability could offer one means of governing robots with greater-than-human capacities; more on this in Section III in the discussion of catastrophic risk. Before turning to catastrophic risk, there is one additional aspect of robot liability to consider: the division of liability among the robot itself and the other parties that influence the robot's actions. These other parties can include the robot's designer, its manufacturer, and any users or operators it may have. These parties are comparable to a human's parents and employers, though the comparison is imperfect due to basic differences between humans and robots. One key difference is that robots are to a very large extent designed. Humans can be designed as well via genetic screening and related techniques, hence the term \"designer baby.\" But designers have much more control over the eventual character of robots than they do over humans. This suggests that robot designers should hold more liability for robots' actions than human parents should for their children's actions. If robot designers know that certain designs tend to yield harmful robots, then a case can be made for holding the designers at least partially liable for harms caused by those robots, even if the robots merit legal personhood.
Designers could be similarly liable for building robots using opaque algorithms, such as neural networks and related deep learning methods, in which it is difficult to predict in advance whether the robot will cause harm. Those parties that commission the robot's design could be similarly liable. In court, the testimony of relevant industry experts would be valuable for proving whether any available, feasible safeguards to minimize such risks existed. Another difference is that, at least for now, the production of robots is elective, whereas the birthing of humans is required for the continuity of society. Society cannot currently function without humans, but it can function without robots. This fact suggests some lenience for parents in order to encourage procreation, and to be stricter with robot designers in order to safely ease society's transition into an era in which humans and their robot creations coexist. Such a gradual transition seems especially warranted in light of potential robot catastrophe scenarios. \n III -Catastrophic Robot/AI Liability \"Catastrophe\" has many meanings, many of which require no special legal attention. For example, a person's death is catastrophic for the deceased and her or his loved ones, yet the law is perfectly capable of addressing individual deaths caused by robots or AIs. However, a certain class of extreme catastrophe does merit special legal attention, due to its outsized severity and significance for human civilization. These are catastrophes that cause major, permanent harm to the entirety of global human civilization. Such catastrophes are commonly known as global catastrophes (Baum and Barrett 2016) or existential catastrophes (Bostrom 2013) . Following Posner (2004) , we will simply call them catastrophes. A range of catastrophic risks exist, including global warming, nuclear war, a pandemic, and collision between Earth and a large asteroid or comet. Recently, a body of scholarship has built up analyzing the possibility of catastrophe from certain types of future AI. Much of the attention has gone to \"superintelligent\" AI that outsmart humanity and \"achieve complete world domination\" (Bostrom 2014, 78 ; see also Müller 2015) . Such AI could harm humans through the use of robotics. Additionally, some experts believe that robotics could play an important role in the development of such AI (Baum et al. 2011) . Other catastrophe scenarios could also involve robotics. Robots could be used in the systems for launching nuclear weapons or for detecting incoming attacks, potentially resulting in unwanted nuclear wars. 10 They could be used in critical civil, transportation, or manufacturing infrastructure, contributing to a global systemic failure. 11 They could be used for geoengineering -the intentional manipulation of the global environment, such as to counteract global warming -and this could backfire, causing environmental catastrophe. 12 Robots could be used in establishing or maintaining an oppressive totalitarian world government. 13 Still further robot catastrophe scenarios may also be possible. The enormous scale of the catastrophes in question creates profound moral and legal dilemmas. If the harm is permanent, it impacts members of all future generations, which could be immensely many people. Earth will remain habitable for at least a billion more years, and the galaxy and the universe for much longer (Baum 2016) ; the present generation thus contains just a tiny fraction of all people who could exist. 
The legal standing and representation of members of future generations is a difficult question (Tonn 1996; Wolfe 2008 ). If members of future generations are to be counted, then they can overwhelm the calculus. Despite this, present generations unilaterally make the decisions. There is thus a tension in how to balance the interests of present and future generations (Page 2003) . A sufficiently large catastrophe raises similar issues even just within the context of the present generation. About seven billion humans live today; a catastrophe that risks killing all of them could be seven billion times larger than a catastrophe that risks killing just one. One could justify enormous effort to reduce that risk regardless of future generations (Posner 2004) . Further complications come from the irreversible nature of these catastrophes. In a sense, every event is irreversible: if someone wears a blue shirt today, no one can ever change the fact that they wore a blue shirt today. Such events are irreversible only in a trivial sense: you can change what shirt you wear on subsequent days. Nontrivially irreversible events are more or less permanent: if that person should die today, then nothing 14 can bring that person back to life. At a larger scale, nontrivially irreversible effects exist for many ecological shifts and may also exist for the collapse of human civilization (Baum and Handoh 2014) . The possibility of large and nontrivially irreversible harm creates a major reason to avoid taking certain risks. The precautionary principle is commonly invoked in this context, raising questions of just how cautious to be (Posner 2004; Sunstein 2006 ). An irreversible AI catastrophe could be too large for liability law to handle. In the simplest case, if the catastrophe results in human extinction, then there would be no one remaining to hold liable. A catastrophe that leaves some survivors but sees the collapse of human civilization would lack the legal system needed for holding people liable. Alternatively, AI could cause a catastrophe in which everyone is still alive but they have become enslaved or otherwise harmed by the AI; in this case the pre-catastrophe human authorities would lack the power needed to hold those at fault liable. For smaller catastrophes, the legal system may exist to a limited extent (Figure 1 ). In this case, it may be possible to bring the liable parties to trial and/or punish them, but not as reliably or completely as is possible under normal circumstances. The closest possible example would be creating special international proceedings, like the Nuremberg Trials, to deal with the aftermath. Much like such war tribunals, though, these may do little to address the chaos' original cause. This would leave victims or society at large wasting time and resources on reliving a tragedy (McMorran 2013) . Hence, instead of liability, a precautionary approach could be used. This would set a default policy of disallowing any activity with any remote chance of causing catastrophe. It could further place the burden of proof on those who wish to conduct such activity, requiring them to demonstrate in advance that it could not cause catastrophe. 15 Trial-and-error would not be permitted, because a single error could cause major irreversible harm. This would likely be a significant impediment for AI research and development (at least for the subset of AI that poses catastrophic risk), which, like other fields of technology, is likely to make extensive use of trial and error. 
Indeed, some AI researchers recommend a trial-and-error approach, in which AIs are gradually trained to learn human values so that they will not cause catastrophe (Goertzel 2016). However, given the high stakes of AI catastrophe, perhaps these sorts of trial-and-error approaches should still be avoided. It may be possible to use a novel liability scheme to assist with a catastrophe-avoiding precautionary approach. In a wide-ranging discussion of legal measures to avoid catastrophe from emerging technologies, Wilson (2013, 356) proposes \"liability mechanisms to punish violators whether or not their activities cause any harm\". In effect, people would be held liable not for causing catastrophe, but for taking actions that could cause catastrophe. This proposal could be a successful component of a precautionary approach to catastrophic risk and is worth ongoing consideration. Taking the precautionary principle to the extreme can have undesirable consequences. All actions carry some risk. In some cases, it may be impossible to prove a robot does not have the potential to cause catastrophe. Therefore, requiring demonstrations of minimal risk prior to performing actions would be paralyzing (Sunstein 2006). Furthermore, many actions can reduce some risks even while increasing others; requiring precaution due to concern about one risk can cause net harm to society by denying opportunities to decrease other risks (Wiener 2002). AI research and development can pose significant risks, but it can also help reduce other risks. For AI that poses catastrophic risk, net risk will be minimized when the AI research and development is expected to bring a net reduction in catastrophic risk (Baum 2014). In summary, there are significant legal challenges raised by AI that poses catastrophic risk. Liability law, most critically, is of little help. Precautionary approaches can work instead, although care should be taken to avoid preventing AI from reducing other catastrophic risks. The legal challenges from AI that poses catastrophic risk are distinct from the challenges from other types of AI, but they are similar to the challenges from other catastrophic risks. \n Conclusion While robots benefit society in many ways, they also cause or are otherwise implicated in a variety of harms. The frequency and size of these harms is likely to increase as robots become more advanced and ubiquitous. Robots could even cause or contribute to a number of major global catastrophe scenarios. It is important for liability law to successfully govern these harms to the extent possible so that harms are minimized and, when they do occur, justice may be served. For many robot harms, a human party is ultimately liable. For these harms, traditional liability law applies. A major challenge to liability law comes when robots could be liable. Such cases require legal personhood tests for robots to assess the extent to which they can be liable. One promising personhood test evaluates the robot's intellectual interaction skills, self-consciousness, and communal living ability. Depending on how a robot fares on a personhood test, it could have the same liability as, or less or more liability than, a normal adult human. A robot being liable does not preclude a human party also being liable. Indeed, robot designers should expect more liability for robot harms than would human parents, because robots are designed so much more extensively than human children are.
Finally, for robots that pose catastrophic risk, liability law cannot be counted on and a precautionary approach is warranted. People involved in the design, manufacture, and use of robots can limit their liability by choosing robots that reliably avoid harms. One potential way to improve reliability is to avoid computing paradigms such as neural nets that tend to result in surprising behaviors, or to adapt these paradigms to make them less surprising (Huang and Xing 2002). Robot designs should be sufficiently transparent that the responsible human parties can, with reasonable confidence, determine in advance what harms could occur. They can then build safety restrictions into the robot or at least give warnings to robot users, as is common practice with other technologies. Robots should also go through rigorous safety testing before being placed into situations where they can cause harms. If robots cannot reliably avoid harms, then they probably should not be used in the first place. These sorts of safety guidelines should be especially strict for robots that could contribute to major global catastrophe. A single catastrophe could permanently harm human civilization. It is thus crucial to avoid any catastrophe. Safety testing itself could be dangerous. This increases the value of transparent computing paradigms that let humans assess risks prior to building the robot. Legal measures must also take effect prior to the robot's construction, because there may be no legal system afterwards. Advanced robots may be less likely to cause catastrophe if they are designed to be upstanding legal persons. But even then, some legal system would need to exist to hold them liable for what harms they cause. As this paper illustrates, robot liability poses major new challenges to liability law. Meeting these challenges requires contributions from law, robotics, philosophy, risk analysis, and other fields. It is essential for humans with these various specialties to work together to build robot liability regimes that avoid harms while capturing the many benefits of robotics. The potential for harm is extremely large, making this an urgent task. We hope that humans and robots will coexist successfully and for mutual benefit in a community of responsible persons. Figure 1. Classification scheme for the applicability of liability law to various sizes of harms caused by various types of robots.", "date_published": "n/a", "url": "n/a", "filename": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/nonarxiv_teis/026_robot-liability.tei.xml", "id": "9923526eb1c1d317bdb3b0256bfbf34f"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "abstract": "I. J. Good's intelligence explosion theory predicts that ultraintelligent agents will undergo a process of repeated self-improvement; in the wake of such an event, how well our values are fulfilled would depend on the goals of these ultraintelligent agents. With this motivation, we examine ultraintelligent reinforcement learning agents. Reinforcement learning can only be used in the real world to define agents whose goal is to maximize expected rewards, and since this goal does not match with human goals, AGIs based on reinforcement learning will often work at cross-purposes to us.
To solve this problem, we define value learners, agents that can be designed to learn and maximize any initially unknown utility function so long as we provide them with an idea of what constitutes evidence about that utility function.", "authors": ["Daniel Dewey"], "title": "Learning What to Value", "text": "Agents and Implementations Traditional agents [2] [3] interact with their environments cyclically: in cycle k, an agent acts with action y_k, then perceives observation x_k. The interaction history of an agent with lifespan m is a string y_1 x_1 y_2 x_2 ... y_m x_m, also written yx_{1:m} or yx_{≤m}. Beyond these interactions, a traditional agent is isolated from its environment, so an agent can be formalized as an agent function from an interaction history yx